
TOK Friends: Group items tagged "possible"

Javier E

Opinion | Have Some Sympathy - The New York Times

  • Schools and parenting guides instruct children in how to cultivate empathy, as do workplace culture and wellness programs. You could fill entire bookshelves with guides to finding, embracing and sharing empathy. Few books or lesson plans extol sympathy’s virtues.
  • “Sympathy focuses on offering support from a distance,” a therapist explains on LinkedIn, whereas empathy “goes beyond sympathy by actively immersing oneself in another person’s emotions and attempting to comprehend their point of view.”
  • In use since the 16th century, when the Greek “syn-” (“with”) combined with pathos (experience, misfortune, emotion, condition) to mean “having common feelings,” sympathy preceded empathy by a good four centuries
  • Empathy (the “em” means “into”) barged in from the German in the 20th century and gained popularity through its usage in fields like philosophy, aesthetics and psychology. According to my benighted 1989 edition of Webster’s Unabridged, empathy was the more self-centered emotion, “the intellectual identification with or vicarious experiencing of the feelings, thoughts or attitudes of another.”
  • in more updated lexicons, it’s as if the two words had reversed. Sympathy now implies a hierarchy whereas empathy is the more egalitarian sentiment.
  • Sympathy, the session’s leader explained to school staff members, was seeing someone in a hole and saying, “Too bad you’re in a hole,” whereas empathy meant getting in the hole, too.
  • “Empathy is a choice and it’s a vulnerable choice because in order to connect with you, I have to connect with something in myself that knows that feeling,”
  • Still, it’s hard to square the new emphasis on empathy — you must feel what others feel — with another element of the current discourse. According to what’s known as “standpoint theory,” your view necessarily depends on your own experience: You can’t possibly know what others feel.
  • In short, no matter how much an empath you may be, unless you have actually been in someone’s place, with all its experiences and limitations, you cannot understand where that person is coming from. The object of your empathy may find it presumptuous of you to think that you “get it.”
  • Bloom asks us to imagine what empathy demands should a friend’s child drown. “A highly empathetic response would be to feel what your friend feels, to experience, as much as you can, the terrible sorrow and pain,” he writes. “In contrast, compassion involves concern and love for your friend, and the desire and motivation to help, but it need not involve mirroring your friend’s anguish.”
  • Bloom argues for a more rational, modulated, compassionate response. Something that sounds a little more like our old friend sympathy.
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Netanyahu's Dark Worldview - The Atlantic

  • as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.
  • “You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
  • The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.
  • The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines.
  • Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”
  • By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”
  • Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right.
  • This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress.
  • After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.
  • This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead
  • “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.”
  • To his many critics, myself included, Netanyahu’s refusal to envision a different future makes him a “creature of the bunker,” perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage.
  • In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.
Javier E

Hey, Elon Musk, Comedy Doesn't Want to Be Legal - The New York Times

  • while labeling something parody might be bad for comedy, it can be essential for credibility. If people can’t tell whether an article was satirical or not, that chips away at trust that is essential for a news organization. But what’s good for comedy isn’t necessarily best practices for journalism or social media.
  • Even today when the lines between comedy and politics often blur — years after the press marveled that young people trusted Jon Stewart’s “The Daily Show” more than the news media, which now seems like a much darker development than it did back then — the idea that free speech might involve some trade-offs seems obvious.
  • But maybe not to Musk, who appears as naïve about comedy as he does about the economics of social media.
  • The reality is that good comedy can’t be suppressed, particularly these days when gatekeepers have never had less power.
  • While he’s not especially good at comedy, Musk is a wonderful comic character: the boss who thinks he’s funny but isn’t. He’s Michael Scott from “The Office,” whose terrible jokes everyone must, if not laugh at, then at least put up with.
  • One reason Musk might think he’s hilarious is that every joke he makes gets a glowing response from his vast population of followers. Why? Comedy is subjective. But I bet a few just admire him and want his attention. This can be its own form of cringe humor and mocking it can really bring people together.
  • Musk doesn’t need to own his haters in a tweet. They already work for him for free. It’s entirely possible that we will look back on his tenure at Twitter and conclude that this was his only good joke. Judging by recent moves, he might screw that up, too.
Javier E

What Did Twitter Turn Us Into? - The Atlantic

  • The bedlam of Twitter, fused with the brevity of its form, offers an interpretation of the virtual town square as a bustling, modernist city.
  • It’s easy to get stuck in a feedback loop: That which appears on Twitter is current (if not always true), and what’s current is meaningful, and what’s meaningful demands contending with. And so, matters that matter little or not at all gain traction by virtue of the fact that they found enough initial friction to start moving.
  • The platform is optimized to make the nonevent of its own exaggerated demise seem significant.
  • the very existence of tweets about an event can make that event seem newsworthy—by virtue of having garnered tweets. This supposed newsworthiness can then result in literal news stories, written by journalists and based on inspiration or sourcing from tweets themselves, or it can entail the further spread of a tweet’s message by on-platform engagement, such as likes and quote tweets. Either way, the nature of Twitter is to assert the importance of tweets.
  • Tweets appear more meaningful when amplified, and when amplified they inspire more tweets in the same vein. A thing becomes “tweetworthy” when it spreads but then also justifies its value both on and beyond Twitter by virtue of having spread. This is the “famous for being famous” effect
  • This propensity is not unique to Twitter—all social media possesses it. But the frequency and quantity of posts on Twitter, along with their brevity, their focus on text, and their tendency to be vectors of news, official or not, make Twitter a particularly effective amplification house of mirrors
  • At least in theory. In practice, Twitter is more like an asylum, inmates screaming at everyone and no one in particular, histrionics displacing reason, posters posting at all costs because posting is all that is possible
  • Twitter shapes an epistemology for users under its thrall. What can be known, and how, becomes infected by what has, or can, be tweeted.
  • Producers of supposedly actual news see the world through tweet-colored glasses, by transforming tweets’ hypothetical status as news into published news—which produces more tweeting in turn.
  • For them, and others on this website, it has become an awful habit. Habits feel normal and even justified because they are familiar, not because they are righteous.
  • Twitter convinced us that it mattered, that it was the world’s news service, or a vector for hashtag activism, or a host for communities without voices, or a mouthpiece for the little gal or guy. It is those things, sometimes, for some of its users. But first, and mostly, it is a habit.
  • We never really tweeted to say something. We tweeted because Twitter offered a format for having something to say, over and over again. Just as the purpose of terrorism is terror, so the purpose of Twitter is tweeting.
Javier E

The New History Wars - The Atlantic

  • Critical historians who thought they were winning the fight for control within the academy now face dire retaliation from outside the academy. The dizzying turn from seeming triumph in 2020 to imminent threat in 2022 has unnerved many practitioners of the new history. Against this background, they did not welcome it when their association’s president suggested that maybe their opponents had a smidgen of a point.
  • a background reality of the humanities in the contemporary academy: a struggle over who is entitled to speak about what. Nowhere does this struggle rage more fiercely than in anything to do with the continent of Africa. Who should speak? What may be said? Who will be hired?
  • One obvious escape route from the generational divide in the academy—and the way the different approaches to history, presentist and antiquarian, tend to map onto it—is for some people, especially those on the older and whiter side of the divide, to keep their mouths shut about sensitive issues
  • The political and methodological stresses within the historical profession are intensified by economic troubles. For a long time, but especially since the economic crisis of 2008, university students have turned away from the humanities, preferring to major in fields that seem to offer more certain and lucrative employment. Consequently, academic jobs in the humanities and especially in history have become radically more precarious for younger faculty—even as universities have sought to meet diversity goals in their next-generation hiring by expanding offerings in history-adjacent specialties, such as gender and ethnic studies.
  • The result has produced a generational divide. Younger scholars feel oppressed and exploited by universities pressing them to do more labor for worse pay with less security than their elders; older scholars feel that overeager juniors are poised to pounce on the least infraction as an occasion to end an elder’s career and seize a job opening for themselves. Add racial difference as an accelerant, and what was intended as an interesting methodological discussion in a faculty newsletter can explode into a national culture war.
  • One of the greatest American Africanists was the late Philip Curtin. He wrote one of the first attempts to tally the exact number of persons trafficked by the transatlantic slave trade. Upon publication in 1972, his book was acclaimed as a truly pioneering work of history. By 1995, however, he was moved to protest against trends in the discipline at that time in an article in the Chronicle of Higher Education: “I am troubled by increasing evidence of the use of racial criteria in filling faculty posts in the field of African history … This form of intellectual apartheid has been around for several decades, but it appears to have become much more serious in the past few years, to the extent that white scholars trained in African history now have a hard time finding jobs.”
  • Much of academia is governed these days by a joke from the Soviet Union: “If you think it, don’t speak it. If you speak it, don’t write it. If you write it, don’t sign it. But if you do think it, speak it, write it, and sign it—don’t be surprised.”
  • Yet this silence has consequences, too. One of the most unsettling is the displacement of history by mythmaking
  • mythmaking is spreading from “just the movies” to more formal and institutional forms of public memory. If old heroes “must fall,” their disappearance opens voids for new heroes to be inserted in their place—and that insertion sometimes requires that new history be fabricated altogether, the “bad history” that Sweet tried to warn against.
  • If it is not the job of the president of the American Historical Association to confront those questions, then whose is it?
  • Sweet used a play on words—“Is History History?”—for the title of his complacency-shaking essay. But he was asking not whether history is finished, done with, but Is history still history? Is it continuing to do what history is supposed to do? Or is it being annexed for other purposes, ideological rather than historical ones?
  • Advocates of studying the more distant past to disturb and challenge our ideas about the present may accuse their academic rivals of “presentism.”
  • In real life, of course, almost everybody who cares about history believes in a little of each option. But how much of each? What’s the right balance? That’s the kind of thing that historians do argue about, and in the arguing, they have developed some dismissive labels for one another
  • Those who look to the more recent past to guide the future may accuse the other camp of “antiquarianism.”
  • The accusation of presentism hurts because it implies that the historian is sacrificing scholarly objectivity for ideological or political purposes. The accusation of antiquarianism stings because it implies that the historian is burrowing into the dust for no useful purpose at all.
  • In his mind, he was merely reopening one of the most familiar debates in professional history: the debate over why? What is the value of studying the past? To reduce the many available answers to a stark choice: Should we study the more distant past to explore its strangeness—and thereby jolt ourselves out of easy assumptions that the world we know is the only possible one?
  • Or should we study the more recent past to understand how our world came into being—and thereby learn some lessons for shaping the future?
  • The August edition of the association’s monthly magazine featured, as usual, a short essay by the association’s president, James H. Sweet, a professor at the University of Wisconsin at Madison. Within hours of its publication, an outrage volcano erupted on social media. A professor at Cornell vented about the author’s “white gaze.”
Javier E

Opinion | The Alt-Right Manipulated My Comic. Then A.I. Claimed It. - The New York Times

  • Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then used its nonprofit research arm to create A.I. generators first via Stable Diffusion and then commercialized in a new model called DreamStudio.
  • What makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication.
  • Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood.
  • Greg Rutkowski, a hugely popular concept artist, has been used in a prompt for Stable Diffusion upward of 100,000 times. Now, his name is no longer attached to just his own work, but it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourself that looked similar but that was uncanny, twisted in a way to which we didn’t consent.
  • In theory, everyone is at risk for their work or image to become a vulgarity with A.I., but I suspect those who will be the most hurt are those who are already facing the consequences of improving technology, namely members of marginalized groups.
  • In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
  • I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable; the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape. I see a monster forming.
Javier E

Nobel Prize in Physics Is Awarded to 3 Scientists for Work Exploring Quantum Weirdness ...

  • “We’re used to thinking that information about an object — say that a glass is half full — is somehow contained within the object.” Instead, he says, entanglement means objects “only exist in relation to other objects, and moreover these relationships are encoded in a wave function that stands outside the tangible physical universe.”
  • Einstein, though one of the founders of quantum theory, rejected it, saying famously that God did not play dice with the universe. In a 1935 paper written with Boris Podolsky and Nathan Rosen, he tried to demolish quantum mechanics as an incomplete theory by pointing out that by quantum rules, measuring a particle in one place could instantly affect measurements of the other particle, even if it was millions of miles away.
  • Dr. Clauser, who has a knack for electronics and experimentation and misgivings about quantum theory, was the first to perform Bell’s proposed experiment. He happened upon Dr. Bell’s paper while a graduate student at Columbia University and recognized it as something he could do.
  • In 1972, using duct tape and spare parts in the basement on the campus of the University of California, Berkeley, Dr. Clauser and a graduate student, Stuart Freedman, who died in 2012, endeavored to perform Bell’s experiment to measure quantum entanglement. In a series of experiments, he fired thousands of light particles, or photons, in opposite directions to measure a property known as polarization, which could have only two values — up or down. The result for each detector was always a series of seemingly random ups and downs. But when the two detectors’ results were compared, the ups and downs matched in ways that neither “classical physics” nor Einstein’s laws could explain. Something weird was afoot in the universe. Entanglement seemed to be real.
  • in 2002, Dr. Clauser admitted that he himself had expected quantum mechanics to be wrong and Einstein to be right. “Obviously, we got the ‘wrong’ result. I had no choice but to report what we saw, you know, ‘Here’s the result.’ But it contradicts what I believed in my gut has to be true.” He added, “I hoped we would overthrow quantum mechanics. Everyone else thought, ‘John, you’re totally nuts.’”
  • the correlations only showed up after the measurements of the individual particles, when the physicists compared their results after the fact. Entanglement seemed real, but it could not be used to communicate information faster than the speed of light.
  • In 1982, Dr. Aspect and his team at the University of Paris tried to outfox Dr. Clauser’s loophole by switching the direction along which the photons’ polarizations were measured every 10 nanoseconds, while the photons were already in the air and moving too fast to communicate with each other. He, too, was expecting Einstein to be right.
  • Quantum predictions held true, but there were still more possible loopholes in the Bell experiment that Dr. Clauser had identified
  • For example, the polarization directions in Dr. Aspect’s experiment had been changed in a regular and thus theoretically predictable fashion that could be sensed by the photons or detectors.
  • Anton Zeilinger
  • added even more randomness to the Bell experiment, using random number generators to change the direction of the polarization measurements while the entangled particles were in flight.
  • Once again, quantum mechanics beat Einstein by an overwhelming margin, closing the “locality” loophole.
  • As scientists have done more experiments with entangled particles, entanglement has become accepted as one of the main features of quantum mechanics and is being put to work in cryptology, quantum computing and an upcoming “quantum internet.”
  • One of its first successes in cryptology is using entangled pairs to distribute cryptographic keys securely — any eavesdropping will destroy the entanglement, alerting the receiver that something is wrong.
  • But with quantum mechanics, just because we can use it doesn’t mean our ape brains understand it. The pioneering quantum physicist Niels Bohr once said that anyone who didn’t think quantum mechanics was outrageous hadn’t understood what was being said.
  • In his interview with A.I.P., Dr. Clauser said, “I confess even to this day that I still don’t understand quantum mechanics, and I’m not even sure I really know how to use it all that well. And a lot of this has to do with the fact that I still don’t understand it.”
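The mismatch the highlights above describe can be sketched numerically. This is a minimal illustration, not a model of the Clauser or Aspect apparatus: quantum mechanics predicts a correlation E(a, b) = cos(2(a - b)) between polarization measurements at angles a and b, and the standard CHSH combination of four angle pairs pushes the total to 2√2 ≈ 2.83, above the bound of 2 that any local "classical" explanation can reach.

```python
import math

# Quantum-mechanical correlation between polarization measurements
# at angles a and b (radians) on an entangled photon pair.
def E(a, b):
    return math.cos(2 * (a - b))

# Standard CHSH measurement angles: 0° and 45° for one detector,
# 22.5° and 67.5° for the other.
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ≈ 2.828: above 2, the most any local "classical" theory allows
```

Each of the four terms comes out to about 0.707, so S reaches 2√2 — the kind of excess correlation the Clauser and Aspect experiments actually measured.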
Javier E

What Is Mastodon and Why Are People Leaving Twitter for It? - The New York Times - 0 views

  • Mastodon is a part of the Fediverse, or federated universe, a group of federated platforms that share communication protocols.
  • Unlike Twitter, Mastodon presents posts in chronological order, rather than based on an algorithm.
  • It also has no ads; Mastodon is largely crowdfunded.
  • ...7 more annotations...
  • Most servers are funded by the people who use them.
  • The servers that Mastodon oversees — Mastodon Social and Mastodon Online — are funded through Patreon, a membership and subscription service platform often used by content creators.
  • Although Mastodon visually resembles Twitter, its user experience is more akin to that of Discord, a talking and texting app where people also join servers that have their own cultures and rules.
  • Unlike Twitter and Discord, Mastodon does not have the ability to make its users, or the people who create servers, do anything.
  • But servers can dictate how they interact with one another — or whether they interact at all in a shared stream of posts. For example, when Gab used Mastodon’s code, Mastodon Social and other independent servers blocked Gab’s server, so posts from Gab did not appear on the feeds of people using those servers.
  • Like an email account, your username includes the name of the server itself. For example, a possible username on Mastodon Social would be janedoe@mastodon.social. Regardless of which server you sign up with, you can interact with people who use other Mastodon servers, or you can switch to another one
  • Once you sign up for an account, you can post “toots,” which are Mastodon’s version of tweets. You can also boost other people’s toots, the equivalent of a retweet.
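The email-like handle format described above ("janedoe@mastodon.social") is easy to see in code. A minimal sketch; the `parse_handle` helper is hypothetical, for illustration only, not part of Mastodon's API:

```python
def parse_handle(handle: str):
    """Split a federated handle like 'janedoe@mastodon.social'
    into its (user, server) parts, tolerating a leading '@'."""
    user, _, server = handle.lstrip("@").partition("@")
    if not user or not server:
        raise ValueError(f"not a federated handle: {handle!r}")
    return user, server

print(parse_handle("janedoe@mastodon.social"))  # ('janedoe', 'mastodon.social')
```

Because the server is part of the identity, two people named "janedoe" on different servers are distinct accounts — the same way janedoe@gmail.com and janedoe@outlook.com are different email addresses.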
Javier E

A Commencement Address Too Honest to Deliver in Person - The Atlantic - 0 views

  • Use this hiatus to do something you would never have done if this emergency hadn’t hit. When the lockdown lifts, move to another state or country. Take some job that never would have made sense if you were worrying about building a career—bartender, handyman, AmeriCorps volunteer.
  • If you use the next two years as a random hiatus, you may not wind up richer, but you’ll wind up more interesting.
  • The biggest way most colleges fail is this: They don’t plant the intellectual and moral seeds students are going to need later, when they get hit by the vicissitudes of life.
  • ...13 more annotations...
  • If you didn’t study Jane Austen while you were here, you probably lack the capacity to think clearly about making a marriage decision. If you didn’t read George Eliot, then you missed a master class on how to judge people’s character. If you didn’t read Nietzsche, you are probably unprepared to handle the complexities of atheism—and if you didn’t read Augustine and Kierkegaard, you’re probably unprepared to handle the complexities of faith.
  • The list goes on. If you didn’t read de Tocqueville, you probably don’t understand your own country. If you didn’t study Gibbon, you probably lack the vocabulary to describe the rise and fall of cultures and nations.
  • The wisdom of the ages is your inheritance; it can make your life easier. These resources often fail to get shared because universities are too careerist, or because faculty members are more interested in their academic specialties or politics than in teaching undergraduates, or because of a host of other reasons.
  • What are you putting into your mind? Our culture spends a lot less time worrying about this, and when it does, it goes about it all wrong.
  • my worry is that, especially now that you’re out of college, you won’t put enough really excellent stuff into your brain.
  • I worry that it’s possible to grow up now not even aware that those upper registers of human feeling and thought exist.
  • The theory of maximum taste says that each person’s mind is defined by its upper limit—the best that it habitually consumes and is capable of consuming.
  • After college, most of us resolve to keep doing this kind of thing, but we’re busy and our brains are tired at the end of the day. Months and years go by. We get caught up in stuff, settle for consuming Twitter and, frankly, journalism. Our maximum taste shrinks.
  • I’m worried about the future of your maximum taste. People in my and earlier generations, at least those lucky enough to get a college education, got some exposure to the classics, which lit a fire that gets rekindled every time we sit down to read something really excellent.
  • the “theory of maximum taste.” This theory is based on the idea that exposure to genius has the power to expand your consciousness. If you spend a lot of time with genius, your mind will end up bigger and broader than if you spend your time only with run-of-the-mill stuff.
  • the whole culture is eroding the skill the UCLA scholar Maryanne Wolf calls “deep literacy,” the ability to deeply engage in a dialectical way with a text or piece of philosophy, literature, or art.
  • Or as the neurologist Richard Cytowic put it to Adam Garfinkle, “To the extent that you cannot perceive the world in its fullness, to the same extent you will fall back into mindless, repetitive, self-reinforcing behavior, unable to escape.”
  • I can’t say that to you, because it sounds fussy and elitist and OK Boomer. And if you were in front of me, you’d roll your eyes.
Javier E

Opinion | Cloning Scientist Hwang Woo-suk Gets a Second Chance. Should He? - The New Yo... - 0 views

  • The Hwang Woo-suk saga is illustrative of the serious deficiencies in the self-regulation of science. His fraud was uncovered because of brave Korean television reporters. Even those efforts might not have been enough, had Dr. Hwang’s team not been so sloppy in its fraud. The team’s papers included fabricated data and pairs of images that on close comparison clearly indicated duplicity.
  • Yet as a cautionary tale about the price of fraud, it is, unfortunately, a mixed bag. He lost his academic standing, and he was convicted of bioethical violations and embezzlement, but he never ended up serving jail time
  • Although his efforts at cloning human embryos ended in failure and fraud, they provided him the opportunities and resources he needed to take on projects, such as dog cloning, that were beyond the reach of other labs. The fame he earned in academia proved an asset in a business world where there’s no such thing as bad press.
  • ...3 more annotations...
  • It is comforting to think that scientific truth inevitably emerges and scientific frauds will be caught and punished.
  • Dr. Hwang’s scandal suggests something different. Researchers don’t always have the resources or motivation to replicate others’ experiments
  • Even if they try to replicate and fail, it is the institution where the scientist works that has the right and responsibility to investigate possible fraud. Research institutes and universities, facing the prospect of an embarrassing scandal, might not do so.
Javier E

Opinion | Transgender biology debates should focus on the brain, not the body - The Was... - 0 views

  • What, then, is a biological male, or female? What determines this supposedly simple truth? It’s about chromosomes, right?
  • The study found that adolescent boys and girls who described themselves as trans responded like the peers of their perceived gender.
  • It may be that what’s in your pants is less important than what’s between your ears.
  • ...11 more annotations...
  • What the research has found is that the brains of trans people are unique: neither female nor male, exactly, but something distinct.
  • But what does that mean, a male brain, or a female brain, or even a transgender one?
  • It’s a fraught topic, because brains are a collection of characteristics, rather than a binary classification of either/or
  • Yet scientists continue to study the brain in hopes of understanding whether a sense of the gendered self can, at least in part, be the result of neurology.
  • Well, not entirely. Because not every person with a Y chromosome is male, and not every person with a double X is female. The world is full of people with other combinations: XXY (or Klinefelter Syndrome), XXX (or Trisomy X), XXXY, and so on. There’s even something called Androgen Insensitivity Syndrome, a condition that keeps the brains of people with a Y from absorbing the information in that chromosome. Most of these people develop as female, and may not even know about their condition until puberty — or even later.
  • But there’s a problem with using neurology as an argument for trans acceptance — it suggests that, on some level, there is something wrong with transgender people, that we are who we are as a result of a sickness or a biological hiccup.
  • trying to open people’s hearts by saying “Check out my brain!” can do more harm than good, because this line of argument delegitimizes the experiences of many trans folks. It suggests that there’s only one way to be trans — to feel trapped in the wrong body, to go through transition, and to wind up, when all is said and done, on the opposite-gender pole. It suggests that the quest trans people go on can only be considered successful if it ends with fitting into the very society that rejected us in the first place.
  • All the science tells us, in the end, is that a biological male — or female — is not any one thing, but a collection of possibilities.
  • No one who embarks upon a life as a trans person in this country is doing so out of caprice, or a whim, or a delusion. We are living these wondrous and perilous lives for one reason only — because our hearts demand it.
  • what we need now is not new legislation to make things harder. What we need now is understanding, not cruelty. What we need now is not hatred, but love.
  • The important thing is not that they feel like a woman, or a man, or something else. What matters most is the plaintive desire to be free to feel the way I feel.
Javier E

Where We Went Wrong | Harvard Magazine - 0 views

  • John Kenneth Galbraith assessed the trajectory of America’s increasingly “affluent society.” His outlook was not a happy one. The nation’s increasingly evident material prosperity was not making its citizens any more satisfied. Nor, at least in its existing form, was it likely to do so.
  • One reason, Galbraith argued, was the glaring imbalance between the opulence in consumption of private goods and the poverty, often squalor, of public services like schools and parks.
  • Another was that even the bountifully supplied private goods often satisfied no genuine need, or even desire; a vast advertising apparatus generated artificial demand for them, and satisfying this demand failed to provide meaningful or lasting satisfaction.
  • ...28 more annotations...
  • economist J. Bradford DeLong ’82, Ph.D. ’87, looking back on the twentieth century two decades after its end, comes to a similar conclusion but on different grounds.
  • DeLong, professor of economics at Berkeley, looks to matters of “contingency” and “choice”: at key junctures the economy suffered “bad luck,” and the actions taken by the responsible policymakers were “incompetent.”
  • these were “the most consequential years of all humanity’s centuries.” The changes they saw, while in the first instance economic, also “shaped and transformed nearly everything sociological, political, and cultural.”
  • DeLong’s look back over the twentieth century energetically encompasses political and social trends as well; nor is his scope limited to the United States. The result is a work of strikingly expansive breadth and scope
  • labeling the book an economic history fails to convey its sweeping frame.
  • The century that is DeLong’s focus is what he calls the “long twentieth century,” running from just after the Civil War to the end of the 2000s when a series of events, including the biggest financial crisis since the 1930s followed by likewise the most severe business downturn, finally rendered the advanced Western economies “unable to resume economic growth at anything near the average pace that had been the rule since 1870.
  • And behind those missteps in policy stood not just failures of economic thinking but a voting public that reacted perversely, even if understandably, to the frustrations poor economic outcomes had brought them.
  • Within this 140-year span, DeLong identifies two eras of “El Dorado” economic growth, each facilitated by expanding globalization, and each driven by rapid advances in technology and changes in business organization for applying technology to economic ends
  • from 1870 to World War I, and again from World War II to 197
  • fellow economist Robert J. Gordon ’62, who in his monumental treatise on The Rise and Fall of American Economic Growth (reviewed in “How America Grew,” May-June 2016, page 68) hailed 1870-1970 as a “special century” in this regard (interrupted midway by the disaster of the 1930s).
  • Gordon highlighted the role of a cluster of once-for-all-time technological advances—the steam engine, railroads, electrification, the internal combustion engine, radio and television, powered flight
  • Pessimistic that future technological advances (most obviously, the computer and electronics revolutions) will generate productivity gains to match those of the special century, Gordon therefore saw little prospect of a return to the rapid growth of those halcyon days.
  • DeLong instead points to a series of noneconomic (and non-technological) events that slowed growth, followed by a perverse turn in economic policy triggered in part by public frustration: In 1973 the OPEC cartel tripled the price of oil, and then quadrupled it yet again six years later.
  • For all too many Americans (and citizens of other countries too), the combination of high inflation and sluggish growth meant that “social democracy was no longer delivering the rapid progress toward utopia that it had delivered in the first post-World War II generation.”
  • Frustration over these and other ills in turn spawned what DeLong calls the “neoliberal turn” in public attitudes and economic policy. The new economic policies introduced under this rubric “did not end the slowdown in productivity growth but reinforced it.
  • the tax and regulatory changes enacted in this new climate channeled most of what economic gains there were to people already at the top of the income scale
  • Meanwhile, progressive “inclusion” of women and African Americans in the economy (and in American society more broadly) meant that middle- and lower-income white men saw even smaller gains—and, perversely, reacted by providing still greater support for policies like tax cuts for those with far higher incomes than their own.
  • Daniel Bell’s argument in his 1976 classic The Cultural Contradictions of Capitalism. Bell famously suggested that the very success of a capitalist economy would eventually undermine a society’s commitment to the values and institutions that made capitalism possible in the first place.
  • In DeLong’s view, the “greatest cause” of the neoliberal turn was “the extraordinary pace of rising prosperity during the Thirty Glorious Years, which raised the bar that a political-economic order had to surpass in order to generate broad acceptance.” At the same time, “the fading memory of the Great Depression led to the fading of the belief, or rather recognition, by the middle class that they, as well as the working class, needed social insurance.”
  • what the economy delivered to “hard-working white men” no longer matched what they saw as their just deserts: in their eyes, “the rich got richer, the unworthy and minority poor got handouts.”
  • As Bell would have put it, the politics of entitlement, bred by years of economic success that so many people had come to take for granted, squeezed out the politics of opportunity and ambition, giving rise to the politics of resentment.
  • The new era therefore became “a time to question the bourgeois virtues of hard, regular work and thrift in pursuit of material abundance.”
  • DeLong’s unspoken agenda would surely include rolling back many of the changes made in the U.S. tax code over the past half-century, as well as reinvigorating antitrust policy to blunt the dominance, and therefore outsize profits, of the mega-firms that now tower over key sectors of the economy
  • He would also surely reverse the recent trend moving away from free trade. Central bankers should certainly behave like Paul Volcker (appointed by President Carter), whose decisive action finally broke the 1970s inflation even at considerable economic cost.
  • Not only Galbraith’s main themes but many of his more specific observations as well seem as pertinent, and important, today as they did then.
  • What will future readers of Slouching Towards Utopia conclude?
  • If anything, DeLong’s narratives will become more valuable as those events fade into the past. Alas, his description of fascism as having at its center “a contempt for limits, especially those implied by reason-based arguments; a belief that reality could be altered by the will; and an exaltation of the violent assertion of that will as the ultimate argument” will likely strike a nerve with many Americans not just today but in years to come.
  • what about DeLong’s core explanation of what went wrong in the latter third of his, and our, “long century”? I predict that it too will still look right, and important.
Javier E

Some on the Left Turn Against the Label 'Progressive' - The New York Times - 0 views

  • Christopher Lasch, the historian and social critic, posed a banger of a question in his 1991 book, “The True and Only Heaven: Progress and Its Critics.”
  • “How does it happen,” Lasch asked, “that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all?”
  • A review in The New York Times Book Review by William Julius Wilson, a professor at Harvard, was titled: “Where Has Progress Got Us?”
  • ...17 more annotations...
  • Essentially, Lasch was attacking the notion, fashionable as Americans basked in their seeming victory over the Soviet Union in the Cold War, that history had a direction — and that one would be wise to stay on the “right side” of it.
  • Francis Fukuyama expressed a version of this triumphalist idea in his famous 1992 book, “The End of History and the Last Man,” in which he celebrated the notion that History with a capital “H,” in the sense of a battle between competing ideas, was ending with communism left to smolder on Ronald Reagan’s famous ash heap.
  • One of Martin Luther King Jr.’s most frequently quoted lines speaks to a similar thought, albeit in a different context: “The arc of the moral universe is long, but it bends toward justice.” Though he had read Lasch, Obama quoted that line often, just as he liked to say that so-and-so would be “on the wrong side of history” if they didn’t live up to his ideals — whether the issue was same-sex marriage, health policy or the Russian occupation of Crimea.
  • The memo goes on to list two sets of words: “Optimistic Positive Governing Words” and “Contrasting Words,” which carried negative connotations. One of the latter group was the word “liberal,” sandwiched between “intolerant” and “lie.”
  • So what’s the difference between a progressive and a liberal? To vastly oversimplify matters, liberal usually refers to someone on the center-left on a two-dimensional political spectrum, while progressive refers to someone further left.
  • But “liberal” has taken a beating in recent decades — from both left and right.
  • In the late 1980s and 1990s, Republicans successfully demonized the word “liberal,” to the point where many Democrats shied away from it in favor of labels like “conservative Democrat” or, more recently, “progressive.”
  • None of this was an accident. In 1990, Representative Newt Gingrich of Georgia circulated a now-famous memo called “Language: A Key Mechanism of Control.”
  • “Is the story of the 20th century about the defeat of the Soviet Union, or was it about two world wars and a Holocaust?” asked Matthew Sitman, the co-host of the “Know Your Enemy” podcast, which recently hosted a discussion on Lasch and the fascination many conservatives have with his ideas. “It really depends on how you look at it.”
  • The authors urged their readers: “The words and phrases are powerful. Read them. Memorize as many as possible.”
  • Republicans subsequently had a great deal of success in associating the term “liberal” with other words and phrases voters found uncongenial: wasteful spending, high rates of taxation and libertinism that repelled socially conservative voters.
  • Many on the left began identifying themselves as “progressive” — which had the added benefit of harking back to movements of the late 19th and early 20th centuries that fought against corruption, opposed corporate monopolies, pushed for good-government reforms and food safety and labor laws and established women’s right to vote.
  • Allies of Bill Clinton founded the Progressive Policy Institute, a think tank associated with so-called Blue Dog Democrats from the South.
  • Now, scrambling the terminology, groups like the Progressive Change Campaign Committee agitate on behalf of proudly left-wing candidates
  • In 2014, Charles Murray, the polarizing conservative scholar, urged readers of The Wall Street Journal’s staunchly right-wing editorial section to “start using ‘liberal’ to designate the good guys on the left, reserving ‘progressive’ for those who are enthusiastic about an unrestrained regulatory state.”
  • As Sanders and acolytes like Representative Alexandria Ocasio-Cortez of New York have gained prominence over the last few election cycles, many on the left-wing end of the spectrum have begun proudly applying other labels to themselves, such as “democratic socialist.”
  • To little avail so far, Kazin, the Georgetown historian, has been urging them to call themselves “social democrats” instead — as many mainstream parties do in Europe.“It’s not a good way to win elections in this country, to call yourself a socialist,” he said.
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • ...4 more annotations...
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you.
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • , “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
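The batting-average puzzle clipped above is a sampling-variance effect, and a short simulation makes it concrete. This is an illustrative sketch: the .270 “true” hitting probability, the player count, and the at-bat counts are our assumptions, not figures from the article.

```python
import random

random.seed(0)
TRUE_AVG = 0.27   # assumed league-typical true hitting probability
PLAYERS = 5_000   # simulated players

def frac_above(n_at_bats, threshold=0.45):
    """Fraction of simulated players whose observed batting average
    reaches the threshold after n_at_bats trials."""
    count = 0
    for _ in range(PLAYERS):
        hits = sum(random.random() < TRUE_AVG for _ in range(n_at_bats))
        if hits / n_at_bats >= threshold:
            count += 1
    return count / PLAYERS

early = frac_above(20)    # early season: small sample, big swings
full = frac_above(600)    # full season: large sample, averages settle
print(f"share batting .450+ after  20 at-bats: {early:.3%}")
print(f"share batting .450+ after 600 at-bats: {full:.3%}")
```

With only 20 at-bats, a few percent of ordinary .270 hitters land at .450 or better by luck alone; over 600 at-bats the share is effectively zero, which is the law of large numbers the Michigan students were being asked to invoke.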
Javier E

Opinion | Your Angry Uncle Wants to Talk About Politics. What Do You Do? - The New York... - 0 views

  • In our combined years of experience helping people talk about difficult political issues from abortion to guns to race, we’ve found most can converse productively without sacrificing their beliefs or spoiling dinner
  • It’s not merely possible to preserve your relationships while talking with folks you disagree with, but engaging respectfully will actually make you a more powerful advocate for the causes you care about.
  • The key to persuasive political dialogue is creating a safe and welcoming space for diverse views with a compassionate spirit, active listening and personal storytelling
  • ...4 more annotations...
  • Select your reply I’m more liberal, so I’ll chat with Conservative Uncle Bot. I’m more conservative, so I’ll chat with Liberal Uncle Bot.
  • Hey, it’s the Angry Uncle Bot. I have LOTS of opinions. But what kind of Uncle Bot do you want to chat with?
  • To help you cook up a holiday impeachment conversation your whole family and country will appreciate, here’s the Angry Uncle Bot for practice.
  • As Americans gather for our annual Thanksgiving feast, many are sharpening their rhetorical knives while others are preparing to bury their heads in the mashed potatoes.
Javier E

An Existential Problem in the Search for Alien Life - The Atlantic - 0 views

  • The fact is, we still don’t know what life is.
  • since the days of Aristotle, scientists and philosophers have struggled to draw a precise line between what is living and what is not, often returning to criteria such as self-organization, metabolism, and reproduction but never finding a definition that includes, and excludes, all the right things.
  • If you say life consumes fuel to sustain itself with energy, you risk including fire; if you demand the ability to reproduce, you exclude mules. NASA hasn’t been able to do better than a working definition: “Life is a self-sustaining chemical system capable of Darwinian evolution.”
  • ...20 more annotations...
  • it lacks practical application. If humans found something on another planet that seemed to be alive, how much time would we have to sit around and wait for it to evolve?
  • The only life we know is life on Earth. Some scientists call this the n=1 problem, where n is the number of examples from which we can generalize.
  • Cronin studies the origin of life, also a major interest of Walker’s, and it turned out that, when expressed in math, their ideas were essentially the same. They had both zeroed in on complexity as a hallmark of life. Cronin is devising a way to systematize and measure complexity, which he calls Assembly Theory.
  • What we really want is more than a definition of life. We want to know what life, fundamentally, is. For that kind of understanding, scientists turn to theories. A theory is a scientific fundamental. It not only answers questions, but frames them, opening new lines of inquiry. It explains our observations and yields predictions for future experiments to test.
  • Consider the difference between defining gravity as “the force that makes an apple fall to the ground” and explaining it, as Newton did, as the universal attraction between all particles in the universe, proportional to the product of their masses and so on. A definition tells us what we already know; a theory changes how we understand things.
  • the potential rewards of unlocking a theory of life have captivated a clutch of researchers from a diverse set of disciplines. “There are certain things in life that seem very hard to explain,” Sara Imari Walker, a physicist at Arizona State University who has been at the vanguard of this work, told me. “If you scratch under the surface, I think there is some structure that suggests formalization and mathematical laws.”
  • Walker doesn’t think about life as a biologist—or an astrobiologist—does. When she talks about signs of life, she doesn’t talk about carbon, or water, or RNA, or phosphine. She reaches for different examples: a cup, a cellphone, a chair. These objects are not alive, of course, but they’re clearly products of life. In Walker’s view, this is because of their complexity. Life brings complexity into the universe, she says, in its own being and in its products, because it has memory: in DNA, in repeating molecular reactions, in the instructions for making a chair.
  • He measures the complexity of an object—say, a molecule—by calculating the number of steps necessary to put the object’s smallest building blocks together in that certain way. His lab has found, for example, when testing a wide range of molecules, that those with an “assembly number” above 15 were exclusively the products of life. Life makes some simpler molecules, too, but only life seems to make molecules that are so complex.
  • I reach for the theory of gravity as a familiar parallel. Someone might ask, “Okay, so in terms of gravity, where are we in terms of our understanding of life? Like, Newton?” Further back, further back, I say. Walker compares us to pre-Copernican astronomers, reliant on epicycles, little orbits within orbits, to make sense of the motion we observe in the sky. Cleland has put it in terms of chemistry, in which case we’re alchemists, not even true chemists yet
  • Walker’s whole notion is that it’s not only theoretically possible but genuinely achievable to identify something smaller—much smaller—that still nonetheless simply must be the result of life. The model would, in a sense, function like biosignatures as an indication of life that could be searched for. But it would drastically improve and expand the targets.
  • Walker would use the theory to predict what life on a given planet might look like. It would require knowing a lot about the planet—information we might have about Venus, but not yet about a distant exoplanet—but, crucially, would not depend at all on how life on Earth works, what life on Earth might do with those materials.
  • Without the ability to divorce the search for alien life from the example of life we know, Walker thinks, a search is almost pointless. “Any small fluctuations in simple chemistry can actually drive you down really radically different evolutionary pathways,” she told me. “I can’t imagine [life] inventing the same biochemistry on two worlds.”
  • Walker’s approach is grounded in the work of, among others, the philosopher of science Carol Cleland, who wrote The Quest for a Universal Theory of Life.
  • she warns that any theory of life, just like a definition, cannot be constrained by the one example of life we currently know. “It’s a mistake to start theorizing on the basis of a single example, even if you’re trying hard not to be Earth-centric. Because you’re going to be Earth-centric,” Cleland told me. In other words, until we find other examples of life, we won’t have enough data from which to devise a theory. Abstracting away from Earthliness isn’t a way to be agnostic, Cleland argues. It’s a way to be too abstract.
  • Cleland calls for a more flexible search guided by what she calls “tentative criteria.” Such a search would have a sense of what we’re looking for, but also be open to anomalies that challenge our preconceptions, detections that aren’t life as we expected but aren’t familiar not-life either—neither a flower nor a rock
  • it speaks to the hope that exploration and discovery might truly expand our understanding of the cosmos and our own world.
  • The astrobiologist Kimberley Warren-Rhodes studies life on Earth that lives at the borders of known habitability, such as in Chile’s Atacama Desert. The point of her experiments is to better understand how life might persist—and how it might be found—on Mars. “Biology follows some rules,” she told me. The more of those rules you observe, the better sense you have of where to look on other worlds.
  • In this light, the most immediate concern in our search for extraterrestrial life might be less that we only know about life on Earth, and more that we don’t even know that much about life on Earth in the first place. “I would say we understand about 5 percent,” Warren-Rhodes estimates of our cumulative knowledge. N=1 is a problem, and we might be at more like n=.05.
  • who knows how strange life on another world might be? What if life as we know it is the wrong life to be looking for?
  • We understand so little, and we think we’re ready to find other life?
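Cronin’s “assembly number” clipped above counts the minimal number of joining steps needed to build an object from basic parts when every intermediate product can be reused. A toy brute-force sketch of that idea on strings (this is our illustration of the counting principle, not Cronin’s published algorithm for molecules; the function name and pruning strategy are ours):

```python
from itertools import product

def assembly_index(target):
    """Minimal number of join operations to build `target` from its single
    characters, where every intermediate product can be reused in later
    joins. Breadth-first search -- only feasible for short strings."""
    basics = set(target)              # single characters are free
    if target in basics:
        return 0
    frontier = [frozenset()]          # sets of intermediate products built so far
    seen = {frozenset()}
    depth = 0
    while frontier:
        depth += 1
        nxt = []
        for built in frontier:
            available = basics | built
            for a, b in product(available, repeat=2):
                joined = a + b
                if joined not in target:      # prune: keep only substrings
                    continue
                if joined == target:
                    return depth
                state = frozenset(built | {joined})
                if state not in seen:
                    seen.add(state)
                    nxt.append(state)
        frontier = nxt
    return None

# Reuse lowers the count: "abcabc" needs only 3 joins (ab, abc, abc+abc),
# while "abcd", with nothing to reuse, needs one join per extra character.
print(assembly_index("abcabc"), assembly_index("abcd"), assembly_index("aaaa"))
```

The point of the sketch is the one the article makes: objects with internal repetition assemble in fewer steps than their size suggests, and objects whose minimal step count is nonetheless high (above roughly 15, in Cronin’s molecular measurements) are, in his lab’s data, exclusively products of life.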