Home/ TOK Friends/ Group items tagged advancements


Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • ...5 more annotations...
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
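The annotation about a “fairness algorithm” can be made concrete: standard formal definitions of fairness can genuinely conflict, which is why the choice between them is a human judgment, not a calculation. The sketch below is illustrative only (hypothetical predictions; demographic parity and equalized odds are standard criteria from the fairness literature, not anything proposed in the article):

```python
# Two standard formalizations of "fairness" for a binary classifier,
# evaluated on hypothetical (prediction, outcome) pairs for two groups.
group_a = [(1, 1), (1, 0), (0, 0), (0, 0)]
group_b = [(1, 0), (1, 0), (0, 1), (0, 1)]

def positive_rate(pairs):
    # Demographic parity compares P(prediction = 1) across groups.
    return sum(yhat for yhat, _ in pairs) / len(pairs)

def true_positive_rate(pairs):
    # Equalized odds (in part) compares P(prediction = 1 | outcome = 1).
    hits = [yhat for yhat, y in pairs if y == 1]
    return sum(hits) / len(hits)

# Demographic parity holds: both groups get positives at rate 0.5.
# Equalized odds fails: TPR is 1.0 for group A and 0.0 for group B.
```

With this made-up data, both groups receive positive predictions at the same rate, yet the classifier finds every true positive in group A and none in group B, so two reasonable formalizations of “fairness” disagree about the very same model.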
peterconnelly

Data Sharing Knowledge Gaps Widespread Among Patients - 0 views

  • The Health Care Data Sharing Survey, commissioned and published by Chicago-based clinical data management company Q-Centrix, was conducted in December 2021 with a sample size of 1,191 people.
  • Fifty-three percent of respondents were female, and 47 percent were male. Respondents fell into four age groups: 18-29 (21 percent), 30-44 (27 percent), 45-60 (29 percent), and over 60 (23 percent). Respondents were also split based on household income: $49,999 or less (41 percent), $50,000-$99,999 (34 percent), and $100,000 or more (25 percent).
  • These patient concerns may translate to a hesitancy to share data for purposes other than improving their own healthcare. Some respondents said they were unsure about whether they’d be willing to share their de-identified healthcare data for clinical research (21 percent), to improve hospital services (22 percent), to improve other patients’ healthcare (22 percent), and to advance care equity and identify disparities (24 percent).
  • ...2 more annotations...
  • Over half (51 percent) of respondents reported that they either didn’t believe or weren’t sure that the data recorded in their EMRs was accurate.
  • The healthcare industry’s growing reliance on clinical data and EHRs requires that patients are educated and empowered about data collection, sharing, and use, the report authors noted. Bridging knowledge gaps between health systems and patients has the potential to significantly improve care, medical research, and health equity.
peterconnelly

Ukrainians see their culture being erased as Russia hits beloved sites - 0 views

  • “This was intentional. It was a prepared plan. They knew that this legacy was here,” Micay said, wading through the scorched remains, pointing to where paintings, sculptures and books had filled the rooms during her nearly 30 years as the museum director.
  • “This was done so Russia can say that there is no Ukrainian culture, that Ukrainian identity does not exist.”
  • Since Russia launched its invasion of Ukraine on Feb. 24, Ukrainian officials have accused Moscow of intentionally attacking hundreds of cultural sites, which is a war crime under the 1954 Hague Convention.
  • ...8 more annotations...
  • Russian troops burned down a museum in the town of Ivankiv that housed a collection of paintings by the renowned Ukrainian folk artist Maria Prymachenko, who was an inspiration to Pablo Picasso. The House of Culture in Lozova was razed by a Russian missile. Other theaters, churches, monuments and libraries have been destroyed. 
  • Ukraine’s Minister of Culture Oleksandr Tkachenko said his office has recorded more than 350 “Russian war crimes against cultural heritage” as of May 19.
  • Attacks on cultural property are not uncommon during wartime, said Richard Kurin, a cultural anthropologist and the founder of the Smithsonian Cultural Rescue Initiative.
  • “In conflict, you get situations where people want to erase someone else’s culture,” Kurin said. 
  • “It’s demoralizing to people because people’s culture is highly symbolic and it gives them a sense of identity and morale,” Kurin said. “If you think about what the Ukrainians are fighting for, a lot of it has to do with their being Ukrainian.”
  • “The majority of damage to cultural property has been through collateral damage and the way that the Russian Federation is fighting the conflict,”
  • But whether the attacks are indiscriminate or targeted, Stone said Ukraine was at risk of losing irreplaceable cultural sites and artifacts that make up the fabric of Ukrainian identity.
  • “The Kharkiv legacy is in danger,” he said. “This is all part of Russia’s genocide of Ukrainian culture and national identity.”
peterconnelly

House Democrats look to pass gun control legislation by early June - 0 views

  • House Democrats will try to advance a raft of gun control bills on Thursday in the wake of two high-profile mass shootings that rocked the nation earlier this month.
  • The Democratic-led package will likely fail in the face of Republican opposition in the Senate. However, Democrats have acknowledged a hope — however slim — that bipartisan talks among senators can lead to lawmakers passing a more limited bill with support from both parties.
  • The Raise the Age Act would lift the purchasing age for semiautomatic rifles from 18 to 21, while the Keep Americans Safe Act would outlaw the import, sale, manufacture, transfer or possession of a large-capacity magazine.
  • ...4 more annotations...
  • Senate Republicans have for years blocked progress on any gun safety legislation. They opposed efforts to tighten gun regulations both when they held the majority, and even now when they can threaten an indefinite filibuster if Democrats can’t come up with the 60 votes required to circumvent the stalling tactic.
  • “It’s much easier to scream about guns than it is to demand answers about where our culture is failing,” Cruz added in a separate social media post on Saturday.
  • Disapproval from Cruz and other Senate Republicans will likely doom any legislation Nadler and other House Democrats manage to pass.
  • Democrats dispute the claim that lawmakers need to target mental illness more so than the availability of guns to reduce shooting violence in the U.S. They say that similar rates of mental illness in other developed nations across the globe prove that mental illness alone cannot fully explain the prevalence of mass shootings in the U.S.
marvelgr

THE BASES OF THE MIND:THE RELATIONSHIP OF LANGUAGE AND THOUGHT | by Koç Unive... - 0 views

  • We can talk about three different interactions when we investigate the complex relationships between language and thinking. First, the existence of language as a cognitive process affects the system of thinking. Second, thinking comes before language, and the learning of a language interacts with the conceptual process that is formed before language use. Third, each language spoken may affect the system of thinking. Here we will discuss these three interactions under these subsections: “thinking without language,” “thinking before language,” and “thinking with language.”
  • Babies can categorize objects and actions, understand the cause and effect relationship between events, and see the goals in a movement. Recent studies on action representation and spatial concepts have shown that babies’ universal, language-general action representations change productively as they learn their mother tongue. For example, languages use prepositions to express the relationship between objects, e.g., in, on, under. However, languages also vary in how they encode these relations. One of the most significant studies suggests that babies can differentiate between concepts expressed with prepositions such as containment (in) and support (on). Korean specifies the nature of these containment and support relationships by the tightness of the fit between objects: tight or loose. For example, a pencil in a pencil-size box represents a tight relationship, while a pencil in a big basket represents a loose one.
  • In the late 1800s, anthropologist Franz Boas laid the foundations of cultural relativity. According to this point of view, individuals see and perceive the world within the boundaries of their cultures. The role of anthropology is to investigate how people are conditioned by their culture and how they interact with the world in different ways. To understand such mechanisms, it suggests, implications in culture and language should be studied. The reflection of this view in the relationship between language and thought is the linguistic determinism hypothesis advanced by Edward Sapir and Benjamin Lee Whorf. This hypothesis suggests that thought emerges only with the effect of language, and that concepts believed to exist even in infancy fade away due to the language learned.
  • ...1 more annotation...
  • In conclusion, there is a nested relationship between language and thought. In the interaction processes mentioned above, the role of language changes. Even though the limits of our language are different from the limits of our thinking, it is inevitable that people prioritize concepts in their languages. This, however, does not mean that they cannot comprehend or think about concepts that do not exist in their language.
Javier E

You Have Permission to Be a Smartphone Skeptic - The Bulwark - 0 views

  • the brief return of one of my favorite discursive topics—are the kids all right?—in one of my least-favorite variations: why shouldn’t each of them have a smartphone and tablet?
  • One camp says yes, the kids are fine
  • complaints about screen time merely conceal a desire to punish hard-working parents for marginally benefiting from climbing luxury standards, provide examples of the moral panic occasioned by all new technologies, or mistakenly blame screens for ill effects caused by the general political situation.
  • ...38 more annotations...
  • No, says the other camp, led by Jonathan Haidt; the kids are not all right, their devices are partly to blame, and here are the studies showing why.
  • we should not wait for the replication crisis in the social sciences to resolve itself before we consider the question of whether the naysayers are on to something. And normal powers of observation and imagination should be sufficient to make us at least wary of smartphones.
  • These powerful instruments represent a technological advance on par with that of the power loom or the automobile
  • The achievement can be difficult to properly appreciate because instead of exerting power over physical processes and raw materials, they operate on social processes and the human psyche: They are designed to maximize attention, to make it as difficult as possible to look away.
  • they have transformed the qualitative experience of existing in the world. They give a person’s sociality the appearance and feeling of a theoretically endless open network, while in reality, algorithms quietly sort users into ideological, aesthetic, memetic cattle chutes of content.
  • Importantly, the process by which smartphones change us requires no agency or judgment on the part of a teen user, and yet that process is designed to provide what feels like a perfectly natural, inevitable, and complete experience of the world.
  • Smartphones offer a tactile portal to a novel digital environment, and this environment is not the kind of space you enter and leave
  • One reason commonly offered for maintaining our socio-technological status quo is that nothing really has changed with the advent of the internet, of Instagram, of Tiktok and Youtube and 4Chan
  • It is instead a complete shadow world of endless images; disembodied, manipulable personas; and the ever-present gaze of others. It lives in your pocket and in your mind.
  • The price you pay for its availability—and the engine of its functioning—is that you are always available to it, as well. Unless you have a strength of will that eludes most adults, its emissaries can find you at any hour and in any place to issue your summons to the grim pleasure palace.
  • the self-restraint and self-discipline required to use a smartphone well—that is, to treat it purely as an occasional tool rather than as a totalizing way of life—are unreasonable things to demand of teenagers
  • these are unreasonable things to demand of me, a fully adult woman
  • To enjoy the conveniences that a smartphone offers, I must struggle against the lure of the permanent scroll, the notification, the urge to fix my eyes on the circle of light and keep them fixed. I must resist the default pseudo-activity the smartphone always calls its user back to, if I want to have any hope of filling the moments of my day with the real activity I believe is actually valuable.
  • for a child or teen still learning the rudiments of self-control, still learning what is valuable and fulfilling, still learning how to prioritize what is good over the impulse of the moment, it is an absurd bar to be asked to clear
  • The expectation that children and adolescents will navigate new technologies with fully formed and muscular capacities for reason and responsibility often seems to go along with a larger abdication of responsibility on the part of the adults involved.
  • adults have frequently given in to a Faustian temptation: offering up their children’s generation to be used as guinea pigs in a mass longitudinal study in exchange for a bit more room to breathe in their own undeniably difficult roles as educators, caretakers, and parents.
  • It is not a particular activity that you start and stop and resume, and it is not a social scene that you might abandon when it suits you.
  • And this we must do without waiting for social science to hand us a comprehensive mandate it is fundamentally unable to provide; without cowering in panic over moral panics
  • The pre-internet advertising world was vicious, to be sure, but when the “pre-” came off, its vices were moved into a compound interest account. In the world of online advertising, at any moment, in any place, a user engaged in an infinite scroll might be presented with native content about how one Instagram model learned to accept her chunky (size 4) thighs, while in the next clip, another model relates how a local dermatologist saved her from becoming an unlovable crone at the age of 25
  • developing pathological interests and capacities used to take a lot more work than it does now
  • You had to seek it out, as you once had to seek out pornography and look someone in the eye while paying for it. You were not funneled into it by an omnipresent stream of algorithmically curated content—the ambience of digital life, so easily mistaken by the person experiencing it as fundamentally similar to the non-purposive ambience of the natural world.
  • And when interpersonal relations between teens become sour, nasty, or abusive, as they often do and always have, the unbalancing effects of transposing social life to the internet become quite clear
  • For both young men and young women, the pornographic scenario—dominance and degradation, exposure and monetization—creates an experiential framework for desires that they are barely experienced enough to understand.
  • This is not a world I want to live in. I think it hurts everyone; but I especially think it hurts those young enough to receive it as a natural state of affairs rather than as a profound innovation.
  • so I am baffled by the most routine objection to any blaming of smartphones for our society-wide implosion of teenagers’ mental health,
  • In short, and inevitably, today’s teenagers are suffering from capitalism—specifically “late capitalism.”
  • what shocks me about this rhetorical approach is the rush to play defense for Apple and its peers, the impulse to wield the abstract concept of capitalism as a shield for actually existing, extremely powerful, demonstrably ruthless capitalist actors.
  • This motley alliance of left-coded theory about the evils of business and right-coded praxis in defense of a particular evil business can be explained, I think, by a deeper desire than overthrowing capitalism. It is the desire not to be a prude or hysteric or bumpkin.
  • No one wants to come down on the side of tamping off pleasures and suppressing teen activity.
  • No one wants to be the shrill or leaden antagonist of a thousand beloved movies, inciting moral panics, scheming about how to stop the youths from dancing on Sunday.
  • But commercial pioneers are only just beginning to explore new frontiers in the profit-driven, smartphone-enabled weaponization of our own pleasures against us
  • To limit your moral imagination to the archetypes of the fun-loving rebel versus the stodgy enforcers in response to this emerging reality is to choose to navigate it with blinders on, to be a useful idiot for the robber barons of online life rather than a challenger to the corrupt order they maintain.
  • The very basic question that needs to be asked with every product rollout and implementation is: What technologies enable a good human life?
  • this question is not, ultimately, the province of social scientists, notwithstanding how useful their work may be on the narrower questions involved. It is the free privilege, it is the heavy burden, for all of us, to think—to deliberate and make judgments about human good, about what kind of world we want to live in, and to take action according to that thought.
  • I am not sure how to build a world in which children and adolescents, at least, do not feel they need to live their whole lives online.
  • whatever particular solutions emerge from our negotiations with each other and our reckonings with the force of cultural momentum, they will remain unavailable until we give ourselves permission to set the terms of our common life.
  • But the environments in which humans find themselves vary significantly, and in ways that have equally significant downstream effects on the particular expression of human nature in that context.
  • most of all, without affording Apple, Facebook, Google, and their ilk the defensive allegiance we should reserve for each other.
Javier E

Reality Is Broken. We Have AI Photos to Blame. - WSJ - 0 views

  • AI headshots aren’t yet perfect, but they’re so close I expect we’ll start seeing them on LinkedIn, Tinder and other social profiles. Heck, we may already see them. How would we know?
  • Welcome to our new reality, where nothing is real. We now have photos initially captured with cameras that AI changes into something that never was
  • Or, like the headshot above, there are convincingly photographic images AI generates out of thin air.
  • ...11 more annotations...
  • Adobe, maker of Photoshop, released a new tool in Firefly, its generative-AI image suite, that lets you change and add in parts of a photo with AI imagery. Earlier this month, Google showed off a new Magic Editor, initially for Pixel phones, that allows you to easily manipulate a scene. And people are all over TikTok posting the results of AI headshot services like Try It On.
  • After testing a mix of AI editing and generating tools, I just have one question for all of you armchair philosophers: What even is a photo anymore?
  • I have always wondered what I’d look like as a naval officer. Now I don’t have to. I snapped a selfie and uploaded it to Adobe Firefly’s generative-fill tool. One click of the Background button and my cluttered office was wiped out. I typed “American flag” and in it went. Then I selected the Add tool, erased my torso and typed in “naval uniform.” Boom! Adobe even found me worthy of numerous awards and decorations.
  • Astronaut, fighter pilot, pediatrician. I turned myself into all of them in under a minute each. The AI-generated images did have noticeable issues: The uniforms were strange and had odd lettering, the stethoscope seemed to be cut in half and the backgrounds were warped and blurry. Yet the final images are fun, and the quality will only get better. 
  • In FaceApp, for iOS and Android, I was able to change my frown to a smile—with the right amount of teeth! I was also able to add glasses and change my hair color. Some said it looked completely real, others who know me well figured something was up. “Your teeth look too perfect.”
  • The real reality-bending happens in Midjourney, which can turn text prompts into hyper-realistic images and blend existing images in new ways. The quality of its generated images exceeds that of OpenAI’s Dall-E and Adobe’s Firefly.
  • it’s more complicated to use, since it runs through the chat app Discord. Sign up for service, access the Midjourney bot through your Discord account (via web or app), then start typing in prompts. My video producer Kenny Wassus started working with a more advanced Midjourney plugin called Insight Face Swap-Bot, which allows you to sub in a face to a scene you’ve already made. He’s become a master—making me a Game of Thrones warrior and a Star Wars rebel, among other things.
  • We’re headed for a time when we won’t be able to tell how manipulated a photo is, what parts are real or fake.
  • when influential messages are conveyed through images—be they news or misinformation—people have reason to know a photo’s origin and what’s been done to it.
  • Firefly adds a “content credential,” digital information baked into the file, that says the image was manipulated with AI. Adobe is pushing to get news, tech and social-media platforms to use this open-source standard so we can all understand where the images we see came from.
  • So, yeah, our ability to spot true photos might depend on the cooperation of the entire internet. And by “true photo,” I mean one that captures a real moment—where you’re wearing your own boring clothes and your hair is just so-so, but you have the exact right number of teeth in your head.
Javier E

Netanyahu's Dark Worldview - The Atlantic - 0 views

  • as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.
  • “You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
  • The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.
  • ...10 more annotations...
  • The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines.
  • Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”
  • By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”
  • Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right.
  • This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress.
  • After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.
  • This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead
  • “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.”
  • To his many critics, myself included, Netanyahu’s refusal to envision a different future makes him a “creature of the bunker,” perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage.
  • In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.
Javier E

Book Review: 'The Maniac,' by Benjamín Labatut - The New York Times - 0 views

  • it quickly becomes clear that what “The Maniac” is really trying to get a lock on is our current age of digital-informational mastery and subjection
  • When von Neumann proclaims that, thanks to his computational advances, “all processes that are stable we shall predict” and “all processes that are unstable we shall control,” we’re being prompted to reflect on today’s ubiquitous predictive-slash-determinative algorithms.
  • When he publishes a paper about the feasibility of a self-reproducing machine — “you need to have a mechanism, not only of copying a being, but of copying the instructions that specify that being” — few contemporary readers will fail to home straight in on the fraught subject of A.I.
  • ...9 more annotations...
  • Haunting von Neumann’s thought experiment is the specter of a construct that, in its very internal perfection, lacks the element that would account for itself as a construct. “If someone succeeded in creating a formal system of axioms that was free of all internal paradoxes and contradictions,” another of von Neumann’s interlocutors, the logician Kurt Gödel, explains, “it would always be incomplete, because it would contain truths and statements that — while being undeniably true — could never be proven within the laws of that system.”
  • its deeper (and, for me, more compelling) theme: the relation between reason and madness.
  • Almost all the scientists populating the book are mad, their desire “to understand, to grasp the core of things” invariably wedded to “an uncontrollable mania”; even their scrupulously observed reason, their mode of logic elevated to religion, is framed as a form of madness. Von Neumann’s response to the detonation of the Trinity bomb, the world’s first nuclear explosion, is “so utterly rational that it bordered on the psychopathic,” his second wife, Klara Dan, muses
  • fanaticism, in the 1930s, “was the norm … even among us mathematicians.”
  • Pondering Gödel’s own descent into mania, the physicist Eugene Wigner claims that “paranoia is logic run amok.” If you’ve convinced yourself that there’s a reason for everything, “it’s a small step to begin to see hidden machinations and agents operating to manipulate the most common, everyday occurrences.”
  • the game theory-derived system of mutually assured destruction he devises in its wake is “perfectly rational insanity,” according to its co-founder Oskar Morgenstern.
  • Labatut has Morgenstern end his MAD deliberations by pointing out that humans are not perfect poker players. They are irrational, a fact that, while instigating “the ungovernable chaos that we see all around us,” is also the “mercy” that saves us, “a strange angel that protects us from the mad dreams of reason.”
  • But does von Neumann really deserve the title “Father of Computers,” granted him here by his first wife, Mariette Kovesi? Doesn’t Ada Lovelace have a prior claim as their mother? Feynman’s description of the Trinity bomb as “a little Frankenstein monster” should remind us that it was Mary Shelley, not von Neumann and his coterie, who first grasped the monumental stakes of modeling the total code of life, its own instructions for self-replication, and that it was Rosalind Franklin — working alongside, not under, Maurice Wilkins — who first carried out this modeling.
  • he at least grants his women broader, more incisive wisdom. Ehrenfest’s lover Nelly Posthumus Meyjes delivers a persuasive lecture on the Pythagorean myth of the irrational, suggesting that while scientists would never accept the fact that “nature cannot be cognized as a whole,” artists, by contrast, “had already fully embraced it.”
Javier E

Musk, SBF, and the Myth of Smug, Castle-Building Nerds - 0 views

  • Experts in content moderation suggested that Musk’s actual policies lacked any coherence and, if implemented, would have all kinds of unintended consequences. That has happened with verification. Almost every decision he makes is an unforced error made with extreme confidence in front of a growing audience of people who already know he has messed up, and is supported by a network of sycophants and blind followers who refuse to see or tell him that he’s messing up. The dynamic is … very Trumpy!
  • As with the former president, it can be hard at times for people to believe or accept that our systems are so broken that a guy who is clearly this inept can also be put in charge of something so important. A common pundit claim before Donald Trump got into the White House was that the gravity of the job and prestige of the office might humble or chasten him.
  • The same seems true for Musk. Even people skeptical of Musk’s behavior pointed to his past companies as predictors of future success. He’s rich. He does smart-people stuff. The rockets land pointy-side up!
  • ...18 more annotations...
  • Time and again, we learned there was never a grand plan or big ideas—just weapons-grade ego, incompetence, thin skin, and prejudice against those who don’t revere him.
  • Despite all the incredible, damning reporting coming out of Twitter and all of Musk’s very public mistakes, many people still refuse to believe—even if they detest him—that he is simply incompetent.
  • What is amazing about the current moment is that, despite how ridiculous it all feels, a fundamental tenet of reality and logic appears to be holding true: If you don’t know what you’re doing or don’t really care, you’ll run the thing you’re in charge of into the ground, and people will notice.
  • And so the moment feels too dumb and too on the nose to be real and yet also very real—kind of like all of reality in 2022.
  • I don’t really know where any of this will lead, but one interesting possibility is that Musk gets increasingly reactionary and trollish in his politics and stewardship of Twitter.
  • Leaving the politics aside, from a basic customer-service standpoint this is generally an ill-advised way for the owner of a company to treat an elected official when that elected official wishes to know why your service has failed them. It is ill-advised because the elected official could then tweet something like what Senator Markey tweeted on Sunday: “One of your companies is under an FTC consent decree. Auto safety watchdog NHTSA is investigating another for killing people. And you’re spending your time picking fights online. Fix your companies. Or Congress will.”
  • It seems clear that Musk, like any dedicated social-media poster, thrives on validation, so it makes sense that, as he continues to dismantle his own mystique as an innovator, he might look for adoration elsewhere
  • Recent history has shown that, for a specific audience, owning the libs frees a person from having to care about competency or outcome of their actions. Just anger the right people and you’re good, even if you’re terrible at your job. This won’t help Twitter’s financial situation, which seems bleak, but it’s … something!
  • Bankman-Fried, the archetype, appealed to people for all kinds of reasons. His narrative as a philanthropist, and a smart rationalist, and a stone-cold weirdo was something people wanted to buy into because, generally, people love weirdos who don’t conform to systems and then find clever ways to work around them and become wildly successful as a result.
  • Bankman-Fried was a way that a lot of people could access and maybe obliquely understand what was going on in crypto. They may not have understood what FTX did, but they could grasp a nerd trying to leverage a system in order to do good in the world and advance progressive politics. In that sense, Bankman-Fried is easy to root for and exciting to cover. His origin story and narrative become more important than the particulars of what he may or may not be doing.
  • the past few weeks have been yet another reminder that the smug-nerd-genius narrative may sell magazines, and it certainly raises venture funding, but the visionary founder is, first and foremost, a marketing product, not a reality. It’s a myth that perpetuates itself. Once branded a visionary, the founder can use the narrative to raise money and generate a formidable net worth, and then the financial success becomes its own résumé. But none of it is real.
  • Adversarial journalism ideally questions and probes power. If it is trained on technology companies and their founders, it is because they either wield that power or have the potential to do so. It is, perhaps unintuitively, a form of respect for their influence and potential to disrupt. But that’s not what these founders want.
  • even if all tech coverage had been totally flawless, Silicon Valley would have rejected adversarial tech journalism because most of its players do not actually want the responsibility that comes with their potential power. They want only to embody the myth and reap the benefits. They want the narrative, which is focused on origins, ambitions, ethos, and marketing, and less on the externalities and outcomes.
  • Looking at Musk and Bankman-Fried, it would appear that the tech visionaries mostly get their way. For all the complaints of awful, negative coverage and biased reporting, people still want to cheer for and give money to the “smug nerds building castles in the sky.” Though they vary wildly right now in magnitude, their wounds are self-inflicted—and, perhaps, the result of believing their own hype.
  • That’s because, almost always, the smug-nerd-genius narrative is a trap. It’s one that people fall into because they need to believe that somebody out there is so brilliant, they can see the future, or that they have some greater, more holistic understanding of the world (or that such an understanding is possible)
  • It’s not unlike a conspiracy theory in that way. The smug-nerd-genius narrative helps take the complexity of the world and make it more manageable.
  • Putting your faith in a space billionaire or a crypto wunderkind isn’t just sad fanboydom; it is also a way for people to outsource their brain to somebody else who, they believe, can see what they can’t
  • the smug nerd genius is exceedingly rare, and, even when they’re not outed as a fraud or a dilettante, they can be assholes or flawed like anyone else. There aren’t shortcuts for making sense of the world, and anyone who is selling themselves that way or buying into that narrative about them should read to us as a giant red flag.
Javier E

Strange things are taking place - at the same time - 0 views

  • In February 1973, Dr. Bernard Beitman found himself hunched over a kitchen sink in an old Victorian house in San Francisco, choking uncontrollably. He wasn’t eating or drinking, so there was nothing to cough up, and yet for several minutes he couldn’t catch his breath or swallow. The next day his brother called to tell him that 3,000 miles away, in Wilmington, Del., their father had died. He had bled into his throat, choking on his own blood at the same time as Beitman’s mysterious episode.
  • Overcome with awe and emotion, Beitman became fascinated with what he calls meaningful coincidences. After becoming a professor of psychiatry at the University of Missouri-Columbia, he published several papers and two books on the subject and started a nonprofit, the Coincidence Project, to encourage people to share their coincidence stories.
  • “What I look for as a scientist and a spiritual seeker are the patterns that lead to meaningful coincidences,” said Beitman, 80, from his home in Charlottesville, Va. “So many people are reporting this kind of experience. Understanding how it happens is part of the fun.”
  • ...20 more annotations...
  • Beitman defines a coincidence as “two events coming together with apparently no causal explanation.” They can be life-changing, like his experience with his father, or comforting, such as when a loved one’s favorite song comes on the radio just when you are missing them most.
  • Although Beitman has long been fascinated by coincidences, it wasn’t until the end of his academic career that he was able to study them in earnest. (Before then, his research primarily focused on the relationship between chest pain and panic disorder.)
  • He started by developing the Weird Coincidence Survey in 2006 to assess what types of coincidences are most commonly observed, what personality types are most correlated with noticing them and how most people explain them. About 3,000 people have completed the survey so far.
  • he has drawn a few conclusions. The most commonly reported coincidences are associated with mass media: A person thinks of an idea and then hears or sees it on TV, the radio or the internet. Thinking of someone and then having that person call unexpectedly is next on the list, followed by being in the right place at the right time to advance one’s work, career or education.
  • People who describe themselves as spiritual or religious report noticing more meaningful coincidences than those who do not, and people are more likely to experience coincidences when they are in a heightened emotional state — perhaps under stress or grieving.
  • The most popular explanation among survey respondents for mysterious coincidences: God or fate. The second explanation: randomness. The third is that our minds are connected to one another. The fourth is that our minds are connected to the environment.
  • “Some say God, some say universe, some say random and I say ‘Yes,’ ” he said. “People want things to be black and white, yes or no, but I say there is mystery.”
  • He’s particularly interested in what he’s dubbed “simulpathity”: feeling a loved one’s pain at a distance, as he believes he did with his father. Science can’t currently explain how it might occur, but in his books he offers some nontraditional ideas, such as the existence of “the psychosphere,” a kind of mental atmosphere through which information and energy can travel between two people who are emotionally close though physically distant.
  • In his new book published in September, “Meaningful Coincidences: How and Why Synchronicity and Serendipity Happen,” he shares the story of a young man who intended to end his life by the shore of an isolated lake. While he sat crying in his car, another car pulled up and his brother got out. When the young man asked for an explanation, the brother said he didn’t know why he got in the car, where he was going, or what he would do when he got there. He just knew he needed to get in the car and drive.
  • David Hand, a British statistician and author of the 2014 book “The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day,” sits at the opposite end of the spectrum from Beitman. He says most coincidences are fairly easy to explain, and he specializes in demystifying even the strangest ones.
  • “When you look closely at a coincidence, you can often discover the chance of it happening is not as small as you think,” he said. “It’s perhaps not a 1-in-a-billion chance, but in fact a 1-in-a-hundred chance, and yeah, you would expect that would happen quite often.”
  • the law of truly large numbers. “You take something that has a very small chance of happening and you give it lots and lots and lots of opportunities to happen,” he said. “Then the overall probability becomes big.”
  • But just because Hand has a mathematical perspective doesn’t mean he finds coincidences boring. “It’s like looking at a rainbow,” he said. “Just because I understand the physics behind it doesn’t make it any the less wonderful.”
  • Paying attention to coincidences, Osman and Johansen say, is an essential part of how humans make sense of the world. We rely constantly on our understanding of cause and effect to survive.
  • “Coincidences are often associated with something mystical or supernatural, but if you look under the hood, noticing coincidences is what humans do all the time,”
  • Zeltzer has spent 50 years studying the writings of Carl Jung, the 20th century Swiss psychologist who introduced the modern Western world to the idea of synchronicity. Jung defined synchronicity as “the coincidence in time of two or more causally unrelated events which have the same meaning.”
  • One of Jung’s most iconic synchronistic stories concerned a patient who he felt had become so stuck in her rationality that it interfered with her ability to understand her psychology and emotional life.
  • One day, the patient was recounting a dream in which she’d received a golden scarab. Just then, Jung heard a gentle tapping at the window. He opened the window and a scarab-like beetle flew into the room. Jung plucked the insect out of the air and presented it to his patient. “Here is your scarab,” he said. The experience proved therapeutic because it demonstrated to Jung’s patient that the world is not always rational, leading her to break her own identification with rationality and thus become more open to her emotional life, Zeltzer explained
  • Like Jung, Zeltzer believes meaningful coincidences can encourage people to acknowledge the irrational and mysterious. “We have a fantasy that there is always an answer, and that we should know everything,”
  • Honestly, I’m not sure what to believe, but I’m not sure it matters. Like Beitman, my attitude is “Yes.”
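Hand’s “law of truly large numbers,” quoted above, is ordinary probability arithmetic: the chance that a rare event happens at least once grows quickly with the number of opportunities. A minimal sketch in Python — the per-trial probabilities and trial counts below are illustrative assumptions of my own, not figures from Hand’s book:

```python
def prob_at_least_once(p, n):
    """Probability that an event with per-trial probability p
    occurs at least once across n independent trials."""
    return 1 - (1 - p) ** n

# A "1-in-a-hundred" coincidence, given one opportunity per day for a year,
# is almost certain to happen at least once:
print(prob_at_least_once(0.01, 365))          # ≈ 0.974

# Even a "1-in-a-million" event becomes effectively guaranteed once it has
# a hundred million opportunities (many people, many days):
print(prob_at_least_once(1e-6, 100_000_000))  # ≈ 1.0
```

This is the sense in which a coincidence that feels like a 1-in-a-billion event can, across enough people and enough days, be expected to happen "quite often."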
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

CarynAI, created with GPT-4 technology, will be your girlfriend - The Washington Post - 0 views

  • CarynAI also shows how AI applications can increase the ability of a single person to reach an audience of thousands in a way that, for users, may feel distinctly personal.
  • The impact could be enormous for someone forming something resembling a personal relationship with thousands or millions of online followers. It could also show how thin and tenuous these simulations of human connection could become.
  • CarynAI also is a reminder that sex and romance are often the first realm in which technological progress becomes profitable. Marjorie acknowledges that some of the exchanges with CarynAI become sexually explicit
  • ...11 more annotations...
  • CarynAI is the first major release from a company called Forever Voices. The company previously has created realistic AI chatbots that allow users to talk with replicated versions of Steve Jobs, Kanye West, Donald Trump and Taylor Swift
  • CarynAI is a far more sophisticated product, the company says, and part of Forever Voices’ new AI companion initiative, meant to provide users with a girlfriend-like experience that fans can emotionally bond with.
  • John Meyer, CEO and founder of Forever Voices, said that he created the company last year, after trying to use AI to develop ways to reconnect with his late father, who passed away in 2017. He built an AI voice chatbot that replicated his late father’s voice and personality to talk to and found the experience incredibly healing. “It was a remarkable experience to talk to him again in a super realistic way,” Meyer said. “I’ve been in tech my whole life, I’m a programmer, so it was easy for me to start building something like that especially as things got more advanced with the AI space.”
  • Meyer’s company has about 10 employees. One job Meyer is hoping to fill soon is chief ethics officer. “There are a lot of ways to do this wrong,”
  • One safeguard is trying to limit the amount of time a user is allowed to chat with CarynAI. To keep users from becoming addicted, CarynAI is programmed to wind down conversations after about an hour, encouraging users to pick back up later. But there is no hard time limit on use, and some users are spending hours speaking to CarynAI per day, according to Marjorie’s manager, Ishan Goel.
  • “I consider myself a futurist at heart and when I look into the future I believe this is the beginning of a very diverse future consisting of AI to human companionship,”
  • Elizabeth Snower, founder of ICONIQ, which creates conversational 3D avatars, predicts that soon there will be “AI influencers on every social platform that are influencing consumer decisions.”
  • “A lot of people have just been kind of really mad at the existence of this. They think that it’s the end of humanity,” she said.
  • Marjorie hopes the backlash will fade when other online personalities begin rolling out their own AI companions
  • “I think in the next five years, most Americans will have an AI companion in their pocket in some way, shape or form, whether it’s an ultra flirty AI that you’re dating, an AI that’s your personal trainer, or simply a tutor companion. Those are all things that we are building internally,
  • That strikes AI adviser and investor Allie K. Miller as a likely outcome. “I can imagine a future in which everyone — celebrities, TV characters, influencers, your brother — has an online avatar that they invite their audience or friends to engage with. … With the accessibility of these models, I’m not surprised it’s expanding to scaled interpersonal relationships.”
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretches back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure.
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
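The scheduled-bot pattern described above can be sketched in a few lines. This is a toy illustration, not Twitter’s actual API: `post` is a hypothetical stand-in for whatever authenticated send function a real bot would call, and only the message-building logic mirrors @big_ben_clock’s behavior.

```python
def big_ben_message(hour_24: int) -> str:
    """Build the hourly chime message: Big Ben strikes once per hour
    on a 12-hour cycle, so 15:00 -> 3 bongs."""
    strikes = hour_24 % 12 or 12
    return " ".join(["BONG"] * strikes)

def run_bot(post, hours):
    """Toy scheduler loop; 'post' is a hypothetical send callback."""
    for hour in hours:
        post(big_ben_message(hour))

print(big_ben_message(15))  # BONG BONG BONG
```

A real bot would wrap `run_bot` in a timer and an API client; the point is only that such accounts are simple scripts running on a schedule, which is part of why automated and human accounts are hard to tell apart at scale.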
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
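Estimating platform-wide prevalence from a daily random sample, as those teams describe, is textbook proportion estimation. A minimal sketch with invented numbers (the sample counts are illustrative, not Twitter’s):

```python
import math

def estimate_prevalence(flagged: int, sampled: int, z: float = 1.96):
    """Point estimate and ~95% confidence interval for spam prevalence,
    using the normal approximation to the binomial."""
    p = flagged / sampled
    margin = z * math.sqrt(p * (1 - p) / sampled)
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical day: 240 of 4,800 sampled tweets judged to be spam.
p, (low, high) = estimate_prevalence(240, 4800)
print(f"{p:.1%} spam, 95% CI {low:.1%} to {high:.1%}")
```

With a few thousand tweets sampled per day the interval is already tight, which is why disputes over bot counts tend to turn on definitions and detection quality rather than on sample size.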
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 1 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • ...20 more annotations...
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real.
  • are now using its existence as a pretext to dismiss accurate information
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
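Nisbett’s batting-average question is pure binomial arithmetic. Assuming, purely for illustration, a hitter whose true talent is .300, the chance of sitting at or above .450 is noticeable over 20 at bats and essentially zero over a 500-at-bat season:

```python
from math import ceil, comb

def prob_average_at_least(p_hit: float, at_bats: int, target: float) -> float:
    """P(batting average >= target) for a binomial hitter."""
    # round() guards against float error, e.g. 0.45 * 20 == 9.000000000000002
    need = ceil(round(target * at_bats, 6))
    return sum(comb(at_bats, k) * p_hit**k * (1 - p_hit)**(at_bats - k)
               for k in range(need, at_bats + 1))

early = prob_average_at_least(0.300, 20, 0.450)    # roughly 1 in 9
season = prob_average_at_least(0.300, 500, 0.450)  # vanishingly small
print(f"20 at bats: {early:.3f}; 500 at bats: {season:.1e}")
```

Over a short stretch, sampling variance dominates; as at bats accumulate, the law of large numbers pulls every average toward the hitter’s true talent, which is why early-season .450 hitters are common and full-season ones don’t exist.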
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias.”
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
karenmcgregor

Mastering Network Security: Your Trusted Network Security Assignment Helper - 2 views

In the rapidly advancing world of technology, mastering network security is pivotal for academic success. Students navigating the complexities of this dynamic field often seek the expertise of a re...

#networksecurityassignmenthelper #assignmenthelpservicesonline #students #college #universityassessment

started by karenmcgregor on 04 Dec 23 no follow-up yet
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • ...13 more annotations...
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
Javier E

The Constitution of Knowledge - Persuasion - 0 views

  • When Americans think about how we find truth amid a world full of discordant viewpoints, we usually turn to a metaphor, that of the marketplace of ideas
  • It is a good metaphor as far as it goes, yet woefully incomplete. It conjures up an image of ideas being traded by individuals in a kind of flea market, or of disembodied ideas clashing and competing in some ethereal realm of their own
  • But ideas in the marketplace do not talk directly to each other, and for the most part neither do individuals.
  • ...31 more annotations...
  • Rather, our conversations are mediated through institutions like journals and newspapers and social-media platforms. They rely on a dense network of norms and rules, like truthfulness and fact-checking. They depend on the expertise of professionals, like peer reviewers and editors. The entire system rests on a foundation of values: a shared understanding that there are right and wrong ways to make knowledge.
  • Those values and rules and institutions do for knowledge what the U.S. Constitution does for politics: They create a governing structure, forcing social contestation onto peaceful and productive pathways.
  • I call them, collectively, the Constitution of Knowledge. If we want to defend that system from its many persistent attackers, we need to understand it—and its very special notion of reality.
  • What reality really is
  • The question “What is reality?” may seem either too metaphysical to answer meaningfully or too obvious to need answering
  • The whole problem is that humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is no guarantee of truth. Faced with those problems and others, philosophers and practitioners think of reality as a set of propositions (or claims, or statements) that have been validated in some way, and that have thereby been shown to be at least conditionally true—true, that is, unless debunked
  • Some propositions reflect reality as we perceive it in everyday life (“The sky is blue”). Others, like the equations on a quantum physicist’s blackboard, are incomprehensible to intuition. Many fall somewhere in between.
  • a phrase I used a few sentences ago, “validated in some way,” hides a cheat. In epistemology, the whole question is, validated in what way? If we care about knowledge, freedom, and peace, then we need to stake a strong claim: Anyone can believe anything, but liberal science—open-ended, depersonalized checking by an error-seeking social network—is the only legitimate validator of knowledge, at least in the reality-based community.
  • That is a very bold, very broad, very tough claim, and it goes down very badly with lots of people and communities who feel ignored or oppressed by the Constitution of Knowledge: creationists, Christian Scientists, homeopaths, astrologists, flat-earthers, anti-vaxxers, birthers, 9/11 truthers, postmodern professors, political partisans, QAnon followers, and adherents of any number of other belief systems and religions.
  • But, like the U.S. Constitution’s claim to exclusivity in governing (“unconstitutional” means “illegal,” period), the Constitution of Knowledge’s claim to exclusivity is its sine qua non.
  • Rules for reality
  • Say you believe something (X) to be true, and you believe that its acceptance as true by others is important or at least warranted
  • The specific proposition does not matter. What does matter is that the only way to validate it is to submit it to the reality-based community. Otherwise, you could win dominance for your proposition by, say, brute force, threatening and jailing and torturing and killing those who see things differently—a standard method down through history
  • Or you and your like-minded friends could go off and talk only to each other, in which case you would have founded a cult—which is lawful but socially divisive and epistemically worthless.
  • Or you could engage in a social-media campaign to shame and intimidate those who disagree with you—a very common method these days, but one that stifles debate and throttles knowledge (and harms a lot of people).
  • What the reality-based community does is something else again. Its distinctive qualities derive from two core rules: 
  • The fallibilist rule: No one gets the final say. You may claim that a statement is established as knowledge only if it can be debunked, in principle, and only insofar as it withstands attempts to debunk it.
  • What counts is the way the rule directs us to behave: You must assume your own and everyone else’s fallibility and you must hunt for your own and others’ errors, even if you are confident you are right. Otherwise, you are not reality-based.
  • The empirical rule: No one has personal authority. You may claim that a statement has been established as knowledge only insofar as the method used to check it gives the same result regardless of the identity of the checker, and regardless of the source of the statement
  • Who you are does not count; the rules apply to everybody and persons are interchangeable. If your method is valid only for you or your affinity group or people who believe as you do, then you are not reality-based.
  • Whatever you do to check a proposition must be something that anyone can do, at least in principle, and get the same result. Also, no one proposing a hypothesis gets a free pass simply because of who she is or what group she belongs to.
  • Both rules have very profound social implications. “No final say” insists that to be knowledge, a statement must be checked; and it also says that knowledge is always provisional, standing only as long as it withstands checking.
  • “No personal authority” adds a crucial second step by defining what properly counts as checking. The point, as the great American philosopher Charles Sanders Peirce emphasized more than a century ago, is not that I look or you look but that we look; and then we compare, contest, and justify our views. Critically, then, the empirical rule is a social principle that forces us into the same conversation—a requirement that all of us, however different our viewpoints, agree to discuss what is in principle only one reality.
  • By extension, the empirical rule also dictates what does not count as checking: claims to authority by dint of a personally or tribally privileged perspective.
  • In principle, persons and groups are interchangeable. If I claim access to divine revelation, or if I claim the support of miracles that only believers can witness, or if I claim that my class or race or historically dominant status or historically oppressed status allows me to know and say things that others cannot, then I am breaking the empirical rule by exempting my views from contestability by others.
  • Though seemingly simple, the two rules define a style of social learning that prohibits a lot of the rhetorical moves we see every day.
  • Claiming that a conversation is too dangerous or blasphemous or oppressive or traumatizing to tolerate will almost always break the fallibilist rule.
  • Claims which begin “as a Jew,” or “as a queer,” or for that matter “as minister of information” or “as Pope” or “as head of the Supreme Soviet,” can be valid if they provide useful information about context or credentials; but if they claim to settle an argument by appealing to personal or tribal authority, rather than earned authority, they violate the empirical rule. 
  • “No personal authority” says nothing against trying to understand where people are coming from. If we are debating same-sex marriage, I may mention my experience as a gay person, and my experience may (I hope) be relevant.
  • But statements about personal standing and interest inform the conversation; they do not control it, dominate it, or end it. The rule acknowledges, and to an extent accepts, that people’s social positions and histories matter; but it asks its adherents not to burrow into their social identities, and not to play them as rhetorical trump cards, but to bring them to the larger project of knowledge-building and thereby transcend them.
  • the fallibilist and empirical rules are the common basis of science, journalism, law, and all the other branches of today’s reality-based community. For that reason, both rules also attract hostility, defiance, interference, and open warfare from those who would rather manipulate truth than advance it.