
Home/ TOK Friends/ Group items tagged algorithm


marleen_ueberall

Does Democracy Need Truth?: A Conversation with the Historian Sophia Rosenfeld | The New Yorker

  • Ever since Donald Trump announced his Presidential candidacy, in June of 2015, there has been considerable concern about whether his allergy to truth is endangering American democracy
  • the relationship between truth and democracy was fraught for centuries before the time of Twitter and Trump.
  • One, it’s a story about how democracy itself is always based on uncertain notions of truth, in moral terms and in epistemological terms. The other is a story about a continual conflict between a kind of expert truth and a more populist, everyday, common-sense truth that supposedly stems not from experts but the wisdom of the crowd.
  • Democracy insists on the idea that truth both matters and that nobody gets to say definitively what it is. That’s a tension that’s built into democracy from the beginning, and it’s not solvable but is, in fact, intrinsic to democracy.
  • We don’t want to have one definitive source of truth. Part of the reason ideas evolve and culture changes is that we’re constantly debating what is an accurate rendition of reality in some form.
  • Can we accept evolution as a set truth or not? They have not exploded to the point where they’ve destabilized our political or social life, but they’ve been a controversial question for over a hundred years. That’s a public contest that, actually, democracy’s pretty good for. You know, you contest things in court, you contest things in universities, you contest things in the public sphere.
  • I think it’s important that there be a contest about what is true and also about, How do you know what’s true? Where does your information come from? I would say, largely, science has won. That is, that the mainstream educational institutions, the National Institutes of Health, et cetera, all accept that evolution is as close as we’re going to get to truth.
  • One says that experts often make [bad] decisions because there’s been no popular input on them—not just because they don’t know enough but because they haven’t actually taken account of popular knowledge.
  • The most common example involves things like the World Bank coming up with a plan about water use in some part of the world without studying how people actually think and use water, simply imagining a kind of technocratic solution with no local input, and it turns out to be totally ineffective because it runs contrary to cultural norms and everyday life. There’s every chance that experts alone get things wrong.
  • Social media and the Internet more broadly have clearly had a rather revolutionary effect on not just what we take to be true but how truths circulate, what we believe, how we know anything.
  • new technology causes certain kinds of panics about truth. The Internet is particularly important because of its reach and because of the algorithmic way in which it promotes what’s popular rather than what’s true. It creates a culture of untruth, probably, that other forms of publishing can’t easily.
  • I actually approve of fact-checking, even if I think it’s often not very effective, because it doesn’t persuade people who aren’t already inclined to want to look at fact-checking. And I don’t think it’s much of a substitute for real politics
  • I don’t think facts are pure in any sense. You know, if I give you something like an unemployment rate, it implies all kinds of interpretative work already about what is work and who should be looking for it and how old you should be when you’re working.
  • It’s important that that’s part of democracy, too—questioning received wisdom. If somebody says that’s how it is, it’s correct to think, Is that really how it is? Do I have enough information to be sure that’s how it is?
  • Conspiracy theories, the complex ones that arise from the bottom, tend to involve seeing through official truths and often seeing how the rich and powerful have pulled the wool over people’s eyes, that what looked like this turned out to be that because there was a kind of subterfuge going on from above.
  • Whereas, the climate-change one, which we know has been sort of promoted by the Koch brothers and others in business interest groups, as you say, didn’t start really organically as much as it became a kind of position of industry that then took on a life of its own because it got mixed in with a whole bunch of other assumptions, whether it was about political norms, government overreach, guns.
katherineharron

Google's algorithm for happiness - CNN

  • Step one: "Calm your mind". To introduce his first piece of advice, Meng led the SXSW audience through a short collective breathing exercise to calm the fluffy particles in the "snow-globes" (his metaphor) in our skulls. He advocates finding easy ways to take pauses during the day and be mindful of your breath. "If that's too hard, then just think about nothing for a little bit," he joked.
  • And yet, at the same time the more I practiced the three-step method, the more it seemed to be working. I started meditating at work. I programmed my mobile phone to send me hourly reminders to wish happiness on others. And I remembered to tell myself "I'm having a moment of joy!" when I was having fun with my daughters, running in the park, drinking a delicious beer and even writing this column.
  • Step three: "Wish other people to be happy". According to Meng, altruistic thoughts benefit us because we derive a lot of joy from giving, even more than from receiving. Meng makes eloquent arguments for the (I think) self-evident need to infuse your life with more compassion, but only cites one study -- on people performing acts for others -- to back his claim that "kindness is a sustainable source of happiness."
  • Step two: "Log moments of joy". This means simply saying to yourself -- as you sip a great espresso, laugh at your friend's joke or buy that shirt you've wanted -- "I am having a moment of joy!" When negative things happen to us throughout the day we tend to hold on to them, while the good things are more fleeting and ephemeral. So, by consciously acknowledging the good things, says Meng, we increase our chances that when we reflect on our day, we conclude it was a happy one.
  • I asked psychologist Tom Stafford, who writes the Neurohacks column for BBC Future, about the gap. "Squaring what works for you and what the science says is difficult because happiness is a complex object," he told me. "There will be local variations due to individual personality, so we've immediately got a reason for expecting a gap between the science -- which tends to work with group averages -- and any one person's experience." "The interesting general question, to me, is when do we trust our experience and when do we listen to science," Stafford added. "Obviously some things we don't need science for ('Does dropping a rock on my foot hurt?'), and some things we do ('Is smoking bad for my health?'). Happiness, I'd argue, is in between these two cases."
  • To many, Meng's three steps may seem obvious or simplistic. Yet he compared his advice to showing us how to do a single push up or arm curl at the gym. You know it does you good, but you have to do the exercise every day to get results. I may be more experientially convinced than scientifically sated, but it's enough to keep me going to Google's happiness gym and doing those push-ups.
johnsonel7

Max Planck Neuroscience on Nautilus: Understanding the Brain with the Help of Artificial Intelligence

  • Unfortunately, however, little is known about the wiring of the brain. This is due also to a problem of time: tracking down connections in collected data would require man-hours amounting to many lifetimes, as no computer has been able to identify the neural cell contacts reliably enough up to now. Scientists from the Max Planck Institute of Neurobiology in Martinsried plan to change this with the help of artificial intelligence.
  • To be able to use this key, the connectome, that is, every single neuron in the brain with its thousands of contacts and partner cells, must be mapped. Only a few years ago, the prospect of achieving this seemed unattainable.
  • The Max Planck scientists led by Jörgen Kornfeld have now overcome this obstacle with the help of artificial neural networks. These algorithms can learn from examples and experience and make generalizations based on this knowledge.
  • And he has every reason to be delighted, as the newly developed neural networks will relieve neurobiologists of many thousands of hours of monotonous work in the future. As a result, they will also reduce the time needed to decode the connectome and, perhaps also, consciousness, by many years.
annabaldwin_

How Fiction Becomes Fact on Social Media - The New York Times

  • In the coming weeks, executives from Facebook and Twitter will appear before congressional committees to answer questions about the use of their platforms by Russian hackers and others to spread misinformation and skew elections.
  • Yet the psychology behind social media platforms — the dynamics that make them such powerful vectors of misinformation in the first place — is at least as important, experts say, especially for those who think they’re immune to being duped.
  • Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found — especially when presented with the right kind of algorithmically selected “meme.”
  • That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.
  • “My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said.
  • “The networks make information run so fast that it outruns fact-checkers’ ability to check it.
anniina03

A.I. Comes to the Operating Room - The New York Times

  • Brain surgeons are bringing artificial intelligence and new imaging techniques into the operating room, to diagnose tumors as accurately as pathologists, and much faster, according to a report in the journal Nature Medicine.
  • The traditional method, which requires sending the tissue to a lab, freezing and staining it, then peering at it through a microscope, takes 20 to 30 minutes or longer. The new technique takes two and a half minutes.
  • In addition to speeding up the process, the new technique can also detect some details that traditional methods may miss, like the spread of a tumor along nerve fibers
  • The new process may also help in other procedures where doctors need to analyze tissue while they are still operating, such as head and neck, breast, skin and gynecologic surgery, the report said. It also noted that there is a shortage of neuropathologists, and suggested that the new technology might help fill the gap in medical centers that lack the specialty.
  • Algorithms are also being developed to help detect lung cancers on CT scans, diagnose eye disease in people with diabetes and find cancer on microscope slides.
  • The diagnoses were later judged right or wrong based on whether they agreed with the findings of lengthier and more extensive tests performed after the surgery. The result was a draw: humans, 93.9 percent correct; A.I., 94.6 percent.
  • At some centers, he said, brain surgeons do not even order frozen sections because they do not trust them and prefer to wait for tissue processing after the surgery, which may take weeks to complete.
  • Some types of brain tumor are so rare that there is not enough data on them to train an A.I. system, so the system in the study was designed to essentially toss out samples it could not identify.
  • “It won’t change brain surgery,” he said, “but it’s going to add a significant new tool, more significant than they’ve stated.”
Javier E

Understanding What's Wrong With Facebook | Talking Points Memo

  • to really understand the problem with Facebook we need to understand the structural roots of that problem, how much of it is baked into the core architecture of the site and its very business model
  • much of it is inherent in the core strategies of the post-2000, second wave Internet tech companies that now dominate our information space and economy.
  • Facebook is an ingenious engine for information and ideational manipulation.
  • Good old fashioned advertising does that to a degree. But Facebook is much more powerful, adaptive and efficient.
  • Facebook is designed to do specific things. It’s an engine to understand people’s minds and then manipulate their thinking.
  • Those tools are refined for revenue making but can be used for many other purposes. That makes it ripe for misuse and bad acting.
  • The core of all second wave Internet commerce operations was finding network models where costs grow arithmetically and revenues grow exponentially.
  • The network and its dominance is the product and once it takes hold the cost inputs remained constrained while the revenues grow almost without limit.
  • Facebook is best understood as a fantastically profitable nuclear energy company whose profitability is based on dumping the waste on the side of the road and accepting frequent accidents and explosions as inherent to the enterprise.
  • That’s why these companies employ so few people relative to scale and profitability.
  • That’s why there’s no phone support for Google or Facebook or Twitter. If half the people on the planet are ‘customers’ or users that’s not remotely possible.
  • The core economic model requires doing all of it on the cheap. Indeed, what Zuckerberg et al. have created with Facebook is so vast that the money required not to do it on the cheap almost defies imagination.
  • Facebook’s core model and concept requires not taking responsibility for what others do with the engine created to drive revenue.
  • It all amounts to a grand exercise in socializing the externalities and keeping all the revenues for the owners.
  • Here’s a way to think about it. Nuclear power is actually incredibly cheap. The fuel is fairly plentiful and easy to pull out of the ground. You set up a little engine and it generates energy almost without limit. What makes it ruinously expensive is managing the externalities – all the risks and dangers, the radiation, accidents, the constant production of radioactive waste.
  • managing or distinguishing between legitimate and bad-acting uses of the powerful Facebook engine is one that would require huge, huge investments of money and armies of workers to manage
  • But back to Facebook. The point is that they’ve created a hugely powerful and potentially very dangerous machine
  • The core business model is based on harvesting the profits from the commercial uses of the machine and using algorithms and very, very limited personnel (relative to scale) to try to get a handle on the most outrageous and shocking abuses which the engine makes possible.
  • Zuckerberg may be a jerk and there really is a culture of bad acting within the organization. But it’s not about him being a jerk. Replace him and his team with non-jerks and you’d still have a similar core problem.
  • To manage the potential negative externalities, to take some responsibility for all the dangerous uses the engine makes possible would require money the owners are totally unwilling and in some ways are unable to spend.
Javier E

No, Trump's sister did not publicly back him. He was duped by a fake account. - The New...

  • That article, on the website of a conservative talk-radio host named Wayne Dupree, quoted a post from a Twitter account named “Betty Trump” that used a photo of Ms. Trump Grau as its profile picture.
  • “This election inspired me to break my silence and speak out on behalf of my family,” the account said in a post on Wednesday. “My brother Don won this election and will fight this to the very end. We’ve always been a family of fighters.”
  • Had the article’s author looked more closely, though, she would have noticed some suspicious details about the account. It was a day old. The photos it used of Ms. Trump Grau were taken from Getty Images and past news articles about her. And since that first post, the account had tweeted increasingly bizarre messages, sharply criticizing Democrats, journalists and Republicans who had questioned the false claim that Mr. Trump was re-elected.
  • The bizarre episode illustrates how easily misinformation spreads online, often with the help of the president himself. Right-wing websites that seek to support the president’s baseless claims, or simply attract clicks so they can sell more ads, often eschew the traditional principles of journalism, such as simple fact-checking. And the social media companies aid the cycle by making it simple to share misinformation, including via fake accounts, and by training their algorithms to promote material that attracts more attention, as sensational and divisive posts often do.
blythewallick

What the brains of people with excellent general knowledge look like: Some people seem ...

  • "Although we can precisely measure the general knowledge of people and this wealth of knowledge is very important for an individual's journey through life, we currently know little about the links between general knowledge and the characteristics of the brain,"
  • This makes it possible to reconstruct the pathways of nerve fibres and thus gain an insight into the structural network properties of the brain. By means of mathematical algorithms, the researchers assigned an individual value to the brain of each participant, which reflected the efficiency of his or her structural fibre network.
  • The participants also completed a general knowledge test called the Bochum Knowledge Test, which was developed in Bochum by Dr. Rüdiger Hossiep. It is comprised of over 300 questions from various fields of knowledge such as art and architecture or biology and chemistry. The team led by Erhan Genç finally investigated whether the efficiency of structural networking is associated with the amount of general knowledge stored.
  • "We assume that individual units of knowledge are dispersed throughout the entire brain in the form of pieces of information," explains Erhan Genç. "Efficient networking of the brain is essential in order to put together the information stored in various areas of the brain and successfully recall knowledge content."
  • To answer the question of which constants occur in Einstein's theory of relativity, you have to connect the meaning of the term "constant" with knowledge of the theory of relativity. "We assume that more efficient networking of the brain contributes to better integration of pieces of information and thus leads to better results in a general knowledge test,
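The "efficiency of the structural fibre network" the annotations describe is commonly formalized as the global efficiency of a graph: the average inverse shortest-path length over all pairs of nodes, where 1.0 means every region is directly wired to every other. A minimal sketch of that metric, on a hypothetical toy graph rather than the researchers' actual tractography pipeline:

```python
from collections import deque

def global_efficiency(n, edges):
    """Average inverse shortest-path length over all ordered node pairs
    of an unweighted, undirected graph (1.0 = fully connected)."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def distances_from(src):
        # Breadth-first search gives shortest hop counts from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    total = 0.0
    for u in range(n):
        d = distances_from(u)
        total += sum(1.0 / hops for node, hops in d.items() if node != u)
    return total / (n * (n - 1))

# A ring of 4 "regions" plus one chord: fairly efficient wiring.
print(round(global_efficiency(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]), 3))  # → 0.917
```

The study's per-participant "individual value" would come from a weighted, far larger network, but the intuition is the same: shorter average paths mean more efficient integration of dispersed information.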
manhefnawi

Two New Studies Explore the Neuroscience of Negative Emotions | Mental Floss

  • We've all had experiences we'd prefer not to remember. That's especially true for people who have gone through a traumatic event such as childhood abuse, combat-related PTSD, or a bad accident. But there may be positive health applications for identifying, predicting, and retrieving negative emotions in the brain, according to two new studies. 
  • Researchers identified the different networks in the brain that all work together during a participant’s negative emotional experience, which they call a “brain signature.” Then, they used machine-learning algorithms to find global patterns of brain activity that best predicted the participants’ responses. “What we’re calling a 'brain signature' is basically a configuration—a brain pattern that is predictive of a state,” Chang tells mental_floss. He compares the process to the way that Netflix predicts who is watching a certain type of show based on the watcher’s choices in programming.
  • MEMORIES CAUSED—AND LOST—BY TRAUMA
  • Many psychologists believe that in order for patients to recover from trauma, they often need to be able to recall what happened to them. The second study, published in Nature Neuroscience, investigated how the brain stores negative memories, known as “state-dependent learning.” The study, conducted in mice at Northwestern University’s Feinberg School of Medicine, suggests that negative memories caused—and then “lost”—by traumatic experiences may be retrieved by re-creating the state of the brain in which the memory first occurred.
  • The study suggests that in response to trauma, the brain activates this extra-synaptic GABA system, which appears to encode memories of fear-inducing events and hide them away from consciousness, rather than the glutamate system, which helps to store all memories, positive and negative. This research may provide a window into how to access these traumatic memories when needed for therapeutic reasons.
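The "brain signature" idea above — a multivariate activity pattern that predicts an emotional state — can be illustrated with a nearest-centroid classifier. This is a deliberately simplified stand-in, not the study's actual machine-learning method, and the three-"voxel" data are invented for illustration:

```python
def train_signature(patterns):
    """Average the activity patterns for each labeled state into a
    'signature' (centroid) per state."""
    sums, counts = {}, {}
    for label, vec in patterns:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [x / counts[lab] for x in acc] for lab, acc in sums.items()}

def predict(signatures, vec):
    """Return the label whose signature is closest (Euclidean) to vec."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(signatures, key=lambda lab: dist(signatures[lab], vec))

# Toy 3-voxel "activity patterns" recorded under two emotional states.
training = [
    ("negative", [0.9, 0.1, 0.8]),
    ("negative", [0.8, 0.2, 0.9]),
    ("neutral",  [0.1, 0.7, 0.2]),
    ("neutral",  [0.2, 0.8, 0.1]),
]
signatures = train_signature(training)
print(predict(signatures, [0.85, 0.15, 0.7]))  # closer to the negative signature
```

This mirrors Chang's Netflix analogy: given a new pattern of activity, the model guesses which "viewer state" most likely produced it.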
Javier E

Why Baseball Is Obsessed With the Book 'Thinking, Fast and Slow' - The New York Times

  • In Teaford’s case, the scouting evaluation was predisposed to a mental shortcut called the representativeness heuristic, which was first defined by the psychologists Daniel Kahneman and Amos Tversky. In such cases, an assessment is heavily influenced by what is believed to be the standard or the ideal.
  • Kahneman, a professor emeritus at Princeton University and a winner of the Nobel Prize in economics in 2002, later wrote “Thinking, Fast and Slow,” a book that has become essential among many of baseball’s front offices and coaching staffs.
  • “Pretty much wherever I go, I’m bothering people, ‘Have you read this?’” said Mejdal, now an assistant general manager with the Baltimore Orioles.
  • There aren’t many explicit references to baseball in “Thinking, Fast and Slow,” yet many executives swear by it
  • “From coaches to front office people, some get back to me and say this has changed their life. They never look at decisions the same way.
  • A few, though, swear by it. Andrew Friedman, the president of baseball operations for the Dodgers, recently cited the book as having “a real profound impact,” and said he reflects back on it when evaluating organizational processes. Keith Law, a former executive for the Toronto Blue Jays, wrote the book “Inside Game” — an examination of bias and decision-making in baseball — that was inspired by “Thinking, Fast and Slow.”
  • “As the decision tree in baseball has changed over time, this helps all of us better understand why it needed to change,” Mozeliak wrote in an email. He said that was especially true when “working in a business that many decisions are based on what we see, what we remember, and what is intuitive to our thinking.”
  • The central thesis of Kahneman’s book is the interplay between each mind’s System 1 and System 2, which he described as a “psychodrama with two characters.”
  • System 1 is a person’s instinctual response — one that can be enhanced by expertise but is automatic and rapid. It seeks coherence and will apply relevant memories to explain events.
  • System 2, meanwhile, is invoked for more complex, thoughtful reasoning — it is characterized by slower, more rational analysis but is prone to laziness and fatigue.
  • Kahneman wrote that when System 2 is overloaded, System 1 could make an impulse decision, often at the expense of self-control
  • No area of baseball is more susceptible to bias than scouting, in which organizations aggregate information from disparate sources:
  • “The independent opinion aspect is critical to avoid the groupthink and be aware of momentum,”
  • Matt Blood, the director of player development for the Orioles, first read “Thinking, Fast and Slow” as a Cardinals area scout nine years ago and said that he still consults it regularly. He collaborated with a Cardinals analyst to develop his own scouting algorithm as a tripwire to mitigate bias
  • Mejdal himself fell victim to the trap of the representativeness heuristic when he started with the Cardinals in 2005
Javier E

How Zeynep Tufekci Keeps Getting the Big Things Right - The New York Times

  • When the Centers for Disease Control and Prevention told Americans in January that they didn’t need to wear masks, Dr. S. Vincent Rajkumar, a professor at the Mayo Clinic and the editor of the Blood Cancer Journal, couldn’t believe his ears.
  • “Here I am, the editor of a journal in a high profile institution, yet I didn’t have the guts to speak out that it just doesn’t make sense,” Dr. Rajkumar told me. “Everybody should be wearing masks.”
  • Ms. Tufekci, an associate professor at the University of North Carolina’s School of Information and Library Science with no obvious qualifications in epidemiology, came out against the C.D.C. recommendation in a March 1 tweetstorm before expanding on her criticism in a March 17 Op-Ed article for The New York Times.
  • The C.D.C. changed its tune in April, advising all Americans above the age of 2 to wear masks to slow the spread of the coronavirus. Michael Basso, a senior health scientist at the agency who had been pushing internally to recommend masks, told me Ms. Tufekci’s public criticism of the agency was the “tipping point.”
  • Ms. Tufekci, a 40-something who speaks a mile a minute with a light Turkish accent, has none of the trappings of the celebrity academic or the professional pundit. But long before she became perhaps the only good amateur epidemiologist, she had quietly made a habit of being right on the big things.
  • In 2011, she went against the current to say the case for Twitter as a driver of broad social movements had been oversimplified. In 2012, she warned news media outlets that their coverage of school shootings could inspire more. In 2013, she argued that Facebook could fuel ethnic cleansing. In 2017, she warned that YouTube’s recommendation algorithm could be used as a tool of radicalization.
  • And when it came to the pandemic, she sounded the alarm early while also fighting to keep parks and beaches open.
  • “I’ve just been struck by how right she has been,” said Julia Marcus, an infectious disease epidemiologist at Harvard Medical School.
  • She told me she chalks up her habits of mind in part to a childhood she wouldn’t wish on anyone.
  • Mr. Goff was enthusing about the campaign’s ability to send different messages to individual voters based on the digital data it had gathered about them. Ms. Tufekci quickly objected to the practice, saying that microtargeting would more likely be used to sow division.
  • An international point of view she picked up while bouncing as a child between Turkey and Belgium and then working in the United States.
  • Knowledge that spans subject areas and academic disciplines, which she happened onto as a computer programmer who got into sociology.
  • A habit of complex, systems-based thinking, which led her to a tough critique in The Atlantic of America’s news media in the run-up to the pandemic
  • it began, she says, with growing up in an unhappy home in Istanbul. She said her alcoholic mother was liable to toss her into the street in the early hours of the morning. She found some solace in science fiction — Ursula K. Le Guin was a favorite — and in the optimistic, early internet.
  • Perhaps because of a kind of egalitarian nerd ideology that has served her well, she never sought to meet the rebels’ charismatic leader, known as Subcomandante Marcos.
  • “I have a thing that fame and charisma screws with your head,” she said. “I’ve made an enormous effort throughout my life to preserve my thinking.”
  • While many American thinkers were wide-eyed about the revolutionary potential of social media, she developed a more complex view, one she expressed when she found herself sitting to the left of Teddy Goff, the digital director for President Obama’s re-election campaign, at a South by Southwest panel in Austin in 2012
  • “A bunch of things came together, which I’m happy I survived,” she said, sitting outside a brick house she rents for $2,300 a month in Chapel Hill, N.C., where she is raising her 11-year-old son as a single parent. “But the way they came together was not super happy, when it was happening.”
  • “At a time when everybody was being stupidly optimistic about the potential of the internet, she didn’t buy the hype,” he told me. “She was very prescient in seeing that there would be a deeper rot to the role of data-driven politics in our world.”
  • Many tech journalists, entranced by the internet-fueled movements sweeping the globe, were slow to spot the ways they might fail, or how social media could be used against them. Ms. Tufekci, though, had “seen movement after movement falter because of a lack of organizational depth and experience, of tools or culture for collective decision making, and strategic, long-term action,” she wrote in her 2017 book, “Twitter and Tear Gas.”
  • One of the things that makes Ms. Tufekci stand out in this gloomy moment is her lack of irony or world-weariness. She is not a prophet of doom, having hung on to an early-internet optimism
  • Ms. Tufekci has taught epidemiology as a way to introduce her students to globalization and to make a point about human nature: Politicians and the news media often expect looting and crime when disaster strikes, as they did when Hurricane Katrina hit New Orleans in 2005. But the reality on the ground has more to do with communal acts of generosity and kindness, she believes.
  • Her March column on masks was among the most influential The Times has published, although — or perhaps because —  it lacked the political edge that brings wide attention to an opinion piece.
  • “The real question is not whether Zuck is doing what I like or not,” she said. “The real question is why he’s getting to decide what hate speech is.”
  • She also suggested that we may get it wrong when we focus on individuals — on chief executives, on social media activists like her. The probable answer to a media environment that amplifies false reports and hate speech, she believes, is the return of functional governments, along with the birth of a new framework, however imperfect, that will hold the digital platforms responsible for what they host.
Javier E

Law professor Kim Wehle's latest book is 'How To Think Like a Lawyer - and Why' : NPR

  • a five-step process she calls the BICAT method.
  • KIM WEHLE: B is to break a problem down into smaller pieces
  • I is to identify our values. A lot of people think lawyers are really about winning all the time. But the law is based on a value system. And I suggest that people be very deliberate about what matters to them with whatever decision there is
  • C is to collect a lot of information. Thirty years ago, the challenge was finding information in a card catalog at the library. Now it's, how do we separate the good stuff from the bad stuff?
  • A is to analyze both sides. Lawyers have to turn the coin over and exhaust counterarguments or we'll lose in court.
  • So lawyers are trained to look for the gray areas, to look for the questions are not the answers. And if we kind of orient our thinking that way, I think we're less likely to shut down competing points of view.
  • My argument in the book is, we can feel good about a decision even if we don't get everything that we want. We have to make compromises.
  • I tell my students, you'll get through the bar. The key is to look for questions and not answers. If you could answer every legal question with a Wikipedia search, there would be no reason to hire lawyers.
  • Lawyers are hired because there are arguments on both sides, you know? Every Supreme Court decision that is split 6-3, 5-4, that means there were really strong arguments on both sides.
  • T is, tolerate the fact that you won't get everything you want every time
  • So we have to be very careful about the source of what you're getting, OK? Is this source neutral? Does this source really care about facts and not so much about an agenda?
  • Step 3, the collecting information piece. I think it's a new skill for all of us now that we are overloaded with information coming into our phones. We have algorithms that somebody else developed that tailor the information that comes into our phones based on what the computer thinks we already believe
  • No. 2 - this is the beauty of social media and the internet - you can pull original sources. We can click on the indictment. Click on the new bill that has been proposed in the United States Congress.
  • then the book explains ways that you can then sort through that information for yourself. Skills are empowering.
  • Maybe as a replacement for sort of being empowered by being part of a team - a red team versus a blue team - that's been corrosive, I think, in American politics and American society. But arming ourselves with good facts, that leads to self-determination.
  • MARTINEZ: Now, you've written two other books - "How To Read The Constitution" and "What You Need To Know About Voting" - along with this one, "How To Think Like A Lawyer - And Why."
  • It kind of makes me think, Kim, that you feel that Americans might be lacking a basic level of civics education or understanding. So what is lacking when it comes to teaching civics or in civics discourse today?
  • studies have shown that around a third of Americans can't name the three branches of government. But if we don't understand our government, we don't know how to hold our government accountable
  • Democracies can't stay open if we've got elected leaders that are caring more about entrenching their own power and misinformation than actually preserving democracy by the people. I think that's No. 1.
  • No. 2 has to do with a value system. We talk about American values - reward for hard work, integrity, honesty. The same value system should apply to who we hire for government positions. And I think Americans have lost that.
  • in my own life, I'm very careful about who gets to be part of the inner circle because I have a strong value system. Bring that same sense to bear at the voting booth. Don't vote for red versus blue. Vote for people that live your value system
  • just like the Ukrainians are fighting for their children's democracy, we need to do that as well. And we do that through informing ourselves with good information, tolerating competing points of view and voting - voting, voting, voting - to hold elected leaders accountable if they cross boundaries that matter to us in our own lives.
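The five BICAT steps described in the interview amount to a reusable checklist. A minimal sketch of how one might encode them (the function name and the prompt wordings are my own illustration, paraphrased from the interview, not Wehle's text):

```python
# The BICAT decision checklist as a simple data structure.
# Step labels follow the interview; prompts are paraphrases for illustration.
BICAT = [
    ("Break it down", "What smaller questions make up this problem?"),
    ("Identify values", "What matters to me in this decision?"),
    ("Collect information", "What are my sources, and are they neutral?"),
    ("Analyze both sides", "What is the strongest counterargument?"),
    ("Tolerate imperfection", "What compromise can I live with?"),
]

def walk_bicat(problem):
    """Return the list of prompts to work through for a given problem."""
    return [f"{step}: {prompt} (re: {problem})" for step, prompt in BICAT]

for line in walk_bicat("Should I sign this contract?"):
    print(line)
```

The point of the structure is the ordering: values come before information-gathering, and counterarguments before any conclusion.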
Javier E

Opinion | Barack Obama's smart way to change the disinformation debate - The Washington... - 0 views

  • The former president spoke at Stanford University on April 21 to lay out his vision for fighting disinformation on the Internet. His focus on the subject is fitting; the dusk of his administration marked a turning point from techno-optimism to pessimism after election interference revealed how easily malicious actors could exploit the free flow of information.
  • His diagnosis is on target. The Internet has given us access to more people, more opportunities and more knowledge
  • This has helped activists drum up attention for overlooked causes. It has also enabled the nation’s adversaries to play on our preexisting prejudices and divisions to sow discord
  • ...5 more annotations...
  • Mr. Obama starts where most lawmakers are stuck: Section 230 of the Communications Decency Act, which gives platforms immunity from legal liability for most third-party posts. He suggested a “higher standard of care” for ads than for so-called organic content that everyday users post. This would strike a sensible balance between eviscerating Section 230, making sites accountable for everything they host, and doing nothing.
  • On top of that, “an instant, 24/7 global information stream,” from which audiences can pick and choose material that confirms their biases, has deepened the social divides that bad actors seek to exploit.
  • Mr. Obama identified another problem with the Section 230 talk: homing in on what material platforms do and don’t take down risks missing how the “very design” of these sites privileges polarizing, inflammatory posts.
  • With this, Mr. Obama adds something vital to the mainstream debate over social media regulation, shifting attention away from a debate about whack-a-mole content removal and toward the sites’ underlying structures. His specific suggestions, while fuzzy, also have promise — from slowing down viral material to imposing transparency obligations that would subject social media companies’ algorithms to scrutiny from researchers and regulators.
  • Mr. Obama calls this “democratic oversight.” But the material companies reveal could be highly technical. Ideally, it would get translated into layman’s terms so that everyday people, too, can understand how decisions so significant in their daily lives and the life of the country are made.
Javier E

The Thread Vibes Are Off - by Anne Helen Petersen - 0 views

  • The way people post on Twitter is different from the way people post on LinkedIn, which is different from how people post on Facebook, which is different from the way people post on Instagram, no matter how much Facebook keeps telling you to cross-post your IG stories
  • Some people whose job relies on onlineness (like me) have to refine their voices, their ways of being, across several platforms. But most normal people have found their lane — the medium that fits their message — and have stuck with it.
  • People post where they feel public speech “belongs.”
  • ...24 more annotations...
  • For some, the only speech they feel should be truly public should also be “professional.” Hence: LinkedIn, where the only associated image is a professional headshot, and the only conversations are those related to work.
  • Which is how some people really would like to navigate the public sphere: with total freedom and total impunity
  • Twitter is where you could publicly (if often anonymously) fight, troll, dunk, harass, joke, and generally speak without consequence; it’s also where the mundane status update/life musing (once the foundation of Facebook) could live peacefully.
  • Twitter was for publicly observing — through the scroll, but also by tweeting, retweeting, quote tweeting — while remaining effectively invisible, a reply-guy amongst reply-guys, a troll amongst trolls.
  • The Facebook of the 2010s was for broadcasting ideological stances under your real name and fighting with your close and extended community about them; now it’s (largely) about finding advice (and fighting about advice) in affinity groups (often) composed of people you’ve never met.
  • It rewards the esoteric, the visually witty, the mimetic — even more than Twitter.
  • TikTok is for monologues, for expertise, for timing and performance. It’s without pretense.
  • On TikTok, you don’t reshare memes, you use them as the soundtrack to your reimagining, even if that reimagining is just “what if I do the same dance, only with my slightly dorky parents?”
  • Instagram is serious and sincere (see: the success of the social justice slideshow) and almost never ironic — maybe because static visual irony is pretty hard to pull off.
  • Like YouTube, far fewer people are posting than consuming, which means that most people aren’t speaking at all.
  • And then there’s Instagram. People think Instagram is for extroverts, for people who want to broadcast every bit of their lives, but most Instagram users I know are shy — at least with public words. Instagram is where parents post pictures of their kids with the caption “these guys right here” or a picture of their dog with “a very good boy.”
  • The text doesn’t matter; the photo speaks loudest. Each post becomes overdetermined, especially when so readily viewed within the context of the greater grid
  • The more you understand your value as the sum of your visual parts, the more addictive, essential, and anxiety-producing Instagram becomes.
  • That emphasis on aesthetic perfection is part of what feminizes Instagram — but it’s also what makes it the most natural home for brands, celebrities, and influencers.
  • a static image can communicate a whole lifestyle — and brands have had decades of practice honing the practice in magazine ads and catalogs.
  • And what is an influencer if not a conduit for brands? What is a celebrity if not a conduit for their own constellation of brands?
  • If LinkedIn is the place where you can pretend that your whole life and personality is “business,” then Instagram is where you can pretend it’s all some form of leisure — or at least fun
  • A “fun” work trip, a “fun” behind-the-scenes shot, a brand doing the very hard work of trying to get you to click through and make a purchase with images that are fun fun fun.
  • On the flip side, Twitter was where you spoke with your real (verified) name — and with great, algorithm-assisted importance. You could amass clout simply by rephrasing others’ scoops in your own words, declaring opinions as facts, or just declaring. If Twitter was gendered masculine — which it certainly was, and is arguably even more so now — it was only because all of those behaviors are as well.
  • Instagram is a great place to post an announcement and feel celebrated or consoled but not feel like you have to respond to people
  • The conversation is easier to both control and ignore; of all the social networks, it most closely resembles the fawning broadcast style of the fan magazine, only the celebs control the final edit, not the magazine publisher
  • Celebrities initially glommed onto Twitter
  • But its utility gradually faded: part of the problem was harassment, but part of it was context collapse, and the way it allowed words to travel across the platform and out of the celebrity’s control.
  • Instagram was just so much simpler, the communication so clearly in the celebrity wheelhouse. There is very little context collapse on Instagram — it’s all curation and control. As such, you can look interesting but say very little.
Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • ...5 more annotations...
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
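The "game theory" credited to von Neumann above rests on the minimax idea: choose the strategy whose worst-case outcome is best, assuming a fully rational adversary. A toy sketch for a two-player zero-sum game in pure strategies (the payoff values are invented for illustration):

```python
# Maximin choice for a zero-sum game (toy illustration, invented payoffs).
# Rows: our strategies; columns: opponent's responses; entries: our payoff.
payoffs = [
    [3, -1],
    [0, 2],
]

def maximin_row(matrix):
    """Pick the row whose worst-case (minimum) payoff is largest."""
    worst = [min(row) for row in matrix]
    best = max(range(len(matrix)), key=lambda i: worst[i])
    return best, worst[best]

row, value = maximin_row(payoffs)
print(row, value)  # row 1 guarantees a payoff of at least 0
```

Oppenheimer's complaint quoted above is precisely that this "prudential and game-theoretical" framing, however internally consistent, says nothing about whether the outcomes it optimizes are humane.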
peterconnelly

AI model's insight helps astronomers propose new theory for observing far-off worlds | ... - 0 views

  • Machine learning models are increasingly augmenting human processes, either performing repetitious tasks faster or providing some systematic insight that helps put human knowledge in perspective.
  • Astronomers at UC Berkeley were surprised to find both happen after modeling gravitational microlensing events, leading to a new unified theory for the phenomenon.
  • Gravitational lensing occurs when light from far-off stars and other stellar objects bends around a nearer one directly between it and the observer, briefly giving a brighter — but distorted — view of the farther one.
  • ...7 more annotations...
  • Ambiguities are often reconciled with other observed data, such as that we know by other means that the planet is too small to cause the scale of distortion seen.
  • “The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet. The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the star doesn’t pass close to either the star or planet and cannot be explained by either previous theory,” said Zhang in a Berkeley news release.
  • But without the systematic and confident calculations of the AI, it’s likely the simplified, less correct theory would have persisted for many more years.
  • As a result — and after some convincing, since a grad student questioning established doctrine is tolerated but perhaps not encouraged — they ended up proposing a new, “unified” theory of how degeneracy in these observations can be explained, of which the two known theories were simply the most common cases.
  • “People were seeing these microlensing events, which actually were exhibiting this new degeneracy but just didn’t realize it. It was really just the machine learning looking at thousands of events where it became impossible to miss,” said Scott Gaudi
  • But Zhang seemed convinced that the AI had clocked something that human observers had systematically overlooked.
  • Just as people learned to trust calculators and later computers, we are learning to trust some AI models to output an interesting truth clear of preconceptions and assumptions — that is, if we haven’t just coded our own preconceptions and assumptions into them.
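The brightening described in these excerpts has a standard closed form for the simplest case of a single point lens, the Paczyński magnification curve; this is textbook microlensing, not something taken from the article:

```python
import math

def magnification(u):
    """Point-lens microlensing magnification A(u), where u is the
    source-lens angular separation in units of the Einstein radius."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# The closer the alignment (smaller u), the brighter the lensed source.
print(magnification(1.0))  # ~1.34
print(magnification(0.1))  # ~10.04
```

The degeneracies the researchers discuss arise because very different lens configurations (star plus planet in different geometries) can produce nearly identical magnification curves over time.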
criscimagnael

Social media CEO hopes to 'remove any temptation for bad behavior' from its platform - 0 views

  • A new platform being beta-tested hopes to re-introduce an element of what many social media companies have seemingly lost sight of amid industry controversies — maintaining a sense of community.
  • There are no likes. There are no followers. There are no algorithms tracking you. We've removed any temptation for bad behavior that you'll find on other platforms
  • However, Austin stressed how her platform will be very different from Twitter and other social media networks where online discord can create toxic digital environments.
  • ...2 more annotations...
  • "In terms of moderation, we have a lot of thoughts — we're trying to move very slowly and intentionally around what the experience is like," Austin said. "I will say, unequivocally, that we won't allow any language that revolves around hate and all the encompassing ways of what that can mean."
  • "[We're] definitely [going] for scale, but I think in scale there can be very different groups inside that space," Austin said. "I think fragmentation has been happening since the beginning of the internet, we just had these larger conglomerates come in and bring everyone to that space."
peterconnelly

Your Bosses Could Have a File on You, and They May Misinterpret It - The New York Times - 0 views

  • The company you work for may want to know. Some corporate employers fear that employees could leak information, allow access to confidential files, contact clients inappropriately or, in the extreme, bring a gun to the office.
  • at times using behavioral science tools like psychology.
  • But in spite of worries that workers might be, reasonably, put off by a feeling that technology and surveillance are invading yet another sphere of their lives, employers want to know which clock-punchers may harm their organizations.
  • ...13 more annotations...
  • “There is so much technology out there that employers are experimenting with or investing in,” said Edgar Ndjatou
  • Software can watch for suspicious computer behavior or it can dig into an employee’s credit reports, arrest records and marital-status updates. It can check to see if Cheryl is downloading bulk cloud data or run a sentiment analysis on Tom’s emails to see if he’s getting testier over time. Analysis of this data, say the companies that monitor insider risk, can point to potential problems in the workplace.
  • Organizations that produce monitoring software and behavioral analysis for the feds also may offer conceptually similar tools to private companies, either independently or packaged with broader cybersecurity tools.
  • But corporations are moving forward with their own software-enhanced surveillance. While private-sector workers may not be subjected to the rigors of a 136-page clearance form, private companies help build these “continuous vetting” technologies for the federal government, said Lindy Kyzer of ClearanceJobs. Then, she adds, “Any solution would have private-sector applications.”
  • “Can we build a system that checks on somebody and keeps checking on them and is aware of that person’s disposition as they exist in the legal systems and the public record systems on a continuous basis?” said Chris Grijalva
  • But the interest in anticipating insider threats in the private sector raises ethical questions about what level of monitoring nongovernmental employees should be subject to.
  • “People are starting to understand that the insider threat is a business problem and should be handled accordingly,” said Mr. Grijalva.
  • The linguistic software package they developed, called SCOUT, uses psycholinguistic analysis to seek flags that, among other things, indicate feelings of disgruntlement, like victimization, anger and blame.
  • “The language changes in subtle ways that you’re not aware of,” Mr. Stroz said.
  • There’s not enough information, in other words, to construct algorithms about trustworthiness from the ground up. And that would hold in either the private or the public sector.
  • Even if all that dystopian data did exist, it would still be tricky to draw individual — rather than simply aggregate — conclusions about which behavioral indicators potentially presaged ill actions.
  • “Depending too heavily on personal factors identified using software solutions is a mistake, as we are unable to determine how much they influence future likelihood of engaging in malicious behaviors,” Dr. Cunningham said.
  • “I have focused very heavily on identifying indicators that you can actually measure, versus those that require a lot of interpretation,” Dr. Cunningham said. “Especially those indicators that require interpretation by expert psychologists or expert so-and-sos. Because I find that it’s a little bit too dangerous, and I don’t know that it’s always ethical.”
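SCOUT itself is a proprietary psycholinguistic tool, but the underlying idea, tracking whether negative language trends upward over time, can be sketched naively. This is emphatically not SCOUT: the word list, scoring, and threshold below are invented for illustration only:

```python
# Naive sketch of trend detection in "disgruntlement" language.
# NOT the SCOUT product: word list and scoring are invented for illustration.
DISGRUNTLED = {"unfair", "blame", "victim", "angry", "ignored"}

def score(text):
    """Fraction of words in a message drawn from the flagged vocabulary."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in DISGRUNTLED for w in words) / max(len(words), 1)

def trending_up(messages):
    """True if the later half of messages scores higher, on average."""
    half = len(messages) // 2
    early = sum(map(score, messages[:half])) / max(half, 1)
    late = sum(map(score, messages[half:])) / max(len(messages) - half, 1)
    return late > early

msgs = ["Thanks for the update.", "Happy to help.",
        "This is unfair and I always get the blame.", "I feel ignored and angry."]
print(trending_up(msgs))
```

Even this toy version makes Dr. Cunningham's caution concrete: the flags are easy to measure, but whether a rising score predicts malicious behavior is exactly the interpretive leap the excerpts warn against.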
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Book Review: 'The Maniac,' by Benjamín Labatut - The New York Times - 0 views

  • it quickly becomes clear that what “The Maniac” is really trying to get a lock on is our current age of digital-informational mastery and subjection
  • When von Neumann proclaims that, thanks to his computational advances, “all processes that are stable we shall predict” and “all processes that are unstable we shall control,” we’re being prompted to reflect on today’s ubiquitous predictive-slash-determinative algorithms.
  • When he publishes a paper about the feasibility of a self-reproducing machine — “you need to have a mechanism, not only of copying a being, but of copying the instructions that specify that being” — few contemporary readers will fail to home straight in on the fraught subject of A.I.
  • ...9 more annotations...
  • Haunting von Neumann’s thought experiment is the specter of a construct that, in its very internal perfection, lacks the element that would account for itself as a construct. “If someone succeeded in creating a formal system of axioms that was free of all internal paradoxes and contradictions,” another of von Neumann’s interlocutors, the logician Kurt Gödel, explains, “it would always be incomplete, because it would contain truths and statements that — while being undeniably true — could never be proven within the laws of that system.”
  • its deeper (and, for me, more compelling) theme: the relation between reason and madness.
  • Almost all the scientists populating the book are mad, their desire “to understand, to grasp the core of things” invariably wedded to “an uncontrollable mania”; even their scrupulously observed reason, their mode of logic elevated to religion, is framed as a form of madness. Von Neumann’s response to the detonation of the Trinity bomb, the world’s first nuclear explosion, is “so utterly rational that it bordered on the psychopathic,” his second wife, Klara Dan, muses
  • fanaticism, in the 1930s, “was the norm … even among us mathematicians.”
  • Pondering Gödel’s own descent into mania, the physicist Eugene Wigner claims that “paranoia is logic run amok.” If you’ve convinced yourself that there’s a reason for everything, “it’s a small step to begin to see hidden machinations and agents operating to manipulate the most common, everyday occurrences.”
  • the game theory-derived system of mutually assured destruction he devises in its wake is “perfectly rational insanity,” according to its co-founder Oskar Morgenstern.
  • Labatut has Morgenstern end his MAD deliberations by pointing out that humans are not perfect poker players. They are irrational, a fact that, while instigating “the ungovernable chaos that we see all around us,” is also the “mercy” that saves us, “a strange angel that protects us from the mad dreams of reason.”
  • But does von Neumann really deserve the title “Father of Computers,” granted him here by his first wife, Mariette Kovesi? Doesn’t Ada Lovelace have a prior claim as their mother? Feynman’s description of the Trinity bomb as “a little Frankenstein monster” should remind us that it was Mary Shelley, not von Neumann and his coterie, who first grasped the monumental stakes of modeling the total code of life, its own instructions for self-replication, and that it was Rosalind Franklin — working alongside, not under, Maurice Wilkins — who first carried out this modeling.
  • he at least grants his women broader, more incisive wisdom. Ehrenfest’s lover Nelly Posthumus Meyjes delivers a persuasive lecture on the Pythagorean myth of the irrational, suggesting that while scientists would never accept the fact that “nature cannot be cognized as a whole,” artists, by contrast, “had already fully embraced it.”
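Von Neumann's requirement quoted above, that a self-reproducing machine must copy not only the being but the instructions that specify it, is the same trick a programming quine plays: the string below serves as both the instructions and the thing copied. This is a standard textbook example, not from the book under review:

```python
# A classic quine: the program prints its own source code.
# The string s is both the "instructions" and the material being copied.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly the two lines of its own source, the programmatic analogue of a machine that carries and reproduces its own blueprint.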