
Home/ TOK Friends/ Group items matching "into" in title, tags, annotations or url

peterconnelly

AI model's insight helps astronomers propose new theory for observing far-off worlds | TechCrunch - 0 views

  • Machine learning models are increasingly augmenting human processes, either performing repetitious tasks faster or providing some systematic insight that helps put human knowledge in perspective.
  • Astronomers at UC Berkeley were surprised to find both happen after modeling gravitational microlensing events, leading to a new unified theory for the phenomenon.
  • Gravitational lensing occurs when light from far-off stars and other stellar objects bends around a nearer one directly between it and the observer, briefly giving a brighter — but distorted — view of the farther one.
  • Ambiguities are often reconciled with other observed data, such as that we know by other means that the planet is too small to cause the scale of distortion seen.
  • “The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet. The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the star doesn’t pass close to either the star or planet and cannot be explained by either previous theory,” said Zhang in a Berkeley news release.
  • But without the systematic and confident calculations of the AI, it’s likely the simplified, less correct theory would have persisted for many more years.
  • As a result — and after some convincing, since a grad student questioning established doctrine is tolerated but perhaps not encouraged — they ended up proposing a new, “unified” theory of how degeneracy in these observations can be explained, of which the two known theories were simply the most common cases.
  • “People were seeing these microlensing events, which actually were exhibiting this new degeneracy but just didn’t realize it. It was really just the machine learning looking at thousands of events where it became impossible to miss,” said Scott Gaudi
  • But Zhang seemed convinced that the AI had clocked something that human observers had systematically overlooked.
  • Just as people learned to trust calculators and later computers, we are learning to trust some AI models to output an interesting truth clear of preconceptions and assumptions — that is, if we haven’t just coded our own preconceptions and assumptions into them.
peterconnelly

How an Organized Republican Effort Punishes Companies for Climate Action - The New York Times - 0 views

  • In Texas, a new law bars the state’s retirement and investment funds from doing business with companies that the state comptroller says are boycotting fossil fuels.
  • Conservative lawmakers in 15 other states are promoting similar legislation.
  • Across the country, Republican lawmakers and their allies have launched a campaign to try to rein in what they see as activist companies trying to reduce the greenhouse gases that are dangerously heating the planet.
  • In doing so, Mr. Moore and others have pushed climate change from the scientific realm into the political battles already raging over topics like voting rights, abortion and L.G.B.T.Q. issues.
  • “There is a coordinated effort to chill corporate engagement on these issues,” said Daniella Ballou-Aares
  • They have worked alongside a nonprofit organization that has run television ads, dispatched roaming billboard trucks and rented out a Times Square billboard criticizing BlackRock for championing what they call woke causes, including environmentalism.
  • That activism has often put companies at odds with the Republican Party, traditionally the ally of big business.
  • as pressure has grown from consumers and liberal groups to take action, corporations have warmed to the notion of using capital and markets to create a cleaner economy
  • When President Trump declared in 2017 that he would pull the United States from the Paris climate accord, more than 2,000 businesses and investors — including Apple, Amazon and Mars — signed a pledge to continue to work toward climate goals.
  • “Every company and every industry will be transformed by the transition to a net-zero world,” Mr. Fink wrote. “The question is, will you lead, or will you be led?”
  • And in January, Mr. Moore pulled about $20 million out of a fund managed by BlackRock because the firm has encouraged other companies to reduce emissions. BlackRock still manages several billion for West Virginia’s state retirement system. “We’re divesting from BlackRock because they’re divesting from us,” Mr. Moore said in an interview.
  • “These big banks are virtue signaling because they are woke,”
  • Mr. Fink of BlackRock has emerged as a main target of conservatives.
  • “We are perhaps the world’s largest investor in fossil fuel companies, and, as a long-term investor in these companies, we want to see these companies succeed and prosper,” BlackRock’s head of external affairs, Dalia Blass, wrote in a letter to Texas regulators in January.
  • “BlackRock is trying to have it all ways, acting like it is trying to please everyone.”
  • “ESG is a scam,” he said on Twitter this month. “It has been weaponized by phony social justice warriors.” Shortly after that, he shared a meme that declared an ESG score “determines how compliant your business is with the leftist agenda.”
  • “Climate change is not a financial risk that we need to worry about,” adding, “Who cares if Miami is six meters underwater in 100 years?”
  • That view is at odds with the findings of the world’s leading climate scientists. A major United Nations report warned last month that the world could reach a threshold by the end of this decade beyond which the dangers of global warming — including worsening floods, droughts and wildfires — will grow considerably. In 2021, there were 20 weather or climate-related disasters in the United States that each cost more than $1 billion in losses, according to the federal government.
  • “Our ambition is to be the leading bank supporting the global economy in the transition to net zero,” he said.
peterconnelly

Sheryl Sandberg's Legacy - The New York Times - 0 views

  • It’s not clear how history will judge Sheryl Sandberg.
  • Sandberg, who said on Wednesday that she was quitting Meta after 14 years as the company’s second in command, leaves behind a complicated professional and personal legacy.
  • But Sandberg was also partly responsible for Facebook’s failures during crucial moments, notably when the company initially denied and deflected blame for Russia-backed trolls that were abusing the site to inflame divisions among Americans ahead of the 2016 U.S. presidential election.
  • The 23-year-old Zuckerberg hired Sandberg in 2008 to figure out how to build Facebook into a large and lasting business.
  • Sandberg spearheaded a plan to build from scratch a more sophisticated system of advertising that was largely based on what she had helped develop at Google. Ads on Facebook were tied to people’s activities and interests on the site. As at Google, many advertisers bought Facebook ads online rather than through sales personnel, as had been typical for TV or newspaper ads. Later, Sandberg cultivated new systems for Facebook advertisers to pinpoint their potential customers with even more precision.
  • Google and Facebook transformed product marketing from largely an art to a sometimes creepy science, and Sandberg is among the architects of that change. She shares in the credit (or blame) for developing two of the most successful, and perhaps least defensible, business models in internet history.
  • All the anxiety today about apps snooping on people to glean every morsel of activity to better pitch us dishwashers — that’s partly Sandberg’s doing. So are Facebook and Google’s combined $325 billion in annual advertising sales and those of all other online companies that make money from ads.
  • In their 2021 book, “An Ugly Truth,” Sheera and Cecilia wrote that to Sandberg’s detractors, her response was part of a pattern of trying to preserve the company’s reputation or her own rather than do the right thing.
peterconnelly

They Did Their Own 'Research.' Now What? - The New York Times - 0 views

  • the crash of two linked cryptocurrencies caused tens of billions of dollars in value to evaporate from digital wallets around the world.
  • People who thought they knew what they were getting into had, in the space of 24 hours, lost nearly everything. Messages of desperation flooded a Reddit forum for traders of one of the currencies, a coin called Luna, prompting moderators to share phone numbers for international crisis hotlines.
  • “DYOR” is shorthand for “do your own research,”
  • a reminder to stay informed and vigilant against groupthink.
  • A common refrain in battles about Covid-19 and vaccination, politics and conspiracy theories, parenting, drugs, food, stock trading and media, it signals not just a rejection of authority but often trust in another kind.
  • “Do your own research” is an idea central to Joe Rogan’s interview podcast, the most listened to program on Spotify, where external claims of expertise are synonymous with admissions of malice. In its current usage, DYOR is often an appeal to join in, rendered in the language of opting out.
  • “There’s this idea that the goal of science is consensus,” Professor Carrion said. “The model they brought to it was that we didn’t need consensus.” She noted that the women she surveyed often used singular rather than plural pronouns. “It was ‘she needs to do her own research,’” Professor Carrion said, rather than we need to do ours. Unlike some critical health movements in the past, this was an individualist endeavor.
  • One of the enticing aspects of cryptocurrencies, which pose an alternative to traditional financial institutions, is that expertise is available to anyone who wants to claim it.
  • In crypto, the uses of DYOR are various and contradictory, earnest and ironic sometimes within the same discussion. Breathless investment pitches for new coins are punctuated with “NFA/DYOR” (not financial advice), or admonitions not to invest more than you can afford to lose, which many people are obviously ignoring; stories about getting rich are prefaced with DYOR; requests for advice about which coins to hold are answered with DYOR. It is the siren song of crypto investing.
  • In that way — the momentum of a group — crypto investing isn’t altogether distinct from how people have invested in the stock market for decades. Though here it is tinged with a rebellious, anti-authoritarian streak: We’re outsiders, in this together; we’re doing something sort of ridiculous, but also sort of cool.
  • “Now it seems like DYOR can only do so much,” the user wrote. Eventually, the user said, you end up relying on “trust.”
criscimagnael

Living better with algorithms | MIT News | Massachusetts Institute of Technology - 0 views

  • At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.
  • Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?
  • To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?
  • a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.
  • Auditors have to inspect the algorithm without accessing sensitive user data.
  • Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.
  • To meet these challenges, Cen and Shah developed an auditing procedure that does not need more than black-box access to the social media algorithm (which respects trade secrets), does not remove content (which avoids issues of censorship), and does not require access to users (which preserves users’ privacy).
  • which is known to help reduce the spread of misinformation
  • In labor markets, for example, workers learn their preferences about what kinds of jobs they want, and employers learn their preferences about the qualifications they seek from workers.
  • But learning can be disrupted by competition
  • it is indeed possible to get to a stable outcome (workers aren’t incentivized to leave the matching market), with low regret (workers are happy with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.
  • For instance, when Covid-19 cases surged in the pandemic, many cities had to decide what restrictions to adopt, such as mask mandates, business closures, or stay-home orders. They had to act fast and balance public health with community and business needs, public spending, and a host of other considerations.
  • But of course, no county exists in a vacuum.
  • These complex interactions matter,
  • “Accountability, legitimacy, trust — these principles play crucial roles in society and, ultimately, will determine which systems endure with time.” 
peterconnelly

How Some States Are Combating Election Misinformation Ahead of Midterms - The New York Times - 0 views

  • Ahead of the 2020 elections, Connecticut confronted a bevy of falsehoods about voting that swirled around online. One, widely viewed on Facebook, wrongly said absentee ballots had been sent to dead people. On Twitter, users spread a false post that a tractor-trailer carrying ballots had crashed on Interstate 95, sending thousands of voter slips into the air and across the highway.
  • the state plans to spend nearly $2 million on marketing to share factual information about voting, and to create its first-ever position for an expert in combating misinformation.
  • With a salary of $150,000, the person is expected to comb fringe sites like 4chan, far-right social networks like Gettr and Rumble, and mainstream social media sites to root out early misinformation narratives about voting before they go viral, and then urge the companies to remove or flag the posts that contain false information.
  • These states, most of them under Democratic control, have been acting as voter confidence in election integrity has plummeted.
  • In an ABC/Ipsos poll from January, only 20 percent of respondents said they were “very confident” in the integrity of the election system and 39 percent said they felt “somewhat confident.”
  • Some conservatives and civil rights groups are almost certain to complain that the efforts to limit misinformation could restrict free speech.
  • “State and local governments are well situated to reduce harms from dis- and misinformation by providing timely, accurate and trustworthy information,” said Rachel Goodman
  • “Facts still exist, and lies are being used to chip away at our fundamental freedoms,” Ms. Griswold said.
  • Officials said they would prefer candidates fluent in both English and Spanish, to address the spread of misinformation in both languages. The officer would track down viral misinformation posts on Facebook, Instagram, Twitter and YouTube, and look for emerging narratives and memes, especially on fringe social media platforms and the dark web.
criscimagnael

'I don't even remember what I read': People enter a 'dissociative state' when using social media | UW News - 0 views

  • “I think people experience a lot of shame around social media use,” said lead author Amanda Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “One of the things I like about this framing of ‘dissociation’ rather than ‘addiction’ is that it changes the narrative. Instead of: ‘I should be able to have more self-control,’ it’s more like: ‘We all naturally dissociate in many ways throughout our day – whether it’s daydreaming or scrolling through Instagram, we stop paying attention to what’s happening around us.'”
  • “Having a stop built into a list meant that it was only going to be a few minutes of reading and then, if they wanted to really go crazy, they could read another list. But again, it’s only a few minutes. Having that bite-sized piece of content to consume was something that really resonated.”
  • Over the course of the month, 42% of participants (18 people) agreed or strongly agreed with that statement at least once. After the month, the researchers did in-depth interviews with 11 participants. Seven described experiencing dissociation while using Chirp.
  • “But people only realize that they’ve dissociated in hindsight. So once you exit dissociation there’s sometimes this feeling of: How did I get here? It’s like when people on social media realize: ‘Oh my gosh, how did 30 minutes go by? I just meant to check one notification.'”
  • The problem with social media platforms, the researchers said, is not that people lack the self-control needed to not get sucked in, but instead that the platforms themselves are not designed to maximize what people value.
  • These platforms need to create an end-of-use experience, so that people can have it fit in their day with their time-management goals.”
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Much of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of an evolution rather than revolution for Google and other large tech giants.
peterconnelly

Your Bosses Could Have a File on You, and They May Misinterpret It - The New York Times - 0 views

  • The company you work for may want to know. Some corporate employers fear that employees could leak information, allow access to confidential files, contact clients inappropriately or, in the extreme, bring a gun to the office.
  • at times using behavioral science tools like psychology.
  • But in spite of worries that workers might be, reasonably, put off by a feeling that technology and surveillance are invading yet another sphere of their lives, employers want to know which clock-punchers may harm their organizations.
  • “There is so much technology out there that employers are experimenting with or investing in,” said Edgar Ndjatou
  • Software can watch for suspicious computer behavior or it can dig into an employee’s credit reports, arrest records and marital-status updates. It can check to see if Cheryl is downloading bulk cloud data or run a sentiment analysis on Tom’s emails to see if he’s getting testier over time. Analysis of this data, say the companies that monitor insider risk, can point to potential problems in the workplace.
  • Organizations that produce monitoring software and behavioral analysis for the feds also may offer conceptually similar tools to private companies, either independently or packaged with broader cybersecurity tools.
  • But corporations are moving forward with their own software-enhanced surveillance. While private-sector workers may not be subjected to the rigors of a 136-page clearance form, private companies help build these “continuous vetting” technologies for the federal government, said Lindy Kyzer of ClearanceJobs. Then, she adds, “Any solution would have private-sector applications.”
  • “Can we build a system that checks on somebody and keeps checking on them and is aware of that person’s disposition as they exist in the legal systems and the public record systems on a continuous basis?” said Chris Grijalva
  • But the interest in anticipating insider threats in the private sector raises ethical questions about what level of monitoring nongovernmental employees should be subject to.
  • “People are starting to understand that the insider threat is a business problem and should be handled accordingly,” said Mr. Grijalva.
  • The linguistic software package they developed, called SCOUT, uses psycholinguistic analysis to seek flags that, among other things, indicate feelings of disgruntlement, like victimization, anger and blame.
  • “The language changes in subtle ways that you’re not aware of,” Mr. Stroz said.
  • There’s not enough information, in other words, to construct algorithms about trustworthiness from the ground up. And that would hold in either the private or the public sector.
  • Even if all that dystopian data did exist, it would still be tricky to draw individual — rather than simply aggregate — conclusions about which behavioral indicators potentially presaged ill actions.
  • “Depending too heavily on personal factors identified using software solutions is a mistake, as we are unable to determine how much they influence future likelihood of engaging in malicious behaviors,” Dr. Cunningham said.
  • “I have focused very heavily on identifying indicators that you can actually measure, versus those that require a lot of interpretation,” Dr. Cunningham said. “Especially those indicators that require interpretation by expert psychologists or expert so-and-sos. Because I find that it’s a little bit too dangerous, and I don’t know that it’s always ethical.”
criscimagnael

Can Forensic Science Be Trusted? - The Atlantic - 0 views

  • When asked, years later, why she had failed to photograph what she said she’d seen on the enhanced bedsheet, Yezzo replied, “This is one time that I didn’t manage to get it soon enough.” She added: “Operator error.”
  • The words were deployed as definitive by prosecutors—“the evidence is uncontroverted by the scientist, totally uncontroverted”
  • Michael Donnelly, now a justice on the Ohio Supreme Court, did not preside over this case, but he has had ample exposure to the use of forensic evidence. “As a trial judge,” he told me, “I sat there for 14 years. And when forensics experts testified, the jury hung on their every word.”
  • Forensic science, which drives the plots of movies and television shows, is accorded great respect by the public. And in the proper hands, it can provide persuasive insight. But in the wrong hands, it can trap innocent people in a vise of seeming inerrancy—and it has done so far too often. What’s more, although some forensic disciplines, such as DNA analysis, are reliable, others have been shown to have serious limitations.
  • Yezzo is not like Annie Dookhan, a chemist in a Massachusetts crime laboratory who boosted her productivity by falsifying reports and by “dry labbing”—that is, reporting results without actually conducting any tests.
  • Nor is Yezzo like Michael West, a forensic odontologist who claimed that he could identify bite marks on a victim and then match those marks to a specific person.
  • The deeper issue with forensic science lies not in malfeasance or corruption—or utter incompetence—but in the gray area where Yezzo can be found. Her alleged personal problems are unusual: Only because of them did the details of her long career come to light.
  • to the point of alignment; how rarely an analyst’s skills are called into question in court; and how seldom the performance of crime labs is subjected to any true oversight.
  • More than half of those exonerated by post-conviction DNA testing had been wrongly convicted based on flawed forensic evidence.
  • The quality of the work done in crime labs is almost never audited.
  • Even the best forensic scientists can fall prey to unintentional bias.
  • Study after study has demonstrated the power of cognitive bias.
  • Cognitive bias can of course affect anyone, in any circumstance—but it is particularly dangerous in a criminal-justice system where forensic scientists have wide latitude as well as some incentive to support the views of prosecutors and the police.
Javier E

Opinion | We Have Two Visions of the Future, and Both Are Wrong - The New York Times - 0 views

  • these fears can no longer be confined to a fanatical fringe of gun-toting survivalists. The relentless onslaught of earthshaking crises, unfolding against the backdrop of flash floods and forest fires, has steadily pushed apocalyptic sentiment into the mainstream. When even the head of the United Nations warns that rising sea levels could unleash “a mass exodus on a biblical scale,” it is hard to remain sanguine about the state of the world. One survey found that over half of young adults now believe that “humanity is doomed” and “the future is frightening.”
  • At the same time, recent years have also seen the resurgence of a very different kind of narrative. Exemplified by a slew of best-selling books and viral TED talks, this view tends to downplay the challenges we face and instead insists on the inexorable march of human progress. If doomsday thinkers worry endlessly that things are about to get a lot worse, the prophets of progress maintain that things have only been getting better — and are likely to continue to do so in the future.
  • If things are really getting better, there is clearly no need for transformative change to confront the most pressing problems of our time. So long as we stick to the script and keep our faith in the redeeming qualities of human ingenuity and technological innovation, all our problems will eventually resolve themselves.
  • It is easy to understand the appeal of such one-sided tales. As human beings, we seem to prefer to impose clear and linear narratives on a chaotic and unpredictable reality; ambiguity and contradiction are much harder to live with.
  • To truly grasp the complex nature of our current time, we need first of all to embrace its most terrifying aspect: its fundamental open-endedness. It is precisely this radical uncertainty — not knowing where we are and what lies ahead — that gives rise to such existential anxiety.
  • Anthropologists have a name for this disturbing type of experience: liminality
  • liminality originally referred to the sense of disorientation that arises during a rite of passage. In a traditional coming-of-age ritual, for instance, it marks the point at which the adolescent is no longer considered a child but is not yet recognized as an adult — betwixt and between
  • We are ourselves in the midst of a painful transition, a sort of interregnum, as the Italian political theorist Antonio Gramsci famously called it, between an old world that is dying and a new one that is struggling to be born. Such epochal shifts are inevitably fraught with danger
  • the great upheavals in world history can equally be seen “as genuine signs of vitality” that “clear the ground” of discredited ideas and decaying institutions. “The crisis,” he wrote, “is to be regarded as a new nexus of growth.”
  • Once we embrace this Janus-faced nature of our times, at once frightening yet generative, a very different vision of the future emerges.
  • we see phases of relative calm punctuated every so often by periods of great upheaval. These crises can be devastating, but they are also the drivers of history.
  • even the collapse of modern civilization — but it may also open up possibilities for transformative change
Javier E

Politics should be taught in primary schools, Alastair Campbell says | Alastair Campbell | The Guardian - 0 views

  • the co-host of podcast The Rest is Politics expressed dismay that most students in the UK do not take politics classes unless they choose to study it at A-level.
  • Political education needs to start in primary schools, and then become part of the “everyday debate” in children’s entire school experience, he said. “Maybe you don’t call it politics,” he said, suggesting that it could be called “arguing”, “policy” or “big issues”.
  • “Some of the most enjoyable stuff I do is going into schools and trying to teach young kids what politics is,” he added. “When they sit down and they start thinking about stuff, it’s just so fascinating and innovative.”
  • more state school students need to be taught “how to communicate, how to argue, how to fight their corner” from a young age.
Javier E

'The Power of One,' Facebook whistleblower Frances Haugen's memoir - The Washington Post - 0 views

  • When an internal group proposed the conditions under which Facebook should step in and take down speech from political actors, Zuckerberg discarded its work. He said he’d address the issue himself over a weekend. His “solution”? Facebook would not touch speech by any politician, under any circumstances — a fraught decision under the simplistic surface, as Haugen points out. After all, who gets to count as a politician? The municipal dogcatcher?
  • It was also Zuckerberg, she says, who refused to make a small change that would have made the content in people’s feeds less incendiary — possibly because doing so would have caused a key metric to decline.
  • When the Wall Street Journal’s Jeff Horwitz began to break the stories that Haugen helped him document, the most damning one concerned Facebook’s horrifyingly disingenuous response to a congressional inquiry asking if the company had any research showing that its products were dangerous to teens. Facebook said it wasn’t aware of any consensus indicating how much screen time was too much. What Facebook did have was a pile of research showing that kids were being harmed by its products. Allow a clever company a convenient deflection, and you get something awfully close to a lie.
  • after the military in Myanmar used Facebook to stoke the murder of the Rohingya people, Haugen began to worry that this was a playbook that could be infinitely repeated — and only because Facebook chose not to invest in safety measures, such as detecting hate speech in poorer, more vulnerable places. “The scale of the problems was so vast,” she writes. “I believed people were going to die (in certain countries, at least) and for no reason other than higher profit margins.”
  • After a trip to Cambodia, where neighbors killed neighbors in the 1970s because of a “story that labeled people who had lived next to each other for generations as existential threats,” she’d started to wonder about what caused people to turn on one another to such a horrifying degree. “How quickly could a story become the truth people perceived?”
  • What she points out is the false choice posited by most social media companies: free speech vs. censorship. She argues that lack of transparency is what contributed most to the problems at Facebook. No one on the outside can see inside the algorithms. Even many of those on the inside can’t. “You can’t take a single academic course, anywhere in the world, on the tradeoffs and choices that go into building a social media algorithm or, more importantly, the consequences of those choices,” she writes.
  • In that lack of accountability, social media is a very different ecosystem than the one that helped Ralph Nader take on the auto industry back in the 1960s. Then, there was a network of insurers and plaintiff’s lawyers who also wanted change — and the images of mangled bodies were a lot more visible than what happens inside the mind of a teenage girl. But what if the government forced companies to share their inner workings in the same way it mandates that food companies disclose the nutrition in what they make? What if the government forced social media companies to allow academics and other researchers access to the algorithms they use?
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprung up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog. It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014. The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I. could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said.
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
Javier E

No rides, but lots of rows: 'reactionary' French theme park plots expansion | France | The Guardian - 0 views

  • Nicolas de Villiers said the theme park – whose subject matter includes Clovis, king of the Franks, and a new €20m (£17m) show about the birth of modern cinema – was not about politics. He said: “What we want when an audience leaves our shows – which are works of art and were never history lessons – is to feel better and bigger, because the hero has brought some light into their hearts … Puy du Fou is more about legends than a history book.”
  • He said the park’s trademark high-drama historical extravaganzas worked because, at a time of global crisis, people had a hunger to understand their roots and traditions. “The artistic language we invented corresponds to the era we live in. People have a thirst for their roots, a thirst to understand what made them what they are today, which means their civilisation. They want to understand what went before them.” He called it a “profound desire to rediscover who we are”.
  • He added: “People who come here don’t have an ideology, they come here and say it’s beautiful, it’s good, I liked it.”
  • ...4 more annotations...
  • Guillaume Lancereau, Max Weber fellow at the European University Institute in Florence, was part of a group of historians who published the book Puy du Faux (Puy of Fakes), analysing the park’s take on history. They viewed the park as having a Catholic slant, questionable depictions of nobility and a presentation of rural peasants as unchanged through the ages.
  • Lancereau did not question the park’s entertainment value. But he said: “Professional historians have repeatedly criticised the park for taking liberties with historical events and characters and, more importantly, for distorting the past to serve a nationalistic, religious and conservative political agenda. This raises important questions about the contemporary entanglement between entertainment, collective memory and politically oriented historical production …
  • “At a time when increasing numbers of undergraduates are acquiring their historical knowledge from popular culture and historical reenactments, the Puy du Fou’s considerable expansion calls for further investigation of a phenomenon that appears to be influencing the making of historical memory in contemporary Europe.”
  • Outside the park’s musketeers show, André, 76, had driven 650km (400 miles) from Burgundy with his wife and grandson. “We came because we’re interested in history,” he said. “The shows are technically brilliant and really make you think. You can tell it’s a bit on the right – the focus on war, warriors and anti-revolution – but I don’t think that matters.”
Javier E

Opinion | For the F.D.A., Cold Medicine That Doesn't Work Is Just the Tip of the Iceberg - The New York Times - 0 views

  • Congress needs to develop a way of better funding the F.D.A. review process. Perhaps a small excise tax could be levied on over-the-counter sales or fees assessed to makers of over-the-counter drugs to fund the F.D.A. review process or to fund studies into drugs that went on the market before 1962. Leaders need to suggest more options. There should also be a way to prioritize which drugs to look at first. The agency should review old drugs for which there are already many complaints about lack of effectiveness in the manner it did recently for phenylephrine.
  • Right now, Americans spend billions on drugs that contain ingredients that will not help them. That’s not just a waste of money — it could mean they are delaying appropriate treatment, which can lead to more severe illnesses. This is risky not only for health but also for trust. The American public deserves medicines that do what they are advertised to do.
Javier E

Opinion | Have Some Sympathy - The New York Times - 0 views

  • Schools and parenting guides instruct children in how to cultivate empathy, as do workplace culture and wellness programs. You could fill entire bookshelves with guides to finding, embracing and sharing empathy. Few books or lesson plans extol sympathy’s virtues.
  • “Sympathy focuses on offering support from a distance,” a therapist explains on LinkedIn, whereas empathy “goes beyond sympathy by actively immersing oneself in another person’s emotions and attempting to comprehend their point of view.”
  • In use since the 16th century, when the Greek “syn-” (“with”) combined with pathos (experience, misfortune, emotion, condition) to mean “having common feelings,” sympathy preceded empathy by a good four centuries
  • ...8 more annotations...
  • Empathy (the “em” means “into”) barged in from the German in the 20th century and gained popularity through its usage in fields like philosophy, aesthetics and psychology. According to my benighted 1989 edition of Webster’s Unabridged, empathy was the more self-centered emotion, “the intellectual identification with or vicarious experiencing of the feelings, thoughts or attitudes of another.”
  • in more updated lexicons, it’s as if the two words had reversed. Sympathy now implies a hierarchy whereas empathy is the more egalitarian sentiment.
  • Sympathy, the session’s leader explained to school staff members, was seeing someone in a hole and saying, “Too bad you’re in a hole,” whereas empathy meant getting in the hole, too.
  • “Empathy is a choice and it’s a vulnerable choice because in order to connect with you, I have to connect with something in myself that knows that feeling,”
  • Still, it’s hard to square the new emphasis on empathy — you must feel what others feel — with another element of the current discourse. According to what’s known as “standpoint theory,” your view necessarily depends on your own experience: You can’t possibly know what others feel.
  • In short, no matter how much an empath you may be, unless you have actually been in someone’s place, with all its experiences and limitations, you cannot understand where that person is coming from. The object of your empathy may find it presumptuous of you to think that you “get it.”
  • Bloom asks us to imagine what empathy demands should a friend’s child drown. “A highly empathetic response would be to feel what your friend feels, to experience, as much as you can, the terrible sorrow and pain,” he writes. “In contrast, compassion involves concern and love for your friend, and the desire and motivation to help, but it need not involve mirroring your friend’s anguish.”
  • Bloom argues for a more rational, modulated, compassionate response. Something that sounds a little more like our old friend sympathy.
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • ...31 more annotations...
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Javier E

Book Review: 'The Maniac,' by Benjamín Labatut - The New York Times - 0 views

  • it quickly becomes clear that what “The Maniac” is really trying to get a lock on is our current age of digital-informational mastery and subjection
  • When von Neumann proclaims that, thanks to his computational advances, “all processes that are stable we shall predict” and “all processes that are unstable we shall control,” we’re being prompted to reflect on today’s ubiquitous predictive-slash-determinative algorithms.
  • When he publishes a paper about the feasibility of a self-reproducing machine — “you need to have a mechanism, not only of copying a being, but of copying the instructions that specify that being” — few contemporary readers will fail to home straight in on the fraught subject of A.I.
  • ...9 more annotations...
  • Haunting von Neumann’s thought experiment is the specter of a construct that, in its very internal perfection, lacks the element that would account for itself as a construct. “If someone succeeded in creating a formal system of axioms that was free of all internal paradoxes and contradictions,” another of von Neumann’s interlocutors, the logician Kurt Gödel, explains, “it would always be incomplete, because it would contain truths and statements that — while being undeniably true — could never be proven within the laws of that system.”
  • its deeper (and, for me, more compelling) theme: the relation between reason and madness.
  • Almost all the scientists populating the book are mad, their desire “to understand, to grasp the core of things” invariably wedded to “an uncontrollable mania”; even their scrupulously observed reason, their mode of logic elevated to religion, is framed as a form of madness. Von Neumann’s response to the detonation of the Trinity bomb, the world’s first nuclear explosion, is “so utterly rational that it bordered on the psychopathic,” his second wife, Klara Dan, muses
  • fanaticism, in the 1930s, “was the norm … even among us mathematicians.”
  • Pondering Gödel’s own descent into mania, the physicist Eugene Wigner claims that “paranoia is logic run amok.” If you’ve convinced yourself that there’s a reason for everything, “it’s a small step to begin to see hidden machinations and agents operating to manipulate the most common, everyday occurrences.”
  • the game theory-derived system of mutually assured destruction he devises in its wake is “perfectly rational insanity,” according to its co-founder Oskar Morgenstern.
  • Labatut has Morgenstern end his MAD deliberations by pointing out that humans are not perfect poker players. They are irrational, a fact that, while instigating “the ungovernable chaos that we see all around us,” is also the “mercy” that saves us, “a strange angel that protects us from the mad dreams of reason.”
  • But does von Neumann really deserve the title “Father of Computers,” granted him here by his first wife, Mariette Kovesi? Doesn’t Ada Lovelace have a prior claim as their mother? Feynman’s description of the Trinity bomb as “a little Frankenstein monster” should remind us that it was Mary Shelley, not von Neumann and his coterie, who first grasped the monumental stakes of modeling the total code of life, its own instructions for self-replication, and that it was Rosalind Franklin — working alongside, not under, Maurice Wilkins — who first carried out this modeling.
  • he at least grants his women broader, more incisive wisdom. Ehrenfest’s lover Nelly Posthumus Meyjes delivers a persuasive lecture on the Pythagorean myth of the irrational, suggesting that while scientists would never accept the fact that “nature cannot be cognized as a whole,” artists, by contrast, “had already fully embraced it.”