TOK Friends: Group items tagged "image"


tongoscar

Ridgecrest earthquakes show how small faults can trigger big quakes - Los Angeles Times

  • When an earthquake strikes, the instinct of many Californians is to ask: Which fault ruptured — the Newport-Inglewood, the Hayward, the mighty San Andreas?
  • But scientists are increasingly saying it’s not that simple.
  • New research shows that the Ridgecrest earthquakes that began in July ruptured at least two dozen faults.
  • The findings are important in helping understand how earthquakes can grow in the seconds after a fault ruptures, when two blocks of earth move away from each other.
  • The results provide even more evidence to support the idea that California faults once thought to be limited by their individual length can actually link together in a much more massive earthquake.
  • “The point is that the Landers earthquake and this earthquake are daisy-chaining up faults that previously were thought to rupture only by themselves, and that’s an important observation,”
  • The study raises the possibility that past earthquakes actually may have been bigger than previously thought.
  • In New Zealand, scientists were stunned at the bizarre map of the faults ruptured in the magnitude 7.8 Kaikoura earthquake of 2016, resembling an upside-down trident aimed at the silhouette of an eagle.
  • On a practical level, the research underscores the potential limitations of state earthquake zones designated to prevent new construction directly on top of faults,
  • Further analysis needs to be done to determine whether the 20 cross faults identified in the Ridgecrest study using computer analysis of shaking records actually broke the ground at the surface, according to Tim Dawson, a senior engineering geologist with the California Geological Survey.
  • A significant achievement of this study, Dolan said, was being able to image what faults look like deep underground, at a depth where earthquakes begin.
  • What this study proves is that the structural complexity continues deep underground where earthquakes begin, Dolan said. That’s important, he said, because it may help scientists determine where future earthquakes are likely to stop, which tends to happen where faults become structurally complicated.
manhefnawi

Why Do We Forget What We're Doing the Minute We Enter a Room? | Mental Floss

  • Scientists used to believe that memory was like a filing cabinet. You have an experience, and it gets its own little file in your brain. Then, later, you can go back and open the file, which is unchanged and where it should be. It’s a nice, tidy image—but it’s wrong. Your brain is much more complicated and sophisticated than that. It’s more like a super-high-powered computer, with dozens of tasks and applications running at once.
  • A 2011 study found that the Doorway Effect is the result of several of these brain programs running simultaneously. Researchers taught 55 college students to play a computer game in which they moved through a virtual building, collecting and carrying objects from room to room. Every so often as the participants traversed the space, a picture of an object popped up on the screen. If the object shown was the one they were carrying or the one they had just put down, the participants clicked “Yes.” Sometimes these pictures appeared after the participant had walked into a room; other times they appeared while the participant was still in the middle of a room. The researchers then built a real-world version of the environment and ran the experiment again, using a box to hide the objects people were carrying so they couldn’t double-check.
Javier E

The Equality Conundrum | The New Yorker

  • The philosopher Ronald Dworkin considered this type of parental conundrum in an essay called “What Is Equality?,” from 1981. The parents in such a family, he wrote, confront a trade-off between two worthy egalitarian goals. One goal, “equality of resources,” might be achieved by dividing the inheritance evenly, but it has the downside of failing to recognize important differences among the parties involved.
  • Another goal, “equality of welfare,” tries to take account of those differences by means of twisty calculations.
  • Take the first path, and you willfully ignore meaningful facts about your children. Take the second, and you risk dividing the inheritance both unevenly and incorrectly.
  • In 2014, the Pew Research Center asked Americans to rank the “greatest dangers in the world.” A plurality put inequality first, ahead of “religious and ethnic hatred,” nuclear weapons, and environmental degradation. And yet people don’t agree about what, exactly, “equality” means.
  • One side argues that the city should guarantee procedural equality: it should insure that all students and families are equally informed about and encouraged to study for the entrance exam. The other side argues for a more direct, representation-based form of equality: it would jettison the exam, adopting a new admissions system designed to produce student bodies reflective of the city’s demography
  • In the past year, for example, New York City residents have found themselves in a debate over the city’s élite public high schools
  • The complexities of egalitarianism are especially frustrating because inequalities are so easy to grasp. C.E.O.s, on average, make almost three hundred times what their employees make; billionaire donors shape our politics; automation favors owners over workers; urban economies grow while rural areas stagnate; the best health care goes to the richest.
  • It’s not just about money. Tocqueville, writing in 1835, noted that our “ordinary practices of life” were egalitarian, too: we behaved as if there weren’t many differences among us. Today, there are “premiere” lines for popcorn at the movies and five tiers of Uber;
  • Inequality is everywhere, and unignorable. We’ve diagnosed the disease. Why can’t we agree on a cure?
  • In a book based on those lectures, “One Another’s Equals: The Basis of Human Equality,” Waldron points out that people are also marked by differences of skill, experience, creativity, and virtue. Given such consequential differences, he asks, in what sense are people “equal”?
  • According to the Declaration of Independence, it is “self-evident” that all men are created equal. But, from a certain perspective, it’s our inequality that’s self-evident.
  • More than twenty per cent of Americans, according to a 2015 poll, agree: they believe that the statement “All men are created equal” is false.
  • In Waldron’s view, though, it’s not a binary choice; it’s possible to see people as equal and unequal simultaneously. A society can sort its members into various categories—lawful and criminal, brilliant and not—while also allowing some principle of basic equality to circumscribe its judgments and, in some contexts, override them
  • Egalitarians like Dworkin and Waldron call this principle “deep equality.” It’s because of deep equality that even those people who acquire additional, justified worth through their actions—heroes, senators, pop stars—can still be considered fundamentally no better than anyone else.
  • In the course of his search, he explores centuries of intellectual history. Many thinkers, from Cicero to Locke, have argued that our ability to reason is what makes us equals.
  • Other thinkers, including Immanuel Kant, have cited our moral sense.
  • Some philosophers, such as Jeremy Bentham, have suggested that it’s our capacity to suffer that equalizes us
  • Waldron finds none of these arguments totally persuasive.
  • In various religious traditions, he observes, equality flows not just from broad assurances that we are all made in God’s image but from some sense that everyone is the protagonist in a saga of error, realization, and redemption: we’re equal because God cares about how things turn out for each of us.
  • Waldron himself is taken by Hannah Arendt’s related concept of “natality,” the notion that what each of us share is having been born as a “newcomer,” entering into history with “the capacity of beginning something anew, that is, of acting.”
  • equality may be not a self-evident fact about human beings but a human-made social construction that we must choose to put into practice.
  • In the end, Waldron concludes that there is no “small polished unitary soul-like substance” that makes us equal; there’s only a patchwork of arguments for our deep equality, collectively compelling but individually limited.
  • Equality is a composite idea—a nexus of complementary and competing intuitions.
  • The blurry nature of equality makes it hard to solve egalitarian dilemmas from first principles. In each situation, we must feel our way forward, reconciling our conflicting intuitions about what “equal” means.
  • The communities that have the easiest time doing that tend to have some clearly defined, shared purpose. Sprinters competing in a hundred-metre dash have varied endowments and train in different conditions; from a certain perspective, those differences make every race unfair.
  • By embracing an agreed-upon theory of equality before the race, the sprinters can find collective meaning in the ranked inequalities that emerge when it ends
  • Perhaps because necessity is so demanding, our egalitarian commitments tend to rest on a different principle: luck.
  • “Some people are blessed with good luck, some are cursed with bad luck, and it is the responsibility of society—all of us regarded collectively—to alter the distribution of goods and evils that arises from the jumble of lotteries that constitutes human life as we know it.” Anderson, in an influential coinage, calls this outlook “luck egalitarianism.”
  • This sort of artisanal egalitarianism is comparatively easy to arrange. Mass-producing it is what’s hard. A whole society can’t get together in a room to hash things out. Instead, consensus must coalesce slowly around broad egalitarian principles.
  • No principle is perfect; each contains hidden dangers that emerge with time. Many people, in contemplating the division of goods, invoke the principle of necessity: the idea that our first priority should be the equal fulfillment of fundamental needs. The hidden danger here becomes apparent once we go past a certain point of subsistence.
  • a core problem that bedevils egalitarianism—what philosophers call “the problem of expensive tastes.”
  • The problem—what feels like a necessity to one person seems like a luxury to another—is familiar to anyone who’s argued with a foodie spouse or roommate about the grocery bill.
  • The problem is so insistent that a whole body of political philosophy—“prioritarianism”—is devoted to the challenge of sorting people with needs from people with wants
  • the line shifts as the years pass. Medical procedures that seem optional today become necessities tomorrow; educational attainments that were once unusual, such as college degrees, become increasingly indispensable with time
  • Some thinkers try to tame the problem of expensive tastes by asking what a “normal” or “typical” person might find necessary. But it’s easy to define “typical” too narrowly, letting unfair assumptions influence our judgment
  • an odd feature of our social contract: if you’re fired from your job, unemployment benefits help keep you afloat, while if you stop working to have a child you must deal with the loss of income yourself. This contradiction, she writes, reveals an assumption that “the desire to procreate is just another expensive taste”; it reflects, she argues, the sexist presumption that “atomistic egoism and self-sufficiency” are the human norm. The word “necessity” suggests the idea of a bare minimum. In fact, it sets a high bar. Clearing it may require rethinking how society functions.
Javier E

Covid-19 expert Karl Friston: 'Germany may have more immunological "dark matter"' | Wor...

  • Our approach, which borrows from physics and in particular the work of Richard Feynman, goes under the bonnet. It attempts to capture the mathematical structure of the phenomenon – in this case, the pandemic – and to understand the causes of what is observed. Since we don’t know all the causes, we have to infer them. But that inference, and implicit uncertainty, is built into the models
  • That’s why we call them generative models, because they contain everything you need to know to generate the data. As more data comes in, you adjust your beliefs about the causes, until your model simulates the data as accurately and as simply as possible.
  • A common type of epidemiological model used today is the SEIR model, which considers that people must be in one of four states – susceptible (S), exposed (E), infected (I) or recovered (R). Unfortunately, reality doesn’t break them down so neatly. For example, what does it mean to be recovered?
  • SEIR models start to fall apart when you think about the underlying causes of the data. You need models that can allow for all possible states, and assess which ones matter for shaping the pandemic’s trajectory over time.
  • These techniques have enjoyed enormous success ever since they moved out of physics. They’ve been running your iPhone and nuclear power stations for a long time. In my field, neurobiology, we call the approach dynamic causal modelling (DCM). We can’t see brain states directly, but we can infer them given brain imaging data
  • Epidemiologists currently tackle the inference problem by number-crunching on a huge scale, making use of high-performance computers. Imagine you want to simulate an outbreak in Scotland. Using conventional approaches, this would take you a day or longer with today’s computing resources. And that’s just to simulate one model or hypothesis – one set of parameters and one set of starting conditions.
  • Using DCM, you can do the same thing in a minute. That allows you to score different hypotheses quickly and easily, and so to home in sooner on the best one.
  • This is like dark matter in the universe: we can’t see it, but we know it must be there to account for what we can see. Knowing it exists is useful for our preparations for any second wave, because it suggests that targeted testing of those at high risk of exposure to Covid-19 might be a better approach than non-selective testing of the whole population.
  • Our response as individuals – and as a society – becomes part of the epidemiological process, part of one big self-organising, self-monitoring system. That means it is possible to predict not only numbers of cases and deaths in the future, but also societal and institutional responses – and to attach precise dates to those predictions.
  • How well have your predictions been borne out in this first wave of infections? For London, we predicted that hospital admissions would peak on 5 April, deaths would peak five days later, and critical care unit occupancy would not exceed capacity – meaning the Nightingale hospitals would not be required. We also predicted that improvements would be seen in the capital by 8 May that might allow social distancing measures to be relaxed – which they were in the prime minister’s announcement on 10 May. To date our predictions have been accurate to within a day or two, so there is a predictive validity to our models that the conventional ones lack.
  • What do your models say about the risk of a second wave? The models support the idea that what happens in the next few weeks is not going to have a great impact in terms of triggering a rebound – because the population is protected to some extent by immunity acquired during the first wave. The real worry is that a second wave could erupt some months down the line when that immunity wears off.
  • the important message is that we have a window of opportunity now, to get test-and-trace protocols in place ahead of that putative second wave. If these are implemented coherently, we could potentially defer that wave beyond a time horizon where treatments or a vaccine become available, in a way that we weren’t able to before the first one.
  • We’ve been comparing the UK and Germany to try to explain the comparatively low fatality rates in Germany. The answers are sometimes counterintuitive. For example, it looks as if the low German fatality rate is not due to their superior testing capacity, but rather to the fact that the average German is less likely to get infected and die than the average Brit. Why? There are various possible explanations, but one that looks increasingly likely is that Germany has more immunological “dark matter” – people who are impervious to infection, perhaps because they are geographically isolated or have some kind of natural resistance
  • Any other advantages? Yes. With conventional SEIR models, interventions and surveillance are something you add to the model – tweaks or perturbations – so that you can see their effect on morbidity and mortality. But with a generative model these things are built into the model itself, along with everything else that matters.
  • Are generative models the future of disease modelling? That’s a question for the epidemiologists – they’re the experts. But I would be very surprised if at least some part of the epidemiological community didn’t become more committed to this approach in future, given the impact that Feynman’s ideas have had in so many other disciplines.
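The four-compartment SEIR model the interview criticizes can be sketched in a few lines of Python. This is a minimal forward-Euler simulation; the population size and the beta/sigma/gamma rate parameters are illustrative assumptions, not values from the interview.

```python
# Minimal SEIR sketch: susceptible -> exposed -> infected -> recovered.
# All parameter values here are illustrative assumptions, not from the article.

def seir(days, N=1_000_000, beta=0.3, sigma=0.2, gamma=0.1, I0=10, dt=1.0):
    """Forward-Euler integration of the four SEIR compartments."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    trajectory = []
    for _ in range(int(days / dt)):
        s_to_e = beta * S * I / N   # new exposures
        e_to_i = sigma * E          # exposed becoming infectious
        i_to_r = gamma * I          # infectious recovering
        S -= s_to_e * dt
        E += (s_to_e - e_to_i) * dt
        I += (e_to_i - i_to_r) * dt
        R += i_to_r * dt
        trajectory.append((S, E, I, R))
    return trajectory

traj = seir(300)
peak_infected = max(I for _, _, I, _ in traj)
```

Friston's point is that forcing everyone into exactly these four fixed states is the limitation: a generative model instead treats the latent states, and the uncertainty about them, as things to be inferred from the data rather than assumed in advance.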
Javier E

Climate Change Data Deluge Has Scientists Scrambling for Solutions - WSJ

  • For decades, scientists working to predict changes in the climate relied mostly on calculations involving simple laws of physics and chemistry but little data from the real world. But with temperatures world-wide continuing to rise—and with data-collection techniques and technologies continuing to advance—scientists now rely on meticulous measurements of temperatures, ocean currents, soil moisture, air quality, cloud cover and hundreds of other phenomena on Earth and in its atmosphere.
  • “Now we can truly do climate studies because now we have observations to precisely say how weather trends have changed and are changing.”
  • “When you are trying to develop long-term environmental records, including climate records, consistent measurement is incredibly valuable,” says Kevin Murphy, who as NASA’s chief science data officer oversees an archive of Earth observation data used by 3.9 million people last year. “It’s irreplaceable data.”
  • ...13 more annotations...
  • Over the next decade, officials managing the main U.S. repositories of climate-related information expect their archives’ total volume to grow from about 83 petabytes today to more than 650 petabytes.
  • One petabyte of digital memory can hold thousands of feature-length movies; 650 petabytes is enough to hold the contents of the Library of Congress 30 times over.
  • All that information, though, is more than conventional data storage can handle and more than any human mind can readily assimilate,
  • To accommodate it all, the federal workers tasked with managing the data are moving it into the cloud, which offers almost unlimited memory storage while eliminating the need for scientists to maintain their own on-site archive
  • archive managers are devising new analytical techniques and adapting a standard format for the data no matter who collected it and who wants to study it.
  • In essence, they are reinventing climate science from the ground up.
  • “We are in the midst of a technology evolution.”
  • As of last September, government agencies and private companies had about 900 Earth-orbiting satellites gathering data about our planet, according to the Union of Concerned Scientists. That is almost three times as many as were aloft in 2008. More are being readied for launch.
  • NASA’s $1 billion Surface Water and Ocean Topography mission will measure Earth’s lakes, rivers and oceans in the first detailed global survey of the planet’s surface water.
  • That is a drop in the data bucket compared with the space agency’s $1.5 billion Nisar radar imaging satellite, which is scheduled for launch in January 2023. Its sensors will detect movements of the planet’s land, ice sheets and sea ice as small as 0.4 inches, transmitting 80 terabytes of data every day.
  • With current data handling systems and typical internet connections, it would take a climate researcher about a year to download just four days’ worth of Nisar data.
  • NASA and NOAA are working with Amazon Web Services, Google Cloud and Microsoft Corp. to move their climate databases into the cloud.
  • Earlier this year, the United Nations Intergovernmental Panel on Climate Change for the first time used data on past climate behavior to gauge the reliability of climate models for policy makers.
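The "about a year" download figure is easy to sanity-check. A back-of-envelope sketch: the 80 TB/day output rate is from the article, while the ~100 Mbps connection speed is my assumption for a "typical" link.

```python
# Back-of-envelope: time to download four days of Nisar output
# over an assumed 100 Mbps connection (the link speed is an assumption;
# the 80 TB/day output rate is from the article).
TB = 1e12                             # bytes per terabyte (decimal)
four_days_bytes = 4 * 80 * TB         # four days at 80 TB/day

link_bits_per_s = 100e6               # assumed 100 megabits per second
link_bytes_per_s = link_bits_per_s / 8

seconds = four_days_bytes / link_bytes_per_s
days_to_download = seconds / 86_400   # roughly 300 days, i.e. about a year
```

At that rate the satellite produces data far faster than a single researcher could ever pull it down, which is the motivation for moving the archives into the cloud and analyzing the data where it lives.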
peterconnelly

Debunking 3 Viral Rumors About the Texas Shooting - The New York Times

  • Here are three of the most prominent rumors that have spread on online platforms such as Twitter, Gab, 4chan and Reddit.
  • Among their unfounded claims were that the shooting had been orchestrated to draw local law enforcement away from the border, allowing criminals and drugs to cross into the United States, and that gun-control advocates had organized the tragedy to stoke public outrage.
  • The conspiracy theorist and broadcaster Alex Jones of Infowars has lied for years that the 2012 massacre at Sandy Hook Elementary School in Newtown, Conn., was staged by the federal government, with people pretending to be survivors and victims’ parents. Last year, Mr. Jones lost four defamation lawsuits filed by victims’ families, many of whom have been harassed by his believers.
  • Hours after the attack, a post on the fringe online message board 4chan circulated claiming that the gunman was transgender.
  • where people falsely claimed that the shooting was a result of hormone therapy undertaken by the gunman.
  • “There is an overwhelming number of individuals who are posting images of this person, who was the shooter, and information about the nature of them being transgender,”
  • On Tuesday, a transgender artist said on Reddit that people online “just took my photos and used it to spread misinformation.”
  • False claims that the gunman was born outside the United States began to circulate within hours of the shooting. Spread largely on white nationalist Telegram channels and Gab accounts, the claims alleged that he was an undocumented immigrant in the United States, even after authorities including Roland Gutierrez, a Texas state senator, confirmed that the gunman was born in North Dakota.
  • “Did he cross the border illegally?” Code of Vets, a veterans organization, posted on Twitter. “Our nation has a serious national security crisis evolving.”
peterconnelly

CNN boss' message for staffers: Cool it with 'Breaking News' banner - 0 views

  • New CNN chief Chris Licht has a message for his employees: not everything needs to be labeled “Breaking News.”
  • Licht came to the conclusion there should be parameters around when to use the red chyron
  • “This is a great starting point to try to make ‘Breaking News’ mean something BIG is happening,” Licht wrote in the memo, which CNBC has obtained. “We are truth-tellers, focused on informing, not alarming our viewers. You’ve already seen far less of the ‘Breaking News’ banner across our programming. The tenor of our voice holistically has to reflect that.”
  • “I would like to see CNN evolve back to the kind of journalism that it started with,” Malone told CNBC in November.
  • Zaslav said in April that CNN’s measured take on news is essential for “a civilized society” and crucial for it to avoid the image of being an “advocacy” network.
criscimagnael

DOJ investigates Instagram's impact on young adults

  • “I probably check Instagram about 25 times a day,” said Marika Marsh, a sophomore. She said while she’s scrolling, she feels a bit of anxiety. “Especially with girls today, because everyone looks so perfect all the time and it makes you feel insecure.”
  • "Social media platforms identify young girls who click on information suggesting that they feel bad about how they look and they don't flood them with resources, helping them feel better."
  • "In reality, their days might be as boring as ours. I think it’s more of an image they're trying to portray for themselves,"
  • "A lot of the times I catch myself editing this to brighten my eyes, so it will look nicer; it's all about angles and perspective."
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Many of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of evolution rather than revolution for Google and the other large tech companies.
Javier E

You Have Permission to Be a Smartphone Skeptic - The Bulwark

  • the brief return of one of my favorite discursive topics—are the kids all right?—in one of my least-favorite variations: why shouldn’t each of them have a smartphone and tablet?
  • One camp says yes, the kids are fine
  • complaints about screen time merely conceal a desire to punish hard-working parents for marginally benefiting from climbing luxury standards, provide examples of the moral panic occasioned by all new technologies, or mistakenly blame screens for ill effects caused by the general political situation.
  • No, says the other camp, led by Jonathan Haidt; the kids are not all right, their devices are partly to blame, and here are the studies showing why.
  • we should not wait for the replication crisis in the social sciences to resolve itself before we consider the question of whether the naysayers are on to something. And normal powers of observation and imagination should be sufficient to make us at least wary of smartphones.
  • These powerful instruments represent a technological advance on par with that of the power loom or the automobile
  • The achievement can be difficult to properly appreciate because instead of exerting power over physical processes and raw materials, they operate on social processes and the human psyche: They are designed to maximize attention, to make it as difficult as possible to look away.
  • they have transformed the qualitative experience of existing in the world. They give a person’s sociality the appearance and feeling of a theoretically endless open network, while in reality, algorithms quietly sort users into ideological, aesthetic, memetic cattle chutes of content.
  • Importantly, the process by which smartphones change us requires no agency or judgment on the part of a teen user, and yet that process is designed to provide what feels like a perfectly natural, inevitable, and complete experience of the world.
  • Smartphones offer a tactile portal to a novel digital environment, and this environment is not the kind of space you enter and leave
  • One reason commonly offered for maintaining our socio-technological status quo is that nothing really has changed with the advent of the internet, of Instagram, of Tiktok and Youtube and 4Chan
  • It is instead a complete shadow world of endless images; disembodied, manipulable personas; and the ever-present gaze of others. It lives in your pocket and in your mind.
  • The price you pay for its availability—and the engine of its functioning—is that you are always available to it, as well. Unless you have a strength of will that eludes most adults, its emissaries can find you at any hour and in any place to issue your summons to the grim pleasure palace.
  • the self-restraint and self-discipline required to use a smartphone well—that is, to treat it purely as an occasional tool rather than as a totalizing way of life—are unreasonable things to demand of teenagers
  • these are unreasonable things to demand of me, a fully adult woman
  • To enjoy the conveniences that a smartphone offers, I must struggle against the lure of the permanent scroll, the notification, the urge to fix my eyes on the circle of light and keep them fixed. I must resist the default pseudo-activity the smartphone always calls its user back to, if I want to have any hope of filling the moments of my day with the real activity I believe is actually valuable.
  • for a child or teen still learning the rudiments of self-control, still learning what is valuable and fulfilling, still learning how to prioritize what is good over the impulse of the moment, it is an absurd bar to be asked to clear
  • The expectation that children and adolescents will navigate new technologies with fully formed and muscular capacities for reason and responsibility often seems to go along with a larger abdication of responsibility on the part of the adults involved.
  • adults have frequently given in to a Faustian temptation: offering up their children’s generation to be used as guinea pigs in a mass longitudinal study in exchange for a bit more room to breathe in their own undeniably difficult roles as educators, caretakers, and parents.
  • It is not a particular activity that you start and stop and resume, and it is not a social scene that you might abandon when it suits you.
  • And this we must do without waiting for social science to hand us a comprehensive mandate it is fundamentally unable to provide; without cowering in panic over moral panics
  • The pre-internet advertising world was vicious, to be sure, but when the “pre-” came off, its vices were moved into a compound interest account. In the world of online advertising, at any moment, in any place, a user engaged in an infinite scroll might be presented with native content about how one Instagram model learned to accept her chunky (size 4) thighs, while in the next clip, another model relates how a local dermatologist saved her from becoming an unlovable crone at the age of 25
  • developing pathological interests and capacities used to take a lot more work than it does now
  • You had to seek it out, as you once had to seek out pornography and look someone in the eye while paying for it. You were not funneled into it by an omnipresent stream of algorithmically curated content—the ambience of digital life, so easily mistaken by the person experiencing it as fundamentally similar to the non-purposive ambience of the natural world.
  • And when interpersonal relations between teens become sour, nasty, or abusive, as they often do and always have, the unbalancing effects of transposing social life to the internet become quite clear
  • For both young men and young women, the pornographic scenario—dominance and degradation, exposure and monetization—creates an experiential framework for desires that they are barely experienced enough to understand.
  • This is not a world I want to live in. I think it hurts everyone; but I especially think it hurts those young enough to receive it as a natural state of affairs rather than as a profound innovation.
  • so I am baffled by the most routine objection to any blaming of smartphones for our society-wide implosion of teenagers’ mental health,
  • In short, and inevitably, today’s teenagers are suffering from capitalism—specifically “late capitalism.”
  • what shocks me about this rhetorical approach is the rush to play defense for Apple and its peers, the impulse to wield the abstract concept of capitalism as a shield for actually existing, extremely powerful, demonstrably ruthless capitalist actors.
  • This motley alliance of left-coded theory about the evils of business and right-coded praxis in defense of a particular evil business can be explained, I think, by a deeper desire than overthrowing capitalism. It is the desire not to be a prude or hysteric or bumpkin.
  • No one wants to come down on the side of tamping down pleasures and suppressing teen activity.
  • No one wants to be the shrill or leaden antagonist of a thousand beloved movies, inciting moral panics, scheming about how to stop the youths from dancing on Sunday.
  • But commercial pioneers are only just beginning to explore new frontiers in the profit-driven, smartphone-enabled weaponization of our own pleasures against us
  • To limit your moral imagination to the archetypes of the fun-loving rebel versus the stodgy enforcers in response to this emerging reality is to choose to navigate it with blinders on, to be a useful idiot for the robber barons of online life rather than a challenger to the corrupt order they maintain.
  • The very basic question that needs to be asked with every product rollout and implementation is: What technologies enable a good human life?
  • this question is not, ultimately, the province of social scientists, notwithstanding how useful their work may be on the narrower questions involved. It is the free privilege, it is the heavy burden, for all of us, to think—to deliberate and make judgments about human good, about what kind of world we want to live in, and to take action according to that thought.
  • I am not sure how to build a world in which children and adolescents, at least, do not feel they need to live their whole lives online.
  • whatever particular solutions emerge from our negotiations with each other and our reckonings with the force of cultural momentum, they will remain unavailable until we give ourselves permission to set the terms of our common life.
  • But the environments in which humans find themselves vary significantly, and in ways that have equally significant downstream effects on the particular expression of human nature in that context.
  • most of all, without affording Apple, Facebook, Google, and their ilk the defensive allegiance we should reserve for each other.
Javier E

'The Power of One,' Facebook whistleblower Frances Haugen's memoir - The Washington Post - 0 views

  • When an internal group proposed the conditions under which Facebook should step in and take down speech from political actors, Zuckerberg discarded its work. He said he’d address the issue himself over a weekend. His “solution”? Facebook would not touch speech by any politician, under any circumstances — a fraught decision under the simplistic surface, as Haugen points out. After all, who gets to count as a politician? The municipal dogcatcher?
  • It was also Zuckerberg, she says, who refused to make a small change that would have made the content in people’s feeds less incendiary — possibly because doing so would have caused a key metric to decline.
  • When the Wall Street Journal’s Jeff Horwitz began to break the stories that Haugen helped him document, the most damning one concerned Facebook’s horrifyingly disingenuous response to a congressional inquiry asking if the company had any research showing that its products were dangerous to teens. Facebook said it wasn’t aware of any consensus indicating how much screen time was too much. What Facebook did have was a pile of research showing that kids were being harmed by its products. Allow a clever company a convenient deflection, and you get something awfully close to a lie.
  • after the military in Myanmar used Facebook to stoke the murder of the Rohingya people, Haugen began to worry that this was a playbook that could be infinitely repeated — and only because Facebook chose not to invest in safety measures, such as detecting hate speech in poorer, more vulnerable places. “The scale of the problems was so vast,” she writes. “I believed people were going to die (in certain countries, at least) and for no reason other than higher profit margins.”
  • After a trip to Cambodia, where neighbors killed neighbors in the 1970s because of a “story that labeled people who had lived next to each other for generations as existential threats,” she’d started to wonder about what caused people to turn on one another to such a horrifying degree. “How quickly could a story become the truth people perceived?”
  • She points out the false choice posited by most social media companies: free speech vs. censorship. She argues that lack of transparency is what contributed most to the problems at Facebook. No one on the outside can see inside the algorithms. Even many of those on the inside can’t. “You can’t take a single academic course, anywhere in the world, on the tradeoffs and choices that go into building a social media algorithm or, more importantly, the consequences of those choices,” she writes.
  • In that lack of accountability, social media is a very different ecosystem than the one that helped Ralph Nader take on the auto industry back in the 1960s. Then, there was a network of insurers and plaintiff’s lawyers who also wanted change — and the images of mangled bodies were a lot more visible than what happens inside the mind of a teenage girl. But what if the government forced companies to share their inner workings in the same way it mandates that food companies disclose the nutrition in what they make? What if the government forced social media companies to allow academics and other researchers access to the algorithms they use?
Javier E

Opinion | Elle Mills: Why I Quit YouTube - The New York Times - 0 views

  • The peak of my YouTube career didn’t always match my childhood fantasy of what this sort of fame might look like. Instead, I was constantly terrified of losing my audience and the validation that came with it. My self-worth had become so intertwined with my career that maintaining it genuinely felt life-or-death. I was stuck in a never-ending cycle of constantly trying to top myself to remain relevant.
  • YouTube soon became a game of, “What’s the craziest thing you’d do for attention?”
  • there’s an overwhelming guilt I feel when I look back at all those who naïvely participated in my videos. A part of me feels like I took advantage of their own longing to be seen. I gained fame and success from the exploitation of their lives. They didn’t.
  • I knew that my audience wanted to feel authenticity from me. To give that to them, I revealed pieces of myself that I might have been wiser to keep private.
  • when metrics substitute for self-worth, it’s easy to fall into the trap of giving precious pieces of yourself away to feed an audience that’s always hungry for more and more.
  • In 2018, I impulsively released a video about my struggle with burnout, which featured intimate footage of my emotional breakdowns. Those breakdowns were, in part, a product of severe anxiety and depression brought about by chasing the exact success for which many other teenagers yearn.
  • I was entering adulthood and trying to live my childhood dream, but now, to be “authentic,” I had to be the product I had long been posting online, as opposed to the person I was growing up to be.
  • Online culture encourages young people to turn themselves into a product at an age when they’re only starting to discover who they are. When an audience becomes emotionally invested in a version of you that you outgrow, keeping the product you’ve made aligned with yourself becomes an impossible dilemma.
  • Sometimes, I barely recognize the person I used to be. Although a part of me resents that I’ll never be able to forget her, I’m also grateful to her. My YouTube channel, for all the trouble it brought me, connected me to the people who wanted to hear my stories and prepared me for a real shot at a directing career. In the last year, I’ve directed a short film and am writing a feature, which showed me new ways of creating that aren’t at the expense of my privacy.
Javier E

Opinion | The Alt-Right Manipulated My Comic. Then A.I. Claimed It. - The New York Times - 1 views

  • Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then used its nonprofit research arm to create A.I. generators first via Stable Diffusion and then commercialized in a new model called DreamStudio.
  • What makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication.
  • Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood.
  • Greg Rutkowski, a hugely popular concept artist, has been used in a prompt for Stable Diffusion upward of 100,000 times. Now, his name is no longer attached to just his own work, but it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourself that looked similar but that was uncanny, twisted in a way to which we didn’t consent.
  • In theory, everyone is at risk for their work or image to become a vulgarity with A.I., but I suspect those who will be the most hurt are those who are already facing the consequences of improving technology, namely members of marginalized groups.
  • In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
  • I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable; the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape. I see a monster forming.
Javier E

Molly Russell died while suffering from effects of online content, coroner says | Inter... - 0 views

  • Molly viewed more than 16,000 pieces of content on Instagram in the final six months of her life, of which 2,100 were related to suicide, self-harm and depression. The inquest also heard how she had compiled a digital pinboard on Pinterest with 469 images related to similar subjects.
  • Elizabeth Lagone, the head of health and wellbeing policy at Meta, the owner of Instagram and Facebook, apologised and admitted Molly had viewed posts that violated its content policies.
  • A senior Pinterest executive also apologised for the platform showing inappropriate content and acknowledged that the platform was not safe at the time Molly was on it.
  • The inquest heard evidence from a child psychiatrist, Dr Navin Venugopal, who said Molly had been “placed at risk” by the content she had viewed. The headteacher at Molly’s secondary school also gave evidence, describing how it was “almost impossible” to keep track of the risks posed to pupils by social media.
Javier E

Quantum Computing Advance Begins New Era, IBM Says - The New York Times - 0 views

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously.
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation. “We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.” In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation.
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate,”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that could match or surpass the performance of the quantum calculations.
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
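The error-mitigation idea described in these excerpts — run the same circuit at deliberately amplified noise levels, then extrapolate the measured observable back to the zero-noise limit — can be sketched in a few lines. This is a toy illustration only: the linear noise model, the decay constant, and the "magnetization" value are all invented for the example, and this is in no way IBM's actual procedure.

```python
# Toy sketch of zero-noise extrapolation ("error mitigation"): measure an
# observable at amplified noise scales, then extrapolate back to zero noise.
# All numbers here are hypothetical; the noise model is assumed linear.

def noisy_measurement(true_value, noise_scale, decay=0.08):
    # Pretend each unit of noise shrinks the measured value linearly.
    return true_value * (1.0 - decay * noise_scale)

def richardson_zero_noise(measure, scales=(1.0, 2.0)):
    # Linear Richardson extrapolation from two noise-amplified runs:
    # if m(c) = m0 + b*c, then m0 = (c2*m(c1) - c1*m(c2)) / (c2 - c1).
    c1, c2 = scales
    m1, m2 = measure(c1), measure(c2)
    return (c2 * m1 - c1 * m2) / (c2 - c1)

true_magnetization = 0.42  # hypothetical ideal (noise-free) observable
measure = lambda c: noisy_measurement(true_magnetization, c)

raw = measure(1.0)                          # what the noisy device reports
mitigated = richardson_zero_noise(measure)  # extrapolated estimate

print(round(raw, 4), round(mitigated, 4))
```

Because the toy noise model really is linear, the two-point extrapolation recovers the exact value; on real hardware the extrapolation is only approximate, which is why IBM's team reran each circuit many times at several noise levels.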
Javier E

Opinion | Cloning Scientist Hwang Woo-suk Gets a Second Chance. Should He? - The New Yo... - 0 views

  • The Hwang Woo-suk saga is illustrative of the serious deficiencies in the self-regulation of science. His fraud was uncovered because of brave Korean television reporters. Even those efforts might not have been enough, had Dr. Hwang’s team not been so sloppy in its fraud. The team’s papers included fabricated data and pairs of images that on close comparison clearly indicated duplicity.
  • Yet as a cautionary tale about the price of fraud, it is, unfortunately, a mixed bag. He lost his academic standing, and he was convicted of bioethical violations and embezzlement, but he never ended up serving jail time
  • Although his efforts at cloning human embryos ended in failure and fraud, they provided him the opportunities and resources he needed to take on projects, such as dog cloning, that were beyond the reach of other labs. The fame he earned in academia proved an asset in a business world where there’s no such thing as bad press.
  • it is comforting to think that scientific truth inevitably emerges and scientific frauds will be caught and punished.
  • Dr. Hwang’s scandal suggests something different. Researchers don’t always have the resources or motivation to replicate others’ experiments
  • Even if they try to replicate and fail, it is the institution where the scientist works that has the right and responsibility to investigate possible fraud. Research institutes and universities, facing the prospect of an embarrassing scandal, might not do so.
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
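The R.L.H.F. loop these excerpts describe — humans score chatbot responses, and those scores are fed back to nudge the model toward polite behavior — can be caricatured in miniature. Everything below (the canned responses, the scores, the reward-weighted update rule) is invented for illustration and is a drastic simplification of real policy-gradient fine-tuning, not OpenAI's or anyone's actual training code.

```python
import math

# Toy caricature of reinforcement learning from human feedback (R.L.H.F.):
# human scores nudge a policy's preference weights toward polite answers.

responses = ["polite answer", "neutral answer", "hostile answer"]
weights = [0.0, 0.0, 0.0]          # one logit per canned response
human_scores = [1.0, 0.2, -1.0]    # pretend human ratings of each response

def softmax(ws):
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

# A few rounds of reward-weighted updates (a stand-in for policy gradients).
for _ in range(50):
    probs = softmax(weights)
    for i, score in enumerate(human_scores):
        # Raise the logits of well-scored responses, lower badly-scored ones.
        weights[i] += 0.1 * score * (1.0 - probs[i])

probs = softmax(weights)
best = responses[probs.index(max(probs))]
print(best)  # the tuned policy now favors the highest-scored response
```

In the meme's terms, this loop only adjusts the smiley-face mask — the distribution over outputs — without changing anything about the underlying system, which is precisely the critics' point quoted above.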
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I technologies pose “profound risks to society and humanity.
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • The whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure.
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
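Scheduled accounts like @big_ben_clock are, mechanically, simple scripts that compose a message from the current time and post it on a timer. The account's actual implementation is not public; the sketch below only illustrates the message logic, with the posting step shown as a commented-out stub (the Tweepy library call is an assumption, not part of the article):

```python
from datetime import datetime, timezone

def bong_message(hour_24: int) -> str:
    """Compose the hourly chime: one 'BONG' per hour on a 12-hour clock."""
    hour_12 = hour_24 % 12 or 12  # map 0 and 12 to 12 o'clock
    return " ".join(["BONG"] * hour_12)

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print(bong_message(now.hour))
    # A real bot would post via Twitter's API, e.g. with the Tweepy library:
    # tweepy.Client(bearer_token=...).create_tweet(text=bong_message(now.hour))
```

The point of the example is how little distinguishes a permitted bot like this from a spam bot: the platform allows programmatic posting, so classification has to rest on behavior rather than on the mere fact of automation.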
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.