
GAVNet Collaborative Curation: Group items tagged "calculations"


Bill Fulkerson

Neoliberalism drives climate breakdown, not human nature | openDemocracy - 0 views

  • "The idea that all humanity is equally and collectively responsible for climate change - or any other environmental or social problem - is extremely weak. In a basic and easily calculable way, not everyone is responsible for the same quantity of greenhouse gasses. People in the world's poorest countries produce roughly one hundredth of the emissions of the richest people in the richest countries. Through the chance of our births, and the lifestyle we choose we are not all equally responsible for climate change."
Bill Fulkerson

Chinese quantum computer completes 2.5-billion-year task in minutes - 0 views

  • Researchers in China claim to have achieved quantum supremacy, the point where a quantum computer completes a task that would be virtually impossible for a classical computer to perform. The device, named Jiuzhang, reportedly conducted a calculation in 200 seconds that would take a regular supercomputer a staggering 2.5 billion years to complete.
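As a sanity check on the claim above, the implied speedup is simple arithmetic. The sketch below uses only the article's own figures (200 seconds vs. 2.5 billion years); it is not an independent benchmark.

```python
# Speedup implied by the Jiuzhang claim: 200 seconds on the quantum
# device vs. an estimated 2.5 billion years on a classical supercomputer.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

quantum_seconds = 200
classical_seconds = 2.5e9 * SECONDS_PER_YEAR

speedup = classical_seconds / quantum_seconds
print(f"Implied speedup: ~{speedup:.1e}x")  # on the order of 10^14
```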
Bill Fulkerson

Risky business: the shadow of constant threat is changing us | Books | The Guardian - 0 views

  • Covid-19 has heightened our perception of danger so that every day is a series of finely balanced calculations. How do we decide which risks are worth taking, asks Sarah Perry.
Bill Fulkerson

Severe undercounting of COVID-19 cases in U.S., other countries estimated via model - 0 views

  • A new machine-learning framework uses reported test results and death rates to calculate estimates of the actual number of current COVID-19 infections within all 50 U.S. states and 50 countries. Jungsik Noh and Gaudenz Danuser of the University of Texas Southwestern Medical Center present these findings in the open-access journal PLOS ONE on February 8.
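The excerpt names the framework's inputs (test results and death rates) but not its algorithm. A much cruder back-of-envelope version of the same idea infers total infections from deaths and an assumed infection-fatality rate; the IFR and the figures below are illustrative assumptions, not values from the Noh–Danuser study.

```python
# Back-of-envelope undercounting estimate (NOT the Noh-Danuser model):
# infer cumulative infections from cumulative deaths and an assumed
# infection-fatality rate (IFR), then compare with confirmed cases.
def estimated_infections(cumulative_deaths: float, ifr: float = 0.005) -> float:
    """Cumulative infections implied by deaths, assuming a constant IFR."""
    return cumulative_deaths / ifr

def undercount_factor(confirmed_cases: float, cumulative_deaths: float,
                      ifr: float = 0.005) -> float:
    """How many true infections per confirmed case the deaths imply."""
    return estimated_infections(cumulative_deaths, ifr) / confirmed_cases

# Illustrative numbers only:
print(round(undercount_factor(1_000_000, 25_000), 2))  # → 5.0
```

With these made-up inputs, every confirmed case would correspond to roughly five actual infections; the real framework additionally models testing rates and time lags.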
Bill Fulkerson

New approach refines Hubble's constant and the age of the universe - 0 views

  • Using known distances of 50 galaxies from Earth to refine calculations of Hubble's constant, a research team led by a University of Oregon astronomer estimates the age of the universe at 12.6 billion years.
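For intuition on how a value of Hubble's constant translates into a cosmic age: the zeroth-order estimate is the Hubble time, 1/H0. The H0 value below is illustrative, not necessarily the team's, and the quoted 12.6 billion years further accounts for the expansion history, which is why it differs from 1/H0.

```python
# The simplest age estimate from Hubble's constant is the Hubble time,
# t_H = 1/H0, obtained by converting H0 (km/s/Mpc) into inverse seconds.
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Hubble time 1/H0 in billions of years."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_GYR

print(round(hubble_time_gyr(75.0), 1))  # ≈ 13.0 Gyr for H0 = 75 km/s/Mpc
```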
Bill Fulkerson

SARS-CoV-2 viral load predicts COVID-19 mortality - The Lancet Respiratory Medicine - 0 views

  • Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) detection platforms currently report qualitative results. However, technology based on RT-PCR allows for calculation of viral load, which is associated with transmission risk and disease severity in other viral illnesses [1]. Viral load in COVID-19 might correlate with infectivity, disease phenotype, morbidity, and mortality. To date, no studies have assessed the association between viral load and mortality in a large patient cohort [2,3,4]. To our knowledge, we are the first to report on SARS-CoV-2 viral load at diagnosis as an independent predictor of mortality in a large hospitalised cohort (n=1145).
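For readers unfamiliar with how RT-PCR yields a quantitative result: viral load is typically derived from the cycle threshold (Ct), the cycle at which the fluorescence signal crosses detection. A minimal sketch assuming perfect doubling per cycle follows; the reference calibration point is hypothetical, not from the Lancet study, and real assays calibrate against known standards.

```python
# Hedged sketch: converting an RT-PCR cycle-threshold (Ct) value into a
# viral load, assuming perfect doubling of template per cycle. The
# reference point (Ct 30 -> 1e4 copies/mL) is an illustrative assumption.
def viral_load_copies_per_ml(ct: float, ref_ct: float = 30.0,
                             ref_load: float = 1e4) -> float:
    """Each Ct unit below the reference doubles the estimated load."""
    return ref_load * 2 ** (ref_ct - ct)

# A sample crossing threshold 10 cycles earlier carries ~1000x more virus:
print(viral_load_copies_per_ml(20.0) / viral_load_copies_per_ml(30.0))
# → 1024.0
```

This is why a low Ct value corresponds to a high viral load, independent of the particular calibration used.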
Steve Bosserman

How Cheap Labor Drives China's A.I. Ambitions - The New York Times - 1 views

  • But the ability to tag that data may be China’s true A.I. strength, the only one that the United States may not be able to match. In China, this new industry offers a glimpse of a future that the government has long promised: an economy built on technology rather than manufacturing.
  • “We’re the construction workers in the digital world. Our job is to lay one brick after another,” said Yi Yake, co-founder of a data labeling factory in Jiaxian, a city in central Henan province. “But we play an important role in A.I. Without us, they can’t build the skyscrapers.”
  • While A.I. engines are superfast learners and good at tackling complex calculations, they lack cognitive abilities that even the average 5-year-old possesses. Small children know that a furry brown cocker spaniel and a black Great Dane are both dogs. They can tell a Ford pickup from a Volkswagen Beetle, and yet they know both are cars. A.I. has to be taught. It must digest vast amounts of tagged photos and videos before it realizes that a black cat and a white cat are both cats. This is where the data factories and their workers come in.
  • “All the artificial intelligence is built on human labor,” Mr. Liang said.
  • “We’re the assembly lines 10 years ago,” said Mr. Yi, the co-founder of the data factory in Henan.
Steve Bosserman

Are You Creditworthy? The Algorithm Will Decide. - 0 views

  • The decisions made by algorithmic credit scoring applications are not only said to be more accurate in predicting risk than traditional scoring methods; their champions argue they are also fairer because the algorithm is unswayed by the racial, gender, and socioeconomic biases that have skewed access to credit in the past.
  • Algorithmic credit scores might seem futuristic, but these practices do have roots in credit scoring practices of yore. Early credit agencies, for example, hired human reporters to dig into their customers’ credit histories. The reports were largely compiled from local gossip and colored by the speculations of the predominantly white, male middle class reporters. Remarks about race and class, asides about housekeeping, and speculations about sexual orientation all abounded.
  • By 1935, whole neighborhoods in the U.S. were classified according to their credit characteristics. A map from that year of Greater Atlanta comes color-coded in shades of blue (desirable), yellow (definitely declining) and red (hazardous). The legend recalls a time when an individual’s chances of receiving a mortgage were shaped by their geographic status.
  • These systems are fast becoming the norm. The Chinese Government is now close to launching its own algorithmic “Social Credit System” for its 1.4 billion citizens, a metric that uses online data to rate trustworthiness. As these systems become pervasive, and scores come to stand for individual worth, determining access to finance, services, and basic freedoms, the stakes of one bad decision are that much higher. This is to say nothing of the legitimacy of using such algorithmic proxies in the first place. While it might seem obvious to call for greater transparency in these systems, with machine learning and massive datasets it’s extremely difficult to locate bias. Even if we could peer inside the black box, we probably wouldn’t find a clause in the code instructing the system to discriminate against the poor, or people of color, or even people who play too many video games. More important than understanding how these scores get calculated is giving users meaningful opportunities to dispute and contest adverse decisions that are made about them by the algorithm.
Steve Bosserman

How We Made AI As Racist and Sexist As Humans - 0 views

  • Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.
  • The majority of AI systems used in commercial applications—the ones that mediate our access to services like jobs, credit, and loans— are proprietary, their algorithms and training data kept hidden from public view. That makes it exceptionally difficult for an individual to interrogate the decisions of a machine or to know when an algorithm, trained on historical examples checkered by human bias, is stacked against them. And forget about trying to prove that AI systems may be violating human rights legislation.
  • Data is essential to the operation of an AI system. And the more complicated the system—the more layers in the neural nets, to translate speech or identify faces or calculate the likelihood someone defaults on a loan—the more data must be collected.
  • The power of the system is its “ability to recognize that correlations occur between gender and professions,” says Kathryn Hume. “The downside is that there’s no intentionality behind the system—it’s just math picking up on correlations. It doesn’t know this is a sensitive issue.” There’s a tension between the futuristic and the archaic at play in this technology. AI is evolving much more rapidly than the data it has to work with, so it’s destined not just to reflect and replicate biases but also to prolong and reinforce them.
  • And sometimes, even when ample data exists, those who build the training sets don’t take deliberate measures to ensure its diversity
  • But not everyone will be equally represented in that data.
  • Accordingly, groups that have been the target of systemic discrimination by institutions that include police forces and courts don’t fare any better when judgment is handed over to a machine.
  • A growing field of research, in fact, now looks to apply algorithmic solutions to the problems of algorithmic bias.
  • Still, algorithmic interventions only do so much; addressing bias also demands diversity in the programmers who are training machines in the first place.
  • A growing awareness of algorithmic bias isn’t only a chance to intervene in our approaches to building AI systems. It’s an opportunity to interrogate why the data we’ve created looks like this and what prejudices continue to shape a society that allows these patterns in the data to emerge.
  • Of course, there’s another solution, elegant in its simplicity and fundamentally fair: get better data.