
GAVNet Collaborative Curation: Group items tagged "coding"


Bill Fulkerson

Reverse Engineering the source code of the BioNTech/Pfizer SARS-CoV-2 Vaccine - Articles - 0 views

  • RNA is the volatile 'working memory' version of DNA. DNA is like the flash-drive storage of biology: very durable, internally redundant, and very reliable. But just as computers do not execute code directly from a flash drive, the code must first be copied to a faster, more versatile, yet far more fragile system before anything happens.
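The analogy above maps onto the basic biology of transcription: the cell copies a durable DNA sequence into a short-lived messenger-RNA copy, substituting uracil (U) for thymine (T). A minimal illustrative sketch (the sequence below is made up, not from the vaccine's actual source):

```python
def transcribe(dna: str) -> str:
    """Return the RNA 'working copy' of a DNA coding strand.

    Transcription replaces thymine (T) with uracil (U); the durable
    DNA original stays untouched, like a file read off a flash drive.
    """
    return dna.upper().replace("T", "U")

dna = "ATGTTCGTG"        # hypothetical start of a coding sequence
rna = transcribe(dna)
print(rna)               # AUGUUCGUG
```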
Steve Bosserman

Situational Assessment 2017: Trump Edition - Deep Code - Medium - 0 views

  • I use John Robb’s term “Trump Insurgency” here to highlight the fact that the election of 2016 was not an example of “ordinary politics”. Anyone who fails to understand this is going to make significant errors. For example, the 2016 election is not comparable to the 2000 election (merely a “close” election) nor to the 1980 election (an “ideological transition” election). While it is tempting to compare it to 1860, I’m not sure that is a good match either. In fact, as I go back and try to do pattern matching, the only real pattern I can find is the 1776 “election” (AKA the American Revolution). In other words, while 2016 still formally looked like politics, what is really going on here is a revolutionary war. For now this is a war fought with memes rather than bullets, but war is much more than a metaphor.
Steve Bosserman

UK can lead the way on ethical AI, says Lords Committee - News from Parliament - UK Par... - 0 views

  • AI Code: One of the report’s recommendations is that a cross-sector AI Code be established, which can be adopted nationally and internationally. The Committee’s five suggested principles for such a code are:
    1. Artificial intelligence should be developed for the common good and benefit of humanity.
    2. Artificial intelligence should operate on principles of intelligibility and fairness.
    3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
    4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
    5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Steve Bosserman

Are You Creditworthy? The Algorithm Will Decide. - 0 views

  • The decisions made by algorithmic credit scoring applications are not only said to be more accurate in predicting risk than traditional scoring methods; their champions also argue they are fairer, because the algorithm is unswayed by the racial, gender, and socioeconomic biases that have skewed access to credit in the past.
  • Algorithmic credit scores might seem futuristic, but these practices do have roots in credit scoring practices of yore. Early credit agencies, for example, hired human reporters to dig into their customers’ credit histories. The reports were largely compiled from local gossip and colored by the speculations of the predominantly white, male, middle-class reporters. Remarks about race and class, asides about housekeeping, and speculations about sexual orientation all abounded.
  • By 1935, whole neighborhoods in the U.S. were classified according to their credit characteristics. A map from that year of Greater Atlanta comes color-coded in shades of blue (desirable), yellow (definitely declining) and red (hazardous). The legend recalls a time when an individual’s chances of receiving a mortgage were shaped by their geographic status.
  • These systems are fast becoming the norm. The Chinese Government is now close to launching its own algorithmic “Social Credit System” for its 1.4 billion citizens, a metric that uses online data to rate trustworthiness. As these systems become pervasive, and scores come to stand for individual worth, determining access to finance, services, and basic freedoms, the stakes of one bad decision are that much higher. This is to say nothing of the legitimacy of using such algorithmic proxies in the first place. While it might seem obvious to call for greater transparency in these systems, with machine learning and massive datasets it’s extremely difficult to locate bias. Even if we could peer inside the black box, we probably wouldn’t find a clause in the code instructing the system to discriminate against the poor, or people of color, or even people who play too many video games. More important than understanding how these scores get calculated is giving users meaningful opportunities to dispute and contest adverse decisions that are made about them by the algorithm.
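The point that “we probably wouldn’t find a clause in the code instructing the system to discriminate” can be made concrete with a toy sketch. Everything below is hypothetical (the zip codes, rates, and formula are invented for illustration): the scoring rule never mentions race or class, yet a proxy feature such as zip code, correlated with historically redlined neighborhoods, reproduces the discrimination anyway.

```python
# Assumed historical default rates per zip code (made-up numbers);
# in practice such rates are correlated with redlined neighborhoods.
AVG_DEFAULT_RATE_BY_ZIP = {
    "30301": 0.04,   # a "desirable" (blue) area on a 1935-style map
    "30310": 0.22,   # a "hazardous" (red) area
}

def credit_score(income: float, zip_code: str) -> float:
    """Toy score: no explicit clause about race or class appears here,
    yet zip_code acts as a proxy for the protected attribute."""
    base = min(income / 1000.0, 100.0)
    penalty = 400.0 * AVG_DEFAULT_RATE_BY_ZIP.get(zip_code, 0.10)
    return base - penalty

# Two applicants with identical incomes get very different scores:
print(credit_score(50_000, "30301"))   # 34.0
print(credit_score(50_000, "30310"))   # -38.0
```

Auditing the source reveals nothing discriminatory on its face, which is why the annotation argues for meaningful rights to dispute adverse decisions rather than code transparency alone.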