
Home / TOK Friends / Group items tagged Big Data


Javier E

Opinion | Two visions of 'normal' collided in our abnormal pandemic year - The Washingt... - 0 views

  • The date was Sept. 17, 2001. The rubble was still smoking. As silly as this sounds, I was hoping it would make me cry.
  • That didn’t happen. The truth is, it still looked like something on television, a surreal shot from a disaster movie. I was stunned but unmoved.
  • Later, trying to understand the difference between those two moments, I told people, “The rubble still didn’t feel real.”
  • ...11 more annotations...
  • now, after a year of pandemic, I realize that wasn’t the problem. The rubble was real, all right. It just wasn’t normal.
  • it always, somehow, came back to that essential human craving for things to be normal, and our inability to believe that they are not, even when presented with compelling evidence.
  • This phenomenon is well-known to cognitive scientists, who have dubbed it “normalcy bias.”
  • the greater risk is more often the opposite: People can’t quite believe. They ignore the fire alarm, defy the order to evacuate ahead of the hurricane, or pause to grab their luggage when exiting the crashed plane. Too often, they die.
  • Calling the quest for normalcy a bias makes it sound bad, but most of the time this tendency is a good thing. The world is full of aberrations, most of them meaningless. If we aimed for maximal reaction to every anomaly we encountered, we’d break down from sheer nervous exhaustion.
  • But when things go disastrously wrong, our optimal response is at war with the part of our brain that insists things are fine. We try to reoccupy the old normal even if it’s become radioactive and salted with mines. We still resist the new normal — even when it’s staring us in the face.
  • Nine months into our current disaster, I now see that our bitter divides over pandemic response were most fundamentally a contest between two ideas of what it meant to get “back to normal.”
  • One group wanted to feel as safe as they had before a virus invaded our shores; the other wanted to feel as unfettered
  • The disputes that followed weren’t just a fight to determine whose idea of normal would prevail. They were a battle against an unthinkable reality, which was that neither kind of normalcy was fully possible anymore.
  • I suspect we all might have been less willing to make war on our opponents if only we’d believed that we were fighting people not very different from how we were — exhausted by the whole thing and frantic to feel like themselves again
  • Some catastrophes are simply too big to be understood except in the smallest way, through their most ordinary human details
Javier E

Covid-19 expert Karl Friston: 'Germany may have more immunological "dark matter"' | Wor... - 0 views

  • Our approach, which borrows from physics and in particular the work of Richard Feynman, goes under the bonnet. It attempts to capture the mathematical structure of the phenomenon – in this case, the pandemic – and to understand the causes of what is observed. Since we don’t know all the causes, we have to infer them. But that inference, and implicit uncertainty, is built into the models
  • That’s why we call them generative models, because they contain everything you need to know to generate the data. As more data comes in, you adjust your beliefs about the causes, until your model simulates the data as accurately and as simply as possible.
  • A common type of epidemiological model used today is the SEIR model, which considers that people must be in one of four states – susceptible (S), exposed (E), infected (I) or recovered (R). Unfortunately, reality doesn’t break them down so neatly. For example, what does it mean to be recovered?
  • ...12 more annotations...
  • SEIR models start to fall apart when you think about the underlying causes of the data. You need models that can allow for all possible states, and assess which ones matter for shaping the pandemic’s trajectory over time.
  • These techniques have enjoyed enormous success ever since they moved out of physics. They’ve been running your iPhone and nuclear power stations for a long time. In my field, neurobiology, we call the approach dynamic causal modelling (DCM). We can’t see brain states directly, but we can infer them given brain imaging data
  • Epidemiologists currently tackle the inference problem by number-crunching on a huge scale, making use of high-performance computers. Imagine you want to simulate an outbreak in Scotland. Using conventional approaches, this would take you a day or longer with today’s computing resources. And that’s just to simulate one model or hypothesis – one set of parameters and one set of starting conditions.
  • Using DCM, you can do the same thing in a minute. That allows you to score different hypotheses quickly and easily, and so to home in sooner on the best one.
  • This is like dark matter in the universe: we can’t see it, but we know it must be there to account for what we can see. Knowing it exists is useful for our preparations for any second wave, because it suggests that targeted testing of those at high risk of exposure to Covid-19 might be a better approach than non-selective testing of the whole population.
  • Our response as individuals – and as a society – becomes part of the epidemiological process, part of one big self-organising, self-monitoring system. That means it is possible to predict not only numbers of cases and deaths in the future, but also societal and institutional responses – and to attach precise dates to those predictions.
  • How well have your predictions been borne out in this first wave of infections? For London, we predicted that hospital admissions would peak on 5 April, deaths would peak five days later, and critical care unit occupancy would not exceed capacity – meaning the Nightingale hospitals would not be required. We also predicted that improvements would be seen in the capital by 8 May that might allow social distancing measures to be relaxed – which they were in the prime minister’s announcement on 10 May. To date our predictions have been accurate to within a day or two, so there is a predictive validity to our models that the conventional ones lack.
  • What do your models say about the risk of a second wave? The models support the idea that what happens in the next few weeks is not going to have a great impact in terms of triggering a rebound – because the population is protected to some extent by immunity acquired during the first wave. The real worry is that a second wave could erupt some months down the line when that immunity wears off.
  • the important message is that we have a window of opportunity now, to get test-and-trace protocols in place ahead of that putative second wave. If these are implemented coherently, we could potentially defer that wave beyond a time horizon where treatments or a vaccine become available, in a way that we weren’t able to before the first one.
  • We’ve been comparing the UK and Germany to try to explain the comparatively low fatality rates in Germany. The answers are sometimes counterintuitive. For example, it looks as if the low German fatality rate is not due to their superior testing capacity, but rather to the fact that the average German is less likely to get infected and die than the average Brit. Why? There are various possible explanations, but one that looks increasingly likely is that Germany has more immunological “dark matter” – people who are impervious to infection, perhaps because they are geographically isolated or have some kind of natural resistance
  • Any other advantages? Yes. With conventional SEIR models, interventions and surveillance are something you add to the model – tweaks or perturbations – so that you can see their effect on morbidity and mortality. But with a generative model these things are built into the model itself, along with everything else that matters.
  • Are generative models the future of disease modelling? That’s a question for the epidemiologists – they’re the experts. But I would be very surprised if at least some part of the epidemiological community didn’t become more committed to this approach in future, given the impact that Feynman’s ideas have had in so many other disciplines.
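The SEIR structure described in these highlights can be sketched as a toy simulation. This is a generic compartment model with made-up parameters (beta, sigma, gamma), not Friston's dynamic causal model; it only illustrates the four-state bookkeeping the interview criticizes as too coarse.

```python
# Minimal SEIR compartment model (forward Euler). A generic sketch with
# illustrative parameters, not fitted values and not Friston's DCM.
# beta = transmission rate, sigma = incubation rate, gamma = recovery rate.

def seir(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, dt=1.0):
    """Advance one time step; the four fractions always sum to 1."""
    new_exposed = beta * s * i * dt      # S -> E
    new_infected = sigma * e * dt        # E -> I
    new_recovered = gamma * i * dt       # I -> R
    return (s - new_exposed,
            e + new_exposed - new_infected,
            i + new_infected - new_recovered,
            r + new_recovered)

# Start with one infected person per 10,000 and run for a year.
state = (0.9999, 0.0, 0.0001, 0.0)
for day in range(365):
    state = seir(*state)

s, e, i, r = state
print(f"susceptible={s:.3f} recovered={r:.3f}")
```

With beta/gamma = 3 (a made-up basic reproduction number), most of the population ends up in R, which illustrates the interview's point: "recovered" is a single bucket that says nothing about who those people are or how they got there.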
Javier E

Do Political Experts Know What They're Talking About? | Wired Science | Wired... - 1 views

  • I often joke that every cable news show should be forced to display a disclaimer, streaming in a loop at the bottom of the screen. The disclaimer would read: “These talking heads have been scientifically proven to not know what they are talking about. Their blather is for entertainment purposes only.” The viewer would then be referred to Tetlock’s most famous research project, which began in 1984.
  • He picked a few hundred political experts – people who made their living “commenting or offering advice on political and economic trends” – and began asking them to make predictions about future events. He had a long list of pertinent questions. Would George Bush be re-elected? Would there be a peaceful end to apartheid in South Africa? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits were asked to rate the probability of several possible outcomes. Tetlock then interrogated the pundits about their thought process, so that he could better understand how they made up their minds.
  • Most of Tetlock’s questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals. These results are summarized in his excellent Expert Political Judgment.
  • ...5 more annotations...
  • Some experts displayed a top-down style of reasoning: politics as a deductive art. They started with a big-idea premise about human nature, society, or economics and applied it to the specifics of the case. They tended to reach more confident conclusions about the future. And the positions they reached were easier to classify ideologically: that is the Keynesian prediction and that is the free-market fundamentalist prediction and that is the worst-case environmentalist prediction and that is the best case technology-driven growth prediction etc. Other experts displayed a bottom-up style of reasoning: politics as a much messier inductive art. They reached less confident conclusions and they are more likely to draw on a seemingly contradictory mix of ideas in reaching those conclusions (sometimes from the left, sometimes from the right). We called the big-idea experts “hedgehogs” (they know one big thing) and the more eclectic experts “foxes” (they know many, not so big things).
  • The most consistent predictor of consistently more accurate forecasts was “style of reasoning”: experts with the more eclectic, self-critical, and modest cognitive styles tended to outperform the big-idea people (foxes tended to outperform hedgehogs).
  • Lehrer: Can non-experts do anything to encourage a more effective punditocracy?
  • Tetlock: Yes, non-experts can encourage more accountability in the punditocracy. Pundits are remarkably skillful at appearing to go out on a limb in their claims about the future, without actually going out on one. For instance, they often “predict” continued instability and turmoil in the Middle East (predicting the present) but they virtually never get around to telling you exactly what would have to happen to disconfirm their expectations. They are essentially impossible to pin down. If pundits felt that their public credibility hinged on participating in level playing field forecasting exercises in which they must pit their wits against an extremely difficult-to-predict world, I suspect they would learn, quite quickly, to be more flexible and foxlike in their policy pronouncements.
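The pundits in Tetlock's project rated the probability of several possible outcomes per question. The article doesn't say how those forecasts were scored, but the standard tool in forecasting research is the Brier score; this sketch uses invented probabilities to show why a confidently wrong hedgehog fares worse than a hedged fox.

```python
# Brier score for a forecast over mutually exclusive outcomes: the squared
# error between forecast probabilities and the actual outcome (1 for what
# happened, 0 for everything else). Lower is better. The numbers below are
# invented for illustration.

def brier_score(probs, outcome_index):
    """probs: forecast probabilities; outcome_index: what actually happened."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum((p - (1.0 if k == outcome_index else 0.0)) ** 2
               for k, p in enumerate(probs))

# Outcome 1 occurs. The hedgehog bet heavily on outcome 0; the fox hedged.
hedgehog = brier_score([0.90, 0.05, 0.05], outcome_index=1)
fox = brier_score([0.40, 0.35, 0.25], outcome_index=1)
print(hedgehog, fox)  # 1.715 0.645
```

A scoring rule like this is exactly the kind of "level playing field forecasting exercise" Tetlock describes: it rewards being pinned down, and it penalizes confident misses more than honest uncertainty.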
qkirkpatrick

Research funding: Is size really the most important thing? | Science | The Guardian - 0 views

  • Though investment had declined under the previous government, all the major parties said some warm words on the topic. Going beyond that vague-but-positive consensus would have required pinning politicians down to specific pledges
  • There are also important discussions to be had about how funding is managed and distributed, and how such decisions are made. In arguments about levels of funding, expect most researchers to agree that more is better – no surprise there, and the quality of the arguments deserves scrutiny.
  • Those hoardings are coming down now, which makes the whole thing seem more approachable - as does the fact that a couple of physicists from my department have won access to the labs there. It will be a huge concentration of resource - intellectual and financial
  • ...3 more annotations...
  • The hope is that the facilities, and perhaps more importantly the close interconnections between outstanding scientists in different fields, that it provides, will lead to it being more than the sum of its parts.
  • Some science directly addresses so-called “big questions”. How did life begin? What is everything made of? How did the universe begin? Often these big questions are posed within a specific theoretical framework; the Higgs boson is an example of how a good theory can condense a set of very big questions - essentially “What is mass?”
  • big projects are inevitably political to some extent, if only because of the fondness leaders have for making grand (or “grandiose”, as Amos would have it) announcements. Many big projects are international, which can bring in other elements, and requires decision-making frameworks that, while imperfect, do exist even if not all scientists are fully aware of them.
  • Can politics affect science and what information is released and not released?
Javier E

[Six Questions] | Astra Taylor on The People's Platform: Taking Back Power and Culture ... - 1 views

  • Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource
  • Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story.
  • why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.
  • ...15 more annotations...
  • How do we reconcile our enjoyment of social media even as we understand that the corporations who control them aren’t always acting in our best interests?
  • That was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous
  • the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors
  • I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse
  • these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.
  • I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently
  • social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?
  • That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.)
  • just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem.
  • What steps can we take to encourage better representation of independent and non-commercial media? We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.
  • 6. You advocate for greater government regulation of the Internet. Why is this important?
  • I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight.
  • I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real.
  • there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content
  • So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 1 views

  • Skinner's approach stressed the historical associations between a stimulus and the animal's response -- an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past.
  • Chomsky's conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations.
  • Chomsky acknowledged that the statistical approach might have practical value, just as in the example of a useful search engine, and is enabled by the advent of fast computers capable of processing massive data. But as far as a science goes, Chomsky would argue it is inadequate, or more harshly, kind of shallow
  • ...17 more annotations...
  • David Marr, a neuroscientist colleague of Chomsky's at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision,
  • a complex biological system can be understood at three distinct levels. The first level ("computational level") describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain's identification of the objects present in the image we had observed. The second level ("algorithmic level") describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level ("implementation level") describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
  • The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the "black box" that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.
  • As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner's behaviorist paradigm -- an achievement commonly referred to as the "cognitive revolution,"
  • While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists
  • Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field's heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is unlikely to yield general principles about the nature of intelligent beings or about cognition.
  • Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment.
  • it has been argued in my view rather plausibly, though neuroscientists don't like it -- that neuroscience for the last couple hundred years has been on the wrong track.
  • Implicit in this endeavor is the assumption that with enough sophisticated statistical tools and a large enough collection of data, signals of interest can be weeded out from the noise in large and poorly understood biological systems.
  • Brenner, a contemporary of Chomsky who also participated in the same symposium on AI, was equally skeptical about new systems approaches to understanding the brain. When describing an up-and-coming systems approach to mapping brain circuits called Connectomics, which seeks to map the wiring of all neurons in the brain (i.e. diagramming which nerve cells are connected to others), Brenner called it a "form of insanity."
  • These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?
  • Ever since Isaiah Berlin's famous essay, it has become a favorite pastime of academics to place various thinkers and scientists on the "Hedgehog-Fox" continuum: the Hedgehog, a meticulous and specialized worker, driven by incremental progress in a clearly defined field versus the Fox, a flashier, ideas-driven thinker who jumps from question to question, ignoring field boundaries and applying his or her skills where they seem applicable.
  • Chomsky's work has had tremendous influence on a variety of fields outside his own, including computer science and philosophy, and he has not shied away from discussing and critiquing the influence of these ideas, making him a particularly interesting person to interview.
  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery
  • neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
delgadool

Coronavirus pandemic: Congress response lets down workers, US economy - Business Insider - 0 views

  • The US share of global GDP is nearly 15%. If our economy can't stabilize and then recover from the coronavirus pandemic, it will be harder for the world to do so
  • it's imperative that Congress write fair, generous legislation to get us through the economic shutdown required to fight the virus
  • But that isn't what's happening. Republicans accuse Democrats of not moving fast enough. Democrats accuse Republicans of short-changing American workers and favoring big corporations.
  • ...9 more annotations...
  • Under-funding this stimulus will drag the global economy down. And any appearance that corporations are getting a more fair deal than individuals will make people not want to comply. A lack of compliance will drag on the crisis.
  • When it falls into ruin, the entire global economy drags. We saw that happen during the financial crisis of 2008.
    • delgadool
       
      Example of comparable situation
  • Congress could under-fund the US coronavirus stimulus package. If they do, they put not only the economy but the effort to fight the virus at risk.
  • this weekend the Senate was unable to pass aid legislation
  • Democrats also rejected the bill over a lack of labor protections that would only mandate corporations keep employees "to the extent possible." They want more limits on executive compensation and share buybacks, and they want more money for healthcare workers. They accuse Republicans of being cheap, and writing a deal that favors corporations over average Americans.
  • The only proposal that comes close to being generous enough for individuals comes from Democratic Rep. Rashida Tlaib. It would give a prepaid card with $2,000 to every American. That card would then be recharged with $1,000 monthly until one year after the end of the coronavirus crisis. This is the kind of plan that will make Americans believe the government has their back, not just the backs of big corporations.
  • The distrust that is bred by corruption will make it much harder to fight this virus, potentially dragging out the crisis. The vast majority of Americans already think that our lack of trust in each other and our government makes it hard to solve problems, according to Pew Research. If Americans feel like this whole aid package is a handout to big corporations — which they also distrust — they may stop listening to authorities.
  • Goldman Sachs estimates that the recession brought on by fighting off coronavirus will trough in April, knocking 10% off US GDP. Over time, bank analysts wrote last week, the economy should begin to grow again incrementally. How fast depends on how well Americans comply with government social-distancing mandates. Americans have to want to comply.
  • Small and midsize companies make up 83% of the US economy, and thousands of workers are already out of a job across the country. Means-testing initial payments to individuals — that is, restricting who gets the checks based on income — is a waste of time.
Javier E

Science and gun violence: why is the research so weak? [Part 2] - Boing Boing - 1 views

  • Scientists are missing some important bits of data that would help them better understand the effects of gun policy and the causes of gun-related violence. But that’s not the only reason why we don’t have solid answers. Once you have the data, you still have to figure out what it means. This is where the research gets complicated, because the problem isn’t simply about what we do and don’t know right now. The problem, say some scientists, is that we — from the public, to politicians, to even scientists themselves — may be trying to force research to give a type of answer that we can’t reasonably expect it to offer. To understand what science can do for the gun debates, we might have to rethink what “evidence-based policy” means to us.
  • For the most part, there aren’t a lot of differences in the data that these studies are using. So how can they reach such drastically different conclusions? The issue is in the kind of data that exists, and what you have to do to understand it, says Charles Manski, professor of economics at Northwestern University. Manski studies the ways that other scientists do research and how that research translates into public policy.
  • Even if we did have those gaps filled in, Manski said, what we’d have would still just be observational data, not experimental data. “We don’t have randomized, controlled experiments, here,” he said. “The only way you could do that, you’d have to assign a gun to some people randomly at birth and follow them throughout their lives. Obviously, that’s not something that’s going to work.”
  • ...14 more annotations...
  • This means that, even under the best circumstances, scientists can’t directly test what the results of a given gun policy are. The best you can do is to compare what was happening in a state before and after a policy was enacted, or to compare two different states, one that has the policy and one that doesn’t. And that’s a pretty inexact way of working.
  • Add in enough assumptions, and you can eventually come up with an estimate. But is the estimate correct? Is it even close to reality? That’s a hard question to answer, because the assumptions you made—the correlations you drew between cause and effect, what you know and what you assume to be true because of that—might be totally wrong.
  • It’s hard to tease apart the effect of one specific change, compared to the effects of other things that could be happening at the same time.
  • This process of taking the observational data we do have and then running it through a filter of assumptions plays out in the real world in the form of statistical modeling. When the NAS report says that nobody yet knows whether more guns lead to more crime, or less crime, what they mean is that the models and the assumptions built into those models are all still proving to be pretty weak.
  • From either side of the debate, he said, scientists continue to produce wildly different conclusions using the same data. On either side, small shifts in the assumptions lead the models to produce different results. Both factions continue to choose sets of assumptions that aren’t terribly logical. It’s as if you decided that anybody with blue shoes probably had a belly-button piercing. There’s not really a good reason for making that correlation. And if you change the assumption—actually, belly-button piercings are more common in people who wear green shoes—you end up with completely different results.
  • The Intergovernmental Panel on Climate Change (IPCC) produces these big reports periodically, which analyze lots of individual papers. In essence, they’re looking at lots of trees and trying to paint you a picture of the forest. IPCC reports are available for free online, you can go and read them yourself. When you do, you’ll notice something interesting about the way that the reports present results. The IPCC never says, “Because we burned fossil fuels and emitted carbon dioxide into the atmosphere then the Earth will warm by x degrees.” Instead, those reports present a range of possible outcomes … for everything. Depending on the different models used, different scenarios presented, and the different assumptions made, the temperature of the Earth might increase by anywhere between 1.5 and 4.5 degrees Celsius.
  • What you’re left with is an environment where it’s really easy to prove that your colleague’s results are probably wrong, and it’s easy for him to prove that yours are probably wrong. But it’s not easy for either of you to make a compelling case for why you’re right.
  • Statistical modeling isn’t unique to gun research. It just happens to be particularly messy in this field. Scientists who study other topics have done a better job of using stronger assumptions and of building models that can’t be upended by changing one small, seemingly randomly chosen detail. It’s not that, in these other fields, there’s only one model being used, or even that all the different models produce the exact same results. But the models are stronger and, more importantly, the scientists do a better job of presenting the differences between models and drawing meaning from them.
  • “Climate change is one of the rare scientific literatures that has actually faced up to this,” Charles Manski said. What he means is that, when scientists model climate change, they don’t expect to produce exact, to-the-decimal-point answers.
  • “It’s been a complete waste of time, because we can’t validate one model versus another,” Pepper said. Most likely, he thinks that all of them are wrong. For instance, all the models he’s seen assume that a law will affect every state in the same way, and every person within that state in the same way. “But if you think about it, that’s just nonsensical,” he said.
  • On the one hand, that leaves politicians in a bit of a lurch. The response you might mount to counteract a 1.5 degree increase in global average temperature is pretty different from the response you’d have to 4.5 degrees. On the other hand, the range does tell us something valuable: the temperature is increasing.
  • The problem with this is that it flies in the face of what most of us expect science to do for public policy. Politics is inherently biased, right? The solutions that people come up with are driven by their ideologies. Science is supposed to cut that Gordian Knot. It’s supposed to lay the evidence down on the table and impartially determine who is right and who is wrong.
  • Manski and Pepper say that this is where we need to rethink what we expect science to do. Science, they say, isn’t here to stop all political debate in its tracks. In a situation like this, it simply can’t provide a detailed enough answer to do that—not unless you’re comfortable with detailed answers that are easily called into question and disproven by somebody else with a detailed answer.
  • Instead, science can reliably produce a range of possible outcomes, but it’s still up to the politicians (and, by extension, up to us) to hash out compromises between wildly differing values on controversial subjects. When it comes to complex social issues like gun ownership and gun violence, science doesn’t mean you get to blow off your political opponents and stake a claim on truth. Chances are, the closest we can get to the truth is a range that encompasses the beliefs of many different groups.
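The modeling problem described above can be made concrete with a toy comparison. Below is a minimal, hypothetical sketch (invented numbers, noise omitted for clarity) of two study designs the excerpts mention: a naive before-and-after comparison within one state, versus a difference-in-differences comparison against a control state. Identical data yield an "effect" of -15 under the first assumption and 0 under the second.

```python
# Hypothetical crime rates (per 100k, noise omitted for clarity).
# State A enacts a law in year 5; state B never does. Both decline
# at the same secular rate, so the law itself has no effect here.
years = range(10)
state_a = [500 - 3 * y for y in years]
state_b = [480 - 3 * y for y in years]

def naive_before_after(series, policy_year):
    """Assumption 1: nothing else changed, so any shift is the law's doing."""
    before = series[:policy_year]
    after = series[policy_year:]
    return sum(after) / len(after) - sum(before) / len(before)

def difference_in_differences(treated, control, policy_year):
    """Assumption 2: the control state's trend shows what the treated
    state would have done anyway; subtract it out."""
    return (naive_before_after(treated, policy_year)
            - naive_before_after(control, policy_year))

print(naive_before_after(state_a, 5))                  # -15.0: the law "worked"
print(difference_in_differences(state_a, state_b, 5))  # 0.0: the law did nothing
```

Both numbers come from the same observational data; only the assumed counterfactual changed, which is exactly the instability Manski and Pepper describe.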
Javier E

ThinkUp Helps the Social Network User See the Online Self - NYTimes.com - 1 views

  • In addition to a list of people’s most-used words and other straightforward stats like follower counts, ThinkUp shows subscribers more unusual information such as how often they thank and congratulate people, how frequently they swear, whose voices they tend to amplify and which posts get the biggest reaction and from whom.
  • after using ThinkUp for about six months, I’ve found it to be an indispensable guide to how I navigate social networks.
  • Every morning the service delivers an email packed with information, and in its weighty thoroughness, it reminds you that what you do on Twitter and Facebook can change your life, and other people’s lives, in important, sometimes unforeseen ways.
  • ...14 more annotations...
  • ThinkUp is something like Elf on the Shelf for digitally addled adults — a constant reminder that someone is watching you, and that you’re being judged.
  • “The goal is to make you act like less of a jerk online,” Ms. Trapani said. “The big goal is to create mindfulness and awareness, and also behavioral change.”
  • One of the biggest dangers is saying something off the cuff that might make sense in a particular context, but that sounds completely off the rails to the wider public. The problem, in other words, is acting without thinking — being caught up in the moment, without pausing to reflect on the long-term consequences. You’re never more than a few taps away from an embarrassment that might ruin your career, or at least your reputation, for years to come.
  • Because social networks often suggest a false sense of intimacy, they tend to lower people’s self-control.
  • Like a drug or perhaps a parasite, they worm into your devices, your daily habits and your every free moment, and they change how you think.
  • For those of us most deeply afflicted, myself included, every mundane observation becomes grist for a 140-character quip, and every interaction a potential springboard into an all-consuming, emotionally wrenching flame battle.
  • people often tweet and update without any perspective about themselves. That’s because Facebook and Twitter, as others have observed, have a way of infecting our brains.
  • getting a daily reminder from ThinkUp that there are good ways and bad ways to behave online — has a tendency to focus the mind.
  • More basically, though, it’s helped me pull back from social networks. Each week, ThinkUp tells me how often I’ve tweeted. Sometimes that number is terribly high — a few weeks ago it was more than 800 times — and I realize I’m probably overtaxing my followers
  • ThinkUp charges $5 a month for each social network you connect to it. Is it worth it? After all, there’s a better, more surefire way of avoiding any such long-term catastrophe caused by social media: Just stop using social networks.
  • The main issue constraining growth, the founders say, is that it has been difficult to explain to people why they might need ThinkUp.
  • your online profile plays an important role in how you’re perceived by potential employers. In a recent survey commissioned by the job-hunting site CareerBuilder, almost half of companies said they perused job-seekers’ social networking profiles to look for red flags and to see what sort of image prospective employees portrayed online.
  • even though “never tweet” became a popular, ironic thing to tweet this year, actually never tweeting, and never being on Facebook, is becoming nearly impossible for many people.
  • That may change as more people falter on social networks, either by posting unthinking comments that end up damaging their careers, or simply by annoying people to the point that their online presence becomes a hindrance to their real-life prospects.
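The sort of tally ThinkUp reports (most-used words, follower counts, how often you thank people) is straightforward to compute from a feed export. A minimal sketch, assuming nothing about ThinkUp's actual implementation; the stopword list and sample posts are invented:

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "and", "to", "of", "for", "i"})

def most_used_words(posts, top_n=5):
    """Tally the most frequent non-stopwords across a user's posts."""
    counts = Counter()
    for post in posts:
        counts.update(w for w in re.findall(r"[a-z']+", post.lower())
                      if w not in STOPWORDS)
    return counts.most_common(top_n)

posts = [
    "Thanks for the great thread!",
    "Great point, thanks!",
    "Reading a great book about data.",
]
print(most_used_words(posts, top_n=2))  # [('great', 3), ('thanks', 2)]
```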
Duncan H

Facebook Is Using You - NYTimes.com - 0 views

  • Facebook’s inventory consists of personal data — yours and mine.
  • Facebook makes money by selling ad space to companies that want to reach us. Advertisers choose key words or details — like relationship status, location, activities, favorite books and employment — and then Facebook runs the ads for the targeted subset of its 845 million users
  • The magnitude of online information Facebook has available about each of us for targeted marketing is stunning. In Europe, laws give people the right to know what data companies have about them, but that is not the case in the United States.
  • ...8 more annotations...
  • The bits and bytes about your life can easily be used against you. Whether you can obtain a job, credit or insurance can be based on your digital doppelgänger — and you may never know why you’ve been turned down.
  • Stereotyping is alive and well in data aggregation. Your application for credit could be declined not on the basis of your own finances or credit history, but on the basis of aggregate data — what other people whose likes and dislikes are similar to yours have done
  • Data aggregators’ practices conflict with what people say they want. A 2008 Consumer Reports poll of 2,000 people found that 93 percent thought Internet companies should always ask for permission before using personal information, and 72 percent wanted the right to opt out of online tracking. A study by Princeton Survey Research Associates in 2009 using a random sample of 1,000 people found that 69 percent thought that the United States should adopt a law giving people the right to learn everything a Web site knows about them. We need a do-not-track law, similar to the do-not-call one. Now it’s not just about whether my dinner will be interrupted by a telemarketer. It’s about whether my dreams will be dashed by the collection of bits and bytes over which I have no control and for which companies are currently unaccountable.
  • The term Weblining describes the practice of denying people opportunities based on their digital selves. You might be refused health insurance based on a Google search you did about a medical condition. You might be shown a credit card with a lower credit limit, not because of your credit history, but because of your race, sex or ZIP code or the types of Web sites you visit.
  • Advertisers are drawing new redlines, limiting people to the roles society expects them to play
  • Even though laws allow people to challenge false information in credit reports, there are no laws that require data aggregators to reveal what they know about you. If I’ve Googled “diabetes” for a friend or “date rape drugs” for a mystery I’m writing, data aggregators assume those searches reflect my own health and proclivities. Because no laws regulate what types of data these aggregators can collect, they make their own rules.
  • LAST week, Facebook filed documents with the government that will allow it to sell shares of stock to the public. It is estimated to be worth at least $75 billion. But unlike other big-ticket corporations, it doesn’t have an inventory of widgets or gadgets, cars or phones.
  • If you indicate that you like cupcakes, live in a certain neighborhood and have invited friends over, expect an ad from a nearby bakery to appear on your page.
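The targeting mechanism the piece describes (advertisers pick attributes like location and interests, and Facebook runs the ad for the matching subset of users) amounts to filtering profiles against criteria. A hypothetical sketch; the field names and records are invented, not Facebook's actual data model:

```python
# Invented user records standing in for profile data.
users = [
    {"id": 1, "location": "Brooklyn", "likes": {"cupcakes", "running"}},
    {"id": 2, "location": "Brooklyn", "likes": {"jazz"}},
    {"id": 3, "location": "Chicago", "likes": {"cupcakes"}},
]

def match_audience(users, location=None, interest=None):
    """Return the ids of users matching every specified targeting criterion."""
    matched = []
    for u in users:
        if location is not None and u["location"] != location:
            continue
        if interest is not None and interest not in u["likes"]:
            continue
        matched.append(u["id"])
    return matched

print(match_audience(users, location="Brooklyn", interest="cupcakes"))  # [1]
```

The nearby bakery's cupcake ad from the excerpt is just such a query: every added criterion narrows the audience, which is why the breadth of data collected matters.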
Javier E

From Sports Illustrated, the Latest Body Part for Women to Fix - NYTimes.com - 0 views

  • At 44, I am old enough to remember when reconstruction was something you read about in history class, when a muffin top was something delicious you ate at the bakery, a six-pack was how you bought your beer, camel toe was something one might glimpse at the zoo, a Brazilian was someone from the largest country in South America and terms like thigh gap and bikini bridge would be met with blank looks.
  • Now, each year brings a new term for an unruly bit of body that women are expected to subdue through diet and exercise.
  • Girls’ and women’s lives matter. Their safety and health and their rights matter. Whether every inch of them looks like a magazine cover? That, my sisters, does not matter at all.
  • ...5 more annotations...
  • there’s no profit in leaving things as they are.Show me a body part, I’ll show you someone who’s making money by telling women that theirs looks wrong and they need to fix it. Tone it, work it out, tan it, bleach it, tattoo it, lipo it, remove all the hair, lose every bit of jiggle.
  • As a graphic designer and Photoshop teacher, I also have to note that Photoshop is used HEAVILY in these kinds of publications. Even on women with incredibly beautiful (by pop culture standards) bodies. It's quite sad because the imagery we're expected to live up to (or approximate) by cultural standards is illustration. It's not even real. My boyfriend and I had a big laugh over a Playboy cover a few months ago where the Photoshopping was so extreme (thigh gap and butt cheek) it was anatomically impossible and looked ridiculous. I work in the industry... I know what the Liquify filter and the Spot Healing Brush can do!
  • We may harp on gender inequality while pursuing stupid fetishes. Well into our middle age, we still try to forcefully wriggle into a size 2 pair of jeans. We foolishly spend tonnes of money on fake (these guys should be sued for false advertising) age-defying, anti-wrinkle creams. Why do we have to have our fuzz and bush disappear while the men have forests on their chests, abdomens, butts, arms and legs? For that we have only ourselves to blame. We just cannot get out of this mindset of being objectified. And we pass on this foolishness to our daughters and grand-daughters. They get trapped, never satisfied with what they see in the mirror. Don't expect the men to change anytime soon. They will always maintain the status quo. It is for us, women, to get out of this rut. We have to 'snatch' gender equality. It will never be handed to us. PERIOD
  • I spent years dieting and exercising to look good--or really to not look bad. I knew the calories (and probably still do) in thousands of foods. How I regret the time I spent on that and the boyfriends who cared about that. And how much more I had to give to the world. With unprecedented economic injustice, ecosystems collapsing, war breaking out everywhere, nations going under water, people starving in refugee camps, the keys to life, behavior, and disease being unlocked in the biological sciences . . . this is what we think women should spend their time worrying about? Talk about a poverty of ambition. No more. Won't even look at these demeaning magazines when I get my hair cut. If that's what a woman cares about, I try to tell her to stop wasting her time. If that's what a man cares about, he is a waste of my time. What a depressing way to distract women from achieving more in this world. Really wish I'd know this at 12.
  • we believe we're all competing against one another to procreate and participate in evolution. So women (and men) compete ferociously, and body image is a subset of all that. Then there's Lamarckian evolutionary theory and epigenetics: http://en.wikipedia.org/wiki/Lamarckism and http://en.wikipedia.org/wiki/Epigenetics. Bottom line is that we can't stop this train any more easily than we can stop the Anthropocene's climate change. Human beings are tempted. Sometimes we win the battle, other times we give in to vanity, hedonism, and ego. This is all a subset of much larger forces at play. Men and women make choices and act within that environment. Deal with it.
Javier E

Facebook Overhauls News Feed to Focus on What Friends and Family Share - The New York T... - 0 views

  • it would prioritize what their friends and family share and comment on while de-emphasizing content from publishers and brands
  • The changes are intended to maximize the amount of content with “meaningful interaction” that people consume on Facebook, Mark Zuckerberg, the company’s chief executive, said.
  • The social network wants to reduce what Mr. Zuckerberg called “passive content” — videos and articles that ask little more of the viewer than to sit back and watch or read — so that users’ time on the site was well spent.
  • ...11 more annotations...
  • “We want to make sure that our products are not just fun, but are good for people,” Mr. Zuckerberg said. “We need to refocus the system.”
  • The change may also work against Facebook’s immediate business interests. The company has long pushed users to spend more time on the social network. With different, less viral types of content surfacing more often, people could end up spending their time elsewhere. Mr. Zuckerberg said that was in fact Facebook’s expectation, but that if people end up feeling better about using the social network, the business will ultimately benefit.
  • The goal of the overhaul, ultimately, is for something less quantifiable that may be difficult to achieve: Facebook wants people to feel positive, rather than negative, after visiting.
  • Publishers, nonprofits, small businesses and many other groups rely on the social network to reach people, so de-emphasizing their posts will most likely hurt them.
  • Thursday’s changes raise questions of whether people may end up seeing more content that reinforces their own ideologies if they end up frequently interacting with posts and videos that reflect the similar views of their friends or family.
  • Facebook has conducted research and worked with outside academics for months to examine the effects that its service has on people. The work was spurred by criticism from politicians, academics, the media and others that Facebook had not adequately considered its responsibility for what it shows its users.
  • “Just because a tool can be used for good and bad, that doesn’t make the tool bad — it just means you need to understand what the negative is so that you can mitigate it,” he said.
  • Facebook and other researchers have particularly homed in on passive content. In surveys of Facebook users, people said they felt the site had shifted too far away from friends and family-related content, especially amid a swell of outside posts from brands, publishers and media companies.
  • “This big wave of public content has really made us reflect: What are we really here to do?” Mr. Zuckerberg said. “If what we’re here to do is help people build relationships, then we need to adjust.”
  • Product managers are being asked to “facilitate the most meaningful interactions between people,” rather than the previous mandate of helping people find the most meaningful content, he said.
  • “It’s important to me that when Max and August grow up that they feel like what their father built was good for the world,” Mr. Zuckerberg said.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • Entertainment and shopping
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • I don’t think any single company will dominate the agents business--there will be many different AI engines available.
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence.
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • , being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI’s new Bing
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
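The excerpt above describes generative language models as guessing, one word at a time, the most likely word to come next. A toy sketch of that idea, with an invented probability table standing in for a real model (all words and probabilities here are illustrative, not from any actual system):

```python
# Toy sketch of greedy next-word generation: repeatedly pick the most
# probable next word given the recent context. The probability table is
# made up for illustration; real models learn it from scraped text.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prompt, steps):
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])      # last two words as context
        candidates = toy_model.get(context)
        if not candidates:
            break
        # Greedy decoding: take the single most probable next word.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the cat", 4))  # -> the cat sat on the mat
```

Real systems sample from billions of learned probabilities rather than a four-entry table, and the reinforcement-learning-from-human-feedback process mentioned later in the excerpt adjusts those probabilities toward responses human raters prefer.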
Javier E

Noted Dutch Psychologist, Stapel, Accused of Research Fraud - NYTimes.com - 0 views

  • A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found
  • Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.
  • In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.
  • ...8 more annotations...
  • “The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”
  • Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.
  • In a survey of more than 2,000 American psychologists scheduled to be published this year, Leslie John of Harvard Business School and two colleagues found that 70 percent had acknowledged, anonymously, to cutting some corners in reporting data. About a third said they had reported an unexpected finding as predicted from the start, and about 1 percent admitted to falsifying data.
  • Dr. Stapel was able to operate for so long, the committee said, in large measure because he was “lord of the data,” the only person who saw the experimental evidence that had been gathered (or fabricated). This is a widespread problem in psychology, said Jelte M. Wicherts, a psychologist at the University of Amsterdam. In a recent survey, two-thirds of Dutch research psychologists said they did not make their raw data available for other researchers to see. “This is in violation of ethical rules established in the field,” Dr. Wicherts said.
  • Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.
  • an analysis of 49 studies appearing Wednesday in the journal PLoS One, by Dr. Wicherts, Dr. Bakker and Dylan Molenaar, found that the more reluctant that scientists were to share their data, the more likely that evidence contradicted their reported findings.
  • “We know the general tendency of humans to draw the conclusions they want to draw — there’s a different threshold,” said Joseph P. Simmons, a psychologist at the University of Pennsylvania’s Wharton School. “With findings we want to see, we ask, ‘Can I believe this?’ With those we don’t, we ask, ‘Must I believe this?’
Javier E

Opinion | Privacy Is Too Big to Understand - The New York Times - 1 views

  • There is “no single rhetorical approach likely to work on a given audience and none too dangerous to try. Any story that sticks is a good one,”
  • This newsletter is about finding ways to make this stuff stick in your mind and to arm you with the information you need to take control of your digital life.
  • how to start? The definition of privacy itself. I think it’s time to radically expand it.
  • ...12 more annotations...
  • “Privacy is really about being able to define for ourselves who we are for the world and on our own terms,”
  • “hyperobjects,” a concept so all-encompassing that it is impossible to adequately describe
  • invite skepticism because their scale is so vast and sometimes abstract.
  • When technology governs so many aspects of our lives — and when that technology is powered by the exploitation of our data — privacy isn’t just about knowing your secrets, it’s about autonomy
  • “Privacy” is an impoverished word — far too small a word to describe what we talk about when we talk about the mining, transmission, storing, buying, selling, use and misuse of our personal information.
  • not a choice that belongs to an algorithm or data broker, and definitely not to Facebook.”
  • privacy is about how that data is used to take away our control
  • real-time data, once assumed to be protected by phone companies, was available for sale to bounty hunters for a $300 fee
  • ICE officials partnered with a private data firm to track license plate data.
  • It means reckoning with private surveillance databases armed with dossiers on regular citizens and outsourced to the highest bidder
  • “Years ago we worried about the N.S.A. building huge server farms, but now it’s much cheaper to go to a private-service vendor and outsource this to a company who can cloak their activity in trade secrets,
  • “It’s comparable to asking people to stop using air conditioning because of the ozone layer. It’s not likely to happen because the immediate comfort is more valuable than the long-term fear.
Javier E

The Disgust Election - NYTimes.com - 0 views

  • I would like for the most influential swing voter on the Supreme Court to step away from his legal aerie, and wade through some of the muck that he and four fellow justices have given us with the 2014 campaign.
  • How did we lose our democracy? Slowly at first, and then all at once. This fall, voters are more disgusted, more bored and more cynical about the midterm elections than at any time in at least two decades.
  • beyond disdain for this singular crop of do-nothings, the revulsion is generated by a sense that average people have lost control of one of the last things that citizens should be able to control — the election itself.
  • ...8 more annotations...
  • You can trace the Great Breach to Justice Kennedy’s words in the 2010 Citizens United case, which gave wealthy, secret donors unlimited power to manipulate American elections. The decision legalized large-scale bribery — O.K., influence buying — and ensured that we would never know exactly who was purchasing certain politicians.
  • Kennedy famously predicted the opposite. He wrote that “independent expenditures, including those made by corporations, do not give rise to corruption or the appearance of corruption.” That’s the money quote — one of the great wish-projections in court history. But Kennedy also envisioned a new day, whereby there would be real-time disclosure of the big financial forces he unleashed across the land. In his make-believe, post-Citizens United world, voters “can see whether elected officials are ‘in the pocket’ of so-called moneyed interests.”
  • just the opposite has happened. The big money headed for the shadows. As my colleague Nicholas Confessore documented earlier this month, more than half the ads aired by outside groups during this campaign have come from secret donors. Oligarchs hiding behind front groups — Citizens for Fluffy Pillows — are pulling the levers of the 2014 campaign, and overwhelmingly aiding Republicans.
  • you can’t argue with the corrosive and dispiriting effect, on the rest of us, of campaigns controlled by the rich, the secret, the few.
  • This year, the Koch brothers and their extensions — just to name one lonely voice in the public realm — have operations in at least 35 states, and will spend somewhere north of $120 million to ensure a Congress that will do their bidding. Spending by outside groups has gone to $1 billion in 2012 from $52 million in 2000.
  • At the same time that this court has handed over elections to people who already have enormous power, they’ve given approval to efforts to keep the powerless from voting. In Texas, Republicans have passed a selective voter ID bill that could keep upward of 600,000 citizens — students, Native Americans in federally recognized tribes, the elderly — from having a say in this election.
  • What’s the big deal? Well, you can vote in Texas with a concealed handgun ID, but not one from a four-year college. The new voter suppression measure, allowed to go ahead in an unsigned order by the court last Saturday, “is a purposefully discriminating law,” Justice Ruth Bader Ginsburg wrote in dissent, “one that likely imposes an unconstitutional poll tax and risks denying the right to vote to hundreds of thousands of eligible voters.”
  • With the 2010 case, the court handed control of elections over to dark money interests who answer to nobody. And in the Texas case, the court has ensured that it will be more difficult for voters without money or influence to use the one tool they have.
dicindioha

Nervous markets take fright at prospect of Trump failing to deliver | Larry Elliott | B... - 0 views

  • Shares, oil and the US dollar were all under pressure as global financial markets took fright at the prospect that Donald Trump would fail to deliver on his growth-boosting promises.
  • stock markets in Asia and Europe fell in response to Tuesday’s sharp decline on Wall Street.
  • Markets have become increasingly impatient with the new Trump administration for failing to follow through on pledges to use a package of tax cuts and infrastructure spending to raise the US growth rate.
  • ...3 more annotations...
  • Investors believe a failure to secure agreement on Capitol Hill to repeal Barack Obama’s healthcare act – the new administration’s first legislative test – will lead to a further sell-off on Wall Street.
  • money flowed out of the dollar and into the safe haven of the Japanese yen. Sterling rose to stand at just under $1.25 against the US currency.
  • The “repeal and replace” of Obamacare was being seen as an acid test of whether Trump could deliver on his fiscal plans and the difficulties encountered were a “bad omen” for tax reform.
  •  
    After watching Inside Job, it is so interesting to see the way the world market flows around the major countries, and how the small countries rely on the success of the big ones. It will be important to monitor whether Trump will be able to implement his campaign claims regarding the market and taxes.
Javier E

Anger for Path Social Network After Privacy Breach - NYTimes.com - 0 views

  • bloggers in Egypt and Tunisia are often approached online by people who are state security in disguise.
  • The most sought-after bounty for state officials: dissidents’ address books, to figure out who they are in cahoots with, where they live and information about their family. In some cases, this information leads to roundups and arrests.
  • A person’s contacts are so sensitive that Alec Ross, a senior adviser on innovation to Secretary of State Hillary Rodham Clinton, said the State Department was supporting the development of an application that would act as a “panic button” on a smartphone, enabling people to erase all contacts with one click if they are arrested during a protest.
  • ...2 more annotations...
  • The big deal is that privacy and security is not a big deal in Silicon Valley. While technorati tripped over themselves to congratulate Mr. Morin on finessing the bad publicity, a number of concerned engineers e-mailed me noting that the data collection was not an accident. It would have taken programmers weeks to write the code necessary to copy and organize someone’s address book. Many said Apple was at fault, too, for approving Path for its App Store when it appears to violate its rules.
  • Lawyers I spoke with said that my address book — which contains my reporting sources at companies and in government — is protected under the First Amendment. On Path’s servers, it is frightfully open for anyone to see and use, because the company did not encrypt the data.
Javier E

Mark Zuckerberg, Let Me Pay for Facebook - NYTimes.com - 0 views

  • 93 percent of the public believes that “being in control of who can get information about them is important,” and yet the amount of information we generate online has exploded and we seldom know where it all goes.
  • the pop-up and the ad-financed business model. The former is annoying but it’s the latter that is helping destroy the fabric of a rich, pluralistic Internet.
  • Facebook makes about 20 cents per user per month in profit. This is a pitiful sum, especially since the average user spends an impressive 20 hours on Facebook every month, according to the company. This paltry profit margin drives the business model: Internet ads are basically worthless unless they are hyper-targeted based on tracking and extensive profiling of users. This is a bad bargain, especially since two-thirds of American adults don’t want ads that target them based on that tracking and analysis of personal behavior.
  • ...10 more annotations...
  • This way of doing business rewards huge Internet platforms, since ads that are worth so little can support only companies with hundreds of millions of users.
  • Ad-based businesses distort our online interactions. People flock to Internet platforms because they help us connect with one another or the world’s bounty of information — a crucial, valuable function. Yet ad-based financing means that the companies have an interest in manipulating our attention on behalf of advertisers, instead of letting us connect as we wish.
  • Many users think their feed shows everything that their friends post. It doesn’t. Facebook runs its billion-plus users’ newsfeed by a proprietary, ever-changing algorithm that decides what we see. If Facebook didn’t have to control the feed to keep us on the site longer and to inject ads into our stream, it could instead offer us control over this algorithm.
  • we’re not starting from scratch. Micropayment systems that would allow users to spend a few cents here and there, not be so easily tracked by all the Big Brothers, and even allow personalization were developed in the early days of the Internet. Big banks and large Internet platforms didn’t show much interest in this micropayment path, which would limit their surveillance abilities. We can revive it.
  • What to do? It’s simple: Internet sites should allow their users to be the customers. I would, as I bet many others would, happily pay more than 20 cents per month for a Facebook or a Google that did not track me, upgraded its encryption and treated me as a customer whose preferences and privacy matter.
  • Many people say that no significant number of users will ever pay directly for Internet services. But that is because we are misled by the mantra that these services are free. With growing awareness of the privacy cost of ads, this may well change. Millions of people pay for Netflix despite the fact that pirated copies of many movies are available free. We eventually pay for ads, anyway, as that cost is baked into products we purchase
  • A seamless, secure micropayment system that spreads a few pennies at a time as we browse a social network, up to a preset monthly limit, would alter the whole landscape for the better.
  • Many nonprofits and civic groups that were initially thrilled about their success in using Facebook to reach people are now despondent as their entries are less and less likely to reach people who “liked” their posts unless they pay Facebook to help boost their updates.
  • If even a quarter of Facebook’s 1.5 billion users were willing to pay $1 per month in return for not being tracked or targeted based on their data, that would yield more than $4 billion per year — surely a number worth considering.
  • Mr. Zuckerberg has reportedly spent more than $30 million to buy the homes around his in Palo Alto, Calif., and more than $100 million for a secluded parcel of land in Hawaii. He knows privacy is worth paying for. So he should let us pay a few dollars to protect ours.
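The arithmetic behind the excerpt’s two key figures — the $4 billion subscription estimate and the thinness of ad-based profit — can be checked in a few lines (a back-of-envelope sketch using only the numbers quoted above):

```python
# Back-of-envelope check of the figures quoted in the op-ed excerpt.
users = 1.5e9          # "Facebook's 1.5 billion users"
paying_share = 0.25    # "a quarter of Facebook's ... users"
monthly_fee = 1.00     # "$1 per month"
annual_revenue = users * paying_share * monthly_fee * 12
print(f"${annual_revenue / 1e9:.1f} billion per year")

# The ad model's implied value of attention: 20 cents of profit
# spread over 20 hours of monthly use.
profit_per_hour = 0.20 / 20
print(f"{profit_per_hour * 100:.0f} cent of profit per user-hour")
```

The result, $4.5 billion a year, matches the piece’s “more than $4 billion” claim, and the one-cent-per-hour figure underlines its argument that individual attention is worth almost nothing to advertisers unless aggregated and targeted at enormous scale.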