
Javier E

Silicon Valley Sharknado - NYTimes.com

  • algorithms and machines will replace 80 percent of doctors in years to come, making medicine more data driven and less like “witchcraft.”
  • Page predicted a “time of abundance,” when human needs could be more easily met and people would “have more time with their family or to pursue their own interests.”
  • people could be thrown out of work. As Page said, “90 percent of people used to be farmers,” so “it’s not surprising.”
  • ...4 more annotations...
  • “You should presume that someday,” Brin said, “we will be able to make machines that can reason, think and do things better than we can.”
  • Of course, when we get more free time, we’ll simply spend it staring at our iPads
  • “In a way, it’s not being honest,” he said. “We’re still pretending that we’re inventing a brain when all we’ve come up with is a giant mash-up of real brains. We don’t yet understand how brains work, so we can’t build one.”
  • “People are unwittingly feeding information into the Cloud for automated services, which they’re not being paid for,” Lanier said. “I don’t like pretending that humans are becoming buggy whips. You have this fantasy that it’s machines doing it without people helping.
anonymous

Controversial Quantum Machine Tested by NASA and Google Shows Promise | MIT Technology ...

  • artificial-intelligence software.
  • Google says it has proof that a controversial machine it bought in 2013 really can use quantum physics to work through a type of math that’s crucial to artificial intelligence much faster than a conventional computer.
  • “It is a truly disruptive technology that could change how we do everything,” said Rupak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.
  • ...7 more annotations...
  • An alternative algorithm is known that could have let the conventional computer be more competitive, or even win, by exploiting what Neven called a “bug” in D-Wave’s design. Neven said the test his group staged is still important because that shortcut won’t be available to regular computers when they compete with future quantum annealers capable of working on larger amounts of data.
  • “For a specific, carefully crafted proof-of-concept problem we achieve a 100-million-fold speed-up,” said Neven.
  • “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer.
  • Google is competing with D-Wave to make a quantum annealer that could do useful work.
  • Martinis is also working on quantum hardware that would not be limited to optimization problems, as annealers are.
  • Government and university labs, Microsoft (see “Microsoft’s Quantum Mechanics”), and IBM (see “IBM Shows Off a Quantum Computing Chip”) are also working on that technology.
  • “it may be several years before this research makes a difference to Google products.”
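Quantum annealers like the D-Wave machine described above are built for one job: finding low-energy configurations of many interacting variables. As a point of reference for what that kind of optimization looks like, here is a minimal *classical* simulated-annealing sketch on a toy Ising chain (all names and parameters here are illustrative; this is not D-Wave's API or algorithm, just the conventional-computer baseline such machines are compared against):

```python
import math
import random

def ising_energy(spins, couplings):
    """Energy of a 1-D Ising chain: sum over bonds of -J[i] * s[i] * s[i+1]."""
    return -sum(j * spins[i] * spins[i + 1] for i, j in enumerate(couplings))

def simulated_annealing(n, couplings, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Classical annealing: accept energy-raising moves with probability
    exp(-dE/T) while the temperature T is slowly lowered toward zero."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = ising_energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        spins[i] = -spins[i]                 # propose a single spin flip
        new_energy = ising_energy(spins, couplings)
        d_e = new_energy - energy
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            energy = new_energy              # accept the move
        else:
            spins[i] = -spins[i]             # reject: undo the flip
    return spins, energy

# Ferromagnetic chain (all couplings +1): the ground state is all spins aligned.
spins, energy = simulated_annealing(8, [1.0] * 7)
```

The "100-million-fold speed-up" claim in the annotations above is precisely about a quantum annealer outrunning this style of classical search on carefully crafted problem instances.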
silveiragu

College Scorecard Sandbags Equity in Higher Education | Patricia McGuire

  • the "haves" in higher education have quite a lot; the "have nots" struggle mightily. And this economic chasm is seriously influenced by gender, race and social class -- issues on which the College Scorecard is silent, but which affect just about every factoid presented
  • The reality is that even smart wonks educated at some of the best of the "haves" can be blind to social reality; their monument to algorithmic gymnastics in the College Scorecard obscures some of the most important and painful facts about college life and American society today.
  • The administration presents the collegiate earnings data as if it were value-neutral, not only with no reference to the mission of institutions that may have different values from those the administration apparently exalts, but even more devastatingly, with no reference to the pernicious effects of gender and race discrimination on career opportunities and earnings.
  • ...6 more annotations...
  • I am not a wonk, but I did prepare this chart based on data in the College Scorecard and the federal data system IPEDS
  • The value-neutral approach to the collegiate earnings data ignores the facts of life about women and families.
  • 74% of all undergraduates have at least one "non-traditional" characteristic, and more than 55% have two or more non-traditional characteristics such as having children, being a caregiver, delaying college enrollment, attending part-time, working full-time.
  • But the College Scorecard completely ignores the increasingly non-traditional nature of the nation's undergraduate student body today, and instead, presents data as if most college students are privileged children whiling away four years in some grove of academic luxury
  • The Obama administration claims that the new College Scorecard will provide more "transparent" data to students and families trying to decide which college to attend. Unfortunately, by presenting some data in value-neutral or misleading ways, and ignoring other truly important questions in the college choice process
  • the administration presents a data mashup with limited utility for consumers but large potential for misrepresentation of social realities.
Javier E

Opinion | Is Computer Code a Foreign Language? - The New York Times

  • the proposal that foreign language learning can be replaced by computer coding knowledge is misguided:
  • It stems from a widely held but mistaken belief that science and technology education should take precedence over subjects like English, history and foreign languages.
  • more urgent is my alarm at the growing tendency to accept and even foster the decline of the sort of interpersonal human contact that learning languages both requires and cultivates.
  • ...6 more annotations...
  • Language is an essential — perhaps the essential — marker of our species. We learn in and through natural languages; we develop our most fundamental cognitive skills by speaking and hearing languages; and we ultimately assume our identities as human beings and members of communities by exercising those languages
  • Our profound and impressive ability to create complex tools with which to manipulate our environments is secondary to our ability to conceptualize and communicate about those environments in natural languages.
  • Natural languages aren’t just more complex versions of the algorithms with which we teach machines to do tasks; they are also the living embodiments of our essence as social animals.
  • We express our love and our losses, explore beauty, justice and the meaning of our existence, and even come to know ourselves all through natural languages.
  • we are fundamentally limited in how much we can know about another’s thoughts and feelings, and that this limitation and the desire to transcend it is essential to our humanity
  • For us humans, communication is about much more than getting information or following instructions; it’s about learning who we are by interacting with others.
Javier E

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol...

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • ...43 more annotations...
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform after three years struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day.
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”
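The "variable rewards" mechanism Harris likens to a slot machine is, in behavioral terms, a variable-ratio reinforcement schedule: the payoff from each check of the feed arrives unpredictably, and that unpredictability is what sustains compulsive checking. A minimal sketch of such a schedule (purely illustrative; the function, its name, and the 0.3 payoff probability are made up, not any platform's actual mechanics):

```python
import random

def variable_ratio_feed(n_checks, reward_prob=0.3, seed=42):
    """Simulate n_checks pulls of a 'pull-to-refresh' feed where each
    check independently pays out (new content) with reward_prob."""
    rng = random.Random(seed)
    outcomes = [rng.random() < reward_prob for _ in range(n_checks)]
    # The schedule's signature: the gap between rewards varies from
    # check to check, so the user can never predict the next payoff.
    gaps, last = [], -1
    for i, hit in enumerate(outcomes):
        if hit:
            gaps.append(i - last)
            last = i
    return outcomes, gaps

outcomes, gaps = variable_ratio_feed(1000)
hit_rate = sum(outcomes) / len(outcomes)
```

The point of the sketch is the `gaps` list: unlike a fixed schedule (reward every Nth check), the intervals vary, which is the property gambling research associates with the strongest habit formation.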
Javier E

ROUGH TYPE | Nicholas Carr's blog

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • ...39 more annotations...
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

Can Social Networks Do Better? We Don't Know Because They Haven't Tried - Talking Point... - 0 views

  • it’s not fair to say it’s Facebook or a Facebook problem. Facebook is just the latest media and communications medium. We hardly blame the technology of the book for spreading anti-Semitism via the notorious Protocols of the Elders of Zion
  • But of course, it’s not that simple. Social media platforms have distinct features that earlier communications media did not. The interactive nature of the media, the collection of data which is then run through algorithms and artificial intelligence creates something different.
  • All social media platforms are engineered with two basic goals: maximize the time you spend on the platform and make advertising as effective and thus as lucrative as possible. This means that social media can never be simply a common carrier, a distribution technology that has no substantial influence over the nature of the communication that travels over it.
  • ...5 more annotations...
  • it’s a substantial difference which deprives social media platforms of the kind of hands-off logic that would make it ridiculous to say phones are bad or the phone company is responsible if planning for a mass murder was carried out over the phone.
  • the Internet doesn’t ‘do’ anything more than make the distribution of information more efficient and radically lower the formal, informal and financial barriers to entry that used to stand in the way of various marginalized ideas.
  • Social media can never plead innocence like this because the platforms are designed to addict you and convince you of things.
  • If the question is: what can social media platforms do to protect against government-backed subversion campaigns like the one we saw in the 2016 campaign the best answer is, we don’t know. And we don’t know for a simple reason: they haven’t tried.
  • The point is straightforward: the mass collection of data, harnessed to modern computing power and the chance to amass unimaginable wealth has spurred vast technological innovation.
Javier E

How Tech Can Turn Doctors Into Clerical Workers - The New York Times - 0 views

  • what I see in my colleague is disillusionment, and it has come too early, and I am seeing too much of it.
  • In America today, the patient in the hospital bed is just the icon, a place holder for the real patient who is not in the bed but in the computer. That virtual entity gets all our attention. Old-fashioned “bedside” rounds conducted by the attending physician too often take place nowhere near the bed but have become “card flip” rounds
  • My young colleague slumping in the chair in my office survived the student years, then three years of internship and residency and is now a full-time practitioner and teacher. The despair I hear comes from being the highest-paid clerical worker in the hospital: For every one hour we spend cumulatively with patients, studies have shown, we spend nearly two hours on our primitive Electronic Health Records, or “E.H.R.s,” and another hour or two during sacred personal time.
  • ...23 more annotations...
  • The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? What are they doing? Hello! It’s my body, you know
  • Our $3.4 trillion health care system is responsible for more than a quarter of a million deaths per year because of medical error, the rough equivalent of, say, a jumbo jet’s crashing every day.
  • I can get cash and account details all over America and beyond. Yet I can’t reliably get a patient record from across town, let alone from a hospital in the same state, even if both places use the same brand of E.H.R
  • the leading E.H.R.s were never built with any understanding of the rituals of care or the user experience of physicians or nurses. A clinician will make roughly 4,000 keyboard clicks during a busy 10-hour emergency-room shift
  • In the process, our daily progress notes have become bloated cut-and-paste monsters that are inaccurate and hard to wade through. A half-page, handwritten progress note of the paper era might in a few lines tell you what a physician really thought
  • so much of the E.H.R., but particularly the physical exam it encodes, is a marvel of fiction, because we humans don’t want to leave a check box empty or leave gaps in a template.
  • For a study, my colleagues and I at Stanford solicited anecdotes from physicians nationwide about patients for whom an oversight in the exam (a “miss”) had resulted in real consequences, like diagnostic delay, radiation exposure, therapeutic or surgical misadventure, even death. They were the sorts of things that would leave no trace in the E.H.R. because the recorded exam always seems complete — and yet the omission would be glaring and memorable to other physicians involved in the subsequent care. We got more than 200 such anecdotes.
  • The reason for these errors? Most of them resulted from exams that simply weren’t done as claimed. “Food poisoning” was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.
  • I worry that such mistakes come because we’ve gotten trapped in the bunker of machine medicine. It is a preventable kind of failure
  • How we salivated at the idea of searchable records, of being able to graph fever trends, or white blood counts, or share records at a keystroke with another institution — “interoperability”
  • The seriously ill patient has entered another kingdom, an alternate universe, a place and a process that is frightening, infantilizing; that patient’s greatest need is both scientific state-of-the-art knowledge and genuine caring from another human being. Caring is expressed in listening, in the time-honored ritual of the skilled bedside exam — reading the body — in touching and looking at where it hurts and ultimately in localizing the disease for patients not on a screen, not on an image, not on a biopsy report, but on their bodies.
  • What if the computer gave the nurse the big picture of who he was both medically and as a person?
  • a professor at M.I.T. whose current interest in biomedical engineering is “bedside informatics,” marvels at the fact that in an I.C.U., a blizzard of monitors from disparate manufacturers display EKG, heart rate, respiratory rate, oxygen saturation, blood pressure, temperature and more, and yet none of this is pulled together, summarized and synthesized anywhere for the clinical staff to use
  • What these monitors do exceedingly well is sound alarms, an average of one alarm every eight minutes, or more than 180 per patient per day. What is our most common response to an alarm? We look for the button to silence the nuisance because, unlike those in a Boeing cockpit, say, our alarms are rarely diagnosing genuine danger.
  • By some estimates, more than 50 percent of physicians in the United States have at least one symptom of burnout, defined as a syndrome of emotional exhaustion, cynicism and decreased efficacy at work
  • It is on the increase, up by 9 percent from 2011 to 2014 in one national study. This is clearly not an individual problem but a systemic one, a 4,000-key-clicks-a-day problem.
  • The E.H.R. is only part of the issue: Other factors include rapid patient turnover, decreased autonomy, merging hospital systems, an aging population, the increasing medical complexity of patients. Even if the E.H.R. is not the sole cause of what ails us, believe me, it has become the symbol of burnout.
  • burnout is one of the largest predictors of physician attrition from the work force. The total cost of recruiting a physician can be nearly $90,000, but the lost revenue per physician who leaves is between $500,000 and $1 million, even more in high-paying specialties.
  • I hold out hope that artificial intelligence and machine-learning algorithms will transform our experience, particularly if natural-language processing and video technology allow us to capture what is actually said and done in the exam room.
  • as with any lab test, what A.I. will provide is at best a recommendation that a physician using clinical judgment must decide how to apply.
  • True clinical judgment is more than addressing the avalanche of blood work, imaging and lab tests; it is about using human skills to understand where the patient is in the trajectory of a life and the disease, what the nature of the patient’s family and social circumstances is and how much they want done.
  • Much of that is a result of poorly coordinated care, poor communication, patients falling through the cracks, knowledge not being transferred and so on, but some part of it is surely from failing to listen to the story and diminishing skill in reading the body as a text.
  • As he was nearing death, Avedis Donabedian, a guru of health care metrics, was asked by an interviewer about the commercialization of health care. “The secret of quality,” he replied, “is love.”
Javier E

The meaning of life in a world without work | Technology | The Guardian - 0 views

  • As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs.
  • Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers
  • The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.
  • ...15 more annotations...
  • The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income.
  • The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?
  • One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside.
  • This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions”.
  • Muslims and Christians go through life trying to gain points in their favorite virtual reality game. If you pray every day, you get points. If you forget to pray, you lose points. If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).
  • As religions show us, the virtual reality need not be encased inside an isolated box. Rather, it can be superimposed on the physical reality. In the past this was done with the human imagination and with sacred books, and in the 21st century it can be done with smartphones.
  • Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.
  • we saw two other kids on the street who were hunting the same Pokémon, and we almost got into a fight with them. It struck me how similar the situation was to the conflict between Jews and Muslims about the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smartbooks (such as the Bible and the Qur’an), you see holy places and angels everywhere.
  • In the end, the real action always takes place inside the human brain. Does it matter whether the neurons are stimulated by observing pixels on a computer screen, by looking outside the windows of a Caribbean resort, or by seeing heaven in our mind’s eyes?
  • Indeed, one particularly interesting section of Israeli society provides a unique laboratory for how to live a contented life in a post-work world. In Israel, a significant percentage of ultra-orthodox Jewish men never work. They spend their entire lives studying holy scriptures and performing religious rituals. They and their families don’t starve to death partly because the wives often work, and partly because the government provides them with generous subsidies. Though they usually live in poverty, government support means that they never lack for the basic necessities of life.
  • That’s universal basic income in action. Though they are poor and never work, in survey after survey these ultra-orthodox Jewish men report higher levels of life-satisfaction than any other section of Israeli society.
  • Hence virtual realities are likely to be key to providing meaning to the useless class of the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless
  • In any case, the end of work will not necessarily mean the end of meaning, because meaning is generated by imagining rather than by working.
  • People in 2050 will probably be able to play deeper games and to construct more complex virtual worlds than in any previous time in history.
  • But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.
sissij

There's a Major Problem with AI's Decision Making | Big Think - 0 views

  • For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science.
  • The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns.
  • When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism.
  • ...2 more annotations...
  • Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with.
  • What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario.
  •  
    In fiction, we often see scenes where AI robots start to take over the world. We humans are always afraid of AI robots having emotions. As we discussed in TOK, there is a phenomenon where the more human-like robots become, the more people are repelled by them. I think that's because if robots start to have emotions, they could easily slip out of our control. We still see AI robots as lifeless gears and machines; what if they are more than that? --Sissi (4/23/2017)
Javier E

I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets | Techn... - 0 views

  • I emailed Tinder requesting my personal data and got back way more than I bargained for. Some 800 pages came back containing information such as my Facebook “likes”, my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened … the list goes on.
  • “You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”
  • What will happen if this treasure trove of data gets hacked, is made public or simply bought by another company? I can almost feel the shame I would experience. The thought that, before sending me these 800 pages, someone at Tinder might have read them already makes me cringe.
  • ...3 more annotations...
  • In May, an algorithm was used to scrape 40,000 profile images from the platform in order to build an AI to “genderise” faces. A few months earlier, 70,000 profiles from OkCupid (owned by Tinder’s parent company Match Group) were made public by a Danish researcher some commentators have labelled a “white supremacist”, who used the data to try to establish a link between intelligence and religious beliefs. The data is still out there.
  • The trouble is these 800 pages of my most intimate data are actually just the tip of the iceberg. “Your personal data affects who you see first on Tinder, yes,” says Dehaye. “But also what job offers you have access to on LinkedIn, how much you will pay for insuring your car, which ad you will see in the tube and if you can subscribe to a loan. “We are leaning towards a more and more opaque society, towards an even more intangible world where data collected about you will decide even larger facets of your life. Eventually, your whole existence will be affected.”
  • As a typical millennial constantly glued to my phone, my virtual life has fully merged with my real life. There is no difference any more. Tinder is how I meet people, so this is my reality. It is a reality that is constantly being shaped by others – but good luck trying to find out how.
Javier E

Fight the Future - The Triad - 1 views

  • In large part because our major tech platforms reduced the coefficient of friction (μ for my mechanics nerd posse) to basically zero. QAnons crept out of the dark corners of the web—obscure boards like 4chan and 8kun—and got into the mainstream platforms YouTube, Facebook, Instagram, and Twitter.
  • Why did QAnon spread like wildfire in America?
  • These platforms not only made it easy for conspiracy nuts to share their crazy, but they used algorithms that actually boosted the spread of crazy, acting as a force multiplier.
  • ...24 more annotations...
  • So it sounds like a simple fix: Impose more friction at the major platform level and you’ll clean up the public square.
  • But it’s not actually that simple because friction runs counter to the very idea of the internet.
  • The fundamental precept of the internet is that it reduces marginal costs to zero. And this fact is why the design paradigm of the internet is to continually reduce friction experienced by users to zero, too. Because if the second unit of everything is free, then the internet has a vested interest in pushing that unit in front of your eyeballs as smoothly as possible.
  • It’s not that the internet is “broken”; rather, it’s been functioning exactly as it was designed to:
  • Perhaps more than any other job in the world, you do not want the President of the United States to live in a frictionless state of posting. The Presidency is not meant to be a frictionless position, and the United States government is not a frictionless entity, much to the chagrin of many who have tried to change it. Prior to this administration, decisions were closely scrutinized for, at the very least, legality, along with the impact on diplomacy, general norms, and basic grammar. This kind of legal scrutiny and due diligence is also a kind of friction--one that we now see has a lot of benefits. 
  • The deep lesson here isn’t about Donald Trump. It’s about the collision between the digital world and the real world.
  • In the real world, marginal costs are not zero. And so friction is a desirable element in helping to get to the optimal state. You want people to pause before making decisions.
  • described friction this summer as: “anything that inhibits user action within a digital interface, particularly anything that requires an additional click or screen.” For much of my time in the technology sector, friction was almost always seen as the enemy, a force to be vanquished. A “frictionless” experience was generally held up as the ideal state, the optimal product state.
  • Trump was riding the ultimate frictionless optimized engagement Twitter experience: he rode it all the way to the presidency, and then he crashed the presidency into the ground.
  • From a metrics and user point of view, the abstract notion of the President himself tweeting was exactly what Twitter wanted in its original platonic ideal. Twitter has been built to incentivize someone like Trump to engage and post
  • The other day we talked a little bit about how fighting disinformation, extremism, and online cults is like fighting a virus: There is no “cure.” Instead, what you have to do is create enough friction that the rate of spread becomes slow.
  • Our challenge is that when human and digital design comes into conflict, the artificial constraints we impose should be on the digital world to become more in service to us. Instead, we’ve let the digital world do as it will and tried to reconcile ourselves to the havoc it wreaks.
  • And one of the lessons of the last four years is that when you prize the digital design imperatives—lack of friction—over the human design imperatives—a need for friction—then bad things can happen.
  • We have an ongoing conflict between the design precepts of humans and the design precepts of computers.
  • Anyone who works with computers learns to fear their capacity to forget. Like so many things with computers, memory is strictly binary. There is either perfect recall or total oblivion, with nothing in between. It doesn't matter how important or trivial the information is. The computer can forget anything in an instant. If it remembers, it remembers for keeps.
  • This doesn't map well onto human experience of memory, which is fuzzy. We don't remember anything with perfect fidelity, but we're also not at risk of waking up having forgotten our own name. Memories tend to fade with time, and we remember only the more salient events.
  • And because we live in a time when storage grows ever cheaper, we learn to save everything, log everything, and keep it forever. You never know what will come in useful. Deleting is dangerous.
  • Our lives have become split between two worlds with two very different norms around memory.
  • [A] lot of what's wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
  • The digital world is designed to never forget anything. It has perfect memory. Forever. So that one time you made a crude joke 20 years ago? It can now ruin your life.
  • Memory in the carbon-based world is imperfect. People forget things. That can be annoying if you’re looking for your keys but helpful if you’re trying to broker peace between two cultures. Or simply become a better person than you were 20 years ago.
  • The digital and carbon-based worlds have different design parameters. Marginal cost is one of them. Memory is another.
  • 2. Forget Me Now
  • 1. Fix Tech, Fix America
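
The virus analogy in the item above has simple arithmetic behind it: content keeps spreading only while each sharer passes it, on average, to more than one new person, and friction works by pushing that average below one. A toy sketch of the threshold effect (the function, its name, and all the numbers are hypothetical illustrations, not anything from the piece itself):

```python
# Toy model of the "friction as virus-fighting" point: content spreads in
# generations, with each sharer reaching r new people on average.
# All numbers here are made up for illustration.

def final_reach(seed_sharers, r, generations=20):
    """Total people reached when each sharer reaches r new people per generation."""
    total = current = seed_sharers
    for _ in range(generations):
        current = current * r  # next generation of sharers
        total += current
    return total

# Frictionless platform: each share produces 1.5 new shares -> explosive growth.
viral = final_reach(100, r=1.5)

# Added friction (extra clicks, warnings, share limits) cuts r to 0.8 -> fizzle.
damped = final_reach(100, r=0.8)

# Below the r = 1 threshold, total reach stays bounded no matter how long it runs.
assert damped < 100 / (1 - 0.8)
assert viral > 100_000
```

The point of the sketch is the threshold, not the particular numbers: friction does not need to "cure" anything, it only needs to drag the per-share reproduction rate under one.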
Javier E

Opinion | Privacy Is Too Big to Understand - The New York Times - 1 views

  • There is “no single rhetorical approach likely to work on a given audience and none too dangerous to try. Any story that sticks is a good one,”
  • This newsletter is about finding ways to make this stuff stick in your mind and to arm you with the information you need to take control of your digital life.
  • how to start? The definition of privacy itself. I think it’s time to radically expand it.
  • ...12 more annotations...
  • “Privacy is really about being able to define for ourselves who we are for the world and on our own terms,”
  • “hyperobjects,” a concept so all-encompassing that it is impossible to adequately describe
  • invite skepticism because their scale is so vast and sometimes abstract.
  • When technology governs so many aspects of our lives — and when that technology is powered by the exploitation of our data — privacy isn’t just about knowing your secrets, it’s about autonomy
  • “Privacy” is an impoverished word — far too small a word to describe what we talk about when we talk about the mining, transmission, storing, buying, selling, use and misuse of our personal information.
  • not a choice that belongs to an algorithm or data broker and definitely not to Facebook.”
  • privacy is about how that data is used to take away our control
  • real-time data, once assumed to be protected by phone companies, was available for sale to bounty hunters for a $300 fee
  • ICE officials partnered with a private data firm to track license plate data.
  • It means reckoning with private surveillance databases armed with dossiers on regular citizens and outsourced to the highest bidder
  • “Years ago we worried about the N.S.A. building huge server farms, but now it’s much cheaper to go to a private-service vendor and outsource this to a company who can cloak their activity in trade secrets,
  • “It’s comparable to asking people to stop using air conditioning because of the ozone layer. It’s not likely to happen because the immediate comfort is more valuable than the long-term fear.
Javier E

'I Think This Guy Is, Like, Passed Out in His Tesla' - The New York Times - 0 views

  • Tesla’s response to these videos has been consistent: Autopilot is meant to function as a complement to a conscious driver, not a replacement. If you don’t keep a hand on the wheel, your Tesla is supposed to beep at you; eventually it’s supposed to slow to a stop and put its hazard lights on. Anyway, who knows if these clips were real? Couldn’t some of them be the work of pranksters?
  • of course you can still fall asleep with a hand on the wheel — or you can go on YouTube and watch Tesla drivers swap tips for using a water bottle or custom “cellphone holder” to fool the system.
  • What’s fascinating is the way the sci-fi novelty of Autopilot — combined with the deep familiarity of old-fashioned driving — manages to warp our danger-detecting radar. There are instances in which investigators have found that the Autopilot system contributed to crashes, but none of those have been captured on film.
  • ...6 more annotations...
  • driving is already one of the more dangerous activities Americans undertake on a daily basis. According to the National Highway Traffic Safety Administration, “drowsy driving” was a factor in 91,000 crashes, resulting in 50,000 people injured and 810 deaths in 2017, so it’s theoretically possible that what some of these videos are showing us is disaster averted, not disaster in motion.
  • Tesla once generated widespread good will by promising affordable electric cars that would make the world cleaner and safer. But over time, its image was tarnished by missed deadlines, worrying crash reports, signs of a cultlike corporate culture and a chief executive, Elon Musk, who habitually exaggerates progress while announcing extravagant new ideas. This was hardly the institution you would want determining the future of highway safety.
  • These technologies — and the companies that engineer them — keep turning out to be less benign than imagined. We fell in love with Amazon, but now we miss the local stores it closed. We couldn’t resist the convenience of Uber and Lyft, but now we’ve seen their effect on public transit and drivers. “Jetsons”-esque smart-home technology turned out to be riddled with glitches and vulnerable to hackers.
  • Tech companies have hollowed out old industries, shredded privacy, disregarded regulations and created new vectors for the spread of misinformation and extremism, and now there is a sense that choices we have already made — tectonic shifts already in motion, terms of service already accepted — may be changing us in ways that we are only beginning to process, ready to leap up and bite us in the collective behind.
  • It’s hard to imagine a more potent visual metaphor for this feeling than a human lulled to sleep inside a hunk of metal and glass, hurtling down a highway under the control of proprietary algorithms beamed on board from Palo Alto
  • These videos are magnetic not just because of the eerie images they contain, but also because, watching them, we can’t actually be sure what we’re seeing. Is this danger or safety or both at once? Perhaps in a different era we would have cried out in excitement: How cool! Today we are more tempted to gasp in shock and call out a warning: Wake up!
adonahue011

Twitter is Showing That People Are Anxious and Depressed - The New York Times - 1 views

  • the lab offers this answer: Sunday, May 31. That day was not only the saddest day of 2020 so far, it was also the saddest day recorded by the lab in the last 13 years. Or at least, the saddest day on Twitter.
    • adonahue011
       
      The lab is offering the idea that May 31st was the saddest day of 2020, and the saddest in the last 13 years. The toll 2020 has put on all of us mentally is probably something at times we cannot even recognize.
  • measuring word choices across millions of tweets, every day, the world over, to come up with a moving measure of well-being.
    • adonahue011
       
      They use a machine to track the words people are using on twitter specifically to measure the well-being of people
  • the main finding to emerge was our tendency toward relentless positivity on social media.
  • “Happiness is hard to know. It’s hard to measure,”
  • “We don’t have a lot of great data about how people are doing.”
    • adonahue011
       
      This is an interesting statement because it is so true. Yet it is so important to know how people are doing. Oftentimes I think we personally miss some of the feelings we have, which is something we talked about in TOK. We cut out certain memories or feelings to make the narrative we want.
  • to parse our national mental health through the prism of our online life.
  • that stockpile of information towered as high as it does now, in the summer of 2020
  • Twitter reported a 34 percent increase in daily average user growth.
    • adonahue011
       
      Important statistic because we all took part in this
  • has gathered a random 10 percent of all public tweets, every day, across a dozen languages.
  • Twitter included “terrorist,” “violence” and “racist.” This was about a week after George Floyd was killed, near the start of the protests that would last all summer.
  • the pandemic, the Hedonometer’s sadness readings have set multiple records. This year, “there was a full month — and we never see this — there was a full month of days that the Hedonometer was reading sadder than the Boston Marathon day,”
    • adonahue011
       
      This is saddening because it is the reality we have all had to learn how to deal with.
  • “These digital traces are markers that we’re not aware of, but they leave marks that tell us the degree to which you are avoiding things, the degree to which you are connected to people,”
    • adonahue011
       
      I agree with this statement because it is so similar to what we discussed in TOK with the idea that our brain lets us avoid things when we don't feel like we can deal with them.
  • one of the challenges of this line of research is that language itself is always evolving — and algorithms are notoriously bad at discerning context.
  • they were able to help predict which ones might develop postpartum depression, based on their posts before the birth of their babies.
    • adonahue011
       
      This type of research seems like a positive way to utilize social media. Not that the saddening posts are good but the way we can perceive this information is important
  • Using data from social media for the study of mental health also helps address the WEIRD problem:
  • psychology research is often exclusively composed of subjects who are Western, Educated, and from Industrialized, Rich, and Democratic countries.
    • adonahue011
       
      I never thought of this but it is so true! Using social media means that the stats are global.
  • We’re now able to look at a much more diverse variety of mental health experiences.”
  • but also anxiety, depression, stress and suicidal thoughts. Unsurprisingly, she found that all these levels were significantly higher than during the same months of 2019.
  • is really a representative place to check the state of the general population’s mental health.
  • argues that in the rush to embrace data, many researchers ignore the distorting effects of the platforms themselves.
    • adonahue011
       
      Contrasting opinion from the rest of the article
  • emotionally invested in the content we are presented with, coaxed toward remaining in a certain mental state.
    • adonahue011
       
      Interesting idea though I tend to think more in the opposite direction that social media is a pretty solid reflection.
  • The closest we get to looking at national mental health otherwise is through surveys like the one Gallup performs
  • the lowest rates of life satisfaction this year in over a decade, including during the 2008 recession
  • I have never been more exhausted at the end of the day than I am now,” said Michael Garfinkle, a psychoanalyst in New York.
  • There are so many contenders to consider: was it Thursday, March 12, the day after Tom Hanks announced he was sick and the N.B.A. announced it was canceled? Was it Monday, June 1, the day peaceful protesters were tear gassed so that President Trump could comfortably stroll to his Bible-wielding photo op?
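The word-averaging approach the Hedonometer annotations above describe — scoring the words people use and averaging across millions of tweets — can be sketched in a few lines. This is an illustrative toy, not the lab's actual pipeline: the `HAPPINESS` scores below are made-up stand-ins for the real labMT word-happiness ratings.

```python
# Toy Hedonometer-style scorer: average per-word happiness on a 1-9 scale.
# The scores here are hypothetical illustrative values, not the labMT data.

HAPPINESS = {
    "love": 8.4, "happy": 8.3, "sad": 2.4, "violence": 1.9, "day": 6.2,
}

def average_happiness(text):
    """Average the happiness scores of the scored words in a text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scored = [HAPPINESS[w] for w in words if w in HAPPINESS]
    return sum(scored) / len(scored) if scored else None

print(average_happiness("A happy day, no violence."))
```

Applied to a day's sample of tweets, a falling average is what the lab reads as a "sad" day; words the dictionary has no score for are simply skipped, which is one reason evolving language and missing context trip such measures up.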
ilanaprincilus06

How the web distorts reality and impairs our judgement skills | Media Network | The Gua... - 0 views

  • IBM estimates that 90% of the world's online data has been created just in the past two years. What's more, it has made information more accessible than ever before.
  • However, rather than enhancing knowledge, the internet has produced an information glut or "infoxication".
  • Furthermore, since online content is often curated to fit our preferences, interests and personality, the internet can even enhance our existing biases and undermine our motivation to learn new things.
    • ilanaprincilus06
       
      When we see our preferences constantly being displayed, we are more likely to go back to wherever the information was or utilize that source, website, etc more often.
  • these filters will isolate people in information bubbles only partly of their own choosing, and the inaccurate beliefs they form as a result may be difficult to correct."
  • the proliferation of search engines, news aggregators and feed-ranking algorithms is more likely to perpetuate ignorance than knowledge.
  • It would seem that excessive social media use may intensify not only feelings of loneliness, but also ideological isolation.
    • ilanaprincilus06
       
      Would social media networks need to stop exploiting these preferences in order for us to limit ideological isolation?
  • "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact."
  • Recent studies show that although most people consume information that matches their opinions, being exposed to conflicting views tends to reduce prejudice and enhance creative thinking.
  • the desire to prove ourselves right and maintain our current beliefs trumps any attempt to be creative or more open-minded.
  • "our objects of inquiry are not 'truth' or 'meaning' but rather configurations of consciousness. These are figures or patterns of knowledge, cognitive and practical attitudes, which emerge within a definite historical and cultural context."
  • the internet is best understood as a cultural lens through which we construct – or distort – reality.
  • we can only deal with this overwhelming range of choices by ignoring most of them.
  • trolling is so effective for enticing readers' comments, but so ineffective for changing their viewpoints.
  • Will accumulating facts help you understand the world?
    • ilanaprincilus06
       
      We must take an extra step past just reading/learning about facts and develop second order thinking about the claims/facts to truly gain a better sense of what is going on.
  • we have developed a dependency on technology, which has eclipsed our reliance on logic, critical thinking and common sense: if you can find the answer online, why bother thinking?
  • it is conceivable that individuals' capacity to evaluate and produce original knowledge will matter more than the actual acquisition of knowledge.
  • Good judgment and decision-making will be more in demand than sheer expertise or domain-specific knowledge.
margogramiak

How To Fight Deforestation In The Amazon From Your Couch | HuffPost - 0 views

  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
  • Some 15% of the Amazon, the world’s largest rainforest and a crucial carbon repository, has been cut or burned down. Around two-thirds of the Amazon lie within Brazil’s borders, where almost 157 square miles of forest were cleared in April alone. In addition to storing billions of tons of carbon, the Amazon is home to tens of millions of people and some 10% of the Earth’s biodiversity.
    • margogramiak
       
      all horrifying stats.
  • you just have to be a citizen that is concerned about the issue of deforestation,
    • margogramiak
       
      that's me!
  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
    • margogramiak
       
      great!
  • to build an artificial intelligence model that can recognize signs of deforestation. That data can be used to alert governments and conservation organizations where intervention is needed and to inform policies that protect vital ecosystems. It may even one day predict where deforestation is likely to happen next.
    • margogramiak
       
      That sounds super cool, and definitely useful.
  • To monitor deforestation, conservation organizations need an eye in the sky.
    • margogramiak
       
      bird's eye view pictures of deforestation are always super impactful.
  • WRI’s Global Forest Watch online tracking system receives images of the world’s forests taken every few days by NASA satellites. A simple computer algorithm scans the images, flagging instances where before there were trees and now there are not. But slight disturbances, such as clouds, can trip up the computer, so experts are increasingly interested in using artificial intelligence.
    • margogramiak
       
      that's so cool.
  • Inman was surprised how willing people have been to spend their time clicking on abstract-looking pictures of the Amazon.
    • margogramiak
       
      I'm glad so many people want to help.
  • Look at these nine blocks and make a judgment about each one. Does that satellite image look like a situation where human beings have transformed the landscape in some way?” Inman explained.
    • margogramiak
       
      seems simple enough
  • It’s not always easy; that’s the point. For example, a brown patch in the trees could be the result of burning to clear land for agriculture (earning a check mark for human impact), or it could be the result of a natural forest fire (no check mark). Keen users might be able to spot subtle signs of intervention the computer would miss, like the thin yellow line of a dirt road running through the clearing. 
    • margogramiak
       
      I was thinking about this issue... that's a hard problem to solve.
  • SAS’s website offers a handful of examples comparing natural forest features and manmade changes. 
    • margogramiak
       
      I guess that would be helpful. What happens if someone messes up though?
  • users have analyzed almost 41,000 images, covering an area of rainforest nearly the size of the state of Montana. Deforestation caused by human activity is evident in almost 2 in 5 photos.
    • margogramiak
       
      wow.
  • The researchers hope to use historical images of these new geographies to create a predictive model that could identify areas most at risk of future deforestation. If they can show that their AI model is successful, it could be useful for NGOs, governments and forest monitoring bodies, enabling them to carefully track forest changes and respond by sending park rangers and conservation teams to threatened areas. In the meantime, it’s a great educational tool for the citizen scientists who use the app
    • margogramiak
       
      But then what do they do with this data? How do they use it to make a difference?
  • Users simply select the squares in which they’ve spotted some indication of human impact: the tell-tale quilt of farm plots, a highway, a suspiciously straight edge of tree line. 
    • margogramiak
       
      I could do that!
  • we have still had people from 80 different countries come onto the app and make literally hundreds of judgments that enabled us to resolve 40,000 images,
    • margogramiak
       
      I like how in a sense it makes all the users one big community because of their common goal of wanting to help the earth.
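The "there were trees before and now there are not" check described in the Global Forest Watch annotation above can be sketched as a per-pixel comparison of a vegetation index at two dates. The grids, the 0.5 threshold, and the `flag_loss` helper are illustrative assumptions, not WRI's actual algorithm — and, as the article notes, a real system also has to cope with clouds and natural fires.

```python
# Minimal sketch of before/after change detection on a vegetation index
# (NDVI-like values in [0, 1]). Threshold and grids are illustrative.

def flag_loss(before, after, tree_thresh=0.5):
    """Return (row, col) of pixels vegetated before but not after."""
    flagged = []
    for i, (row_b, row_a) in enumerate(zip(before, after)):
        for j, (b, a) in enumerate(zip(row_b, row_a)):
            if b >= tree_thresh and a < tree_thresh:
                flagged.append((i, j))
    return flagged

before = [[0.8, 0.7], [0.9, 0.2]]
after  = [[0.8, 0.3], [0.4, 0.2]]
print(flag_loss(before, after))  # -> [(0, 1), (1, 0)]
```

Pixels flagged this way are exactly the ambiguous cases the app hands to human volunteers: the drop could be clearing for agriculture or a natural disturbance, which is why the AI model is trained on their judgments.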
runlai_jiang

The Big Tech Trends to Follow at CES 2018 - The New York Times - 0 views

  • Imagine this: When you leave the house, your air conditioner and lights turn off automatically. Then when a motion sensor detects a person in the house, like your house cleaner, it sends an alert to your phone. When you arrive home, a camera recognizes who you are and the door automatically unlocks.
  • Automated technologies like these will be at the forefront of CES, one of the world’s largest tech conventions, next week in Las Vegas. They underline one major trend: Increasingly, the innovations that are making their way into your personal technology aren’t physical electronics or gadgets at all.
  • It is the culmination of software, algorithms and sensors working together to make your everyday appliances smarter and more automated.
  • Alexa and Her Counterparts: Alexa, Amazon’s intelligent assistant that listens to your voice commands to play music, order diapers and place a phone call, will be everywhere at CES.
  • Smart Cities: Nowadays, it’s easy to shop for high-quality internet-connected home accessories, like light bulbs, thermostats and security cameras. At CES, Samsung is even planning to introduce a smart refrigerator at the electronics show that can listen to voice commands to control other home accessories.
  • Smarter Cars: Self-driving-car enthusiasts like Elon Musk, the chief executive of Tesla, dream of a future where driverless cars eliminate traffic accidents while letting people do work on their commutes. They can keep dreaming: Autonomous vehicles still have a long way to go before they become safe and properly regulated.
  • Next-Generation Wireless Technology: As a growing number of devices rely on artificial intelligence, they will require faster bandwidth speeds. At CES, wireless companies like AT&T and Verizon are expected to give progress reports on so-called 5G, the fifth-generation network technology. With 5G, wireless carriers envision an era of incredibly fast speeds that let smartphone users download a movie in less than five seconds — roughly 100 times faster than the current network technology, 4G. Even more important, 5G is expected to greatly reduce latency to let devices communicate with each other with extremely fast response times.
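As a rough sanity check on the "movie in under five seconds, roughly 100 times faster" figure, here is back-of-the-envelope arithmetic under assumed numbers: a ~5 GB movie, ~10 Gbit/s for 5G and ~100 Mbit/s for 4G. These are illustrative values, not carrier specifications.

```python
# Back-of-the-envelope download-time check under assumed link speeds.

movie_bits = 5 * 8 * 10**9          # a 5 GB movie expressed in bits
t_5g = movie_bits / (10 * 10**9)    # seconds at an assumed 10 Gbit/s
t_4g = movie_bits / (100 * 10**6)   # seconds at an assumed 100 Mbit/s

print(t_5g)         # -> 4.0 seconds
print(t_4g / t_5g)  # -> 100.0x speedup
```

Under those assumptions the arithmetic lands exactly on the article's claim: about four seconds on 5G versus minutes on 4G, a factor of 100.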