
TOK Friends: Group items tagged "simulation"


rachelramirez

How Phantom Limbs Explain Consciousness - The Atlantic - 0 views

  • Almost all people who have an amputation experience a phantom. Usually the effect fades within days, but in some cases it can remain for a lifetime.
  • The brain constructs a model of the self that neuroscientists call the body schema. The body schema is a simulation. It takes in touch, vision, and baseline information about what’s connected to what, and builds a virtual model of your body.
  • As a child grows up, the body schema adjusts to the changing body, but its adaptability has limits.
  • Attention is the selective enhancement of some signals over others, such that the brain’s resources are strategically deployed.
  • The brain needs to control its attention, just as it controls the body.
  • In control theory, if a machine is to control something optimally, it needs a working model of whatever it’s controlling. The brain certainly follows this principle in controlling the body. [A toy code sketch of this principle follows these annotations.]
  • Just as the body schema is a surreal description of the body, so the attention schema would be a surreal description of attention.
  • This is called the attention schema theory, a theory that my lab has been developing and testing experimentally for the past five years. It’s a theory of why we insist with such certainty that we have subjective experience. Attention is fundamental.
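
The control-theory principle highlighted above can be made concrete in a few lines of code. The sketch below is only a toy illustration, with an invented plant, delay, and gain (nothing here comes from the article): a controller steering a sluggish "body" toward a target does much better when it carries an internal model of the commands still in flight, which is the role the body schema is claimed to play.

```python
# Toy illustration of the control-theory principle quoted above:
# a controller that carries an internal model of its "body" (plant)
# outperforms one that reacts to raw feedback alone.
# All numbers are assumptions; purely illustrative.

def simulate(use_model, steps=50, delay=3, gain=0.5, target=1.0):
    """Drive a sluggish plant toward `target`.

    The plant applies each command only after `delay` ticks,
    the way motor commands take time to move a limb.
    """
    state = 0.0
    pipeline = [0.0] * delay          # commands still "in flight"
    total_error = 0.0
    for _ in range(steps):
        if use_model:
            # Internal model: predict where the plant will be once the
            # queued commands land, and correct that prediction.
            predicted = state + sum(pipeline)
            command = gain * (target - predicted)
        else:
            # Model-free: react to the current state only, ignoring
            # commands already in flight (causes overshoot).
            command = gain * (target - state)
        pipeline.append(command)
        state += pipeline.pop(0)      # oldest command takes effect now
        total_error += abs(target - state)
    return total_error

print("error with internal model:   ", round(simulate(True), 3))
print("error without internal model:", round(simulate(False), 3))
```
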
Javier E

New Statesman - The limits of science: Martin Rees - 1 views

  • Einstein averred that “the most incomprehensible thing about the universe is that it is comprehensible”. He was right to be astonished. It seems surprising that our minds, which evolved to cope with life on the African savannah and haven’t changed much in 10,000 years, can make sense of phenomena far from our everyday intuitions: the microworld of atoms and the vastness of the cosmos. But our comprehension could one day “hit the buffers”. A monkey is unaware that atoms exist. Likewise, our brainpower may not stretch to the deepest aspects of reality.
  • Everything, however complicated – breaking waves, migrating birds, or tropical forests – is made up of atoms and obeys the equations of quantum physics. That, at least, is what most scientists believe, and there is no reason to doubt it. Yet there are inherent limits to science’s predictive power. Some things, like the orbits of the planets, can be calculated far into the future. But that’s atypical. In most contexts, there is a limit. Even the most fine-grained computation can only forecast British weather a few days ahead. [A standard demonstration of why follows these annotations.]
  • even if we could build a computer with hugely superhuman processing power, which could offer an accurate simulation, that doesn’t mean that we will have the insight to understand it. Some of the “aha” insights that scientists strive for may have to await the emergence of post-human intellects.
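
The limit on weather forecasting quoted above comes from sensitive dependence on initial conditions. A standard textbook demonstration (not taken from the essay) integrates the Lorenz equations from two starting points that differ by one part in a billion and watches the forecasts part ways:

```python
# Sensitive dependence on initial conditions: two Lorenz-system
# trajectories starting 1e-9 apart diverge until forecasts are useless.
# Standard demonstration, not from the article.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)            # the "true" atmosphere
b = (1.0 + 1e-9, 1.0, 20.0)     # measurement off by one part in a billion

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = abs(a[0] - b[0])
        print(f"t = {step * 0.01:5.1f}  divergence in x: {gap:.2e}")
```
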
Javier E

The Older Mind May Just Be a Fuller Mind - NYTimes.com - 0 views

  • Memory’s speed and accuracy begin to slip around age 25 and keep on slipping.
  • Now comes a new kind of challenge to the evidence of a cognitive decline, from a decidedly digital quarter: data mining, based on theories of information processing
  • Since educated older people generally know more words than younger people, simply by virtue of having been around longer, the experiment simulates what an older brain has to do to retrieve a word. And when the researchers incorporated that difference into the models, the aging “deficits” largely disappeared.
  • Neuroscientists have some reason to believe that neural processing speed, like many reflexes, slows over the years; anatomical studies suggest that the brain also undergoes subtle structural changes that could affect memory.
  • doubts about the average extent of the decline are rooted not in individual differences but in study methodology. Many studies comparing older and younger people, for instance, did not take into account the effects of pre-symptomatic Alzheimer’s disease.
  • The new data-mining analysis also raises questions about many of the measures scientists use. Dr. Ramscar and his colleagues applied leading learning models to an estimated pool of words and phrases that an educated 70-year-old would have seen, and another pool suitable for an educated 20-year-old. Their model accounted for more than 75 percent of the difference in scores between older and younger adults on items in a paired-associate test
  • That is to say, the larger the library you have in your head, the longer it usually takes to find a particular word (or pair). [A toy illustration of this point follows these annotations.]
  • Scientists who study thinking and memory often make a broad distinction between “fluid” and “crystallized” intelligence. The former includes short-term memory, like holding a phone number in mind, analytical reasoning, and the ability to tune out distractions, like ambient conversation. The latter is accumulated knowledge, vocabulary and expertise.
  • an increase in crystallized intelligence can account for a decrease in fluid intelligence.
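
A toy version of the retrieval argument above, assuming invented vocabulary sizes and a deliberately crude sequential-search model of memory (Ramscar's actual learning models are far more sophisticated): a fuller lexicon means more comparisons per lookup, so slower responses, with no decline in the underlying machinery.

```python
import random

# Toy version of the "fuller mind" argument (my illustration, not
# Ramscar's model): retrieval is modeled as scanning a ranked list of
# learned words, so a bigger vocabulary means more comparisons on
# average -- "slower" responses without any loss of function.

random.seed(1)

def mean_search_steps(vocabulary_size, trials=10_000):
    """Average comparisons needed to find a randomly probed word."""
    steps = 0
    for _ in range(trials):
        target = random.randrange(vocabulary_size)
        steps += target + 1          # position reached in the scan
    return steps / trials

young_lexicon = 20_000   # assumed size for an educated 20-year-old
older_lexicon = 40_000   # assumed: roughly double, after decades more reading

print("young mean search steps:", mean_search_steps(young_lexicon))
print("older mean search steps:", mean_search_steps(older_lexicon))
```
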
Javier E

Computers Jump to the Head of the Class - NYTimes.com - 0 views

  • Tokyo University, known as Todai, is Japan’s best. Its exacting entry test requires years of cramming to pass and can defeat even the most erudite. Most current computers, trained in data crunching, fail to understand its natural language tasks altogether. Ms. Arai has set researchers at Japan’s National Institute of Informatics, where she works, the task of developing a machine that can jump the lofty Todai bar by 2021. If they succeed, she said, such a machine should be capable, with appropriate programming, of doing many — perhaps most — jobs now done by university graduates.
  • A recent study published by the Program on the Impacts of Future Technology, at Oxford University’s Oxford Martin School, predicted that nearly half of all jobs in the United States could be replaced by computers over the next two decades.
  • Intelligent machines could be used to replace expensive human resources, potentially undermining the economic value of much vocational education, Ms. Arai said.
  • “Educational investment will not be attractive to those without unique skills,” she said. Graduates, she noted, need to earn a return on their investment in training: “But instead they will lose jobs, replaced by information simulation. They will stay uneducated.” In such a scenario, high-salary jobs would remain for those equipped with problem-solving skills, she predicted. But many common tasks now done by college graduates might vanish.
  • Over the next 10 to 20 years, “10 percent to 20 percent pushed out of work by A.I. will be a catastrophe,” she says. “I can’t begin to think what 50 percent would mean — way beyond a catastrophe and such numbers can’t be ruled out if A.I. performs well in the future.”
  • There is a significant danger, Ms. Arai says, that the widespread adoption of artificial intelligence, if not well managed, could lead to a radical restructuring of economic activity and the job market, outpacing the ability of social and education systems to adjust.
  • Smart machines will give companies “the opportunity to automate many tasks, redesign jobs, and do things never before possible even with the best human work forces,” according to a report this year by the business consulting firm McKinsey.
  • Advances in speech recognition, translation and pattern recognition threaten employment in the service sectors — call centers, marketing and sales — precisely the sectors that provide most jobs in developed economies.
  • Gartner’s 2013 chief executive survey, published in April, found that 60 percent of executives surveyed dismissed as “futurist fantasy” the possibility that smart machines could displace many white-collar employees within 15 years.
  • Kenneth Brant, research director at Gartner, told a conference in October: “Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market’s ability to create valuable new ones.”
  • Optimists say this could lead to the ultimate elimination of work — an “Athens without the slaves” — and a possible boom for less vocational-style education. Mr. Brant’s hope is that such disruption might lead to a system where individuals are paid a citizen stipend and are freed for education and self-realization. “This optimistic scenario I call Homo Ludens, or ‘Man, the Player,’ because maybe we will not be the smartest thing on the planet after all,” he said. “Maybe our destiny is to create the smartest thing on the planet and use it to follow a course of self-actualization.”
charlottedonoho

Schoolroom Climate Change Indoctrination - WSJ - 0 views

  • While many American parents are angry about the Common Core educational standards and related student assessments in math and English, less attention is being paid to the federally driven green Common Core that is now being rolled out across the country. Under the guise of the first new K-12 science curriculum to be introduced in 15 years, the real goal seems to be to expose students to politically correct climate-change orthodoxy during their formative learning years.
  • The standards were designed to provide students with an internationally benchmarked science education.
  • From the council’s perspective, the science of climate change has already been settled. Not surprisingly, global climate change is one of the disciplinary core ideas embedded in the Next Generation of Science Standards, making it required learning for students in grade, middle and high school.
  • The National Research Council framework for K-12 science education recommends that by the end of Grade 5, students should appreciate that rising average global temperatures will affect the lives of all humans and other organisms on the planet. By Grade 8, students should understand that the release of greenhouse gases from burning fossil fuels is a major factor in global warming. And by Grade 12, students should know that global climate models are very effective in modeling, predicting and managing the current and future impact of climate change.
  • Relying on a climate-change curriculum and teaching materials largely sourced from federal agencies—particularly those of the current ideologically driven administration—raises a number of issues. Along with the undue authoritative weight that such government-produced documents carry in the classroom, most of the work is one-sided and presented in categorical terms, leaving no room for a balanced discussion. Moreover, too much blind trust is placed in the predictive power of long-range computer simulations, despite the weak forecasting track record of most climate models to date.
  • Employing such a Socratic approach to teaching climate change would likely lead to a rational and thought-provoking classroom debate on the merits of the case. However, that is not the point of this academic exercise—which seems to be to indoctrinate young people by using K-12 educators to establish the same positive political feedback loop around global warming that has existed between the federal government and the nation’s colleges and universities for the past two decades.
grayton downing

Send in the Bots | The Scientist Magazine® - 0 views

  • Like any hypothesis, his idea needed to be tested. But measuring brain activity in a moving ant—the most direct way to determine cognitive processing during animal decision making—was not possible. So Garnier didn’t study ants; he studied robots.
  • The robots then navigated the environment by sensing light intensity through two sensors on their “heads.”
  • several groups have used autonomous robots that sense and react to their environments to “debunk the idea that you need higher cognitive processing to do what look like cognitive things.” [A minimal sketch of such a robot follows these annotations.]
  • a growing number of scientists are using autonomous robots to interrogate animal behavior and cognition. Researchers have designed robots to behave like ants, cockroaches, rodents, chickens, and more, then deployed their bots in the lab or in the environment to see how similarly they behave to their flesh-and-blood counterparts.
  • robots give behavioral biologists the freedom to explore the mind of an animal in ways that would not be possible with living subjects, says University of Sheffield researcher James Marshall, who in March helped launch a 3-year collaborative project to build a flying robot controlled by a computer-run simulation of the entire honeybee brain.
  • “I really think there is a lot to be discovered by doing the engineering side along with the science.”
  • Not only did the bots move around the space like the rat pups did, they aggregated in remarkably similar ways to the real animals. Then Schank realized that there was a bug in his program. The robots weren’t following his predetermined rules; they were moving randomly.
  • “Animal experiments are still needed to advance neuroscience.” But, he adds, robots may prove to be an indispensable new ethological tool for focusing the scope of research. “If you can have good physical models,” Prescott says, “then you can reduce the number of experiments and only do the ones that answer really important questions.”
  • Building animal-mimicking robots is not easy, however, particularly when knowledge of the system’s biology is lacking.
  • However, when the researchers also gave the robots a sense of flow, and programmed them to assume that odors come from upstream, the bots much more closely mimicked real lobster behavior. “That was a demonstration that the animals’ brains were multimodal—that they were using chemical information and flow information,” says Grasso, who has since worked on robotic models of octopus arms and crayfish.
  • In some sense, the use of robotics in animal-behavior research is not that new. Since the inception of the field of ethology, researchers have been using simple physical models of animals—“dummies”—to examine the social behavior of real animals, and biologists began animating their dummies as soon as technology would allow. “The fundamental problem when you’re studying an interaction between two individuals is that it’s a two-way interaction—you’ve got two players whose behaviors are both variable.”
  • building a robot that animals will accept as one of their own is complicated, to say the least.
  • A handful of other researchers have also successfully integrated robots with live animals—including fish, ducks, and chickens. There are several notable benefits to intermixing robots and animals; first and foremost, control. “One of the problems when studying behavior is that, of course, it’s very difficult to have control of animals, and so it’s hard for us to interpret fully how they interact with each other.”
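
Robots like the ones described above, with two light sensors steering a body, are essentially Braitenberg vehicles. The sketch below assumes its own numbers and wiring (it is not Garnier's controller), but shows how light-seeking "decisions" fall out of a feedback loop with no cognition anywhere:

```python
import math

# Braitenberg-style phototaxis in the spirit of the ant robots above:
# two light sensors steer the robot, with no planning or memory.
# All parameters are assumptions for illustration.

def intensity(x, y):
    """Light from a source at the origin, falling off with distance."""
    return 1.0 / (1.0 + x * x + y * y)

x, y, heading = 5.0, 3.0, 0.0          # start away from the light
for step in range(301):
    # Two sensors mounted on the "head", angled left and right.
    lx = x + 0.2 * math.cos(heading + 0.5)
    ly = y + 0.2 * math.sin(heading + 0.5)
    rx = x + 0.2 * math.cos(heading - 0.5)
    ry = y + 0.2 * math.sin(heading - 0.5)
    left, right = intensity(lx, ly), intensity(rx, ry)
    # Steer toward the brighter side; dividing by the sum makes the
    # turn rate independent of overall brightness.
    heading += 6.0 * (left - right) / (left + right)
    x += 0.1 * math.cos(heading)
    y += 0.1 * math.sin(heading)
    if step % 60 == 0:
        print(f"step {step:3d}  distance to light: {math.hypot(x, y):5.2f}")
```
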
Javier E

Does Google Make Us Stupid? - Pew Research Center - 0 views

  • Carr argued that the ease of online searching and distractions of browsing through the web were possibly limiting his capacity to concentrate. "I'm not thinking the way I used to," he wrote, in part because he is becoming a skimming, browsing reader, rather than a deep and engaged reader. "The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author's words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas.... If we lose those quiet spaces, or fill them up with ‘content,' we will sacrifice something important not only in our selves but in our culture."
  • Another respondent argued that technology will force us to get smarter if we are to survive. "Most people don't realize that this process is already under way," he wrote. "In fact, it's happening all around us, across the full spectrum of how we understand intelligence. It's visible in the hive mind of the Internet, in the powerful tools for simulation and visualization that are jump-starting new scientific disciplines, and in the development of drugs that some people (myself included) have discovered let them study harder, focus better, and stay awake longer with full clarity." He argued that while the proliferation of technology and media can challenge humans' capacity to concentrate there were signs that we are developing "fluid intelligence-the ability to find meaning in confusion and solve new problems, independent of acquired knowledge." He also expressed hope that techies will develop tools to help people find and assess information smartly.
  • 76% of the experts agreed with the statement, "By 2020, people's use of the internet has enhanced human intelligence; as people are allowed unprecedented access to more information they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid."
Javier E

Don't Be a Stranger - Boston.com - 1 views

  • In experiments with total strangers to whom they're unrelated, and whom they'll never see again, people are often surprisingly (and, from a theoretical point-of-view, needlessly) generous, cooperative, and unwilling to cheat.
  • Why should this be? There have been lots of explanations (naive, optimistic undergrads? a culture of friendliness and charity?), but none of them seem to provide the sort of long-term, structured pressures that might explain our friendliness evolutionarily.
  • Essentially, it's that every social encounter between two people involves a guess about whether or not you'll meet again in the future; you have to decide whether or not an interaction will be "one-shot" or "repeated." By modeling "one-shot discrimination" in a computer, the group has shown that it makes more sense to presume that you'll meet again down the road.
  • Tooby and Cosmides ran their simulation for tens of thousands of generations, to figure out where the generosity thermostat would get set. They find that it makes more sense to adopt a general attitude of generosity, in the hope that paying it forward now will pay back later. What does this all mean for how we think about ourselves? To the researchers, it suggests that "human generosity, far from being a thin veneer of cultural conditioning atop a Machiavellian core, may turn out to be a bedrock feature of human nature." Why? Because thousands of years of small-town living have left their mark. [A stripped-down sketch of this kind of simulation follows these annotations.]
  • Why would you choose to cooperate or cheat? The answer hinges, essentially, on a guess: For many encounters, you simply can't know whether or not they'll be one-shot or repeated.
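
A drastically stripped-down sketch in the spirit of the simulation described above. The payoffs, population size, and mutation scheme are my assumptions, not Tooby and Cosmides' actual model: agents cannot tell one-shot encounters from repeated ones, and selection tunes their default generosity.

```python
import random

random.seed(0)

# Simplified cousin of the Tooby/Cosmides simulation (their model
# differs): agents can't distinguish one-shot from repeated encounters,
# and evolution sets their baseline generosity. Payoffs are arbitrary.

REPEAT_CHANCE = 0.5    # how often a "stranger" is someone you'll meet again
COST, BENEFIT = 1.0, 3.0
POP, ENCOUNTERS, GENERATIONS = 200, 20, 300

population = [random.random() for _ in range(POP)]   # p(cooperate)

for gen in range(GENERATIONS + 1):
    fitness = []
    for p in population:
        payoff = 0.0
        for _ in range(ENCOUNTERS):
            cooperated = random.random() < p
            repeated = random.random() < REPEAT_CHANCE
            if cooperated:
                payoff -= COST            # generosity is costly now...
                if repeated:
                    payoff += BENEFIT     # ...but pays off down the road
        fitness.append(max(payoff, 0.01))
    if gen % 100 == 0:
        print(f"generation {gen:3d}  mean generosity: "
              f"{sum(population) / POP:.2f}")
    # Fitness-proportional reproduction with small mutations.
    population = [min(1.0, max(0.0, random.choices(population, fitness)[0]
                               + random.gauss(0, 0.02)))
                  for _ in range(POP)]
```

With these (assumed) payoffs, cooperating with a stranger has positive expected value whenever the repeat chance exceeds COST/BENEFIT, so mean generosity climbs over the generations, echoing the researchers' conclusion.
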
Javier E

To Justify Every 'A,' Some Professors Hand Over Grading Power to Outsiders - Technology... - 0 views

  • The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don't worry that students will punish harsh grades with poor reviews. That's the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
  • These efforts raise the question: What if professors aren't that good at grading? What if the model of giving instructors full control over grades is fundamentally flawed? As more observers call for evidence of college value in an era of ever-rising tuition costs, game-changing models like these are getting serious consideration.
  • Professors do score poorly when it comes to fair grading, according to a study published in July in the journal Teachers College Record. After crunching the numbers on decades' worth of grade reports from about 135 colleges, the researchers found that average grades have risen for 30 years, and that A is now the most common grade given at most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue that a "consumer-based approach" to higher education has created subtle incentives for professors to give higher marks than deserved. "The standard practice of allowing professors free rein in grading has resulted in grades that bear little relation to actual performance," the two professors concluded.
  • Western Governors is entirely online, for one thing. Technically it doesn't offer courses; instead it provides mentors who help students prepare for a series of high-stakes homework assignments. Those assignments are designed by a team of professional test-makers to prove competence in various subject areas. The idea is that as long as students can leap all of those hurdles, they deserve degrees, whether or not they've ever entered a classroom, watched a lecture video, or participated in any other traditional teaching experience. The model is called "competency-based education."
  • Ms. Johnson explains that Western Governors essentially splits the role of the traditional professor into two jobs. Instructional duties fall to a group the university calls "course mentors," who help students master material. The graders, or evaluators, step in once the homework is filed, with the mind-set of, "OK, the teaching's done, now our job is to find out how much you know," says Ms. Johnson. They log on to a Web site called TaskStream and pluck the first assignment they see. The institution promises that every assignment will be graded within two days of submission.
  • Western Governors requires all evaluators to hold at least a master's degree in the subject they're grading.
  • Evaluators are required to write extensive comments on each task, explaining why the student passed or failed to prove competence in the requisite skill. No letter grades are given—students either pass or fail each task.
  • Another selling point is the software's fast response rate. It can grade a batch of 1,000 essay tests in minutes. Professors can set the software to return the grade immediately and can give students the option of making revisions and resubmitting their work on the spot.
  • The graders must regularly participate in "calibration exercises," in which they grade a simulated assignment to make sure they are all scoring consistently. As the phrase suggests, the process is designed to run like a well-oiled machine.
  • Other evaluators want to push talented students to do more than the university’s requirements for a task, or to allow a struggling student to pass if he or she is just under the bar. “Some people just can’t acclimate to a competency-based environment,” says Ms. Johnson. “I tell them, If they don’t buy this, they need to not be here.”
  • She and some teaching assistants scored the tests by hand and compared their performance with the computer's.
  • The graduate students became fatigued and made mistakes after grading several tests in a row, she told me, "but the machine was right-on every time."
  • He argues that students like the idea that their tests are being evaluated in a consistent way.
  • All evaluators initially receive a month of training, conducted online, about how to follow each task's grading guidelines, which lay out characteristics of a passing score.
  • He said once students get essays back instantly, they start to view essay tests differently. “It’s almost like a big math problem. You don’t expect to get everything right the first time, but you work through it.”
  • robot grading is the hottest trend in testing circles, says Jacqueline Leighton, a professor of educational psychology at the University of Alberta who edits the journal Educational Measurement: Issues and Practice. Companies building essay-grading robots include the Educational Testing Service, which sells e-rater, and Pearson Education, which makes Intelligent Essay Assessor. "The research is promising, but they're still very much in their infancy," Ms. Leighton says.
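
For flavor, here is a deliberately naive sketch of similarity-based essay scoring. Real systems such as e-rater and Intelligent Essay Assessor use trained models over many linguistic features; nothing below reflects their internals, and every essay and grade is invented.

```python
# Toy illustration of machine essay scoring: grade a new essay by its
# vocabulary overlap with already-graded reference essays. This is NOT
# how e-rater or Intelligent Essay Assessor work; it only conveys the
# general idea of scoring by learned comparison. All data is invented.

def words(text):
    return set(text.lower().split())

graded = [
    ("the industrial revolution transformed labor markets and cities", 5),
    ("factories changed how people worked and lived in cities", 4),
    ("i like trains because trains are big and fast", 1),
]

def score(essay):
    """Weight each reference grade by vocabulary overlap (Jaccard)."""
    total, weight = 0.0, 0.0
    for reference, grade in graded:
        a, b = words(essay), words(reference)
        sim = len(a & b) / len(a | b)
        total += sim * grade
        weight += sim
    return total / weight if weight else 0.0

print(round(score("the revolution transformed cities and labor"), 2))
print(round(score("trains are big"), 2))
```
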
Emilio Ergueta

Nietzsche on Love | Issue 104 | Philosophy Now - 0 views

  • What could Friedrich Nietzsche (1844-1900) have to teach us about love? More than we might suppose.
  • Even during these times, between physical suffering and intense periods of writing, he pursued the company of learned women. Moreover, Nietzsche grew up in a family of women, turned to women for friendship, and witnessed his friends courting.
  • By calling our attention to the base, vulgar and selfish qualities of (heterosexual) erotic or sexual love, Nietzsche aims to strip love of its privileged status and demonstrate that what we conceive to be its opposites, such as egoism and greed, are in many instances inextricably bound up in the experience of love.
  • In doing so, Nietzsche disassociates love from its other-worldly Christian-Platonic heritage, and so asserts his ethical claims concerning the value of the Earth over the other-worldly, and the truth of the body over the sacred.
  • Nietzsche speaks critically about the possessive or tyrannical qualities of masculine love alongside its fictionalising tendencies, stating that the natural functions of a woman’s body disgust men because they prevent him having complete access to her as a possession; they also encroach upon the conceptual perfection of love. He writes, “‘The human being under the skin’ is for all lovers a horror and unthinkable, a blasphemy against God and love.”
  • He proposes that love is close to greed and the lust for possession. Love is an instinctual force related to our biological and cultural drives, and as such, cannot be considered a moral good (GS 363).
  • Nietzsche pointedly distinguishes masculine from feminine love by the notions of devotion and fidelity. Whereas women want to surrender completely to love, to approach it as a faith, “to be taken and accepted as a possession” (363), Nietzsche claims male love hinges upon the possessive thirst to acquire more from the lover, and states that men who are inclined towards complete devotion are “not men.”
  • In other words, the experiences of both greed and love are the same drive or instinct, but depending upon the level of satisfaction one has achieved, this drive will be alternatively named ‘greed’ or ‘love’: satisfied people who feel their possessions (their lover for example) threatened by others will name others’ instinct for gain greed or avarice, whereas those who are still searching out something new to desire will impose a positive evaluation on that instinct and call it ‘love’.
  • In order to be successful in love, he counsels women to “simulate a lack of love” and to enact the roles that men find attractive. Nietzsche finds love comedic because it does not consist in some attempt to know the other deeply, but rather in the confirmation of male fantasies in which women perform their constructed gender roles.
  • Nietzsche’s writings on love have not surprisingly been influential on many feminist reflections on sex/gender. Although he is not making moralising claims about how one should love, his discussion of the difficult impact erotic and romantic relationships have on women, as well as his commentary on the ironies both sexes face in love, force his readers of both sexes to examine the roles that they play in love. It is difficult when reading him not to question one’s own performances in romantic relationships.
Javier E

Losing Our Touch - NYTimes.com - 0 views

  • Are we losing our senses? In our increasingly virtual world, are we losing touch with the sense of touch itself? And if so, so what?
  • Tactility is not blind immediacy — not merely sensorial but cognitive, too. Savoring is wisdom; in Latin, wisdom is “sapientia,” from “sapere,” to taste. These carnal senses make us human by keeping us in touch with things, by responding to people’s pain
  • But Aristotle did not win this battle of ideas. The Platonists prevailed and the Western universe became a system governed by “the soul’s eye.” Sight came to dominate the hierarchy of the senses, and was quickly deemed the appropriate ally of theoretical ideas.
  • Western philosophy thus sprang from a dualism between the intellectual senses, crowned by sight, and the lower “animal” senses, stigmatized by touch.
  • opto-centrism prevailed for over 2,000 years, culminating in our contemporary culture of digital simulation and spectacle. The eye continues to rule in what Roland Barthes once called our “civilization of the image.” The world is no longer our oyster, but our screen.
  • our current technology is arguably exacerbating our carnal alienation. While offering us enormous freedoms of fantasy and encounter, digital eros may also be removing us further from the flesh
  • The move toward excarnation is apparent in what is becoming more and more a fleshless society. In medicine, “bedside manner” and hand on pulse has ceded to the anonymous technologies of imaging in diagnosis and treatment. In war, hand-to-hand combat has been replaced by “targeted killing” via remote-controlled drones.
  • certain cyber engineers now envisage implanting transmission codes in brains so that we will not have to move a finger — or come into contact with another human being — to get what we want.
  • We need to return from head to foot, from brain to fingertip, from iCloud to earth. To close the distance, so that eros is more about proximity than proxy. So that soul becomes flesh, where it belongs. Such a move, I submit, would radically alter our “sense” of sex in our digital civilization. It would enhance the role of empathy, vulnerability and sensitivity in the art of carnal love, and ideally, in all of human relations. Because to love or be loved truly is to be able to say, “I have been touched.”
Javier E

The Rich Have Higher Level of Narcissism, Study Shows | TIME.com - 1 views

  • The rich really are different — and, apparently more self-absorbed, according to the latest research.
  • Recent studies show, for example, that wealthier people are more likely to cut people off in traffic and to behave unethically in simulated business and charity scenarios.
  • Earlier this year, statistics on charitable giving revealed that while the wealthy donate about 1.3% of their income to charity, the poorest actually give more than twice as much as a proportion of their earnings — 3.2%.
  • In five different experiments involving several hundred undergraduates and 100 adults recruited from online communities, the researchers found higher levels of both narcissism and entitlement among those of higher income and social class.
  • when asked to visually depict themselves as circles, with size indicating relative importance, richer people picked larger circles for themselves and smaller ones for others. Another experiment found that they also looked in the mirror more frequently.
  • The wealthier participants were also more likely to agree with statements like “I honestly feel I’m just more deserving than other people.”
  • But which came first — did gaining wealth increase self-aggrandizement? Were self-infatuated people more likely to seek and then gain riches?
  • To explore that relationship further, the researchers also asked the college students in one experiment to report the educational attainment and annual income of their parents. Those with more highly educated and wealthier parents remained higher in their self-reported entitlement and narcissistic characteristics. “That would suggest that it’s not just [that] people who feel entitled are more likely to become wealthy,” says Piff. Wealth, in other words, may breed narcissistic tendencies — and wealthy people justify their excess by convincing themselves that they are more deserving of it
  • “The strength of the study is that it uses multiple methods for measuring narcissism and entitlement and social class and multiple populations, and that can really increase our confidence in the results,”
  • “This paper should not be read as saying that narcissists are more successful because we know from lots of other studies that that’s not true.
  • “entitlement is a facet of narcissism,” says Twenge. “And [it’s the] one most associated with high social class. It’s the idea that you deserve special treatment and that things will come to you without working hard.”
  • Manipulating the sense of entitlement, however, may provide a way to influence narcissism. In the final experiment in the paper, the researchers found that having participants who listed three benefits of seeing others as equals eliminated class differences in narcissism, while simply listing three daily activities did not.
  • In the meantime, the connection between wealth and entitlement could have troubling social implications. “You have this bifurcation of rich and poor,” says Levine. “The rich are increasingly entitled, and since they set the cultural tone for advertising and all those kinds of things, I think there’s a pervasive sense of entitlement.”
  • That could perpetuate a deepening lack of empathy that could fuel narcissistic tendencies. “You could imagine negative attitudes toward wealth redistribution as a result of entitlement,” says Piff. “The more severe inequality becomes, the more entitled people may feel and the less likely to share those resources they become.”
Javier E

The New York Times mourns the death of bad social science - 0 views

  • The crisis in the social sciences has grown so obvious that even mainstream social scientists have begun to acknowledge it. In the past five years or so, disinterested researchers have reexamined many of the most crucial experiments and findings in social psychology and related fields. A very large percentage of them—as many as two-thirds, by some counts—crumble on close examination. These include such supposedly settled science as “implicit bias,” “stereotype threat,” “priming,” “ego depletion” and many others known to every student of introductory psychology.
  • At the root of the failure are errors of methodology and execution that should have been obvious from the start. Sample sizes are small and poorly selected; statistical manipulations are misunderstood and ill-performed; experiments lack control groups and are poorly designed; data are cherry-picked; and safeguards against researcher bias are ignored. It’s a long list. [A toy demonstration follows these annotations.]
  • Still, Carey insists, psychology is a science. It’s just not a science in the way that other, fussier sciences are science. “The study of human behavior will never be as clean as physics or cardiology,” he writes. “How could it be?” And of course those farfetched experiments aren’t like real experiments. “Psychology’s elaborate simulations are just that.”
  • Carey’s defense of social psychology fits the current age. It is post-truth, as our public intellectuals like to say. “[Social psychology’s] findings are far more accessible and personally relevant to the public than those in most other scientific fields,” Carey writes. “The public’s judgments matter to the field, too.”
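
The methodological failures listed above are easy to reproduce. In the toy simulation below (my construction, not from the article), two groups are drawn from the same distribution, so any "significant" difference is a false positive; an analyst who peeks at the data every few subjects and stops at p < .05 gets fooled far more often than the nominal 5 percent. A z-threshold of 1.96 stands in for a proper t-test to keep the sketch stdlib-only.

```python
import random
import statistics

random.seed(42)

# Two groups drawn from the SAME distribution (no real effect). An
# analyst who keeps testing and stops as soon as the result looks
# "significant" (optional stopping) inflates the false-positive rate.

def t_stat(a, b):
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se if se else 0.0

def experiment(peeking, max_n=100, step=5):
    a, b = [], []
    while len(a) < max_n:
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]
        if peeking and abs(t_stat(a, b)) > 1.96:
            return True                    # "significant"! publish!
        if not peeking and len(a) >= max_n:
            return abs(t_stat(a, b)) > 1.96
    return False

runs = 2000
print("false-positive rate, fixed N:",
      sum(experiment(False) for _ in range(runs)) / runs)
print("false-positive rate, peeking:",
      sum(experiment(True) for _ in range(runs)) / runs)
```
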
Javier E

Ditch the GPS. It's ruining your brain. - The Washington Post - 0 views

  • they also affect perception and judgment. When people are told which way to turn, it relieves them of the need to create their own routes and remember them. They pay less attention to their surroundings. And neuroscientists can now see that brain behavior changes when people rely on turn-by-turn directions.
  • In 2017, researchers asked subjects to navigate a virtual simulation of London’s Soho neighborhood and monitored their brain activity, specifically the hippocampus, which is integral to spatial navigation.
  • The hippocampus makes an internal map of the environment and this map becomes active only when you are engaged in navigating and not using GPS.
  • The hippocampus is crucial to many aspects of daily life. It allows us to orient in space and know where we are by creating cognitive maps. It also allows us to recall events from the past, what is known as episodic memory. And, remarkably, it is the part of the brain that neuroscientists believe gives us the ability to imagine ourselves in the future.
  • “when people use tools such as GPS, they tend to engage less with navigation. Therefore, brain area responsible for navigation is less used, and consequently their brain areas involved in navigation tend to shrink.”
  • Navigation aptitude appears to peak around age 19, and after that, most people slowly stop using spatial memory strategies to find their way, relying on habit instead.
  • “If we are paying attention to our environment, we are stimulating our hippocampus, and a bigger hippocampus seems to be protective against Alzheimer’s disease,” Bohbot told me in an email. “When we get lost, it activates the hippocampus, it gets us completely out of the habit mode. Getting lost is good!”
  • practicing navigation is a powerful form of engagement with the environment that can inspire a greater sense of stewardship
Javier E

How YouTube Drives People to the Internet's Darkest Corners - WSJ - 0 views

  • YouTube is the new television, with more than 1.5 billion users, and videos the site recommends have the power to influence viewpoints around the world.
  • Those recommendations often present divisive, misleading or false content despite changes the site has recently made to highlight more-neutral fare, a Wall Street Journal investigation found.
  • Behind that growth is an algorithm that creates personalized playlists. YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm among the single biggest deciders of what people watch.
  • People cumulatively watch more than a billion YouTube hours daily world-wide, a 10-fold increase from 2012
  • After the Journal this week provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem.
  • When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints.
  • Such recommendations play into concerns about how social-media sites can amplify extremist voices, sow misinformation and isolate users in “filter bubbles”
  • Unlike Facebook Inc. and Twitter Inc. sites, where users see content from accounts they choose to follow, YouTube takes an active role in pushing information to users they likely wouldn’t have otherwise seen.
  • “The editorial policy of these new platforms is to essentially not have one,”
  • “That sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win.’ But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and people are being gamed.”
  • YouTube has been tweaking its algorithm since last autumn to surface what its executives call “more authoritative” news sources.
  • YouTube last week said it is considering a design change to promote relevant information from credible news sources alongside videos that push conspiracy theories.
  • The Journal investigation found YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content.
  • YouTube engineered its algorithm several years ago to make the site “sticky”—to recommend videos that keep users staying to watch still more, said current and former YouTube engineers who helped build it. The site earns money selling ads that run before and during videos.
  • YouTube’s algorithm tweaks don’t appear to have changed how YouTube recommends videos on its home page. On the home page, the algorithm provides a personalized feed for each logged-in user largely based on what the user has watched.
  • There is another way to calculate recommendations, demonstrated by YouTube’s parent, Alphabet Inc.’s Google. It has designed its search-engine algorithms to recommend sources that are authoritative, not just popular.
  • Google spokeswoman Crystal Dahlen said that Google improved its algorithm last year “to surface more authoritative content, to help prevent the spread of blatantly misleading, low-quality, offensive or downright false information,” adding that it is “working with the YouTube team to help share learnings.”
  • In recent weeks, it has expanded that change to other news-related queries. Since then, the Journal’s tests show, news searches in YouTube return fewer videos from highly partisan channels.
  • YouTube’s recommendations became even more effective at keeping people on the site in 2016, when the company began employing an artificial-intelligence technique called a deep neural network that makes connections between videos that humans wouldn’t. The algorithm uses hundreds of signals, YouTube says, but the most important remains what a given user has watched.
  • Using a deep neural network makes the recommendations more of a black box to engineers than previous techniques,
  • “We don’t have to think as much,” he said. “We’ll just give it some raw data and let it figure it out.”
  • To better understand the algorithm, the Journal enlisted former YouTube engineer Guillaume Chaslot, who worked on its recommendation engine, to analyze thousands of YouTube’s recommendations on the most popular news-related queries
  • Mr. Chaslot created a computer program that simulates the “rabbit hole” users often descend into when surfing the site. In the Journal study, the program collected the top five results to a given search. Next, it gathered the top three recommendations that YouTube promoted once the program clicked on each of those results. Then it gathered the top three recommendations for each of those promoted videos, continuing four clicks from the original search. [The traversal is sketched in code after these annotations.]
  • The first analysis, of November’s top search terms, showed YouTube frequently led users to divisive and misleading videos. On the 21 news-related searches left after eliminating queries about entertainment, sports and gaming—such as “Trump,” “North Korea” and “bitcoin”—YouTube most frequently recommended these videos:
  • The algorithm doesn’t seek out extreme videos, they said, but looks for clips that data show are already drawing high traffic and keeping people on the site. Those videos often tend to be sensationalist and on the extreme fringe, the engineers said.
  • Repeated tests by the Journal as recently as this week showed the home page often fed far-right or far-left videos to users who watched relatively mainstream news sources, such as Fox News and MSNBC.
  • Searching some topics and then returning to the home page without doing a new search can produce recommendations that push users toward conspiracy theories even if they seek out just mainstream sources.
  • After searching for “9/11” last month, then clicking on a single CNN clip about the attacks, and then returning to the home page, the fifth and sixth recommended videos were about claims the U.S. government carried out the attacks. One, titled “Footage Shows Military Plane hitting WTC Tower on 9/11—13 Witnesses React”—had 5.3 million views.
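
The crawl described above is a breadth-limited tree walk, and its structure is easy to express in code. The `search` and `recommendations` functions below are hypothetical stand-ins (Chaslot's real program scraped the live site); only the traversal mirrors the Journal's description: five results, three recommendations each, four clicks deep.

```python
# The Journal's crawl as a tree walk: top 5 search results, then top 3
# recommendations per video, followed 4 clicks deep. `search` and
# `recommendations` are hypothetical stand-ins, not a real YouTube API.

def search(query, k=5):
    # Stand-in: would return the top-k video IDs for a search.
    return [f"{query}-result-{i}" for i in range(k)]

def recommendations(video_id, k=3):
    # Stand-in: would return the top-k "Up next" video IDs.
    return [f"{video_id}>rec{i}" for i in range(k)]

def rabbit_hole(query, depth=4):
    """Collect every video reached within `depth` recommendation clicks."""
    frontier = search(query)            # the five initial results
    seen = list(frontier)
    for _ in range(depth):
        frontier = [rec for video in frontier
                    for rec in recommendations(video)]
        seen.extend(frontier)
    return seen

videos = rabbit_hole("9/11")
print(len(videos), "videos collected")  # 5 * (1 + 3 + 9 + 27 + 81) = 605
print(videos[:3])
```
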
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, the code that came before.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. [A minimal code version of this elevator model appears after these annotations.]
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. (A toy exhaustive-checking sketch follows this list.)
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language.”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
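A note on the elevator example above: the diagram translates naturally into code. Below is a minimal sketch in Python with invented state and event names; it is not the output of any real model-based design tool, which would generate code like this from a diagram automatically. The point is that all the rules live in one table you can read at a glance, like boxes and lines on a whiteboard.

```python
# Minimal sketch of the elevator state machine described above.
# Every legal (state, event) -> state move is listed in one table,
# so the system's rules stay visible at a glance.
TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "start"): "moving",
    ("moving", "stop"): "door_closed",
}

class Elevator:
    def __init__(self):
        self.state = "door_open"

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key not in TRANSITIONS:
            # The model forbids this move, e.g. "start" while the door is open.
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = TRANSITIONS[key]

e = Elevator()
e.handle("close_door")
e.handle("start")   # only reachable once the door is closed
e.handle("stop")
e.handle("open_door")
```

Because every legal move sits in one place, adding a rule means adding a row rather than hunting through scattered conditionals.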
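Newcombe’s point about scale is easy to make concrete: at a million requests per second, an event with a one-in-a-billion per-request probability occurs about 86 times a day (10^6 × 86,400 × 10^-9 ≈ 86). The exhaustive checking TLA+ offers can also be gestured at in ordinary code. The sketch below is an illustration only: a toy explicit-state model checker run over a deliberately broken mutual-exclusion protocol invented for this example. Real TLA+ specifications are written in mathematical notation and checked by the TLC tool, which is far more capable.

```python
# Toy explicit-state model checking: enumerate every reachable state
# by breadth-first search and test an invariant in each one.
# The "protocol" is deliberately naive: each process checks the
# other's flag BEFORE raising its own, leaving a race window.
from collections import deque

def next_states(state):
    pcs, flags = state
    for i in (0, 1):
        pc, other = pcs[i], 1 - i
        new = None
        if pc == "idle" and not flags[other]:
            new = ("checking", flags)        # passed the other's-flag check
        elif pc == "checking":
            f = list(flags); f[i] = True
            new = ("crit", tuple(f))         # raise own flag, enter
        elif pc == "crit":
            f = list(flags); f[i] = False
            new = ("idle", tuple(f))         # leave, lower flag
        if new:
            p = list(pcs); p[i] = new[0]
            yield (tuple(p), new[1])

def check(init, invariant):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                         # counterexample found
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

def mutex(state):
    pcs, _ = state
    return pcs != ("crit", "crit")           # never both in the critical section

init = (("idle", "idle"), (False, False))
print("violation:", check(init, mutex))      # finds both processes in "crit"
```

The checker finds a flaw that testing individual runs can easily miss: both processes can pass the check before either raises its flag. The fix is a design change (raise your own flag before checking the other’s, as in Peterson’s algorithm), exactly the kind of error Newcombe describes, where the code faithfully implements a design that is itself wrong.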
Javier E

Andrew Sullivan: Trump's Mindless Nihilism - 2 views

  • The trouble with reactionary politics is that it is fundamentally a feeling, an impulse, a reflex. It’s not a workable program. You can see that in the word itself: it’s a reaction, an emotional response to change. Sure, it can include valuable insights into past mistakes, but it can’t undo them without massive disruption.
  • I mention this as a way to see more clearly why the right in Britain and America is either unraveling quickly into chaos, or about to inflict probably irreparable damage on a massive scale to their respective countries. Brexit and Trump are the history of Thatcher and Reagan repeating as dangerous farce, a confident, intelligent conservatism reduced to nihilist, mindless reactionism.
  • But it’s the impossible reactionary agenda that is the core problem. And the reason we have a president increasingly isolated, ever more deranged, legislatively impotent, diplomatically catastrophic, and constitutionally dangerous, is not just because he is a fucking moron requiring an adult day-care center to avoid catastrophe daily.
  • ...14 more annotations...
  • It’s because he’s a reactionary fantasist, whose policies stir the emotions but are stalled in the headwinds of reality
  • These are not conservative reforms, thought-through, possible to implement, strategically planned. They are the unhinged fantasies of a 71-year-old Fox News viewer imagining he can reconstruct the late 1950s. They cannot actually be implemented, without huge damage.
  • In Britain, meanwhile, Brexit is in exactly the same place — a reactionary policy that is close to impossible to implement without economic and diplomatic catastrophe.
  • Brexit too was built on Trump-like lies, and a Trump-like fantasy that 50 years of integration with the E.U. could be magically abolished overnight, and that the Britain of the early 1970s could be instantly re-conjured. No actual conservative can possibly believe that such radical, sudden change won’t end in tears.
  • “The researchers start by simulating what happens when extra links are introduced into a social network. Their network consists of men and women from different races who are randomly distributed. In this model, everyone wants to marry a person of the opposite sex but can only marry someone with whom a connection exists. This leads to a society with a relatively low level of interracial marriage. But if the researchers add random links between people from different ethnic groups, the level of interracial marriage changes dramatically.” (An illustrative reconstruction of this simulation follows after this list.)
  • Disruptions of events are, to my mind, integral to the exercise of free speech. Hecklers are part of the contentious and messy world of open debate. To suspend or, after three offenses, expel students for merely disrupting events is not so much to chill the possibility of dissent, but to freeze it altogether.
  • Maybe a college could set a time limit for protest — say, ten or fifteen minutes — after which the speaker must be heard, or penalties will be imposed. Heckling — that doesn’t prevent a speech — should also be tolerated to a reasonable extent. There’s a balance here that protects everyone’s free speech.
  • dating apps are changing our society, by becoming the second-most common way straights meet partners, and by expanding the range of people we can meet.
  • here’s what’s intriguing: Correlated with that is a sustained, and hard-to-explain, rise in interracial marriage.
  • “It is intriguing that shortly after the introduction of the first dating websites in 1995, like Match.com, the percentage of new marriages created by interracial couples increased rapidly,” say the researchers. “The increase became steeper in the 2000s, when online dating became even more popular. Then, in 2014, the proportion of interracial marriages jumped again.” That was when Tinder took off.
  • the line to draw, it seems to me, is when a speech is actually shut down or rendered impossible by disruption. A fiery protest that initially prevents an event from starting is one thing; a disruption that prevents the speech taking place at all is another.
  • Even more encouraging, the marriages begun online seem to last longer than others.
  • I wonder if online dating doesn’t just expand your ability to meet more people of another race, by eliminating geography and the subtle grouping effect of race and class and education. Maybe it lowers some of the social inhibitions against interracial dating.
  • It’s always seemed to me that racism is deeply ingrained in human nature, and always will be, simply because our primate in-group aversion to members of an out-group expresses itself in racism, unless you actively fight it. You can try every law or custom to mitigate this, but it will only go so far.
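The marriage simulation quoted above is described only in outline, so the sketch below is an illustrative reconstruction with invented parameters, not the researchers’ actual model: people get connections mostly within their own group, marriages can only form along connections, and random cross-group links are then added.

```python
# Illustrative reconstruction of the social-network marriage model.
# All parameters are invented; only the qualitative mechanism matters.
import random

def simulate(n=2000, links_per_person=8, random_links=0, seed=1):
    rng = random.Random(seed)
    people = [(rng.choice("MF"), rng.choice("AB")) for _ in range(n)]
    links = {i: set() for i in range(n)}

    def connect(i, j):
        if i != j:
            links[i].add(j); links[j].add(i)

    groups = {r: [i for i, p in enumerate(people) if p[1] == r]
              for r in "AB"}
    # Baseline: everyone's connections come from their own group.
    for i, (_, race) in enumerate(people):
        for _ in range(links_per_person):
            connect(i, rng.choice(groups[race]))
    # Intervention: a few links between randomly chosen strangers.
    for _ in range(random_links):
        connect(rng.randrange(n), rng.randrange(n))

    # Each person marries a random connected, unmarried, opposite-sex partner.
    married, inter, total = set(), 0, 0
    for i, (sex, race) in enumerate(people):
        if i in married:
            continue
        partners = [j for j in links[i]
                    if j not in married and people[j][0] != sex]
        if partners:
            j = rng.choice(partners)
            married |= {i, j}
            total += 1
            inter += race != people[j][1]
    return inter / max(total, 1)

print("no random links:  ", round(simulate(random_links=0), 3))
print("400 random links: ", round(simulate(random_links=400), 3))
```

With no random links the interracial rate is zero by construction; even a modest number of random cross-group ties moves it off zero, which is the qualitative effect the researchers report.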
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • ...25 more annotations...
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part.
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’”
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he’d been noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.