
TOK Friends: Group items tagged "modification"


Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists.
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one, but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
Javier E

The Not-So-Distant Future When We Can All Upgrade Our Brains - Alexis C. Madrigal - The... - 0 views

  • "Magna Cortica is the argument that we need to have a guidebook for both the design spec and ethical rules around the increasing power and diversity of cognitive augmentation," said IFTF distinguished fellow, Jamais Cascio. "There are a lot of pharmaceutical and digital tools that have been able to boost our ability to think. Adderall, Provigil, and extra-cortical technologies."
  • Back in 2008, 20 percent of scientists reported using brain-enhancing drugs. And I spoke with dozens of readers who had complex regimens, including, for example, a researcher at the MIT-affiliated Whitehead Institute for Biomedical Research. "We aren't the teen clubbers popping uppers to get through a hard day running a cash register after binge drinking," the researcher told me. "We are responsible humans." Responsible humans trying to get an edge in incredibly competitive and cognitively demanding fields. 
  • part of Google Glass's divisiveness stems from its prospective ability to enhance one's social awareness or provide contextual help in conversations; the company Social Radar has already released an app for Glass that shows social network information for people who are in the same location as you are. A regular app called MindMeld listens to conference calls and provides helpful links based on what the software hears you talking about.
  • ...2 more annotations...
  • These are not questions that can be answered by the development of the technologies. They require new social understandings. "What are the things we want to see happen?" Cascio asked. "What are the things we should and should not do?"
  • he floated five simple principles: 1. The right to self-knowledge; 2. The right to self-modification; 3. The right to refuse modification; 4. The right to modify/refuse to modify your children; 5. The right to know who has been modified.
grayton downing

Epigenetics Play Cupid for Prairie Voles | The Scientist Magazine® - 0 views

  • It’s the first time anyone’s shown any epigenetic basis for partner preference,
  • Mohamed Kabbaj, a neuroscientist at Florida State University and an author of the paper, said that work in other species gave him clues that epigenetics could be important for social behavior. For instance, previous work suggests that modifications are involved in bonds between mothers and offspring in rats.
  • When the researchers blocked vasopressin or oxytocin receptors in the animals, TSA (trichostatin A, a histone deacetylase inhibitor) had no effect on pair bond formation, supporting the theory that oxytocin and vasopressin were mediating the effects of the epigenetic modification.
  • ...1 more annotation...
  • “The mating and cohabitation produced the exact same effects [as TSA],” said Bruce Cushing, a behavioral neuroscientist at the University of Akron who was not involved in the study. “That is really powerful.”
Javier E

Scientists Seek Ban on Method of Editing the Human Genome - NYTimes.com - 1 views

  • A group of leading biologists on Thursday called for a worldwide moratorium on use of a new genome-editing technique that would alter human DNA in a way that can be inherited.
  • The biologists fear that the new technique is so effective and easy to use that some physicians may push ahead before its safety can be assessed. They also want the public to understand the ethical issues surrounding the technique, which could be used to cure genetic diseases, but also to enhance qualities like beauty or intelligence. The latter is a path that many ethicists believe should never be taken.
  • a technique invented in 2012 makes it possible to edit the genome precisely and with much greater ease. The technique has already been used to edit the genomes of mice, rats and monkeys, and few doubt that it would work the same way in people.
  • ...8 more annotations...
  • The technique holds the power to repair or enhance any human gene. “It raises the most fundamental of issues about how we are going to view our humanity in the future and whether we are going to take the dramatic step of modifying our own germline and in a sense take control of our genetic destiny, which raises enormous peril for humanity,”
  • The paper’s authors, however, are concerned about countries that have less regulation in science. They urge that “scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans” until the full implications “are discussed among scientific and governmental organizations.”
  • Though such a moratorium would not be legally enforceable and might seem unlikely to exert global influence, there is a precedent. In 1975, scientists worldwide were asked to refrain from using a method for manipulating genes, the recombinant DNA technique, until rules had been established.
  • Though highly efficient, the technique occasionally cuts the genome at unintended sites. The issue of how much mistargeting could be tolerated in a clinical setting is one that Dr. Doudna’s group wants to see thoroughly explored before any human genome is edited.
  • “We worry about people making changes without the knowledge of what those changes mean in terms of the overall genome,” Dr. Baltimore said. “I personally think we are just not smart enough — and won’t be for a very long time — to feel comfortable about the consequences of changing heredity, even in a single individual.”
  • Many ethicists have accepted the idea of gene therapy, changes that die with the patient, but draw a clear line at altering the germline, since these will extend to future generations. The British Parliament in February approved the transfer of mitochondria, small DNA-containing organelles, to human eggs whose own mitochondria are defective. But that technique is less far-reaching because no genes are edited.
  • There are two broad schools of thought on modifying the human germline, said R. Alta Charo, a bioethicist at the University of Wisconsin and a member of the Doudna group. One is pragmatic and seeks to balance benefit and risk. The other “sets up inherent limits on how much humankind should alter nature,” she said. Some Christian doctrines oppose the idea of playing God, whereas in Judaism and Islam there is the notion “that humankind is supposed to improve the world.” She described herself as more of a pragmatist, saying, “I would try to regulate such things rather than shut a new technology down at its beginning.”
  • The Doudna group calls for public discussion, but is also working to develop some more formal process, such as an international meeting convened by the National Academy of Sciences, to establish guidelines for human use of the genome-editing technique. “We need some principled agreement that we want to enhance humans in this way or we don’t,” Dr. Jaenisch said. “You have to have this discussion because people are gearing up to do this.”
Javier E

Seeking Dark Matter, They Detected Another Mystery - The New York Times - 0 views

  • A team of scientists hunting dark matter has recorded suspicious pings coming from a vat of liquid xenon underneath a mountain in Italy
  • If the signal is real and persists, the scientists say, it may be evidence of a species of subatomic particles called axions — long theorized to play a crucial role in keeping nature symmetrical but never seen — streaming from the sun.
  • Instead of axions, the scientists may have detected a new, unexpected property of the slippery ghostly particles called neutrinos. Yet another equally likely explanation is that their detector has been contaminated by vanishingly tiny amounts of tritium, a rare radioactive form of hydrogen.
  • ...19 more annotations...
  • “We want to be very clear that all we are reporting is observation of an excess (a fairly significant one) and not a discovery of any kind,”
  • “I’m trying to be calm here, but it’s hard not to be hyperbolic,” said Neal Weiner, a particle theorist at New York University. “If this is real, calling it a game changer would be an understatement.”
  • Dr. Aprile’s Xenon experiment is currently the largest and most sensitive in an alphabet soup of efforts aimed at detecting and identifying dark matter
  • The best guess is that this dark matter consists of clouds of exotic subatomic particles left over from the Big Bang and known generically as WIMPs, for weakly interacting massive particles, hundreds or thousands of times more massive than a hydrogen atom.
  • The story of axions begins in 1977, when Roberto Peccei, a professor at the University of California, Los Angeles, who died on June 1, and Helen Quinn, emerita professor at Stanford, suggested a slight modification to the theory that governs strong nuclear forces, making sure that it is invariant to the direction of time, a feature that physicists consider a necessity for the universe.
  • in its most recent analysis of that experiment, the team had looked for electrons, rather than the heavier xenon nuclei, recoiling from collisions. Among other things, that could be the signature of particles much lighter than the putative WIMPs striking the xenon.
  • Simulations and calculations suggested that random events should have produced about 232 such recoils over the course of a year.
  • But from February 2017 to February 2018, the detector recorded 285, an excess of 53 recoils (a rough estimate of that excess’s statistical significance appears after this list).
  • Dr. Aprile and her colleagues have wired a succession of vats containing liquid xenon with photomultipliers and other sensors. The hope is that her team’s device — far underground to shield it from cosmic rays and other worldly forms of interference — would spot the rare collision between a WIMP and a xenon atom. The collision should result in a flash of light and a cloud of electrical charge.
  • this modification implied the existence of a new subatomic particle. Dr. Wilczek called it the axion, and the name stuck.
  • Axions have never been detected either directly or indirectly. And the theory does not predict their mass, which makes it hard to look for them. It only predicts that they would be weird and would barely interact with regular matter
  • although they are not WIMPs, they share some of those particles’ imagined weird abilities, such as being able to float through Earth and our bodies like smoke through a screen door.
  • In order to fulfill the requirements of cosmologists, however, such dark-matter axions would need to have a mass of less than a thousandth of an electron volt in the units of mass and energy preferred by physicists
  • (By comparison, the electrons that dance around in your smartphone weigh in at half a million electron volts each.) What they lack in heft they would more than make up for in numbers.
  • That would make individual cosmic dark-matter axions too slow and ethereal to be detected by the Xenon experiment. But axions could also be produced by nuclear reactions in the sun, and those “solar axions” would have enough energy to ping the Xenon detector right where it is most sensitive.
  • The other exciting, though slightly less likely, possibility is that the Xenon collaboration’s excess signals come from the wispy particles known as neutrinos, which are real, and weird, and zipping through our bodies by the trillions every second.
  • Ordinarily, these neutrinos would not contribute much to the excess of events the detector read. But they would do so if they had an intrinsic magnetism that physicists call a magnetic moment. That would give them a higher probability of interacting with the xenon and tripping the detector
  • According to the standard lore, neutrinos, which are electrically neutral, do not carry magnetism. The discovery that they did would require rewriting the rules as they apply to neutrinos.
  • That, said Dr. Weiner, would be “a very very big deal,” because it would imply that there are new fundamental particles out there to look for — new physics.
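A quick note on the size of the excess mentioned above: as a back-of-envelope Poisson estimate (my own arithmetic, not a figure reported by the collaboration), an excess of 53 events over an expected background of 232 corresponds to roughly

\[
\frac{N_{\text{obs}} - B}{\sqrt{B}} = \frac{285 - 232}{\sqrt{232}} \approx \frac{53}{15.2} \approx 3.5\ \text{standard deviations},
\]

before any accounting for systematic uncertainties or look-elsewhere effects, which is in line with the “fairly significant” characterization quoted above.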
Javier E

A Great Debate - NYTimes.com - 0 views

  • our political “debates” seldom deserve the name. For the most part representatives of the rival parties exchange one-liners: “The rich can afford to pay more” is met by “Tax increases kill jobs.” Slightly more sophisticated discussions may cite historical precedents: “There were higher tax rates during the post-war boom” versus “Reagan’s tax cuts increased revenues.”
  • Such volleys still don’t even amount to arguments: they don’t put forward generally accepted premises that support a conclusion.
  • Despite the name, candidates’ pre-election debates are exercises in looking authoritative, imposing their talking points on the questions, avoiding gaffes, and embarrassing their opponents with “zingers”
  • ...12 more annotations...
  • There is a high level of political discussion in the editorials and op-eds of national newspapers and magazines as well as on a number of blogs, with positions often carefully formulated and supported with argument and evidence. But even here we seldom see a direct and sustained confrontation of rival positions through the dialectic of assertion, critique, response and counter-critique.
  • As a result, partisans typically remain safe in their ideological worlds, convincing themselves that they hold to obvious truths, while their opponents must be either knaves or fools — with no need to think through the strengths of their rivals’ positions or the weaknesses of their own.
  • A first condition is that the debates be focused on specific points of major disagreement.
  • Another issue is the medium of the debate. Written discussions, in print or online could be easily arranged, but personal encounters are more vivid and will better engage public attention. They should not, however, be merely extemporaneous events, where too much will depend on quick-thinking and an engaging manner. We want remarks to be carefully prepared and open to considered responses
  • Here’s one suggestion for an effective exchange. The debate would consist of a series of four half-hour televised sessions, carried out on successive days. In the first session, the Republican, say, presents a pre-written case for a particular position
  • In the second session, the Republican asks the Democrat a series of questions (no more than one minute per question and three minutes per response) on the debate topic. In the third session, the Democrat questions the Republican. In the fourth session, each side has 15 minutes to present a final argument.
  • Is there any way to make genuine debates — sustained back-and-forth exchanges, meeting high intellectual standards but still widely accessible — part of our political culture?
  • they will set much higher standards of discussion, requiring fuller explanations of positions and even modifications to make them more defensible. It’s unlikely that either side would ever simply give up its view, but, politically, they would have to react to a strong public consensus if they had not made a respectable case. Further, the quasi-official status of the participants, as representatives chosen by their parties, would make the parties’ politicians answerable to points the representatives have made.
  • The only major obstacle to implementing this proposal would be getting the parties to participate. Here, I suggest, shame would be a prime motivator.
  • Facts and reasoning will never settle political issues. All of us have fundamental commitments that are impervious to argument
  • But rationality almost always has some role in our decisions, and more rationality in our political discussion will at a minimum help many to better understand what is at stake in our disputes and why their opponents think as they do.
  • So why not give reason a chance?
Javier E

Skinner Marketing: We're the Rats, and Facebook Likes Are the Reward - Bill Davidow - T... - 0 views

  • the age of Skinnerian Marketing. Future applications making use of big data, location, maps, tracking of a browser's interests, and data streams coming from mobile and wearable devices, promise to usher in the era of unprecedented power in the hands of marketers, who are no longer merely appealing to our innate desires, but programming our behaviors.
  • In the 1930's, B. F. Skinner developed the concept of operant conditioning. He put pigeons and rats in Skinner boxes to study how he could modify their behavior using rewards and punishments.
  • Skinner's techniques of operant conditioning and his notorious theory of behavior modification were denounced by his critics 70 years ago as fascist, manipulative vehicles that could be used for government control.
  • ...6 more annotations...
  • They were right about control but wrong about the controllers. Our Internet handlers, not government, are using operant conditioning to modify our behavior today.
  • we now know how to design cue, activity, and reward systems to more effectively leverage our brain chemistry and program human behavior.
  • The beauty of the Internet is that by combining big data, behavioral targeting, wearable and mobile devices, and GPS, application developers can design more effective operant conditioning environments and keep us in virtual Skinner boxes as long as we have a smart phone in our pockets.
  • Operant conditioning techniques will and are currently being used to program the behavior of susceptible Internet users -- young men who play MMORPG (Massively Multiplayer Online Role-Playing Games) for forty hours a week, women who commit hours to social networks, shoppers seeking the thrill of a deal, and poker players.
  • As smart devices become integrated into our lives, retailers who will know where we are standing in stores and fast food restaurants and bars will find ways to provide us with cues to trigger behaviors.
  • The real question is how many hundreds of millions of us will become susceptible to what I believe will prove to be history's most potent marketing techniques.
sissij

Language family - Wikipedia - 0 views

  • A language family is a group of languages related through descent from a common ancestor, called the proto-language of that family. The term 'family' reflects the tree model of language origination in historical linguistics, which makes use of a metaphor comparing languages to people in a biological family tree, or in a subsequent modification, to species in a phylogenetic tree of evolutionary taxonomy.
  • the Celtic, Germanic, Slavic, Romance, and Indo-Iranian language families are branches of a larger Indo-European language family. There is a remarkably similar pattern shown by the linguistic tree and the genetic tree of human ancestry[3] that was verified statistically.
  • A speech variety may also be considered either a language or a dialect depending on social or political considerations. Thus, different sources give sometimes wildly different accounts of the number of languages within a family. Classifications of the Japonic family, for example, range from one language (a language isolate) to nearly twenty.
  • ...2 more annotations...
  • A language isolated in its own branch within a family, such as Armenian within Indo-European, is often also called an isolate, but the meaning of isolate in such cases is usually clarified. For instance, Armenian may be referred to as an "Indo-European isolate". By contrast, so far as is known, the Basque language is an absolute isolate: it has not been shown to be related to any other language despite numerous attempts.
  • The common ancestor of a language family is seldom known directly since most languages have a relatively short recorded history. However, it is possible to recover many features of a proto-language by applying the comparative method, a reconstructive procedure worked out by 19th century linguist August Schleicher.
  • I found this metaphor very accurate because I think languages certainly have some intimate relationship, like family members. Languages are not all very different from one another and isolated. Although people speaking different languages may not understand one another, their languages are still connected. I think this article can show that languages are in some ways connected like bridges instead of walls. --Sissi (11/26/2016)
Javier E

Why It's OK to Let Apps Make You a Better Person - Evan Selinger - Technology - The Atl... - 0 views

  • one theme emerges from the media coverage of people's relationships with our current set of technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification--nudging, the quantified self, and gamification--and good old-fashioned financial incentive manipulation, are tackling weakness of will. They're harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.
  • the trend still has multiple interesting dimensions
  • Individuals are turning ever more aspects of their lives into managerial problems that require technological solutions. We have access to an ever-increasing array of free and inexpensive technologies that harness incredible computational power that effectively allows us to self-police behavior everywhere we go. As pervasiveness expands, so does trust.
  • ...20 more annotations...
  • Some embrace networked, data-driven lives and are comfortable volunteering embarrassing, real time information about what we're doing, whom we're doing it with, and how we feel about our monitored activities.
  • Put it all together and we can see that our conception of what it means to be human has become "design space." We're now Humanity 2.0, primed for optimization through commercial upgrades. And today's apps are more harbinger than endpoint.
  • philosophers have had much to say about the enticing and seemingly inevitable dispersion of technological mental prosthetics that promise to substitute for or enhance some of our motivational powers.
  • beyond the practical issues lies a constellation of central ethical concerns.
  • It simply means that when it comes to digital willpower, we should be on our guard to avoid confusing situational with integrated behaviors.
  • it is antithetical to the ideal of "resolute choice." Some may find the norm overly perfectionist, Spartan, or puritanical. However, it is not uncommon for folks to defend the idea that mature adults should strive to develop internal willpower strong enough to avoid external temptations, whatever they are, and wherever they are encountered.
  • In part, resolute choosing is prized out of concern for consistency, as some worry that lapse of willpower in any context indicates a generally weak character.
  • Fragmented selves behave one way while under the influence of digital willpower, but another when making decisions without such assistance. In these instances, inconsistent preferences are exhibited and we risk underestimating the extent of our technological dependency.
  • they should cause us to pause as we think about a possible future that significantly increases the scale and effectiveness of willpower-enhancing apps. Let's call this hypothetical future Digital Willpower World and characterize the ethical traps we're about to discuss as potential general pitfalls
  • the problem of inauthenticity, a staple of the neuroethics debates, might arise. People might start asking themselves: Has the problem of fragmentation gone away only because devices are choreographing our behavior so powerfully that we are no longer in touch with our so-called real selves -- the selves who used to exist before Digital Willpower World was formed?
  • Infantalized subjects are morally lazy, quick to have others take responsibility for their welfare. They do not view the capacity to assume personal responsibility for selecting means and ends as a fundamental life goal that validates the effort required to remain committed to the ongoing project of maintaining willpower and self-control.
  • Michael Sandel's Atlantic essay, "The Case Against Perfection." He notes that technological enhancement can diminish people's sense of achievement when their accomplishments become attributable to human-technology systems and not an individual's use of human agency.
  • Borgmann worries that this environment, which habituates us to be on auto-pilot and delegate deliberation, threatens to harm the powers of reason, the most central component of willpower (according to the rationalist tradition).
  • In several books, including Technology and the Character of Contemporary Life, he expresses concern about technologies that seem to enhance willpower but only do so through distraction. Borgmann's paradigmatic example of the non-distracted, focally centered person is a serious runner. This person finds the practice of running maximally fulfilling, replete with the rewarding "flow" that can only come when mind/body and means/ends are unified, while skill gets pushed to the limit.
  • Perhaps the very conception of a resolute self was flawed. What if, as psychologist Roy Baumeister suggests, willpower is more "staple of folk psychology" than real way of thinking about our brain processes?
  • novel approaches suggest the will is a flexible mesh of different capacities and cognitive mechanisms that can expand and contract, depending on the agent's particular setting and needs. Contrary to the traditional view that identifies the unified and cognitively transparent self as the source of willed actions, the new picture embraces a rather diffused, extended, and opaque self who is often guided by irrational trains of thought. What actually keeps the self and its will together are the given boundaries offered by biology, a coherent self narrative created by shared memories and experiences, and society. If this view of the will as an expanding and contracting system with porous and dynamic boundaries is correct, then it might seem that the new motivating technologies and devices can only increase our reach and further empower our willing selves.
  • "It's a mistake to think of the will as some interior faculty that belongs to an individual--the thing that pushes the motor control processes that cause my action," Gallagher says. "Rather, the will is both embodied and embedded: social and physical environment enhance or impoverish our ability to decide and carry out our intentions; often our intentions themselves are shaped by social and physical aspects of the environment."
  • It makes perfect sense to think of the will as something that can be supported or assisted by technology. Technologies, like environments and institutions can facilitate action or block it. Imagine I have the inclination to go to a concert. If I can get my ticket by pressing some buttons on my iPhone, I find myself going to the concert. If I have to fill out an application form and carry it to a location several miles away and wait in line to pick up my ticket, then forget it.
  • Perhaps the best way forward is to put a digital spin on the Socratic dictum of knowing myself and submit to the new freedom: the freedom of consuming digital willpower to guide me past the sirens.
Javier E

Emmy Noether, the Most Significant Mathematician You've Never Heard Of - NYTimes.com - 0 views

  • Albert Einstein called her the most “significant” and “creative” female mathematician of all time, and others of her contemporaries were inclined to drop the modification by sex. She invented a theorem that united with magisterial concision two conceptual pillars of physics: symmetry in nature and the universal laws of conservation. Some consider Noether’s theorem, as it is now called, as important as Einstein’s theory of relativity; it undergirds much of today’s vanguard research in physics
  • At Göttingen, she pursued her passion for mathematical invariance, the study of numbers that can be manipulated in various ways and still remain constant. In the relationship between a star and its planet, for example, the shape and radius of the planetary orbit may change, but the gravitational attraction conjoining one to the other remains the same — and there’s your invariance.
  • Noether’s theorem, an expression of the deep tie between the underlying geometry of the universe and the behavior of the mass and energy that call the universe home. What the revolutionary theorem says, in cartoon essence, is the following: Wherever you find some sort of symmetry in nature, some predictability or homogeneity of parts, you’ll find lurking in the background a corresponding conservation — of momentum, electric charge, energy or the like. If a bicycle wheel is radially symmetric, if you can spin it on its axis and it still looks the same in all directions, well, then, that symmetric translation must yield a corresponding conservation.
  • ...1 more annotation...
  • Noether’s theorem shows that a symmetry of time — like the fact that whether you throw a ball in the air tomorrow or make the same toss next week will have no effect on the ball’s trajectory — is directly related to the conservation of energy, our old homily that energy can be neither created nor destroyed but merely changes form.
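The time-symmetry example above can be stated compactly. As a minimal textbook-style sketch (my summary of the standard one-dimensional case, not a quotation from the article): for a system with Lagrangian $L(q, \dot q, t)$ that has no explicit time dependence, the conserved quantity Noether's theorem delivers is the energy,

\[
E = \dot q\,\frac{\partial L}{\partial \dot q} - L,
\qquad
\frac{dE}{dt} = \dot q\left(\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}\right) - \frac{\partial L}{\partial t} = 0,
\]

where the bracketed term vanishes by the equations of motion and the final term by the assumed time symmetry. Spatial-translation and rotational symmetry yield conservation of momentum and angular momentum in the same way.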
Javier E

Getting It Right - NYTimes.com - 1 views

  • What is it to truly know something? In our daily lives, we might not give this much thought — most of us rely on what we consider to be fair judgment and common sense in establishing knowledge.
  • In the complacent 1950s, it was received wisdom that we know a given proposition to be true if, and only if, it is true, we believe it to be true, and we are justified in so believing.
  • This consensus was exploded in a brief 1963 note by Edmund Gettier in the journal Analysis.
  • ...17 more annotations...
  • Suppose you have every reason to believe that you own a Bentley, since you have had it in your possession for many years, and you parked it that morning at its usual spot. However, it has just been destroyed by a bomb, so that you own no Bentley, despite your well justified belief that you do. As you sit in a cafe having your morning latte, you muse that someone in that cafe owns a Bentley (since after all you do). And it turns out you are right, but only because the other person in the cafe, the barista, owns a Bentley, which you have no reason to suspect. So you here have a well justified true belief that is not knowledge.
  • After many failed attempts to fix the justified-true-belief account with minor modifications, philosophers tried more radical departures. One promising approach suggests that knowledge is a form of action, comparable to an archer’s success when he consciously aims to hit a target.
  • An archer’s shot can be assessed in several ways. It can be accurate (successful in hitting the target). It can also be adroit (skillful or competent). An archery shot is adroit only if, as the arrow leaves the bow, it is oriented well and powerfully enough. But a shot that is both accurate and adroit can still fall short.
  • A shot’s aptness requires that its success be attained not just by luck (such as the luck of that second gust). The success must rather be a result of competence.
  • we can generalize from this example, to give an account of a fully successful attempt of any sort. Any attempt will have a distinctive aim and will thus be fully successful only if it succeeds not only adroitly but also aptly.
  • We need people to be willing to affirm things publicly. And we need them to be sincere (by and large) in doing so, by aligning public affirmation with private judgment. Finally, we need people whose assertions express what they actually know.
  • Aristotle in his “Nicomachean Ethics” developed an AAA account of attempts to lead a flourishing life in accord with fundamental human virtues (for example, justice or courage). Such an approach is called virtue ethics.
  • Since there is much truth that must be grasped if one is to flourish, some philosophers have begun to treat truth’s apt attainment as virtuous in the Aristotelian sense, and have developed a virtue epistemology
  • Virtue epistemology begins by recognizing assertions or affirmations.
  • A particularly important sort of affirmation is one aimed at attaining truth, at getting it right
  • All it takes for an affirmation to be alethic is that one of its aims be: getting it right.
  • Humans perform acts of public affirmation in the endeavor to speak the truth, acts with crucial importance to a linguistic species. We need such affirmations for activities of the greatest import for life in society: for collective deliberation and coordination, and for the sharing of information.
  • a fully successful attempt is good overall only if the agent’s goal is good enough. An attempt to murder an innocent person is not good even if it fully succeeds.
  • Virtue epistemology gives an AAA account of knowledge: to know affirmatively is to make an affirmation that is accurate (true) and adroit (which requires taking proper account of the evidence). But in addition, the affirmation must be apt; that is, its accuracy must be attributable to competence rather than luck.
  • Requiring knowledge to be apt (in addition to accurate and adroit) reconfigures epistemology as the ethics of belief.
  • as a bonus, it allows contemporary virtue epistemology to solve our Gettier problem. We now have an explanation for why you fail to know that someone in the cafe owns a Bentley, when your own Bentley has been destroyed by a bomb, but the barista happens to own one. Your belief in that case falls short of knowledge for the reason that it fails to be apt. You are right that someone in the cafe owns a Bentley, but the correctness of your belief does not manifest your cognitive or epistemic competence. You are right only because by epistemic luck the barista happens to own one.
  • When in your musings you affirm to yourself that someone in the cafe owns a Bentley, therefore, your affirmation is not an apt alethic affirmation, and hence falls short of knowledge.
carolinewren

YaleNews | Yale researchers map 'switches' that shaped the evolution of the human brain - 0 views

  • Thousands of genetic “dimmer” switches, regions of DNA known as regulatory elements, were turned up high during human evolution in the developing cerebral cortex, according to new research from the Yale School of Medicine.
  • these switches show increased activity in humans, where they may drive the expression of genes in the cerebral cortex, the region of the brain that is involved in conscious thought and language. This difference may explain why the structure and function of that part of the brain is so unique in humans compared to other mammals.
  • Noonan and his colleagues pinpointed several biological processes potentially guided by these regulatory elements that are crucial to human brain development.
  • ...7 more annotations...
  • “Building a more complex cortex likely involves several things: making more cells, modifying the functions of cortical areas, and changing the connections neurons make with each other
  • Scientists have become adept at comparing the genomes of different species to identify the DNA sequence changes that underlie those differences. But many human genes are very similar to those of other primates, which suggests that changes in the way genes are regulated — in addition to changes in the genes themselves — are what set human biology apart.
  • First, Noonan and his colleagues mapped active regulatory elements in the human genome during the first 12 weeks of cortical development by searching for specific biochemical, or “epigenetic” modifications
  • They mapped the same elements in the developing brains of rhesus monkeys and mice, then compared the three maps to identify those elements that showed greater activity in the developing human brain.
  • The researchers then wanted to know the biological impact of those regulatory changes.
  • They used those data to identify groups of genes that showed coordinated expression in the cerebral cortex.
  • “While we often think of the human brain as a highly innovative structure, it’s been surprising that so many of these regulatory elements seem to play a role in ancient processes important for building the cortex in all mammals,” said first author Steven Reilly.
Javier E

Opinion | Unicorns of the Intellectual Right - The New York Times - 0 views

  • trying to find influential conservative economic intellectuals is basically a hopeless task, for two reasons.
  • First, while there are many conservative economists with appointments at top universities, publications in top journals, and so on, they have no influence on conservative policymaking
  • What the right wants are charlatans and cranks, in (conservative) Greg Mankiw’s famous phrase. If they use actual economists, they use them the way a drunkard uses a lamppost: for support, not illumination.
  • ...11 more annotations...
  • if you get a conservative economist who isn’t a charlatan and crank, you are more or less by definition getting someone with no influence on policymakers. But that’s not the only problem.
  • But even among conservative economists who didn’t go down that rabbit hole, there has been a moral collapse – a willingness to put political loyalty over professional standards.
  • the intellectual decadence. In macroeconomics, what began in the 60s and 70s as a usefully challenging critique of Keynesian views went all wrong in the 80s, because the anti-Keynesians refused to reconsider their views when their own models failed the reality test while Keynesian models, with some modification, performed pretty well.
  • By the time the Great Recession struck, the right-leaning side of the profession had entered a Dark Age, having retrogressed to the point where famous economists trotted out 30s-era fallacies as deep insights.
  • The second problem with conservative economic thought is that even aside from its complete lack of policy influence, it’s in an advanced state of both intellectual and moral decadence – something that has been obvious for a while, but became utterly clear after the 2008 crisis.
  • We saw that most recently in the way leading conservative economists raced to endorse ludicrous claims for the efficacy of the Trump tax cuts, then tried to climb down without admitting what they had done. We saw it in the false claims that Obama had presided over a massive expansion of government programs and refusal to admit that he hadn’t, the warnings that Fed policy would cause huge inflation followed by refusal to admit having been wrong, and on and on.
  • What accounts for this moral decline? I suspect that it’s about a desperate attempt to retain some influence on a party that prefers the likes of Kudlow or Stephen Moore.
  • no, you don’t see the same thing on the other side. Liberal economists have made plenty of bad predictions – if you never get it wrong, you’re not taking enough risks – but have generally been willing to admit to and learn from mistakes, and have rarely been sycophants to people in power. In this, as in so much else, we’re looking at asymmetric polarization.
  • And I think that’s true across the board. The left has genuine public intellectuals with actual ideas and at least some real influence; the right does not. News organizations don’t seem to have figured out how to deal with this reality, except by pretending that it doesn’t exist
  • Am I saying that there are no conservative economists who have maintained their principles? Not at all. But they have no influence, zero, on GOP thinking. So in economics, a news organization trying to represent conservative thought either has to publish people with no constituency or go with the charlatans who actually matter.
  • the real problem here is that media organizations are looking for unicorns: serious, honest, conservative intellectuals with real influence. Forty or fifty years ago, such people did exist. But now they don’t.
Javier E

Opinion | Grifters Gone Wild - The New York Times - 0 views

  • Silicon Valley has always had “a flimflam element” and a “fake it ’til you make it” ethos, from the early ’80s, when it was selling vaporware (hardware or software that was more of a concept or work in progress than a workable reality).
  • “We’ve been lionizing and revering these young tech entrepreneurs, treating them not just like princes and princesses but like heroes and icons,” Carreyrou says. “Now that there’s a backlash to Silicon Valley, it will be interesting to see if we reconsider this view that just because you made a lot of money doesn’t necessarily mean that you’re a role model for boys and girls.”
  • Jaron Lanier, the scientist and musician known as the father of virtual reality, has a new book out, “Ten Arguments for Deleting Your Social Media Accounts Right Now.” He says that the business plans of Facebook and Google have served to “elevate the role of the con artist to be central in society.”
  • ...5 more annotations...
  • “Anytime people want to contact each other or have an awareness of each other, it can only be when it’s financed by a third party who wants to manipulate us, to change us in some way or affect how we vote or what we buy,” he says. “In the old days, to be in that unusual situation, you had to be in a cult or a volunteer in an experiment in a psychology building or be in an abusive relationship or at a bogus real estate seminar.”
  • “We don’t believe in government,” he says. “A lot of people are pissed at media. They don’t like education. People who used to think the F.B.I. was good now think it’s terrible. With all of these institutions the subject of ridicule, there’s nothing — except Skinner boxes and con artists.”
  • “But now you just need to sign onto Facebook to find yourself in a behavior modification loop, which is the con. And this may destroy our civilization and even our species.”
  • As Maria Konnikova wrote in her book, “The Confidence Game,” “The whirlwind advance of technology heralds a new golden age of the grift. Cons thrive in times of transition and fast change” when we are losing the old ways and open to the unexpected.
  • now narcissistic con artists are dominating the main stage, soaring to great heights and spectacularly exploding
sandrine_h

Darwin's Influence on Modern Thought - Scientific American - 0 views

  • Great minds shape the thinking of successive historical periods. Luther and Calvin inspired the Reformation; Locke, Leibniz, Voltaire and Rousseau, the Enlightenment. Modern thought is most dependent on the influence of Charles Darwin
  • one needs schooling in the physicist’s style of thought and mathematical techniques to appreciate Einstein’s contributions in their fullness. Indeed, this limitation is true for all the extraordinary theories of modern physics, which have had little impact on the way the average person apprehends the world.
  • The situation differs dramatically with regard to concepts in biology.
  • ...10 more annotations...
  • Many biological ideas proposed during the past 150 years stood in stark conflict with what everybody assumed to be true. The acceptance of these ideas required an ideological revolution. And no biologist has been responsible for more—and for more drastic—modifications of the average person’s worldview than Charles Darwin
  • Evolutionary biology, in contrast with physics and chemistry, is a historical science—the evolutionist attempts to explain events and processes that have already taken place. Laws and experiments are inappropriate techniques for the explication of such events and processes. Instead one constructs a historical narrative, consisting of a tentative reconstruction of the particular scenario that led to the events one is trying to explain.
  • The discovery of natural selection, by Darwin and Alfred Russel Wallace, must itself be counted as an extraordinary philosophical advance
  • The concept of natural selection had remarkable power for explaining directional and adaptive changes. Its nature is simplicity itself. It is not a force like the forces described in the laws of physics; its mechanism is simply the elimination of inferior individuals
  • A diverse population is a necessity for the proper working of natural selection
  • Because of the importance of variation, natural selection should be considered a two-step process: the production of abundant variation is followed by the elimination of inferior individuals (a minimal simulation sketch follows this list)
  • By adopting natural selection, Darwin settled the several-thousand-year-old argument among philosophers over chance or necessity. Change on the earth is the result of both, the first step being dominated by randomness, the second by necessity
  • Another aspect of the new philosophy of biology concerns the role of laws. Laws give way to concepts in Darwinism. In the physical sciences, as a rule, theories are based on laws; for example, the laws of motion led to the theory of gravitation. In evolutionary biology, however, theories are largely based on concepts such as competition, female choice, selection, succession and dominance. These biological concepts, and the theories based on them, cannot be reduced to the laws and theories of the physical sciences
  • Despite the initial resistance by physicists and philosophers, the role of contingency and chance in natural processes is now almost universally acknowledged. Many biologists and philosophers deny the existence of universal laws in biology and suggest that all regularities be stated in probabilistic terms, as nearly all so-called biological laws have exceptions. Philosopher of science Karl Popper’s famous test of falsification therefore cannot be applied in these cases.
  • To borrow Darwin’s phrase, there is grandeur in this view of life. New modes of thinking have been, and are being, evolved. Almost every component in modern man’s belief system is somehow affected by Darwinian principles
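To make that two-step process concrete, here is a minimal simulation sketch. It is not from Mayr’s essay: “fitness” is collapsed into a single hypothetical number, step one (chance) adds random variation, and step two (necessity) eliminates the less fit half.

```python
import random

def generation(population, mutation_sd=0.1):
    # Step 1 (chance): abundant variation -- every individual leaves two
    # offspring whose fitness is randomly perturbed.
    offspring = [x + random.gauss(0, mutation_sd)
                 for x in population for _ in range(2)]
    # Step 2 (necessity): elimination of inferior individuals -- only the
    # fitter half survives to found the next generation.
    offspring.sort(reverse=True)
    return offspring[:len(population)]

pop = [0.0] * 100
for _ in range(50):
    pop = generation(pop)
print(f"mean fitness after 50 generations: {sum(pop) / len(pop):.2f}")
```

Although every individual mutation is random, mean fitness climbs steadily, illustrating Mayr’s point that the combined outcome of the two steps is anything but random.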
maxleffler

Resilience theory and the brain - 0 views

  • Encouraging the attributes of resilience in children as part of an early intervention and prevention approach is well supported by the literature: it not only safeguards against the effects of adversity and mental illness, but also enables individuals to acquire the attributes to adapt and thrive in challenging circumstances
  • children with high exposure to risk and low exposure to support are vulnerable to poor mental health and academic outcomes
  • One of the most recent definitions explains it as a multifactorial, multidimensional facet that incorporates the social, environmental, and cultural conditioning of the individual.
  • ...6 more annotations...
  • A quarter of children in primary school are bullied on the playground, causing significant physiological, psychological and social challenges
  • resilience is developed through social, cultural, mental and physical factors. All facets become important when supporting the growth and development of a child.
  • The earlier the child experiences stress without supportive platforms in place, the more likely that this stress will compromise the cognitive platforms on which the mental constructs of resilience are developed.
  • This experience of early adversity can register as trauma in the brain, compromising the child’s ability to use the executive skills that support the development of cognitive resilience.
  • Understanding the risks and/or impact that early trauma and adversity can have on a developing brain can guide the intervention techniques and environmental modifications a child may need to transform maladaptive ways of coping into a more adaptive, resilience-supporting response
  • Research indicates school-based mental health, resilience and social and emotional learning initiatives, in Australia and internationally, can significantly improve the health, wellbeing and psychosocial skills of children – particularly for those who may have experienced early adversity.
knudsenlu

Hawaii: Where Evolution Can Be Surprisingly Predictable - The Atlantic - 0 views

  • Situated around 2,400 miles from the nearest continent, the Hawaiian Islands are about as remote as it’s possible for islands to be. In the last 5 million years, they’ve been repeatedly colonized by far-traveling animals, which then diversified into dozens of new species. Honeycreeper birds, fruit flies, carnivorous caterpillars ... all of these creatures reached Hawaii, and evolved into wondrous arrays of unique forms.
  • The most spectacular of these spider dynasties, Gillespie says, are the stick spiders. They’re so-named because some of them have long, distended abdomens that make them look like twigs. “You only see them at night, walking around the understory very slowly,” Gillespie says. “They’re kind of like sloths.” Murderous sloths, though: Their sluggish movements allow them to sneak up on other spiders and kill them.
  • Gillespie has shown that the gold spiders on Oahu belong to a different species from those on Kauai or Molokai. In fact, they’re more closely related to their brown and white neighbors from Oahu. Time and again, these spiders have arrived on new islands and evolved into new species—but always in one of three basic ways. A gold spider arrives on Oahu and diversifies into gold, brown, and white species. Another gold spider hops across to Maui and again diversifies into gold, brown, and white species. “They repeatedly evolve the same forms,” says Gillespie.
  • ...3 more annotations...
  • Gillespie has seen this same pattern before, among Hawaii’s long-jawed goblin spiders. Each island has its own representatives of the four basic types: green, maroon, small brown, and large brown. At first, Gillespie assumed that all the green species were related to each other. But the spiders’ DNA revealed that the ones that live on the same islands are most closely related, regardless of their colors. They too have hopped from one island to another, radiating into the same four varieties wherever they land.
  • One of the most common misunderstandings about evolution is that it is a random process. Mutations are random, yes, but those mutations then rise and fall in ways that are anything but random. That’s why stick spiders, when they invade a new island, don’t diversify into red species, or zebra-striped ones. The environment of Hawaii sculpts their bodies in a limited number of ways.
  • Gillespie adds that there’s an urgency to this work. For millions of years, islands like Hawaii have acted as crucibles of evolution, allowing living things to replay evolution’s tape in the way that Gould envisaged. But in a much shorter time span, humans have threatened the results of those natural experiments. “The Hawaiian islands are in dire trouble from invasive species, and environmental modifications,” says Gillespie. “And you have all these unknown groups of spiders—entire lineages of really beautiful, charismatic animals, most of which are undescribed.”
runlai_jiang

In Some Countries, Facebook's Fiddling Has Magnified Fake News - The New York Times - 0 views

  • SAN FRANCISCO — One morning in October, the editors of Página Siete, Bolivia’s third-largest news site, noticed that traffic to their outlet coming from Facebook was plummeting. The publication had recently been hit by cyberattacks, and editors feared it was being targeted by hackers loyal to the government of President Evo Morales.
  • But it wasn’t the government’s fault. It was Facebook’s. The Silicon Valley company was testing a new version of its hugely popular News Feed, peeling off professional news sites from what people normally see and relegating them to a new section of Facebook called Explore.
  • ...4 more annotations...
  • Facebook said these News Feed modifications were not identical to those introduced last fall in six countries through its Explore program, but both alterations favor posts from friends and family over professional news sites. And what happened in those countries illustrates the unintended consequences of such a change in an online service that now has a global reach of more than two billion people every month.
  • The fabricated story circulated so widely that the local police issued a statement saying it wasn’t true. But when the police went to issue the warning on Facebook, they found that the message — unlike the fake news story they meant to combat — could no longer appear on News Feed because it came from an official account. Facebook explained its goals for the Explore program in Slovakia, Sri Lanka, Cambodia, Bolivia, Guatemala and Serbia in a blog post in October. “The goal of this test is to understand if people prefer to have separate places for personal and public content,” wrote Adam Mosseri, head of Facebook’s News Feed. “There is no current plan to roll this out beyond these test countries.”
  • The loss of visitors from Facebook was readily apparent in October, and Mr. Huallpa could communicate with Facebook only through a customer service form letter. He received an automatic reply in return.
  • the role the tech giant may play in her country. “It’s a private company — they have the right to do as they please, of course,” she said. “But the first question we asked is ‘Why Bolivia?’ And we don’t even have the possibility of asking why. Why us?”
Javier E

Why Is It So Hard to Be Rational? | The New Yorker - 0 views

  • an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio).
  • When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.
  • And yet rationality has sharp edges that make it hard to put at the center of one’s life
  • ...43 more annotations...
  • You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,” as Ambrose Bierce defined it.)
  • You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus.
  • Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect.
  • modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.”
  • Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.
  • Rationality is one of humanity’s superpowers. How do we keep from misusing it?
  • Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.
  • Bayesian reasoning implies a few “best practices.”
  • Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat
  • We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust.
  • But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities.
  • Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful.
  • the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size.
  • In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization.
  • Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive.
  • You can know what’s right but still struggle to do it.
  • Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds.
  • For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. 
  • I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart.
  • between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.
  • Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.
  • in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together.
  • The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula.
  • The real challenge isn’t being right but knowing how wrong you might be. (Joshua Rothman, August 16, 2021)
  • Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?).
  • Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?
  • For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest.
  • In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently
  • Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.)
  • Galef tends to see rationality as a method for acquiring more accurate views.
  • Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want.
  • Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.”
  • A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.
  • In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before
  • The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”
  • metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational
  • There are many calibration methods
  • Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge.
  • Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps.
  • The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes
  • So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.
  • the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.
  • Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information; a minimal numerical sketch follows below.
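To ground the idea, here is a minimal sketch of a single Bayesian update. The scenario and all numbers are hypothetical, chosen only to show how a “prior” gets adjusted, rather than replaced, by new evidence.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)

def bayes_update(prior, likelihood, false_positive_rate):
    """Return the posterior P(hypothesis | evidence)."""
    # Total probability of seeing the evidence at all:
    # P(E) = P(E|H) * P(H) + P(E|not H) * P(not H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Prior: you give the hypothesis a 1% chance of being true.
prior = 0.01
# New data: a signal that appears 90% of the time when the hypothesis
# is true, but also 5% of the time when it is false.
posterior = bayes_update(prior, likelihood=0.90, false_positive_rate=0.05)
print(f"posterior: {posterior:.3f}")  # ~0.154
```

The instructive part is the output: even strong-looking evidence moves a 1% prior only to about 15%, which is the sense in which the Bayesian inches closer to a more useful account of reality rather than leaping to a conclusion.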