
Home/ TOK Friends/ Group items tagged problem-solving


Javier E

History News Network | Just How Stupid Are We? Facing the Truth About Donald Trump's Am... - 1 views

  •  Just How Stupid Are We? Facing the Truth About the American Voter. The book is filled with statistics like these: a majority of Americans don’t know which party is in control of Congress; a majority can’t name the chief justice of the Supreme Court; a majority don’t know we have three branches of government.
  • suddenly mainstream media pundits have discovered how ignorant millions of voters are.  See this and this and this and this.  More importantly, the concern with low-information voters has become widespread.  Many are now wondering what country they’re living in. 
  • The answer science gives us (the title of my last book and this essay notwithstanding) is not that people fall for slick charlatans like Trump because they’re stupid.
  • ...19 more annotations...
  •  The problem is that we humans didn’t evolve to live in the world in which we find ourselves.  As the social scientists Leda Cosmides and John Tooby put it, the human mind was “designed to solve the day-to-day problems of our hunter-gatherer ancestors. These stone age priorities produced a brain far better at solving some problems than others.” 
  • there are four failings common to human beings as a result of our Stone-Age brain that hinder us in politics.
  • why are we this way?  Science suggests that one reason is that we evolved to win in social settings and in such situations the truth doesn't matter as much as sheer doggedness
  • First, most people find it easy to ignore politics because it usually involves people they don’t know.  As human beings we evolved to care about people in our immediate vicinity.  Our nervous system kicks into action usually only when we meet people face-to-face
  • Second, we find it hard to size up politicians correctly.  The reason for this is that we rely on instant impressions. 
  • This stops voters from worrying that they need to bolster their impressions by consulting experts and reading news stories from a broad array of ideological viewpoints.  Why study when you can rely on your gut instinct?
  • Third, we aren’t inclined to reward politicians who tell us hard truths.
  •  This has left millions of voters on their own.  Lacking information, millions do what you would expect.  They go with their gut
  • most of the time we return to a state of well-being by simply ignoring the evidence we find discomforting.  This is known as Disconfirmation Bias and it afflicts all of us
  • Fourth, we frequently fail to show empathy in circumstances that clearly cry out for it.
  • We evolved to show empathy for people we know.  It takes special effort to empathize with people who don’t dress like us or look like us.
  • long-term we need to teach voters not to trust their instincts in politics because our instincts often don’t work.
  • Doing politics in a modern mass democracy, in other words, is an unnatural act.
  • Teaching this lesson doesn’t sound like a job for historians, but in one way it is.  Studying history is all about putting events into context. And as it turns out, voters need to learn the importance of context.
  • Given the mismatch between our Stone-Age brain and the problems we face in the 21st century, we should only trust our political instincts when those instincts are serviceable in a modern context.  If they aren’t (and most of the time they aren't), then higher order cognitive thinking is required.
  • Just why mass ignorance seems to be afflicting our politics at this moment is a complicated question.  But here again history can be helpful.  The answer seems to be that the institutions voters formerly could turn to for help have withered.
  • We don't want the truth to prevail, as Harvard's Steven Pinker informs us, we want our version of the truth to prevail, for in the end what we're really concerned with is maintaining our status or enhancing it.
  • But cultural norms can be established that help us overcome our natural inclinations.
  • I don’t have much confidence that people in general will be willing on their own to undertake the effort.
Javier E

The Story Behind the SAT Overhaul - NYTimes.com - 2 views

  • “When you cover too many topics,” Coleman said, “the assessments designed to measure those standards are inevitably superficial.” He pointed to research showing that more students entering college weren’t prepared and were forced into “remediation programs from which they never escape.” In math, for example, if you examined data from top-performing countries, you found an approach that emphasized “far fewer topics, far deeper,” the opposite of the curriculums he found in the United States, which he described as “a mile wide and an inch deep.”
  • The lessons he brought with him from thinking about the Common Core were evident — that American education needed to be more focused and less superficial, and that it should be possible to test the success of the newly defined standards through an exam that reflected the material being taught in the classroom.
  • she and her team had extensive conversations with students, teachers, parents, counselors, admissions officers and college instructors, asking each group to tell them in detail what they wanted from the test. What they arrived at above all was that a test should reflect the most important skills that were imparted by the best teachers
  • ...12 more annotations...
  • for example, a good instructor would teach Martin Luther King Jr.’s “I Have a Dream” speech by encouraging a conversation that involved analyzing the text and identifying the evidence, both factual and rhetorical, that makes it persuasive. “The opposite of what we’d want is a classroom where a teacher might ask only: ‘What was the year the speech was given? Where was it given?’ ”
  • in the past, assembling the SAT focused on making sure the questions performed on technical grounds, meaning: Were they appropriately easy or difficult among a wide range of students, and were they free of bias when tested across ethnic, racial and religious subgroups? The goal was “maximizing differentiation” among kids, which meant finding items that were answered correctly by those students who were expected to get them right and incorrectly by the weaker students. A simple way of achieving this, Coleman said, was to test the kind of obscure vocabulary words for which the SAT was famous
  • In redesigning the test, the College Board shifted its emphasis. It prioritized content, measuring each question against a set of specifications that reflect the kind of reading and math that students would encounter in college and their work lives. Schmeiser and others then spent much of early last year watching students as they answered a set of 20 or so problems, discussing the questions with the students afterward. “The predictive validity is going to come out the same,” she said of the redesigned test. “But in the new test, we have much more control over the content and skills that are being measured.”
  • Evidence-based reading and writing, he said, will replace the current sections on reading and writing. It will use as its source materials pieces of writing — from science articles to historical documents to literature excerpts — which research suggests are important for educated Americans to know and understand deeply. “The Declaration of Independence, the Constitution, the Bill of Rights and the Federalist Papers,” Coleman said, “have managed to inspire an enduring great conversation about freedom, justice, human dignity in this country and the world” — therefore every SAT will contain a passage from either a founding document or from a text (like Lincoln’s Gettysburg Address) that is part of the “great global conversation” the founding documents inspired.
  • The Barbara Jordan vocabulary question would have a follow-up — “How do you know your answer is correct?” — to which students would respond by identifying lines in the passage that supported their answer.
  • The idea is that the test will emphasize words students should be encountering, like “synthesis,” which can have several meanings depending on their context. Instead of encouraging students to memorize flashcards, the test should promote the idea that they must read widely throughout their high-school years.
  • No longer will it be good enough to focus on tricks and trying to eliminate answer choices. “We are not interested in students just picking an answer, but justifying their answers.”
  • the essay portion of the test will also be reformulated so that it will always be the same, some version of: “As you read the passage in front of you, consider how the author uses evidence such as facts or examples; reasoning to develop ideas and to connect claims and evidence; and stylistic or persuasive elements to add power to the ideas expressed. Write an essay in which you explain how the author builds an argument to persuade an audience.”
  • The math section, too, will be predicated on research that shows that there are “a few areas of math that are a prerequisite for a wide range of college courses” and careers. Coleman conceded that some might treat the news that they were shifting away from more obscure math problems to these fewer fundamental skills as a dumbing-down of the test, but he was adamant that this was not the case. He explained that there will be three areas of focus: problem solving and data analysis, which will include ratios and percentages and other mathematical reasoning used to solve problems in the real world; the “heart of algebra,” which will test how well students can work with linear equations (“a powerful set of tools that echo throughout many fields of study”); and what will be called the “passport to advanced math,” which will focus on the student’s familiarity with complex equations and their applications in science and social science.
  • “Sometimes in the past, there’s been a feeling that tests were measuring some sort of ineffable entity such as intelligence, whatever that might mean. Or ability, whatever that might mean. What this is is a clear message that good hard work is going to pay off and achievement is going to pay off. This is one of the most significant developments that I have seen in the 40-plus years that I’ve been working in admissions in higher education.”
  • The idea of creating a transparent test and then providing a free website that any student could use — not to learn gimmicks but to get a better grounding and additional practice in the core knowledge that would be tested — was appealing to Coleman.
  • (The College Board won’t pay Khan Academy.) They talked about a hypothetical test-prep experience in which students would log on to a personal dashboard, indicate that they wanted to prepare for the SAT and then work through a series of preliminary questions to demonstrate their initial skill level and identify the gaps in their knowledge. Khan said he could foresee a way to estimate the amount of time it would take to achieve certain benchmarks. “It might go something like, ‘O.K., we think you’ll be able to get to this level within the next month and this level within the next two months if you put in 30 minutes a day,’ ” he said. And he saw no reason the site couldn’t predict for anyone, anywhere the score he or she might hope to achieve with a commitment to a prescribed amount of work.
Javier E

Quantum Computing Advance Begins New Era, IBM Says - The New York Times - 0 views

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously.
  • ...15 more annotations...
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation.“We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.”In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation.
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate,”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that could match or surpass the performance of the quantum calculations.
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
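The error-mitigation technique described above — deliberately amplifying noise, rerunning the circuit, and extrapolating back to the zero-noise answer — can be sketched in a few lines. The `noisy_run` function below is a stand-in assumption (noise attenuating the true value exponentially), not IBM's actual hardware interface; the point is only the extrapolation step.

```python
# A minimal sketch of zero-noise extrapolation, the idea behind the
# "error mitigation" described above: run the same circuit at amplified
# noise levels, then extrapolate the measured value back to zero noise.
import math

TRUE_VALUE = 0.8  # the noiseless answer we are trying to recover

def noisy_run(noise_scale, decay=0.3):
    # Stand-in for executing the circuit: noise attenuates the signal
    return TRUE_VALUE * math.exp(-decay * noise_scale)

# Measure at deliberately amplified noise levels (scale 1 = native noise)
scales = [1.0, 1.5, 2.0]
values = [noisy_run(s) for s in scales]

# Fit log(value) = log(TRUE_VALUE) - decay * scale by least squares,
# then read off the intercept, i.e. the value at noise scale 0
n = len(scales)
sx = sum(scales)
sy = sum(math.log(v) for v in values)
sxx = sum(s * s for s in scales)
sxy = sum(s * math.log(v) for s, v in zip(scales, values))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

mitigated = math.exp(intercept)  # estimate at zero noise
print(round(mitigated, 3))  # → 0.8
```

Because the toy noise model is exactly exponential, the extrapolation recovers the true value; on real hardware the fit is only approximate, which is why IBM repeats the calculation hundreds of thousands of times.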
carolinewren

How Brain Science Explains the Way We See #TheDress | Michael Buice - 1 views

  • Your brain is forced into being creative in order to perform the simple act of seeing the world around you.
  • Perception is a type of problem that mathematicians refer to as "ill-posed". Because of nothing more than light and geometry, a given image can have an infinite number of possible causes in the real world. Nonetheless, perception is a problem our brains must solve, so that we can find food, shelter, and each other.
  • The brain must resort to inference.
  • ...8 more annotations...
  • Determination of color is based on a complicated inference, involving surrounding colors, local brightness cues and shape
  • This dress is a nice example of how what you see isn't necessarily what you perceive.
  • One consequence of this is that while we all live in the same world, we don't always see it the same way.
  • Color interpretation relies on the same kind of contextual inference as brightness. In Bloj, Kersten, and Hurlbert (1999) the authors demonstrated that context inferred from depth could change the perceived color of an object.
  • Humans also have strong built in assumptions about perceived light sources. In terms of brightness, humans have a "light-from-above" prior that determines how we often interpret shapes.
  • In the case of the dress, one's assumptions about lighting have a strong impact on the perceived color.
  • It so happens that these average colors are close to being inverses of one another
  • Our brains have to make guesses, but they don't always make the same guesses, even though we live in the same world. One of the hardest inference problems our brains have to solve is figuring out how everyone else sees the world
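The contextual inference described above can be sketched numerically: the same measured pixel value implies very different surface colors depending on which illuminant the viewer's brain assumes. The numbers and the simple surface-times-light model below are illustrative assumptions, not measurements from the actual photo.

```python
# A toy model of perception as inference: observed brightness is the
# product of surface reflectance and illumination, so the same
# observation "flips" depending on the assumed light source.

observed = 0.45  # measured brightness of the fabric (0 = black, 1 = white)

# Competing assumptions about the light source (hypothetical values)
illuminants = {"dim_warm": 0.5, "bright_daylight": 0.9}

# Inferred surface reflectance under each assumption: observed = surface * light
for name, light in illuminants.items():
    surface = observed / light
    label = "light (white/gold-ish)" if surface > 0.7 else "dark (blue/black-ish)"
    print(f"assuming {name}: inferred reflectance = {surface:.2f} -> {label}")
```

Under the dim-light assumption the fabric must be highly reflective (white/gold); under the bright-daylight assumption the same pixels imply a dark surface (blue/black) — two stable percepts from one image.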
Javier E

Sleight of the 'Invisible Hand' - NYTimes.com - 1 views

  • The wealthy, says Smith, spend their days establishing an “economy of greatness,” one founded on “luxury and caprice” and fueled by “the gratification of their own vain and insatiable desires.” Any broader benefit that accrues from their striving is not the consequence of foresight or benevolence, but “in spite of their natural selfishness and rapacity.” They don’t do good, they are led to it.
  • Smith described this state of affairs as “the obvious and simple system of natural liberty,” and he knew that it made for the revolutionary implication of his work. It shifted the way we thought about the relationship between government action and economic growth, making less means more the rebuttable presumption of policy proposals.
  • What it did not do, however, was void any proposal outright, much less prove that all government activity was counterproductive. Smith held that the sovereign had a role supporting education, building infrastructure and public institutions, and providing security from foreign and domestic threats — initiatives that should be paid for, in part, by a progressive tax code and duties on luxury goods. He even believed the government had a “duty” to protect citizens from “oppression,” the inevitable tendency of the strong to take advantage of the ignorance and necessity of the weak.
  • ...4 more annotations...
  • In other words, the invisible hand did not solve the problem of politics by making politics altogether unnecessary. “We don’t think government can solve all our problems,” President Obama said in his convention address, “But we don’t think that government is the source of all our problems.” Smith would have appreciated this formulation. For him, whether government should get out of the way in any given matter, economic or otherwise, was a question for considered judgment abetted by scientific inquiry.
  • politics is a practical venture, and Smith distrusted those statesmen who confused their work with an exercise in speculative philosophy. Their proposals should be judged not by the delusive lights of the imagination, but by the metrics of science and experience, what President Obama described in the first presidential debate as “math, common sense and our history.”
  • John Paul Rollert teaches business ethics at the University of Chicago Booth School of Business and leadership at the Harvard Extension School.  He is the author of a recent paper on President Obama’s “Empathy Standard” for the Yale Law Journal Online.
  • Adam Smith, analytic philosophy, economics, Elections 2012
Javier E

Watson Still Can't Think - NYTimes.com - 0 views

  • Fish argued that Watson “does not come within a million miles of replicating the achievements of everyday human action and thought.” In defending this claim, Fish invoked arguments that one of us (Dreyfus) articulated almost 40 years ago in “What Computers Can’t Do,” a criticism of 1960s and 1970s style artificial intelligence.
  • At the dawn of the AI era the dominant approach to creating intelligent systems was based on finding the right rules for the computer to follow.
  • GOFAI, for Good Old Fashioned Artificial Intelligence.
  • ...12 more annotations...
  • For constrained domains the GOFAI approach is a winning strategy.
  • there is nothing intelligent or even interesting about the brute force approach.
  • the dominant paradigm in AI research has largely “moved on from GOFAI to embodied, distributed intelligence.” And Faustus from Cincinnati insists that as a result “machines with bodies that experience the world and act on it” will be “able to achieve intelligence.”
  • The new, embodied paradigm in AI, deriving primarily from the work of roboticist Rodney Brooks, insists that the body is required for intelligence. Indeed, Brooks’s classic 1990 paper, “Elephants Don’t Play Chess,” rejected the very symbolic computation paradigm against which Dreyfus had railed, favoring instead a range of biologically inspired robots that could solve apparently simple, but actually quite complicated, problems like locomotion, grasping, navigation through physical environments and so on. To solve these problems, Brooks discovered that it was actually a disadvantage for the system to represent the status of the environment and respond to it on the basis of pre-programmed rules about what to do, as the traditional GOFAI systems had. Instead, Brooks insisted, “It is better to use the world as its own model.”
  • although they respond to the physical world rather well, they tend to be oblivious to the global, social moods in which we find ourselves embedded essentially from birth, and in virtue of which things matter to us in the first place.
  • the embodied AI paradigm is irrelevant to Watson. After all, Watson has no useful bodily interaction with the world at all.
  • The statistical machine learning strategies that it uses are indeed a big advance over traditional GOFAI techniques. But they still fall far short of what human beings do.
  • “The illusion is that this computer is doing the same thing that a very good ‘Jeopardy!’ player would do. It’s not. It’s doing something sort of different that looks the same on the surface. And every so often you see the cracks.”
  • Watson doesn’t understand relevance at all. It only measures statistical frequencies. Because it is relatively common to find mismatches of this sort, Watson learns to weigh them as only mild evidence against the answer. But the human just doesn’t do it that way. The human being sees immediately that the mismatch is irrelevant for the Erie Canal but essential for Toronto. Past frequency is simply no guide to relevance.
  • The fact is, things are relevant for human beings because at root we are beings for whom things matter. Relevance and mattering are two sides of the same coin. As Haugeland said, “The problem with computers is that they just don’t give a damn.” It is easy to pretend that computers can care about something if we focus on relatively narrow domains — like trivia games or chess — where by definition winning the game is the only thing that could matter, and the computer is programmed to win. But precisely because the criteria for success are so narrowly defined in these cases, they have nothing to do with what human beings are when they are at their best.
  • Far from being the paradigm of intelligence, therefore, mere matching with no sense of mattering or relevance is barely any kind of intelligence at all. As beings for whom the world already matters, our central human ability is to be able to see what matters when.
  • But, as we show in our recent book, this is an existential achievement orders of magnitude more amazing and wonderful than any statistical treatment of bare facts could ever be. The greatest danger of Watson’s victory is not that it proves machines could be better versions of us, but that it tempts us to misunderstand ourselves as poorer versions of them.
kushnerha

BBC - Future - The surprising downsides of being clever - 0 views

  • If ignorance is bliss, does a high IQ equal misery? Popular opinion would have it so. We tend to think of geniuses as being plagued by existential angst, frustration, and loneliness. Think of Virginia Woolf, Alan Turing, or Lisa Simpson – lone stars, isolated even as they burn their brightest. As Ernest Hemingway wrote: “Happiness in intelligent people is the rarest thing I know.”
  • Combing California’s schools for the creme de la creme, he selected 1,500 pupils with an IQ of 140 or more – 80 of whom had IQs above 170. Together, they became known as the “Termites”, and the highs and lows of their lives are still being studied to this day.
  • Termites’ average salary was twice that of the average white-collar job. But not all the group met Terman’s expectations – there were many who pursued more “humble” professions such as police officers, seafarers, and typists. For this reason, Terman concluded that “intellect and achievement are far from perfectly correlated”. Nor did their smarts endow personal happiness. Over the course of their lives, levels of divorce, alcoholism and suicide were about the same as the national average.
  • ...16 more annotations...
  • One possibility is that knowledge of your talents becomes something of a ball and chain. Indeed, during the 1990s, the surviving Termites were asked to look back at the events in their 80-year lifespan. Rather than basking in their successes, many reported that they had been plagued by the sense that they had somehow failed to live up to their youthful expectations.
  • The most notable, and sad, case concerns the maths prodigy Sufiah Yusof. Enrolled at Oxford University aged 12, she dropped out of her course before taking her finals and started waitressing. She later worked as a call girl, entertaining clients with her ability to recite equations during sexual acts.
  • Another common complaint, often heard in student bars and internet forums, is that smarter people somehow have a clearer vision of the world’s failings. Whereas the rest of us are blinkered from existential angst, smarter people lay awake agonising over the human condition or other people’s folly.
  • MacEwan University in Canada found that those with the higher IQ did indeed feel more anxiety throughout the day. Interestingly, most worries were mundane, day-to-day concerns, though; the high-IQ students were far more likely to be replaying an awkward conversation, than asking the “big questions”. “It’s not that their worries were more profound, but they are just worrying more often about more things,” says Penney. “If something negative happened, they thought about it more.”
  • seemed to correlate with verbal intelligence – the kind tested by word games in IQ tests, compared to prowess at spatial puzzles (which, in fact, seemed to reduce the risk of anxiety). He speculates that greater eloquence might also make you more likely to verbalise anxieties and ruminate over them. It’s not necessarily a disadvantage, though. “Maybe they were problem-solving a bit more than most people,” he says – which might help them to learn from their mistakes.
  • The harsh truth, however, is that greater intelligence does not equate to wiser decisions; in fact, in some cases it might make your choices a little more foolish.
  • we need to turn our minds to an age-old concept: “wisdom”. His approach is more scientific than it might at first sound. “The concept of wisdom has an ethereal quality to it,” he admits. “But if you look at the lay definition of wisdom, many people would agree it’s the idea of someone who can make good unbiased judgement.”
  • “my-side bias” – our tendency to be highly selective in the information we collect so that it reinforces our previous attitudes. The more enlightened approach would be to leave your assumptions at the door as you build your argument – but Stanovich found that smarter people are almost no more likely to do so than people with distinctly average IQs.
  • People who ace standard cognitive tests are in fact slightly more likely to have a “bias blind spot”. That is, they are less able to see their own flaws, even though they are quite capable of criticising the foibles of others. And they have a greater tendency to fall for the “gambler’s fallacy”
  • A tendency to rely on gut instincts rather than rational thought might also explain why a surprisingly high number of Mensa members believe in the paranormal; or why someone with an IQ of 140 is about twice as likely to max out their credit card.
  • “The people pushing the anti-vaccination meme on parents and spreading misinformation on websites are generally of more than average intelligence and education.” Clearly, clever people can be dangerously, and foolishly, misguided.
  • spent the last decade building tests for rationality, and he has found that fair, unbiased decision-making is largely independent of IQ.
  • Crucially, Grossmann found that IQ was not related to any of these measures, and certainly didn’t predict greater wisdom. “People who are very sharp may generate, very quickly, arguments [for] why their claims are the correct ones – but may do it in a very biased fashion.”
  • employers may well begin to start testing these abilities in place of IQ; Google has already announced that it plans to screen candidates for qualities like intellectual humility, rather than sheer cognitive prowess.
  • He points out that we often find it easier to leave our biases behind when we consider other people, rather than ourselves. Along these lines, he has found that simply talking through your problems in the third person (“he” or “she”, rather than “I”) helps create the necessary emotional distance, reducing your prejudices and leading to wiser arguments.
  • If you’ve been able to rest on the laurels of your intelligence all your life, it could be very hard to accept that it has been blinding your judgement. As Socrates had it: the wisest person really may be the one who can admit he knows nothing.
sissij

Two Cities Launch Plans for a Flying Taxi Service by the 2030s | Big Think - 0 views

  • Few things are infuriating as traffic. Think about all the hours lost over a lifetime just sitting there, instead of being home, enjoying some quality time with your partner, or having a drink with friends.
  • It’s slated to become a reality. Singapore is investing in flying, driverless drones, which could take riders anywhere in the city.
  • Singapore is a relatively small city with a high population and a tremendous traffic problem, which is expected only to worsen over time.
  • Flying cars have long been on the list of innovations that people expected would soon be achieved, so the idea itself is not new. Some hobbyists have even built their own flying cars out of ordinary ones, though no manufacturer has yet brought such a vehicle to market. This is one of the few innovations I have watched develop from scratch into real life. As we learned in TOK, the ultimate goal of science should be to serve the common interests of humanity and improve human life, and this is a great example of people using technology to solve daily problems. --Sissi (3/31/2017)
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
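The screening systems described above — Xerox's color-coded ratings, Evolv's attribute weights — all reduce to the same shape: applicant features in, a weighted score out, a band decision at the end. A minimal sketch of that shape, with entirely invented feature names, weights, and thresholds (this is not any vendor's actual model):

```python
# Toy illustration of an applicant-screening score of the kind the article
# describes: weighted features -> numeric score -> red/yellow/green band.
# All feature names, weights, and cutoffs below are invented for illustration;
# a real system would fit them to historical performance and retention data.

def screen_applicant(features):
    """Return 'green', 'yellow', or 'red' for a dict of applicant features."""
    weights = {
        "personality_creativity": 2.0,   # "creative but not overly inquisitive"
        "cognitive_score": 1.5,
        "scenario_judgment": 1.0,
        "social_networks": -0.5,         # article: 1-4 networks scored best
    }
    score = sum(w * features.get(name, 0.0) for name, w in weights.items())
    if score >= 5.0:
        return "green"   # hire away
    if score >= 2.5:
        return "yellow"  # middling
    return "red"         # poor candidate

print(screen_applicant({"personality_creativity": 2, "cognitive_score": 1,
                        "scenario_judgment": 1, "social_networks": 2}))
```

The point of the sketch is only the structure: once outcomes are logged at scale, the weights stop being guesses and become fitted parameters, which is what separates this generation of tools from the mid-century testing regimes the article describes.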
caelengrubb

The scientific method can't save us from the coronavirus - The Washington Post - 0 views

  • The scientific method can’t save us — because it doesn’t exist.
  • there is no such thing as “the scientific method,” no single set of steps or one-size-fits-all solution to the problems we face.
  • Ask any scientist: what they do, individually and collectively, is too diverse, too dynamic, too difficult to follow one recipe.
  • ...11 more annotations...
  • But its nonexistence has never dampened the scientific method’s appeal. And now, in the face of the novel coronavirus pandemic, the question of who is (or is not) adhering to the scientific method feels more urgent than ever.
  • Fictional or not, “the scientific method” seems to offer safety in unsafe times.
  • The novel coronavirus causing the current crisis presents a multidimensional challenge — to personal, public, economic and mental health. There is no single tool with which to confront such a threat; what we need is a vast tool kit.
  • Luckily, scientists know this. Science is about staying flexible, trying out a variety of tools as the questions we try to answer change before our eyes. It is a process, not a product
  • In 1910, the philosopher and psychologist John Dewey published a brief introduction to thinking in general, based on research at the Laboratory School he had founded at the University of Chicago.
  • If you paid attention, Dewey argued, you saw that children were already scientific thinkers — they were creative, they solved problems, they worked together. Science came naturally to them.
  • Dewey emphasized that science was all around us and that was its strength
  • Finally, Dewey contended that science evolves. Constant change is how organisms keep up with their environments; the same is true for science. Facts matter, but not as much as flexibility
  • But Dewey’s list wasn’t meant to be the scientific method. He advocated flexibility, not stasis, and saw science as a continuation of everyday problem-solving
  • Pointing to the scientific method, which so many are doing with the best of intentions, misses the thing that gives science its power: scale. Science is too big for one set of steps — and too big to fail
  • The phrase “the scientific method” implies something special, static and solitary. But the history of the scientific method as it emerged last century reveals something familiar, adaptive and social. Science is human, in other words, just like the scientists who do it every day
anniina03

The Human Brain Evolved When Carbon Dioxide Was Lower - The Atlantic - 0 views

  • Kris Karnauskas, a professor of ocean sciences at the University of Colorado, has started walking around campus with a pocket-size carbon-dioxide detector. He’s not doing it to measure the amount of carbon pollution in the atmosphere. He’s interested in the amount of CO₂ in each room.
  • The indoor concentration of carbon dioxide concerns him—and not only for the usual reason. Karnauskas is worried that indoor CO₂ levels are getting so high that they are starting to impair human cognition.
  • Carbon dioxide, the same odorless and invisible gas that causes global warming, may be making us dumber.
  • ...11 more annotations...
  • “This is a hidden impact of climate change … that could actually impact our ability to solve the problem itself,” he said.
  • The science is, at first glance, surprisingly fundamental. Researchers have long believed that carbon dioxide harms the brain at very high concentrations. Anyone who’s seen the film Apollo 13 (or knows the real-life story behind it) may remember a moment when the mission’s three astronauts watch a gauge monitoring their cabin start to report dangerous levels of a gas. That gauge was measuring carbon dioxide. As one of the film’s NASA engineers remarks, if CO₂ levels rise too high, “you get impaired judgement, blackouts, the beginning of brain asphyxia.”
  • The same general principle, he argues, could soon affect people here on Earth. Two centuries of rampant fossil-fuel use have already spiked the amount of CO₂ in the atmosphere from about 280 parts per million before the Industrial Revolution to about 410 parts per million today. For Earth as a whole, that pollution traps heat in the atmosphere and causes climate change. But more locally, it also sets a baseline for indoor levels of carbon dioxide: You cannot ventilate a room’s carbon-dioxide levels below the global average.
  • In fact, many rooms have a much higher CO₂ level than the atmosphere, since ventilation systems don’t work perfectly.
  • On top of that, some rooms—in places such as offices, hospitals, and schools—are filled with many breathing people, that is, many people who are themselves exhaling carbon dioxide.
  • As the amount of atmospheric CO₂ keeps rising, indoor CO₂ will climb as well.
  • in one 2016 study Danish scientists cranked up indoor carbon-dioxide levels to 3,000 parts per million—more than seven times outdoor levels today—and found that their 25 subjects suffered no cognitive impairment or health issues. Only when scientists infused that same air with other trace chemicals and organic compounds emitted by the human body did the subjects begin to struggle, reporting “headache, fatigue, sleepiness, and difficulty in thinking clearly.” The subjects also took longer to solve basic math problems. The same lab, in another study, found that indoor concentrations of pure CO₂ could get to 5,000 parts per million and still cause little difficulty, at least for college students.
  • But other research is not as optimistic. When scientists at NASA’s Johnson Space Center tested the effects of CO₂ on about two dozen “astronaut-like subjects,” they found that their advanced decision-making skills declined with CO₂ at 1,200 parts per million. But cognitive skills did not seem to worsen as CO₂ climbed past that mark, and the intensity of the effect seemed to vary from person to person.
  • There’s evidence that carbon-dioxide levels may impair only the most complex and challenging human cognitive tasks. And we still don’t know why.
  • No one has looked at the effects of indoor CO₂ on children, the elderly, or people with health problems. Likewise, studies have so far exposed people to very high carbon levels for only a few hours, leaving open the question of what days-long exposure could do.
  • Modern humans, as a species, are only about 300,000 years old, and the ambient CO₂ that we encountered for most of our evolutionary life—from the first breath of infants to the last rattle of a dying elder—was much lower than the ambient CO₂ today. I asked Gall: Has anyone looked to see if human cognition improves under lower carbon-dioxide levels? If you tested someone in a room that had only 250 parts per million of carbon dioxide—a level much closer to that of Earth’s atmosphere three centuries or three millennia ago—would their performance on tests improve? In other words, is it possible that human cognitive ability has already declined?
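The claim that outdoor CO₂ sets a floor for indoor levels follows from a simple steady-state mass balance for a well-mixed room: indoor concentration equals outdoor concentration plus the occupants' CO₂ generation rate divided by the ventilation rate. A rough sketch, using typical textbook figures that are assumed here rather than taken from the article:

```python
# Steady-state indoor CO2 from a simple mass balance:
#   C_in = C_out + G / Q
# where G is the occupants' CO2 generation rate (m^3/s) and Q is the
# outdoor-air ventilation rate (m^3/s). The per-person generation rate
# and ventilation figure below are typical textbook values, assumed
# for illustration only.

def indoor_co2_ppm(outdoor_ppm, occupants, vent_m3_per_s,
                   gen_m3_per_s_per_person=5.0e-6):
    """Steady-state indoor CO2 concentration in ppm for a well-mixed room."""
    generation = occupants * gen_m3_per_s_per_person  # m^3/s of CO2
    return outdoor_ppm + (generation / vent_m3_per_s) * 1e6

# A classroom of 25 people with 0.125 m^3/s of outdoor air (~5 L/s per
# person), against today's ~410 ppm outdoor baseline:
print(round(indoor_co2_ppm(410, 25, 0.125)))  # -> 1410
```

Note what the formula implies: every ppm added to the outdoor baseline passes straight through to the indoor level, which is why rising atmospheric CO₂ pushes ordinary occupied rooms toward the 1,200 ppm range where the NASA study saw decision-making decline.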
Javier E

Critics and Audiences Often Disagree. It's Not a Big Deal. - 0 views

  • So what’s the actual reason for the gap between audiences and critics? Simply put, it’s that audiences tend to be easier to please because they’re merely looking for movies to be entertainment while critics are trying to judge them artistically.
  • one of the things W. David Marx discusses is how art receives acclaim as art. “Invention requires ‘answering’ the works of previous artists,” Marx writes. So the creation of photography led to artists trying to “solve” the problem of a new form capable of capturing perfect representations of reality; hence the rise of cubism and abstract art
  • “There are perhaps an infinite number of potential problems in art, but to gain artist status, artists must solve the agreed-upon problems of the current moment,” he writes.
  • ...1 more annotation...
  • Another way to put this is that critics are looking for something “interesting”; audiences are merely looking to be “entertained.”
Javier E

The Equality Conundrum | The New Yorker - 0 views

  • The philosopher Ronald Dworkin considered this type of parental conundrum in an essay called “What Is Equality?,” from 1981. The parents in such a family, he wrote, confront a trade-off between two worthy egalitarian goals. One goal, “equality of resources,” might be achieved by dividing the inheritance evenly, but it has the downside of failing to recognize important differences among the parties involved.
  • Another goal, “equality of welfare,” tries to take account of those differences by means of twisty calculations.
  • Take the first path, and you willfully ignore meaningful facts about your children. Take the second, and you risk dividing the inheritance both unevenly and incorrectly.
  • ...33 more annotations...
  • In 2014, the Pew Research Center asked Americans to rank the “greatest dangers in the world.” A plurality put inequality first, ahead of “religious and ethnic hatred,” nuclear weapons, and environmental degradation. And yet people don’t agree about what, exactly, “equality” means.
  • One side argues that the city should guarantee procedural equality: it should insure that all students and families are equally informed about and encouraged to study for the entrance exam. The other side argues for a more direct, representation-based form of equality: it would jettison the exam, adopting a new admissions system designed to produce student bodies reflective of the city’s demography
  • In the past year, for example, New York City residents have found themselves in a debate over the city’s élite public high schools
  • The complexities of egalitarianism are especially frustrating because inequalities are so easy to grasp. C.E.O.s, on average, make almost three hundred times what their employees make; billionaire donors shape our politics; automation favors owners over workers; urban economies grow while rural areas stagnate; the best health care goes to the richest.
  • It’s not just about money. Tocqueville, writing in 1835, noted that our “ordinary practices of life” were egalitarian, too: we behaved as if there weren’t many differences among us. Today, there are “premiere” lines for popcorn at the movies and five tiers of Uber;
  • Inequality is everywhere, and unignorable. We’ve diagnosed the disease. Why can’t we agree on a cure?
  • In a book based on those lectures, “One Another’s Equals: The Basis of Human Equality,” Waldron points out that people are also marked by differences of skill, experience, creativity, and virtue. Given such consequential differences, he asks, in what sense are people “equal”?
  • According to the Declaration of Independence, it is “self-evident” that all men are created equal. But, from a certain perspective, it’s our inequality that’s self-evident.
  • More than twenty per cent of Americans, according to a 2015 poll, agree: they believe that the statement “All men are created equal” is false.
  • In Waldron’s view, though, it’s not a binary choice; it’s possible to see people as equal and unequal simultaneously. A society can sort its members into various categories—lawful and criminal, brilliant and not—while also allowing some principle of basic equality to circumscribe its judgments and, in some contexts, override them
  • Egalitarians like Dworkin and Waldron call this principle “deep equality.” It’s because of deep equality that even those people who acquire additional, justified worth through their actions—heroes, senators, pop stars—can still be considered fundamentally no better than anyone else.
  • In the course of his search, he explores centuries of intellectual history. Many thinkers, from Cicero to Locke, have argued that our ability to reason is what makes us equals.
  • Other thinkers, including Immanuel Kant, have cited our moral sense.
  • Some philosophers, such as Jeremy Bentham, have suggested that it’s our capacity to suffer that equalizes us
  • Waldron finds none of these arguments totally persuasive.
  • In various religious traditions, he observes, equality flows not just from broad assurances that we are all made in God’s image but from some sense that everyone is the protagonist in a saga of error, realization, and redemption: we’re equal because God cares about how things turn out for each of us.
  • Waldron himself is taken by Hannah Arendt’s related concept of “natality,” the notion that what each of us share is having been born as a “newcomer,” entering into history with “the capacity of beginning something anew, that is, of acting.”
  • equality may be not a self-evident fact about human beings but a human-made social construction that we must choose to put into practice.
  • In the end, Waldron concludes that there is no “small polished unitary soul-like substance” that makes us equal; there’s only a patchwork of arguments for our deep equality, collectively compelling but individually limited.
  • Equality is a composite idea—a nexus of complementary and competing intuitions.
  • The blurry nature of equality makes it hard to solve egalitarian dilemmas from first principles. In each situation, we must feel our way forward, reconciling our conflicting intuitions about what “equal” means.
  • The communities that have the easiest time doing that tend to have some clearly defined, shared purpose. Sprinters competing in a hundred-metre dash have varied endowments and train in different conditions; from a certain perspective, those differences make every race unfair.
  • By embracing an agreed-upon theory of equality before the race, the sprinters can find collective meaning in the ranked inequalities that emerge when it ends
  • Perhaps because necessity is so demanding, our egalitarian commitments tend to rest on a different principle: luck.
  • “Some people are blessed with good luck, some are cursed with bad luck, and it is the responsibility of society—all of us regarded collectively—to alter the distribution of goods and evils that arises from the jumble of lotteries that constitutes human life as we know it.” Anderson, in an influential coinage, calls this outlook “luck egalitarianism.”
  • This sort of artisanal egalitarianism is comparatively easy to arrange. Mass-producing it is what’s hard. A whole society can’t get together in a room to hash things out. Instead, consensus must coalesce slowly around broad egalitarian principles.
  • No principle is perfect; each contains hidden dangers that emerge with time. Many people, in contemplating the division of goods, invoke the principle of necessity: the idea that our first priority should be the equal fulfillment of fundamental needs. The hidden danger here becomes apparent once we go past a certain point of subsistence.
  • a core problem that bedevils egalitarianism—what philosophers call “the problem of expensive tastes.”
  • The problem—what feels like a necessity to one person seems like a luxury to another—is familiar to anyone who's argued with a foodie spouse or roommate about the grocery bill
  • The problem is so insistent that a whole body of political philosophy—“prioritarianism”—is devoted to the challenge of sorting people with needs from people with wants
  • the line shifts as the years pass. Medical procedures that seem optional today become necessities tomorrow; educational attainments that were once unusual, such as college degrees, become increasingly indispensable with time
  • Some thinkers try to tame the problem of expensive tastes by asking what a “normal” or “typical” person might find necessary. But it’s easy to define “typical” too narrowly, letting unfair assumptions influence our judgment
  • an odd feature of our social contract: if you’re fired from your job, unemployment benefits help keep you afloat, while if you stop working to have a child you must deal with the loss of income yourself. This contradiction, she writes, reveals an assumption that “the desire to procreate is just another expensive taste”; it reflects, she argues, the sexist presumption that “atomistic egoism and self-sufficiency” are the human norm. The word “necessity” suggests the idea of a bare minimum. In fact, it sets a high bar. Clearing it may require rethinking how society functions.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • he most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • , “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
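The law-of-large-numbers point in Nisbett's baseball example above can be shown with a short simulation: with few at bats, extreme averages are common; with many, the best average regresses toward true skill. All numbers here are illustrative, not taken from the survey.

```python
import random

def batting_average(true_skill, at_bats, rng):
    """Simulate at_bats plate appearances for a hitter whose true
    hit probability is true_skill; return the observed average."""
    hits = sum(rng.random() < true_skill for _ in range(at_bats))
    return hits / at_bats

rng = random.Random(42)
TRUE_SKILL = 0.270   # an assumed, typical hit probability
PLAYERS = 200

# ~30 at bats early in the season vs. a full-season workload
early = [batting_average(TRUE_SKILL, 30, rng) for _ in range(PLAYERS)]
full = [batting_average(TRUE_SKILL, 550, rng) for _ in range(PLAYERS)]

# Small samples routinely produce a .400+ "phenom"; large samples don't.
print(max(early))
print(max(full))
```

Running this, the best early-season average far exceeds the best full-season average, even though every simulated hitter has identical skill.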
Javier E

Lockheed Martin Harnesses Quantum Technology - NYTimes.com - 0 views

  • academic researchers and scientists at companies like Microsoft, I.B.M. and Hewlett-Packard have been working to develop quantum computers.
  • Lockheed Martin — which bought an early version of such a computer from the Canadian company D-Wave Systems two years ago — is confident enough in the technology to upgrade it to commercial scale, becoming the first company to use quantum computing as part of its business.
  • if it performs as Lockheed and D-Wave expect, the design could be used to supercharge even the most powerful systems, solving some science and business problems millions of times faster
  • ...8 more annotations...
  • quantum computing relies on the fact that subatomic particles inhabit a range of states. Different relationships among the particles may coexist, as well. Those probable states can be narrowed to determine an optimal outcome among a near-infinitude of possibilities, which allows certain types of problems to be solved rapidly.
  • “This is a revolution not unlike the early days of computing,” he said. “It is a transformation in the way computers are thought about.”
  • It could be possible, for example, to tell instantly how the millions of lines of software running a network of satellites would react to a solar burst or a pulse from a nuclear explosion — something that can now take weeks, if ever, to determine.
  • Mr. Brownell, who joined D-Wave in 2009, was until 2000 the chief technical officer at Goldman Sachs. “In those days, we had 50,000 servers just doing simulations” to figure out trading strategies, he said. “I’m sure there is a lot more than that now, but we’ll be able to do that with one machine, for far less money.”
  • If Microsoft’s work pans out, he said, the millions of possible combinations of the proteins in a human gene could be worked out “fairly easily.”
  • Quantum computing has been a goal of researchers for more than three decades, but it has proved remarkably difficult to achieve. The idea has been to exploit a property of matter in a quantum state known as superposition, which makes it possible for the basic elements of a quantum computer, known as qubits, to hold a vast array of values simultaneously.
  • There are a variety of ways scientists create the conditions needed to achieve superposition as well as a second quantum state known as entanglement, which are both necessary for quantum computing. Researchers have suspended ions in magnetic fields, trapped photons or manipulated phosphorus atoms in silicon.
  • In the D-Wave system, a quantum computing processor, made from a lattice of tiny superconducting wires, is chilled close to absolute zero. It is then programmed by loading a set of mathematical equations into the lattice. The processor then moves through a near-infinity of possibilities to determine the lowest energy required to form those relationships. That state, seen as the optimal outcome, is the answer.
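The "near-infinity of possibilities / lowest energy" description above corresponds, in classical terms, to minimizing a QUBO (quadratic unconstrained binary optimization) objective. A brute-force classical sketch with made-up coefficients — this illustrates only the form of the problem a D-Wave machine is loaded with, not the quantum hardware itself:

```python
import itertools

# Toy QUBO: find the bit assignment x minimizing
#   E(x) = sum_i h[i]*x[i] + sum_{i<j} J[i,j]*x[i]*x[j]
# h and J are illustrative values, not from any real problem.
h = {0: -1.0, 1: 0.5, 2: -0.5}                 # linear terms
J = {(0, 1): 1.0, (1, 2): -2.0, (0, 2): 0.5}   # coupling terms

def energy(x):
    e = sum(h[i] * x[i] for i in h)
    e += sum(J[i, j] * x[i] * x[j] for (i, j) in J)
    return e

# Exhaustively check all 2^3 assignments; an annealer searches this
# landscape physically instead of enumerating it.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # → (0, 1, 1) -2.0
```

Brute force works at 3 bits; at hundreds of bits the 2^n search space is exactly what makes the annealing approach attractive.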
Emily Horwitz

Proposed Brain Mapping Project Faces Significant Hurdles - NYTimes.com - 0 views

  • This article was very interesting. It at first described some of the disconnect between what we understand about scientists and what the scientists understand; in this case, the article argued that, while the 10-year grant to neuroscientific research seems great to the general public, it is extremely complicated to even begin mapping how our neurons interact. What I found the most intriguing though, was the fact that some scientists from UC San Francisco have found the exact part of the brain, as well as their mechanisms, that control our language function. The research concluded that for those who have lost their faculties of speech, by stroke or otherwise, could eventually speak again if a prosthetic was developed. In short, the article conveyed the idea that nothing is ever as simple as it seems; although we try to make advances in science, we often just wind up with a whole other set of problems to solve.
Javier E

A New Kind of Tutoring Aims to Make Students Smarter - NYTimes.com - 1 views

  • the goal is to improve cognitive skills. LearningRx is one of a growing number of such commercial services — some online, others offered by psychologists. Unlike traditional tutoring services that seek to help students master a subject, brain training purports to enhance comprehension and the ability to analyze and mentally manipulate concepts, images, sounds and instructions. In a word, it seeks to make students smarter.
  • “The average gain on I.Q. is 15 points after 24 weeks of training, and 20 points in less than 32 weeks.”
  • , “Our users have reported profound benefits that include: clearer and quicker thinking; faster problem-solving skills; increased alertness and awareness; better concentration at work or while driving; sharper memory for names, numbers and directions.”
  • ...8 more annotations...
  • “It used to take me an hour to memorize 20 words. Now I can learn, like, 40 new words in 20 minutes.”
  • “I don’t know if it makes you smarter. But when you get to each new level on the math and reading tasks, it definitely builds up your self-confidence.”
  • . “What you care about is not an intelligence test score, but whether your ability to do an important task has really improved. That’s a chain of evidence that would be really great to have. I haven’t seen it.”
  • Still,a new and growing body of scientific evidence indicates that cognitive training can be effective, including that offered by commercial services.
  • He looked at 340 middle-school students who spent two hours a week for a semester using LearningRx exercises in their schools’ computer labs and an equal number of students who received no such training. Those who played the online games, Dr. Hill found, not only improved significantly on measures of cognitive abilities compared to their peers, but also on Virginia’s annual Standards of Learning exam.
  • I’ve had some kids who not only reported that they had very big changes in the classroom, but when we bring them back in the laboratory to do neuropsychological testing, we also see great changes. They show increases that would be highly unlikely to happen just by chance.”
  • where crosswords and Sudoku are intended to be a diversion, the games here give that same kind of reward, only they’re designed to improve your brain, your memory, your problem-solving skills.”
  • More than 40 games are offered by Lumosity. One, the N-back, is based on a task developed decades ago by psychologists. Created to test working memory, the N-back challenges users to keep track of a continuously updated list and remember which item appeared “n” times ago.
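The N-back scoring rule described above is simple to state in code: flag each item that matches the one presented n steps earlier. A minimal sketch (the letter sequence is an arbitrary example, not from Lumosity):

```python
def n_back_targets(stream, n):
    """Return the indices where the current item matches the item
    presented n steps earlier -- the 'hits' an N-back player
    is supposed to report."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

# Example 2-back sequence: only the second H (index 4) matches
# the item shown two steps before it.
letters = list("TLHCHOCQLC")
print(n_back_targets(letters, 2))  # → [4]
```

The game's difficulty comes from doing this check continuously in working memory as n grows, not from the rule itself.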
Javier E

Archimedes - Separating Myth From Science - NYTimes.com - 0 views

  • A panoply of devices and ideas are named after Archimedes. Besides the Archimedes screw, there is the Archimedes principle, the law of buoyancy that states the upward force on a submerged object equals the weight of the liquid displaced. There is the Archimedes claw, a weapon that most likely did exist, grabbing onto Roman ships and tipping them over. And there is the Archimedes sphere, a forerunner of the planetarium — a hand-held globe that showed the constellations as well as the locations of the sun and the planets in the sky.
  • Dr. Rorres said the singular genius of Archimedes was that he not only was able to solve abstract mathematics problems, but also used mathematics to solve physics problems, and he then engineered devices to take advantage of the physics. “He came up with fundamental laws of nature, proved them mathematically and then was able to apply them,” Dr. Rorres said.
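The buoyancy law quoted above reduces to a one-line formula, F_b = ρ_fluid × V_displaced × g. A small worked example with assumed values for fresh water:

```python
# Archimedes' principle: the upward force on a submerged object
# equals the weight of the fluid it displaces.
RHO_WATER = 1000.0   # kg/m^3, fresh water (assumed)
G = 9.81             # m/s^2, standard gravity

def buoyant_force(volume_displaced_m3, fluid_density=RHO_WATER):
    """Upward force in newtons on a submerged object."""
    return fluid_density * volume_displaced_m3 * G

# A 2-litre (0.002 m^3) object fully submerged in water:
f = buoyant_force(0.002)
print(round(f, 2))  # → 19.62
```

An object floats when this force exceeds its own weight, which is why the displaced volume, not the object's material, decides buoyancy.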
mcginnisca

We Talked to One of the World Trade Center Bombers About ISIS and Mass Shootings | VICE... - 0 views

  • Eyad Ismoil is one of the half-dozen men convicted for carrying out the World Trade Center bombings in 1993
  • sentenced to 240 years in prison for driving a rental van packed with a bomb into a garage, killing six and injuring about 1000 more
  • for someone who's supposed to "hate the infidels," he shows no signs of loathing towards the many prisoners and staff who openly despise him.
  • ...14 more annotations...
  • "hate the infidels,"
  • when I first asked Ismoil about ISIS after the Paris attacks, he asked me one question back: "Why do you think they did it?" I responded with the only thing I knew: "They hate us."
  • He said that to resolve the conflicts between extremists in the Middle East and the West, it was important to talk "human to human," but he also made it clear that he empathizes at least somewhat with the Islamic State. Unsurprisingly, many of his views would be considered appalling to the vast majority of Americans, but our conversation gave me a window into the worldview of people who think the US is to blame for terrorism.
  • ISIS is not jihadists recruited from all over to fight. They are the Sunni Muslims that have lived through 25 years of wars, torture, and rapes. They are the Iraqi and Syrian people that have suffered from unjust wars started by the US government. And when the US government [mostly pulled out of] Iraq in 2010, the Shia and Maliki government started killing the Sunni day and night under the watch of the Americans.
  • You don't have to recruit people for ISIS. They're Muslims from all over the world that have seen an injustice after 25 years and want to help their brothers. What you have to understand is the Iraqi people are the most stubborn of the Muslim world. They won't accept occupation or humiliation.
  • People over in America ask why ISIS did this. [But] people in the Middle East ask, "Why is the US doing this to us?" Put yourself in their shoes—France is dropping bombs for a year in Iraq and [more recently] Syria, destroying everything, women, children, buildings... A bomb doesn't discriminate between ISIS or women and children—it just destroys.
  • Imagine the Iraq and Syrian people. After a year of bombing, you see your people killed, land destroyed, children scared to do anything more than hide in the corners all day. All this coming from bombs in the sky and you can't stop it. What would you do?
  • So, the question should be who is the first to be blamed? Tell both sides of the story.
  • My religion prohibits attacks on civilians. Unfortunately, many Muslims don't know much about Islam
  • What about the Planned Parenthood attack? What this man did is worse than what the doctors do. If this is what he's angry at, taking life, he did worse. Islam doesn't believe in abortion—all life is precious....[But] what he did was kill adult people who are grown. How is he trying to solve the issue?
  • For every action, there's a reaction. If you throw a ball against a wall, it's going to come back at you. If you throw a ball hard, it's going to come back at you hard. This is the problem with all sides in these wars. We hit you, you hit back. We hit you hard, you hit back harder. Back and forth, back and forth. Nobody wins. Both sides end up with death and destruction.
  • The Arabs are not radicalizing themselves. Your government action is radicalizing the Arabs
  • The only thing that keeps us just is Islam. Because in Islam, the peace, the justice, comes from the sky. The one who created earth and man, he knows best.
  • To solve the problem from the root, everyone has to become human. They need to talk, human to human. Let the people decide what they want. Leave them alone. Everyone can come together and say enough is enough. How long are we going to keep this action up? For the rest of our lives? It's the law of the jungle that we're living in right now. We were given more sense than this. We walk on two legs, with our heads high. But right now, we are walking with our heads down. We need to lift our heads up, and use the brains God created for us.
Javier E

The Bilingual Advantage - NYTimes.com - 0 views

  • We found that if you gave 5- and 6-year-olds language problems to solve, monolingual and bilingual children knew, pretty much, the same amount of language.
  • The bilinguals, we found, manifested a cognitive system with the ability to attend to important information and ignore the less important.
  • There’s a system in your brain, the executive control system. It’s a general manager. Its job is to keep you focused on what is relevant, while ignoring distractions. It’s what makes it possible for you to hold two different things in your mind at one time and switch between them. If you have two languages and you use them regularly, the way the brain’s networks work is that every time you speak, both languages pop up and the executive control system has to sort through everything and attend to what’s relevant in the moment. Therefore the bilinguals use that system more, and it’s that regular use that makes that system more efficient.
  • ...5 more annotations...
  • we found that normally aging bilinguals had better cognitive functioning than normally aging monolinguals. Bilingual older adults performed better than monolingual older adults on executive control tasks.
  • On average, the bilinguals showed Alzheimer’s symptoms five or six years later than those who spoke only one language. This didn’t mean that the bilinguals didn’t have Alzheimer’s. It meant that as the disease took root in their brains, they were able to continue functioning at a higher level. They could cope with the disease for longer.
  • You have to use both languages all the time. You won’t get the bilingual benefit from occasional use.
  • Q. One would think bilingualism might help with multitasking — does it? A. Yes, multitasking is one of the things the executive control system handles.
  • One of the things we’ve seen is that on certain kinds of even nonverbal tests, bilingual people are faster. Why? Well, when we look in their brains through neuroimaging, it appears like they’re using a different kind of a network that might include language centers to solve a completely nonverbal problem. Their whole brain appears to rewire because of bilingualism.