TOK Friends / Group items matching "possible" in title, tags, annotations or URL

Here's what the government's dietary guidelines should really say - The Washington Post - 0 views

  • If I were writing the dietary guidelines, I would give them a radical overhaul. I’d go so far as to radically overhaul the way we evaluate diet. Here’s why and how.
  • Lately, as scientists try, and fail, to reproduce results, all of science is taking a hard look at funding biases, statistical shenanigans and groupthink. All that criticism, and then some, applies to nutrition.
  • Prominent in the charge to change the way we do science is John Ioannidis, professor of health research and policy at Stanford University. In 2005, he published “Why Most Published Research Findings Are False” in the journal PLOS Medicine.
  • ...15 more annotations...
  • He came down hard on nutrition in a pull-no-punches 2013 British Medical Journal editorial titled, “Implausible results in human nutrition research,” in which he noted, “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome.”
  • Ioannidis told me that sussing out the connection between diet and health — nutritional epidemiology — is enormously challenging, and “the tools that we’re throwing at the problem are not commensurate with the complexity and difficulty of the problem.” The biggest of those tools is observational research, in which we collect data on what people eat, and track what happens to them.
  • He lists plant-based foods — fruit, veg, whole grains, legumes — but acknowledges that we don’t understand enough to prescribe specific combinations or numbers of servings.
  • funding bias isn’t the only kind. “Fanatical opinions abound in nutrition,” Ioannidis wrote in 2013, and those have bias power too.
  • “Definitive solutions won’t come from another million observational papers or small randomized trials,” reads the subtitle of Ioannidis’s paper. His is a burn-down-the-house ethos.
  • When it comes to actual dietary recommendations, the disagreement is stark. “Ioannidis and others say we have no clue, the science is so bad that we don’t know anything,” Hu told me. “I think that’s completely bogus. We know a lot about the basic elements of a healthy diet.”
  • Give tens of thousands of people a food-frequency questionnaire (FFQ), and you end up with a ginormous repository of possible correlations. You can zero in on a vitamin, macronutrient or food, and go to town. But not only are you starting with flawed data, you’ve got a zillion possible confounding variables — dietary, demographic, socioeconomic. I’ve heard statisticians call it “noise mining,” and Ioannidis is equally skeptical. “With this type of data, you can get any result you want,” he said. “You can align it to your beliefs.” (A small simulation of this noise-mining effect appears after these annotations.)
  • Big differences in what people eat track with other differences. Heavy plant-eaters are different from, say, heavy meat-eaters in all kinds of ways (income, education, physical activity, BMI). Red meat consumption correlates with increased risk of dying in an accident as much as dying from heart disease. The amount of faith we put in observational studies is a judgment call.
  • I find myself in Ioannidis’s camp. What have we learned, unequivocally enough to build a consensus in the nutrition community, about how diet affects health? Well, trans-fats are bad.
  • Over and over, large population studies get sliced and diced, and it’s all but impossible to figure out what’s signal and what’s noise. Researchers try to do that with controlled trials to test the connections, but those have issues too. They’re expensive, so they’re usually small and short-term. People have trouble sticking to the diet being studied. And scientists are generally looking for what they call “surrogate endpoints,” like increased cholesterol rather than death from heart disease, since it’s impractical to keep a trial going until people die.
  • So, what do we do? Hu and Ioannidis actually have similar suggestions. For starters, they both think we should be looking at dietary patterns rather than single foods or nutrients. They also both want to look across the data sets. Ioannidis emphasizes transparency. He wants to open data to the world and analyze all the data sets in the same way to see if “any signals survive.” Hu is more cautious (partly to safeguard confidentiality).
  • I have a suggestion. Let’s give up on evidence-based eating. It’s given us nothing but trouble and strife. Our tools can’t find any but the most obvious links between food and health, and we’ve found those already.
  • Instead, let’s acknowledge the uncertainty and eat to hedge against what we don’t know
  • We’ve got two excellent hedges: variety and foods with nutrients intact (which describes such diets as the Mediterranean, touted by researchers). If you severely limit your foods (vegan, keto), you might miss out on something. Ditto if you eat foods with little nutritional value (sugar, refined grains). Oh, and pay attention to the two things we can say with certainty: Keep your weight down, and exercise.
  • I used to say I could tell you everything important about diet in 60 seconds. Over the years, my spiel got shorter and shorter as truisms fell by the wayside, and my confidence waned in a field where we know less, rather than more, over time. I’m down to five seconds now: Eat a wide variety of foods with their nutrients intact, keep your weight down and get some exercise.
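A minimal sketch (not from the article) of the “noise mining” problem described in the annotations above: give a simulated population a food-frequency questionnaire in which every “nutrient” is pure random noise, and a fraction of the nutrients will still correlate “significantly” with a random health outcome. The survey size and nutrient count below are illustrative assumptions, not real FFQ data.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_people, n_nutrients = 10_000, 200       # hypothetical survey, not a real FFQ
    intakes = rng.normal(size=(n_people, n_nutrients))   # every "nutrient" is pure noise
    outcome = rng.normal(size=n_people)                   # the "health outcome" is noise too

    false_hits = sum(
        pearsonr(intakes[:, j], outcome)[1] < 0.05        # "significant" at p < 0.05 by chance
        for j in range(n_nutrients)
    )
    print(f"{false_hits} of {n_nutrients} nutrients correlate 'significantly' with the outcome")
    # Expect roughly 5% (about 10 of 200) even though there is nothing to find; real data adds
    # measurement error and confounding on top of this baseline false-discovery rate.

With thousands of candidate variables instead of 200, chance findings multiply accordingly, which is the sense in which “you can get any result you want.”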

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol... - 0 views

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • ...43 more annotations...
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform after three years struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.”
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day.”
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who exploit the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”

Older Americans Are 'Hooked' on Vitamins - The New York Times - 1 views

  • When she was a young physician, Dr. Martha Gulati noticed that many of her mentors were prescribing vitamin E and folic acid to patients. Preliminary studies in the early 1990s had linked both supplements to a lower risk of heart disease. She urged her father to pop the pills as well: “Dad, you should be on these vitamins, because every cardiologist is taking them or putting their patients on [them],” recalled Dr. Gulati, now chief of cardiology for the University of Arizona College of Medicine-Phoenix.
  • But just a few years later, she found herself reversing course, after rigorous clinical trials found neither vitamin E nor folic acid supplements did anything to protect the heart. Even worse, studies linked high-dose vitamin E to a higher risk of heart failure, prostate cancer and death from any cause.
  • More than half of Americans take vitamin supplements, including 68 percent of those age 65 and older, according to a 2013 Gallup poll. Among older adults, 29 percent take four or more supplements of any kind
  • ...20 more annotations...
  • Often, preliminary studies fuel irrational exuberance about a promising dietary supplement, leading millions of people to buy in to the trend. Many never stop. They continue even though more rigorous studies — which can take many years to complete — almost never find that vitamins prevent disease, and in some cases cause harm
  • There’s no conclusive evidence that dietary supplements prevent chronic disease in the average American, Dr. Manson said. And while a handful of vitamin and mineral studies have had positive results, those findings haven’t been strong enough to recommend supplements to the general American public, she said.
  • The National Institutes of Health has spent more than $2.4 billion since 1999 studying vitamins and minerals. Yet for “all the research we’ve done, we don’t have much to show for it,” said Dr. Barnett Kramer, director of cancer prevention at the National Cancer Institute.
  • A big part of the problem, Dr. Kramer said, could be that much nutrition research has been based on faulty assumptions, including the notion that people need more vitamins and minerals than a typical diet provides; that megadoses are always safe; and that scientists can boil down the benefits of vegetables like broccoli into a daily pill.
  • when researchers tried to deliver the key ingredients of a healthy diet in a capsule, Dr. Kramer said, those efforts nearly always failed.
  • It’s possible that the chemicals in the fruits and vegetables on your plate work together in ways that scientists don’t fully understand — and which can’t be replicated in a tablet.
  • More important, perhaps, is that most Americans get plenty of the essentials, anyway. Although the Western diet has a lot of problems — too much sodium, sugar, saturated fat and calories, in general — it’s not short on vitamins
  • Without even realizing it, someone who eats a typical lunch or breakfast “is essentially eating a multivitamin,”
  • The body naturally regulates the levels of many nutrients, such as vitamin C and many B vitamins, Dr. Kramer said, by excreting what it doesn’t need in urine. He added: “It’s hard to avoid getting the full range of vitamins.”
  • Not all experts agree. Dr. Walter Willett, a professor at the Harvard T.H. Chan School of Public Health, says it’s reasonable to take a daily multivitamin “for insurance.” Dr. Willett said that clinical trials underestimate supplements’ true benefits because they aren’t long enough, often lasting five to 10 years. It could take decades to notice a lower rate of cancer or heart disease in vitamin takers.
  • For Charlsa Bentley, 67, keeping up with the latest nutrition research can be frustrating. She stopped taking calcium, for example, after studies found it doesn’t protect against bone fractures. Additional studies suggest that calcium supplements increase the risk of kidney stones and heart disease.
  • People who take vitamins tend to be healthier, wealthier and better educated than those who don’t, Dr. Kramer said. They are probably less likely to succumb to heart disease or cancer, whether they take supplements or not. That can skew research results, making vitamin pills seem more effective than they really are
  • Because folic acid can lower homocysteine levels, researchers once hoped that folic acid supplements would prevent heart attacks and strokes. In a series of clinical trials, folic acid pills lowered homocysteine levels but had no overall benefit for heart disease, Dr. Lichtenstein said.
  • When studies of large populations showed that people who eat lots of seafood had fewer heart attacks, many assumed that the benefits came from the omega-3 fatty acids in fish oil, Dr. Lichtenstein said. Rigorous studies have failed to show that fish oil supplements prevent heart attacks.
  • But it’s possible the benefits of sardines and salmon have nothing to do with fish oil, Dr. Lichtenstein said. People who have fish for dinner may be healthier as a result of what they don’t eat, such as meatloaf and cheeseburgers.
  • “Eating fish is probably a good thing, but we haven’t been able to show that taking fish oil [supplements] does anything for you.”
  • In the tiny amounts provided by fruits and vegetables, beta carotene and similar substances appear to protect the body from a process called oxidation, which damages healthy cells, said Dr. Edgar Miller, a professor of medicine at Johns Hopkins School of Medicine. Experts were shocked when two large, well-designed studies in the 1990s found that beta carotene pills actually increased lung cancer rates.
  • Likewise, a clinical trial published in 2011 found that vitamin E, also an antioxidant, increased the risk of prostate cancer in men by 17 percent
  • “Vitamins are not inert,” said Dr. Eric Klein, a prostate cancer expert at the Cleveland Clinic who led the vitamin E study. “They are biologically active agents. We have to think of them in the same way as drugs. If you take too high a dose of them, they cause side effects.”
  • “We should be responsible physicians,” she said, “and wait for the data.”

Praising Andy Warhol - NYTimes.com - 1 views

  • Peter Schjeldahl, for example, calls Warhol a “genius” and a “great artist” and even says that “the gold standard of Warhol exposes every inflated value in other currencies.”
  •   If Warhol is a great artist and these boxes are among his most important works, what am I missing?
  • Appreciations of Warhol’s boxes typically emphasize their effects rather than their appearance.  These appreciations take two quite different forms.
  • ...9 more annotations...
  • Warhol’s boxes are praised for subverting the distinction between mundane objects of everyday life and “art” in a museum.  As a result, we can enjoy and appreciate the things that make up our everyday life just as much as what we see in museums (and with far less effort).  Whereas the joys of traditional art typically require an initiation into an esoteric world of historical information and refined taste, Warhol’s “Pop Art” reveals the joys of what we all readily understand and appreciate.  As Danto put it, “Warhol’s intuition was that nothing an artist could do would give us more of what art sought than reality already gave us.”
  • Warhol’s work is also praised for posing a crucial philosophical question about art.  As Danto puts it: “Given two objects that look exactly alike, how is it possible for one of them to be a work of art and the other just an ordinary object?”  Answering this question requires realizing that there are no perceptual qualities that make something a work of art.  This in turn implies that anything, no matter how it looks, can be a work of art.
  • According to Danto, whether an object is a work of art depends on its relation to an “art world”:  “an atmosphere of artistic theory, a knowledge of the history of art” that exists at a particular time.
  • this explanation of Warhol’s greatness, contrary to the first one, makes art appreciation once again a matter of esoteric knowledge and taste, now focused on subtle philosophical puzzles about the nature of art.
  • it was Danto, not Warhol, who provided the intellectual/aesthetic excitement by formulating and developing a brilliant answer to the question.  To the extent that the philosophical question had artistic value in the context of the contemporary artworld,  Danto was more the artist than Warhol.
  • I agree that Warhol — along with many other artists from the 1950s on — opened up new ways of making art that traditional “high art” had excluded.  But new modes of artistic creation — commercial design techniques, performances, installations, conceptual art — do not guarantee a new kind or a higher quality of aesthetic experience. 
  • anything can be presented as a work of art.   But it does not follow that anything can produce a satisfying aesthetic experience.  The great works of the tradition do not circumscribe the sorts of things that can be art, but they are exemplars of what we expect a work of art to do to us.  (This is the sense in which, according to Kant, originally beautiful works of art are exemplary, yet without providing rules for further such works of art.)
  • Praise of Andy Warhol often emphasizes the new possibilities of artistic creation his work opened up.  That would make his work important in the history of art and for that reason alone of considerable interest.
  • as Jerrold Levinson and others have pointed out, a work can be an important artistic achievement without being an important aesthetic achievement.  This, I suggest, is how we should think about Warhol’s Brillo boxes.

Opinion | A.I. Is Harder Than You Think - The New York Times - 1 views

  • The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.
  • The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules. (A toy grammar illustrating this combinatorial growth appears after these annotations.)
  • A genuine, human-level A.I. will need to be able to cope with all of those possible sentences, not just a small fragment of them.
  • ...3 more annotations...
  • No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
  • Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs.
  • That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
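The point above about a modest vocabulary plus a few recursive rules generating an unbounded set of sentences can be made concrete with a toy grammar. The sketch below is not from the op-ed; the four-word vocabulary and the recursive prepositional-phrase rule are illustrative assumptions.

    # Toy grammar: S -> NP VP, NP -> "the dog" | "the cat" | NP PP,
    #              VP -> "saw" NP | VP PP, PP -> "with" NP
    def noun_phrases(depth):
        """Noun phrases with at most `depth` levels of nested "with ..." attachments."""
        phrases = {"the dog", "the cat"}
        if depth > 0:
            phrases |= {f"{a} {b}" for a in noun_phrases(depth - 1) for b in prep_phrases(depth - 1)}
        return phrases

    def prep_phrases(depth):
        """Prepositional phrases: "with" followed by a noun phrase."""
        return {f"with {a}" for a in noun_phrases(depth)}

    def verb_phrases(depth):
        """Verb phrases: "saw" plus a noun phrase, with optional "with ..." attachments."""
        phrases = {f"saw {a}" for a in noun_phrases(depth)}
        if depth > 0:
            phrases |= {f"{a} {b}" for a in verb_phrases(depth - 1) for b in prep_phrases(depth - 1)}
        return phrases

    for depth in range(4):
        # Every noun-phrase / verb-phrase pair yields a distinct sentence (a noun phrase never
        # contains "saw"), so the sentence count is the product of the two set sizes.
        print(depth, len(noun_phrases(depth)) * len(verb_phrases(depth)))
    # Prints: 0 4, 1 36, 2 900, 3 260100. Four words and a couple of recursive rules, and the
    # number of grammatical sentences grows without bound as the allowed depth increases.

A system that merely memorizes observed word patterns covers only a sliver of this space, which is the sense in which data alone cannot match the combinatorial reach of language.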

Jared Diamond: We Could Be Living in a New Stone Age by 2114 - Mother Jones - 0 views

  • Either by the year 2050 we’ve succeeded in developing a sustainable economy, in which case we can then ask your question about 100 years from now, because there will be 100 years from now; or by 2050 we’ve failed to develop a sustainable economy, which means that there will no longer be first world living conditions, and there either won’t be humans 100 years from now, or those humans 100 years from now will have lifestyles similar to those of Cro-Magnons 40,000 years ago, because we’ve already stripped away the surface copper and the surface iron. If we knock ourselves out of the first world, we’re not going to be able to rebuild a first world.
  • It all depends, he says, on where we are at 2050:
  • Not everybody agrees with Diamond that we’re in such a perilous state, of course. But there is perhaps no more celebrated chronicler of why civilizations rise, and why they fall. That is, after all, why we read him. So when Diamond says we’ve got maybe 50 years to turn it around, we should at least consider the possibility that he might actually be right. For if he is, the consequences are so intolerable that anything possible should be done to avert them.

What Does Quantum Physics Actually Tell Us About the World? - The New York Times - 2 views

  • The physics of atoms and their ever-smaller constituents and cousins is, as Adam Becker reminds us more than once in his new book, “What Is Real?,” “the most successful theory in all of science.” Its predictions are stunningly accurate, and its power to grasp the unseen ultramicroscopic world has brought us modern marvels.
  • But there is a problem: Quantum theory is, in a profound way, weird. It defies our common-sense intuition about what things are and what they can do.
  • Indeed, Heisenberg said that quantum particles “are not as real; they form a world of potentialities or possibilities rather than one of things or facts.”
  • ...19 more annotations...
  • Before he died, Richard Feynman, who understood quantum theory as well as anyone, said, “I still get nervous with it...I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” The problem is not with using the theory — making calculations, applying it to engineering tasks — but in understanding what it means. What does it tell us about the world?
  • From one point of view, quantum physics is just a set of formalisms, a useful tool kit. Want to make better lasers or transistors or television sets? The Schrödinger equation is your friend. The trouble starts only when you step back and ask whether the entities implied by the equation can really exist. Then you encounter problems that can be described in several familiar ways:
  • Wave-particle duality. Everything there is — all matter and energy, all known forces — behaves sometimes like waves, smooth and continuous, and sometimes like particles, rat-a-tat-tat. Electricity flows through wires, like a fluid, or flies through a vacuum as a volley of individual electrons. Can it be both things at once?
  • The uncertainty principle. Werner Heisenberg famously discovered that when you measure the position (let’s say) of an electron as precisely as you can, you find yourself more and more in the dark about its momentum. And vice versa. You can pin down one or the other but not both. (The standard quantitative form of this trade-off is noted after these annotations.)
  • The measurement problem. Most of quantum mechanics deals with probabilities rather than certainties. A particle has a probability of appearing in a certain place. An unstable atom has a probability of decaying at a certain instant. But when a physicist goes into the laboratory and performs an experiment, there is a definite outcome. The act of measurement — observation, by someone or something — becomes an inextricable part of the theory
  • The strange implication is that the reality of the quantum world remains amorphous or indefinite until scientists start measuring
  • Other interpretations rely on “hidden variables” to account for quantities presumed to exist behind the curtain.
  • This is disturbing to philosophers as well as physicists. It led Einstein to say in 1952, “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.”
  • “Figuring out what quantum physics is saying about the world has been hard,” Becker says, and this understatement motivates his book, a thorough, illuminating exploration of the most consequential controversy raging in modern science.
  • In a way, the Copenhagen interpretation is an anti-interpretation. “It is wrong to think that the task of physics is to find out how nature is,” Bohr said. “Physics concerns what we can say about nature.”
  • Nothing is definite in Bohr’s quantum world until someone observes it. Physics can help us order experience but should not be expected to provide a complete picture of reality. The popular four-word summary of the Copenhagen interpretation is: “Shut up and calculate!”
  • Becker sides with the worriers. He leads us through an impressive account of the rise of competing interpretations, grounding them in the human stories
  • He makes a convincing case that it’s wrong to imagine the Copenhagen interpretation as a single official or even coherent statement. It is, he suggests, a “strange assemblage of claims.”
  • An American physicist, David Bohm, devised a radical alternative at midcentury, visualizing “pilot waves” that guide every particle, an attempt to eliminate the wave-particle duality.
  • Competing approaches to quantum foundations are called “interpretations,” and nowadays there are many. The first and still possibly foremost of these is the so-called Copenhagen interpretation.
  • Perhaps the most popular lately — certainly the most talked about — is the “many-worlds interpretation”: Every quantum event is a fork in the road, and one way to escape the difficulties is to imagine, mathematically speaking, that each fork creates a new universe
  • if you think the many-worlds idea is easily dismissed, plenty of physicists will beg to differ. They will tell you that it could explain, for example, why quantum computers (which admittedly don’t yet quite exist) could be so powerful: They would delegate the work to their alter egos in other universes.
  • When scientists search for meaning in quantum physics, they may be straying into a no-man’s-land between philosophy and religion. But they can’t help themselves. They’re only human.
  • “If you were to watch me by day, you would see me sitting at my desk solving Schrödinger’s equation...exactly like my colleagues,” says Sir Anthony Leggett, a Nobel Prize winner and pioneer in superfluidity. “But occasionally at night, when the full moon is bright, I do what in the physics community is the intellectual equivalent of turning into a werewolf: I question whether quantum mechanics is the complete and ultimate truth about the physical universe.”
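A note on the uncertainty-principle annotation above, stating the trade-off quantitatively (a standard textbook relation, not a quotation from the review): for position and momentum, $\sigma_x \, \sigma_p \ge \hbar/2$. The product of the two measurement uncertainties can never fall below half the reduced Planck constant, so the more sharply one is pinned down, the more the other must blur.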

Consciousness Isn't a Mystery. It's Matter. - The New York Times - 3 views

  • Every day, it seems, some verifiably intelligent person tells us that we don’t know what consciousness is. The nature of consciousness, they say, is an awesome mystery. It’s the ultimate hard problem. The current Wikipedia entry is typical: Consciousness “is the most mysterious aspect of our lives”; philosophers “have struggled to comprehend the nature of consciousness.”
  • I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
  • The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious except insofar as consciousness is itself a form of physical stuff.
  • ...12 more annotations...
  • “We know nothing about the intrinsic quality of physical events,” he wrote, “except when these are mental events that we directly experience.”
  • I think Russell is right: Human conscious experience is wholly a matter of physical goings-on in the body and in particular the brain. But why does he say that we know nothing about the intrinsic quality of physical events except when these are mental events we directly experience? Isn’t he exaggerating? I don’t think so
  • I need to try to reply to those (they’re probably philosophers) who doubt that we really know what conscious experience is.The reply is simple. We know what conscious experience is because the having is the knowing: Having conscious experience is knowing what it is. You don’t have to think about it (it’s really much better not to). You just have to have it. It’s true that people can make all sorts of mistakes about what is going on when they have experience, but none of them threaten the fundamental sense in which we know exactly what experience is just in having it.
  • If someone continues to ask what it is, one good reply (although Wittgenstein disapproved of it) is “you know what it is like from your own case.” Ned Block replies by adapting the response Louis Armstrong reportedly gave to someone who asked him what jazz was: “If you gotta ask, you ain’t never going to know.”
  • So we all know what consciousness is. Once we’re clear on this we can try to go further, for consciousness does of course raise a hard problem. The problem arises from the fact that we accept that consciousness is wholly a matter of physical goings-on, but can’t see how this can be so. We examine the brain in ever greater detail, using increasingly powerful techniques like fMRI, and we observe extraordinarily complex neuroelectrochemical goings-on, but we can’t even begin to understand how these goings-on can be (or give rise to) conscious experiences.
  • Think of the 1966 movie “Fantastic Voyage,” or imagine the ultimate brain scanner. Leibniz continued, “Suppose we do: visiting its insides, we will never find anything but parts pushing each other — never anything that could explain a conscious state.”
  • His mistake is to go further, and conclude that physical goings-on can’t possibly be conscious goings-on. Many make the same mistake today — the Very Large Mistake (as Winnie-the-Pooh might put it) of thinking that we know enough about the nature of physical stuff to know that conscious experience can’t be physical. We don’t. We don’t know the intrinsic nature of physical stuff, except — Russell again — insofar as we know it simply through having a conscious experience.
  • We find this idea extremely difficult because we’re so very deeply committed to the belief that we know more about the physical than we do, and (in particular) know enough to know that consciousness can’t be physical. We don’t see that the hard problem is not what consciousness is, it’s what matter is — what the physical is.
  • This point about the limits on what physics can tell us is rock solid, and it arises before we begin to consider any of the deep problems of understanding that arise within physics — problems with “dark matter” or “dark energy,” for example — or with reconciling quantum mechanics and general relativity theory.
  • Those who make the Very Large Mistake (of thinking they know enough about the nature of the physical to know that consciousness can’t be physical) tend to split into two groups. Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become “dualists.” Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become “eliminativists.”
  • no one has to react in either of these ways. All they have to do is grasp the fundamental respect in which we don’t know the intrinsic nature of physical stuff in spite of all that physics tells us. In particular, we don’t know anything about the physical that gives us good reason to think that consciousness can’t be wholly physical. It’s worth adding that one can fully accept this even if one is unwilling to agree with Russell that in having conscious experience we thereby know something about the intrinsic nature of physical reality.
  • It’s not the physics picture of matter that’s the problem; it’s the ordinary everyday picture of matter. It’s ironic that the people who are most likely to doubt or deny the existence of consciousness (on the ground that everything is physical, and that consciousness can’t possibly be physical) are also those who are most insistent on the primacy of science, because it is precisely science that makes the key point shine most brightly: the point that there is a fundamental respect in which ultimate intrinsic nature of the stuff of the universe is unknown to us — except insofar as it is consciousness.

Sex, Morality, and Modernity: Can Immanuel Kant Unite Us? - The Atlantic - 1 views

  • Before I jump back into the conversation about sexual ethics that has unfolded on the Web in recent days, inspired by Emily Witt's n+1 essay "What Do You Desire?" and featuring a fair number of my favorite writers, it's worth saying a few words about why I so value debate on this subject, and my reasons for running through some sex-life hypotheticals near the end of this article.
  • As we think and live, the investment required to understand one another increases. So do the stakes of disagreeing. 18-year-olds on the cusp of leaving home for the first time may disagree profoundly about how best to live and flourish, but the disagreements are abstract. It is easy, at 18, to express profound disagreement with, say, a friend's notions of child-rearing. To do so when he's 28, married, and raising a son or daughter is delicate, and perhaps best avoided
  • I have been speaking of friends. The gulfs that separate strangers can be wider and more difficult to navigate because there is no history of love and mutual goodwill as a foundation for trust. Less investment has been made, so there is less incentive to persevere through the hard parts.
  • ...27 more annotations...
  • I've grown very close to new people whose perspectives are radically different than mine.
  • It floors me: These individuals are all repositories of wisdom. They've gleaned it from experiences I'll never have, assumptions I don't share, and brains wired different than mine. I want to learn what they know.
  • Does that get us anywhere? A little ways, I think.
  • "Are we stuck with a passé traditionalism on one hand, and total laissez-faire on the other?" Is there common ground shared by the orthodox-Christian sexual ethics of a Rod Dreher and those who treat consent as their lodestar?
  • Gobry suggests that Immanuel Kant provides a framework everyone can and should embrace, wherein consent isn't nearly enough to make a sexual act moral--we must, in addition, treat the people in our sex lives as ends, not means.
  • Here's how Kant put it: "Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end."
  • the disappearance of a default sexual ethic in America and the divergence of our lived experiences means we have more to learn from one another than ever, even as our different choices raise the emotional stakes.
  • Nor does it seem intuitively obvious that a suffering, terminally ill 90-year-old is regarding himself as a means, or an object, if he prefers to end his life with a lethal injection rather than waiting three months in semi-lucid agony for his lungs to slowly shut down and suffocate him. (Kant thought suicide impermissible.) The terminally ill man isn't denigrating his own worth or the preciousness of life or saying it's permissible "any time" it is difficult. He believes ending his life is permissible only because the end is nigh, and the interim affords no opportunity for "living" in anything except a narrow biological sense.
  • It seems to me that, whether we're talking about a three-week college relationship or a 60-year marriage, it is equally possible to treat one's partner as a means or as an end (though I would agree that "treating as means" is more common in hookups than marriage)
  • my simple definition is this: It is wrong to treat human persons in such a way that they are reduced to objects. This says nothing about consent: a person may consent to be used as an object, but it is still wrong to use them that way. It says nothing about utility: society may approve of using some people as objects; whether those people are actual slaves or economically oppressed wage-slaves it is still wrong to treat them like objects. What it says, in fact, is that human beings have intrinsic worth and dignity such that treating them like objects is wrong.
  • what it means to treat someone as a means, or as an object, turns out to be in dispute.
  • Years ago, I interviewed a sister who was acting as a surrogate for a sibling who couldn't carry her own child. The notion that either regarded the other (or themselves) as an object seems preposterous to me. Neither was treating the other as a means, because they both freely chose, desired and worked in concert to achieve the same end.
  • It seems to me that the Kantian insight is exactly the sort of challenge traditionalist Christians should make to college students as they try to persuade them to look more critically at hookup culture. I think a lot of college students casually mislead one another about their intentions and degree of investment, feigning romantic interest when actually they just want to have sex. Some would say they're transgressing against consent. I think Kant has a more powerful challenge. 
  • Ultimately, Kant only gets us a little way in this conversation because, outside the realm of sex, he thinks consent goes a long way toward mitigating the means problem, whereas in the realm of sex, not so much. This is inseparable from notions he has about sex that many of us just don't share.
  • two Biblical passages fit my moral intuition even better than Kant. "Love your neighbor as yourself." And "therefore all things whatsoever ye would that men should do to you, do ye even so to them."
  • "do unto others..." is extremely demanding, hard to live up to, and a very close fit with my moral intuitions.
  • "Do unto others" is also enough to condemn all sorts of porn, and to share all sorts of common ground with Dreher beyond consent. Interesting that it leaves us with so many disagreements too. "Do unto others" is core to my support for gay marriage.
  • Are our bones always to be trusted? The sexual behavior parents would be mortified by is highly variable across time and cultures. So how can I regard it as a credible guide of inherent wrong? Professional football and championship boxing are every bit as violent and far more physically damaging to their participants than that basement scene, yet their cultural familiarity is such that most people don't feel them to be morally suspect. Lots of parents are proud, not mortified, when a son makes the NFL.
  • "Porn operates in fantasy the way boxing and football operate in fantasy. The injuries are quite real." He is, as you can see, uncomfortable with both. Forced at gunpoint to choose which of two events could proceed on a given night, an exact replica of the San Francisco porn shoot or an Ultimate Fighting Championship tournament--if I had to shut one down and grant the other permission to proceed--what would the correct choice be?
  • insofar as there is something morally objectionable here, it's that the audience is taking pleasure in the spectacle of someone being abused, whether that abuse is fact or convincing illusion. Violent sports and violent porn interact with dark impulses in humanity, as their producers well know.
  • If Princess Donna was failing to "do unto others" at all, the audience was arguably who she failed. Would she want others to entertain her by stoking her dark human impulses? Then again, perhaps she is helping to neuter and dissipate them in a harmless way. That's one theory of sports, isn't it? We go to war on the gridiron as a replacement for going to war? And the rise in violent porn has seemed to coincide with falling, not rising, incidence of sexual violence. 
  • On all sorts of moral questions I can articulate confident judgments. But I am confident in neither my intellect nor my gut when it comes to judging Princess Donna, or whether others are transgressing against themselves or "nature" when doing things that I myself wouldn't want to do. Without understanding their mindset, why they find that thing desirable, or what it costs them, if anything, I am loath to declare that it's grounded in depravity or inherently immoral just because it triggers my disgust instinct, especially if the people involved articulate a plausible moral code that they are following, and it even passes a widely held standard like "do unto others."
  • Here's another way to put it. Asked to render moral judgments about sexual behaviors, there are some I would readily label as immoral. (Rape is an extreme example. Showing the topless photo your girlfriend sent to your best friend is a milder one.) But I often choose to hold back and err on the side of not rendering a definitive judgment, knowing that occasionally means I'll fail to label as unethical some things that actually turn out to be morally suspect.
  • Partly I take that approach because, unlike Dreher, I don't see any great value or urgency in the condemnations, and unlike Douthat, I worry more about wrongful stigma than lack of rightful stigmas
  • In a society where notions of sexual morality aren't coercively enforced by the church or the state, what purpose is condemnation serving?
  • People are great! Erring on the side of failing to condemn permits at least the possibility of people from all of these world views engaging in conversation with one another.
  • Dreher worries about the fact that, despite our discomfort, neither Witt nor I can bring ourselves to say that the sexual acts performed during the S.F. porn shoot were definitely wrong. Does that really matter? My interlocutors perhaps see a cost more clearly than me, as well they might. My bias is that just arguing around the fire is elevating.

Manterruption is a Thing, and Now There is an App to Detect it in Daily Conversation | ... - 0 views

  • Introducing our word of the day – “manterruption”. It’s a pretty self-explanatory term, describing a behavior when men interrupt women unnecessarily, which leads to a pretty serious imbalance in the amount of female vs. male contributions in a conversation.
  • A 2004 study on gender issues at Harvard Law School found that men were 50% more likely than women to volunteer at least one comment during class and 144% more likely to volunteer three or more comments. 
  • which as a consequence leaves decision-making mostly to men.
  • ...4 more annotations...
  • Meaning, women’s voices bring a different and valuable perspective in a conversation and should be heard more.
  • Here's the thing, though: while fighting for the cause of hearing the female perspective equally in all matters of business, government, and life is definitely worthwhile, blaming it all on interrupting men doesn’t seem fair. Because it is not just men who interrupt women, women do it too. As a matter of fact, a study done in a tech company showed that 87% of the time that women interrupt, they are interrupting other women.
  • There are also other dynamics at play, for example, seniority. It is still more likely that men will hold a more senior position in a professional environment and, generally, people with a higher rank tend to interrupt more and be interrupted less.
  • Hearing the voices and perspectives of both genders equally is incredibly important, but we should make sure we are addressing the right root causes and are not antagonizing those who need to be on the same side for progress to be made. 
  •  
    I think this app is very interesting. There is obvious gender inequality in society: men are more accustomed to taking the lead in conversation than women are. Counting how many times a woman is interrupted by a man is an interesting way to show how conversation is still dominated by men; a toy sketch of how such counting might work appears below. I also really like that the author discusses other possible factors behind why women are more likely to be interrupted. Arguing only one side wouldn't make a strong argument. Gender inequality is a big, heavy label, and we should think carefully before applying it to any phenomenon. --Sissi (3/14/2017)
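The comment above mentions counting interruptions. Purely as an illustration — the article does not describe how the app actually works, so the turn format, field names, and logic here are all assumptions — a minimal sketch of tallying interruptions by gender from a time-stamped transcript might look like this:

```python
# Toy sketch (not the app's actual code): tally who interrupts whom,
# assuming each conversational turn is (speaker_gender, start_s, end_s).
from collections import Counter

turns = [
    ("F", 0.0, 12.5),
    ("M", 11.0, 20.0),   # starts before the previous turn ends -> interruption
    ("F", 21.0, 30.0),
    ("F", 28.5, 35.0),   # a woman interrupting another woman
]

interruptions = Counter()
for prev, curr in zip(turns, turns[1:]):
    prev_gender, _, prev_end = prev
    curr_gender, curr_start, _ = curr
    if curr_start < prev_end:  # overlap = the new speaker cut in early
        interruptions[(curr_gender, prev_gender)] += 1

for (interrupter, interrupted), n in interruptions.items():
    print(f"{interrupter} interrupted {interrupted}: {n} time(s)")
```

A real system would also need speech recognition, speaker diarization, and some rule for separating genuine interruptions from ordinary back-channel overlap; the sketch only shows the counting step.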

What Is Wrong with the West's Economies? by Edmund S. Phelps | The New York Review of B... - 1 views

  • What is wrong with the economies of the West—and with economics?
  • With little or no effective policy initiative giving a lift to the less advantaged, the jarring market forces of the past four decades—mainly the slowdowns in productivity that have spread over the West and, of course, globalization, which has moved much low-wage manufacturing to Asia—have proceeded, unopposed, to drag down both employment and wage rates at the low end. The setback has cost the less advantaged not only a loss of income but also a loss of what economists call inclusion—access to jobs offering work and pay that provide self-respect.
  • The classical idea of political economy has been to let wage rates sink to whatever level the market takes them, and then provide everyone with the “safety net” of a “negative income tax,” unemployment insurance, and free food, shelter, clothing, and medical care
  • ...32 more annotations...
  • This failing in the West’s economies is also a failing of economics
  • many people have long felt the desire to do something with their lives besides consuming goods and having leisure. They desire to participate in a community in which they can interact and develop.
  • Our prevailing political economy is blind to the very concept of inclusion; it does not map out any remedy for the deficiency
  • injustice of another sort. Workers in decent jobs view the economy as unjust if they or their children have virtually no chance of climbing to a higher rung in the socioeconomic ladder
  • “Money is like blood. You need it to live but it isn’t the point of life.”
  • justice is not everything that people need from their economy. They need an economy that is good as well as just. And for some decades, the Western economies have fallen short of any conception of a “good economy”—an economy offering a “good life,” or a life of “richness,” as some humanists call it
  • The good life as it is popularly conceived typically involves acquiring mastery in one’s work, thus gaining for oneself better terms—or means to rewards, whether material, like wealth, or nonmaterial—an experience we may call “prospering.”
  • As humanists and philosophers have conceived it, the good life involves using one’s imagination, exercising one’s creativity, taking fascinating journeys into the unknown, and acting on the world—an experience I call “flourishing.”
  • prospering and flourishing became prevalent in the nineteenth century when, in Europe and America, economies emerged with the dynamism to generate their own innovation.
  • What is the mechanism of the slowdown in productivity?
  • prospering
  • In nineteenth-century Britain and America, and later Germany and France, a culture of exploration, experimentation, and ultimately innovation grew out of the individualism of the Renaissance, the vitalism of the Baroque era, and the expressionism of the Romantic period.
  • What made innovating so powerful in these economies was that it was not limited to elites. It permeated society from the less advantaged parts of the population on up.
  • High-enough wages, low-enough unemployment, and wide-enough access to engaging work are necessary for a “good-enough” economy—though far from sufficient. The material possibilities of the economy must be adequate for the nonmaterial possibilities to be widespread—the satisfactions of prospering and of flourishing through adventurous, creative, and even imaginative work.
  • today’s standard economics. This economics, despite its sophistication in some respects, makes no room for economies in which people are imagining new products and using their creativity to build them. What is most fundamentally “wrong with economics” is that it takes such an economy to be the norm—to be “as good as it gets.”
  • Since around 1970, or earlier in some cases, most of the continental Western European economies have come to resemble more completely the mechanical model of standard economics. Most companies are highly efficient. Households, apart from the very low-paid or unemployed, have gone on saving
  • In most of Western Europe, economic dynamism is now at lows not seen, I would judge, since the advent of dynamism in the nineteenth century. Imagining and creating new products has almost disappeared from the continent
  • The bleak levels of both unemployment and job satisfaction in Europe are testimony to its dreary economies.
  • a recent survey of household attitudes found that, in “happiness,” the median scores in Spain (54), France (51), Italy (48), and Greece (37) are all below those in the upper half of the nations labeled “emerging”—Mexico (79), Venezuela (74), Brazil (73), Argentina (66), Vietnam (64), Colombia (64), China (59), Indonesia (58), Chile (58), and Malaysia (56)
  • The US economy is not much better. Two economists, Stanley Fischer and Assar Lindbeck, wrote of a “Great Productivity Slowdown,” which they saw as beginning in the late 1960s. The slowdown in the growth of capital and labor combined—what is called “total factor productivity”—is star
  • though the injustices in the West’s economies are egregious, they ought not to be seen as a major cause of the productivity slowdowns and globalization. (For one thing, a slowdown of productivity started in the US in the mid-1960s and the sharp loss of manufacturing jobs to poorer countries occurred much later—from the late 1970s to the early 1990s.) Deeper causes must be at work.
  • The plausible explanation of the syndrome in America—the productivity slowdown and the decline of job satisfaction, among other things—is a critical loss of indigenous innovation in the established industries like traditional manufacturing and services that was not nearly offset by the innovation that flowered in a few new industries
  • What then caused this narrowing of innovation? No single explanation is persuasive. Yet two classes of explanations have the ring of truth. One points to suppression of innovation by vested interests
  • some professions, such as those in education and medicine, have instituted regulation and licensing to curb experimentation and change, thus dampening innovation
  • established corporations—their owners and stakeholders—and entire industries, using their lobbyists, have obtained regulations and patents that make it harder for new firms to gain entry into the market and to compete with incumbents.
  • The second explanation points to a new repression of potential innovators by families and schools. As the corporatist values of control, solidarity, and protection are invoked to prohibit innovation, traditional values of conservatism and materialism are often invoked to inhibit a young person from undertaking an innovation.
  • How might Western nations gain—or regain—widespread prospering and flourishing? Taking concrete actions will not help much without fresh thinking: people must first grasp that standard economics is not a guide to flourishing—it is a tool only for efficiency.
  • Widespread flourishing in a nation requires an economy energized by its own homegrown innovation from the grassroots on up. For such innovation a nation must possess the dynamism to imagine and create the new—economic freedoms are not sufficient. And dynamism needs to be nourished with strong human values.
  • a reform of education stands out. The problem here is not a perceived mismatch between skills taught and skills in demand
  • The problem is that young people are not taught to see the economy as a place where participants may imagine new things, where entrepreneurs may want to build them and investors may venture to back some of them. It is essential to educate young people to this image of the economy.
  • It will also be essential that high schools and colleges expose students to the human values expressed in the masterpieces of Western literature, so that young people will want to seek economies offering imaginative and creative careers. Education systems must put students in touch with the humanities in order to fuel the human desire to conceive the new and perchance to achieve innovations
  • This reorientation of general education will have to be supported by a similar reorientation of economic education.

Are You Lucky? How You Respond Affects Your Fate. | Big Think - 0 views

  • Humans are superstitious creatures. Our rituals are vast. We tie one shoelace before the other; if we tie one, we have to tie the other even if it’s not loose.
  • Luck is the ever-present elephant in the room, dwarfed in our vocabulary by destiny and blessings.
  • But a roll of seven has more to do with the flick of a wrist than fate. 
  • ...3 more annotations...
  • Considering yourself lucky is a good thing. Rather than having a negative worldview—“that’s just my luck”—thinking yourself to be lucky results in positive brain functioning and overall well-being.
  • To navigate this tricky terrain, Frank suggests asking someone about their luck rather than informing them of their luck.
  • As should we all. Luck is not a mystical ally.
  •  
    I think luck is a very tricky concept in the human sciences. As the article suggests, luck is not a real force; it is something humans invented to comfort themselves. However, the belief in luck does affect people's performance. I remember reading a study which found that people who believe they are lucky tend to perform better. That does not necessarily mean there is some unknown force called luck; it just means that believing in oneself has a positive effect. I find it interesting that people reach for the word "luck" whenever something improbable happens to them. The language itself suggests to people that some force is helping them along. --Sissi (4/19/2017)

Researchers Analyze 1,280 Suicide Notes to Devise a Better Prevention Strategy | Big Think - 1 views

  • That’s why the results of a 2015 report were so shocking. For the first time in generations, middle-aged white people saw their death rate increase.
  • Approximately 40,000 people take their own lives each year in the US.
  • They wanted to obtain a holistic view using psychology, history, and the social sciences to tackle suicide.
  • ...7 more annotations...
  • Last words such as these are only found in 14% of cases. The authors began to notice differences between note leavers and non-leavers in their research, as well as people who attempt suicide and those who complete the act.
  • Many notes were addressed to one person. Others were to no one in particular.
  • Nowadays, being a white male is the single biggest risk factor. Why is that? According to Case and Deaton, drastic changes in the labor market are the most significant factor.
  • “Hegemonic masculinity,” or a perception that heightened masculinity must be portrayed at all times, a goal that no male can live up to.
  • Another 23% of note writers ended it all due to unrequited love or love lost. 22% said they themselves created the problem which led to their decision.
  • Meyer and colleagues also propose a national prevention plan, to foster a sense of community and social support.
  • If you feel suicidal, or are concerned for a friend, don't wait: talk to someone, or learn about suicide prevention here.
  •  
    Suicide is a large and interesting subject in the study of human social behavior. From the point of view of evolution it is a puzzling act, since killing oneself is so costly that it should be selected against. Gender pressure, as stated in the reading, is probably one factor: society expects a great deal of men, and some who cannot meet those expectations choose suicide. It also reminds me of a movie I watched when I was little that left a big impression, The Happening, in which plants release a hormone that leads people to kill themselves one by one. So is it possible that hormones are a factor in the decision to commit suicide? --Sissi (4/6/2017)

A Pepsi Commercial's Lesson for Advertisers - The New York Times - 1 views

  • And the commercial (Pepsi calls it a “short film”) drew the most intense disgust from the very people whose experiences it tried to reflect.
  • I can imagine that it was born from some combination of good intentions, corporate hedging and a huge failure to grasp the limitations of advertising on the part of the people working for Pepsi’s in-house content agency.
  • In the wake of Donald Trump’s election, there has been a rash of advertisements seeking to remind consumers of what really makes America great, or to push saccharine messages about the idea that by working together, we can fight evil and make the world a better place.
  • ...2 more annotations...
  • Advertising has long had a parasitic relationship with culture, most infamously when it comes to themes associated with African-Americans — even when the goal is not to specifically reach that audience.
  • To pull off an ad that nodded to the contemporary fight against racism, Pepsi would’ve had to somehow acknowledge a point of view without trying to adopt it wholesale, speaking on behalf of a community it didn’t understand, or exaggerating its own awareness.
  •  
    A successful advertisement always requires a good understanding of psychology. Advertising is as complicated as language: both try to explain and communicate ideas to a general audience. This Pepsi ad is clearly an example of a communication failure, one that ignores the cultural context of its viewers. I don't believe Pepsi was trying to be racist in its advertisement; however, the possible interpretations of and responses to an ad should be analyzed before it is released. I think regulation of advertising is needed. --Sissi (4/9/2017)

Scientists Are Attempting to Unlock the Secret Potential of the Human Brain | Big Think - 1 views

  • Sometimes, it occurs when a person suffers a nearly fatal accident or life-threatening situation. In others, they are born with a developmental disorder, such as autism.
  • This is known as savant syndrome. Of course, it’s exceedingly rare.
  • Upon entering the bathroom and turning on the faucet, he saw “lines emanating out perpendicularly from the flow.” He couldn’t believe it.
  • ...7 more annotations...
  • “At first, I was startled, and worried for myself, but it was so beautiful that I just stood in my slippers and stared.” It was like, “watching a slow-motion film.”
  • Before, he never rose beyond pre-Algebra.
  • savant syndrome
  • Padgett is one of the few people on earth who can draw fractals accurately, freehand.
  • There are two ways for it to occur, either through an injury that causes brain damage or through a disorder, such as autism.
  • It’s estimated that around 50% of those with savant syndrome are autistic.
  • “The most common ability to emerge is art, followed by music,” Treffert told The Guardian. “But I’ve had cases where brain damage makes people suddenly interested in dance, or in Pinball Wizard.”
  •  
    This article reminds me of the scientific myth that people use only 10% of their brains. That description is not accurate: when we use our brains, every part is active and none of it sits vacant; the brain simply has more potential than it seems to, so the "10%" framing is misleading. I found savant syndrome very interesting; the talent hidden in the human brain is remarkable. Is it possible that our cognitive biases exist because some of that potential has not been activated? --Sissi (4/17/2017)

Ignore the GPS. That Ocean Is Not a Road. - The New York Times - 2 views

  • Faith is a concept that often enters the accounts of GPS-induced mishaps. “It kept saying it would navigate us a road,” said a Japanese tourist in Australia who, while attempting to reach North Stradbroke Island, drove into the Pacific Ocean. A man in West Yorkshire, England, who took his BMW off-road and nearly over a cliff, told authorities that his GPS “kept insisting the path was a road.” In perhaps the most infamous incident, a woman in Belgium asked GPS to take her to a destination less than two hours away. Two days later, she turned up in Croatia.
  • These episodes naturally inspire incredulity, if not outright mockery. After a couple of Swedes mistakenly followed their GPS to the city of Carpi (when they meant to visit Capri), an Italian tourism official dryly noted to the BBC that “Capri is an island. They did not even wonder why they didn’t cross any bridge or take any boat.” An Upper West Side blogger’s account of the man who interpreted “turn here” to mean onto a stairway in Riverside Park was headlined “GPS, Brain Fail Driver.”
  • several studies have demonstrated empirically what we already know instinctively. Cornell researchers who analyzed the behavior of drivers using GPS found drivers “detached” from the “environments that surround them.” Their conclusion: “GPS eliminated much of the need to pay attention.”
  • ...6 more annotations...
  • There is evidence that one’s cognitive map can deteriorate. A widely reported study published in 2006 demonstrated that the brains of London taxi drivers have larger than average amounts of gray matter in the area responsible for complex spatial relations. Brain scans of retired taxi drivers suggested that the volume of gray matter in those areas also decreases when that part of the brain is no longer being used as frequently. “I think it’s possible that if you went to someone doing a lot of active navigation, but just relying on GPS,” Hugo Spiers, one of the authors of the taxi study, hypothesized to me, “you’d actually get a reduction in that area.”
  • A consequence is a possible diminution of our “cognitive map,” a term introduced in 1948 by the psychologist Edward Tolman of the University of California, Berkeley. In a groundbreaking paper, Dr. Tolman analyzed several laboratory experiments involving rats and mazes. He argued that rats had the ability to develop not only cognitive “strip maps” — simple conceptions of the spatial relationship between two points — but also more comprehensive cognitive maps that encompassed the entire maze.
  • Could society’s embrace of GPS be eroding our cognitive maps? For Julia Frankenstein, a psychologist at the University of Freiburg’s Center for Cognitive Science, the danger of GPS is that “we are not forced to remember or process the information — as it is permanently ‘at hand,’ we need not think or decide for ourselves.” She has written that we “see the way from A to Z, but we don’t see the landmarks along the way.” In this sense, “developing a cognitive map from this reduced information is a bit like trying to get an entire musical piece from a few notes.” GPS abets a strip-map level of orientation with the world.
  • We seem driven (so to speak) to transform cars, conveyances that show us the world, into machines that also see the world for us.
  • For Dr. Tolman, the cognitive map was a fluid metaphor with myriad applications. He identified with his rats. Like them, a scientist runs the maze, turning strip maps into comprehensive maps — increasingly accurate models of the “great God-given maze which is our human world,” as he put it. The countless examples of “displaced aggression” he saw in that maze — “the poor Southern whites, who take it out on the Negros,” “we psychologists who criticize all other departments,” “Americans who criticize the Russians and the Russians who criticize us” — were all, to some degree, examples of strip-map comprehension, a blinkered view that failed to comprehend the big picture. “What in the name of Heaven and Psychology can we do about it?” he wrote. “My only answer is to preach again the virtues of reason — of, that is, broad cognitive maps.”

Buddhism Is More 'Western' Than You Think - The New York Times - 0 views

  • Not only have Buddhist thinkers for millenniums been making very much the kinds of claims that Western philosophers and psychologists make — many of these claims are looking good in light of modern Western thought.
  • In fact, in some cases Buddhist thought anticipated Western thought, grasping things about the human mind, and its habitual misperception of reality, that modern psychology is only now coming to appreciate.
  • “Things exist but they are not real.” I agree with Gopnik that this sentence seems a bit hard to unpack. But if you go look at the book it is taken from, you’ll find that the author himself, Mu Soeng, does a good job of unpacking it.
  • ...14 more annotations...
  • It turns out Soeng is explaining an idea that is central to Buddhist philosophy: “not self” — the idea that your “self,” as you intuitively conceive it, is actually an illusion. Soeng writes that the doctrine of not-self doesn’t deny an “existential personality” — it doesn’t deny that there is a you that exists; what it denies is that somewhere within you is an “abiding core,” a kind of essence-of-you that remains constant amid the flux of thoughts, feelings, perceptions and other elements that constitute your experience. So if by “you” we mean a “self” that features an enduring essence, then you aren’t real.
  • In recent decades, important aspects of the Buddhist concept of not-self have gotten support from psychology. In particular, psychology has bolstered Buddhism’s doubts about our intuition of what you might call the “C.E.O. self” — our sense that the conscious “self” is the initiator of thought and action.
  • recognizing that “you” are not in control, that you are not a C.E.O., can help give “you” more control. Or, at least, you can behave more like a C.E.O. is expected to behave: more rationally, more wisely, more reflectively; less emotionally, less rashly, less reactively.
  • Suppose that, via mindfulness meditation, you observe a feeling like anxiety or anger and, rather than let it draw you into a whole train of anxious or angry thoughts, you let it pass away. Though you experience the feeling — and in a sense experience it more fully than usual — you experience it with “non-attachment” and so evade its grip. And you now see the thoughts that accompanied it in a new light — they no longer seem like trustworthy emanations from some “I” but rather as transient notions accompanying transient feelings.
  • Brain-scan studies have produced tentative evidence that this lusting and disliking — embracing thoughts that feel good and rejecting thoughts that feel bad — lies near the heart of certain “cognitive biases.” If such evidence continues to accumulate, the Buddhist assertion that a clear view of the world involves letting go of these lusts and dislikes will have drawn a measure of support from modern science.
  • There’s a broader and deeper sense in which Buddhist thought is more “Western” than stereotype suggests. What, after all, is more Western than science’s emphasis on causality, on figuring out what causes what, and hoping to thus explain why all things do the things they do?
  • the Buddhist idea of “not-self” grows out of the belief undergirding this mission — that the world is pervasively governed by causal laws. The reason there is no “abiding core” within us is that the ever-changing forces that impinge on us — the sights, the sounds, the smells, the tastes — are constantly setting off chain reactions inside of us.
  • Buddhism’s doubts about the distinctness and solidity of the “self” — and of other things, for that matter — rests on a recognition of the sense in which pervasive causality means pervasive fluidity.
  • Buddhism long ago generated insights that modern psychology is only now catching up to, and these go beyond doubts about the C.E.O. self.
  • psychology has lately started to let go of its once-sharp distinction between “cognitive” and “affective” parts of the mind; it has started to see that feelings are so finely intertwined with thoughts as to be part of their very coloration. This wouldn’t qualify as breaking news in Buddhist circles.
  • Note how, in addition to being therapeutic, this clarifies your view of the world. After all, the “anxious” or “angry” trains of thought you avoid probably aren’t objectively true. They probably involve either imagining things that haven’t happened or making subjective judgments about things that have.
  • All we can do is clear away as many impediments to comprehension as possible. Science has a way of doing that — by insisting that entrants in its “competitive storytelling” demonstrate explanatory power in ways that are publicly observable, thus neutralizing, to the extent possible, subjective biases that might otherwise prevail.
  • Buddhism has a different way of doing it: via meditative disciplines that are designed to attack subjective biases at the source, yielding a clearer view of both the mind itself and the world beyond it.
  • The results of these two inquiries converge to a remarkable extent — an extent that can be appreciated only in light of the last few decades of progress in psychology and evolutionary science. At least, that’s my argument.

The Problem With History Classes - The Atlantic - 3 views

  • The passion and urgency with which these battles are fought reflect the misguided way history is taught in schools. Currently, most students learn history as a set narrative—a process that reinforces the mistaken idea that the past can be synthesized into a single, standardized chronicle of several hundred pages. This teaching pretends that there is a uniform collective story, which is akin to saying everyone remembers events the same.
  • Yet, history is anything but agreeable. It is not a collection of facts deemed to be "official" by scholars on high. It is a collection of historians exchanging different, often conflicting analyses.
  • rather than vainly seeking to transcend the inevitable clash of memories, American students would be better served by descending into the bog of conflict and learning the many "histories" that compose the American national story.
  • ...18 more annotations...
  • Perhaps Fisher offers the nation an opportunity to divorce, once and for all, memory from history. History may be an attempt to memorialize and preserve the past, but it is not memory; memories can serve as primary sources, but they do not stand alone as history. A history is essentially a collection of memories, analyzed and reduced into meaningful conclusions—but that collection depends on the memories chosen.
  • Memories make for a risky foundation: As events recede further into the past, the facts are distorted or augmented by entirely new details
  • people construct unique memories while informing perfectly valid histories. Just as there is a plurality of memories, so, too, is there a plurality of histories.
  • Scholars who read a diverse set of historians who are all focused on the same specific period or event are engaging in historiography
  • This approach exposes textbooks as nothing more than a compilation of histories that the authors deemed to be most relevant and useful.
  • In historiography, the barrier between historian and student is dropped, exposing a conflict-ridden landscape. A diplomatic historian approaches an event from the perspective of the most influential statesmen (who are most often white males), analyzing the context, motives, and consequences of their decisions. A cultural historian peels back the objects, sights, and sounds of a period to uncover humanity’s underlying emotions and anxieties. A Marxist historian adopts the lens of class conflict to explain the progression of events. There are intellectual historians, social historians, and gender historians, among many others. Historians studying the same topic will draw different interpretations—sometimes radically so, depending on the sources they draw from
  • Jacoba Urist points out that history is "about explaining and interpreting past events analytically." If students are really to learn and master these analytical tools, then it is absolutely essential that they read a diverse set of historians and learn how brilliant men and women who are scrutinizing the same topic can reach different conclusions
  • Rather than constructing a curriculum based on the muddled consensus of boards, legislatures, and think tanks, schools should teach students history through historiography. The shortcomings of one historian become apparent after reading the work of another one on the list.
  • Although, as Urist notes, the AP course is "designed to teach students to think like historians," my own experience in that class suggests that it fails to achieve that goal.
  • The course’s framework has always served as an outline of important concepts aiming to allow educators flexibility in how to teach; it makes no reference to historiographical conflicts. Historiography was an epiphany for me because I had never before come face-to-face with how historians think and reason
  • When I took AP U.S. History, I jumbled these diverse histories into one indistinct narrative. Although the test involved open-ended essay questions, I was taught that graders were looking for a firm thesis—forcing students to adopt a side. The AP test also, unsurprisingly, rewards students who cite a wealth of supporting details
  • By the time I took the test in 2009, I was a master at "checking boxes," weighing political factors equally against those involving socioeconomics and ensuring that previously neglected populations like women and ethnic minorities received their due. I did not know that I was pulling ideas from different historiographical traditions. I still subscribed to the idea of a prevailing national narrative and served as an unwitting sponsor of synthesis, oblivious to the academic battles that made such synthesis impossible.
  • Although there may be an inclination to seek to establish order where there is chaos, that urge must be resisted in teaching history. Public controversies over memory are hardly new. Students must be prepared to confront divisiveness, not conditioned to shoehorn agreement into situations where none is possible
  • When conflict is accepted rather than resisted, it becomes possible for different conceptions of American history to co-exist. There is no longer a need to appoint a victor.
  • More importantly, the historiographical approach avoids pursuing truth for the sake of satisfying a national myth
  • The country’s founding fathers crafted some of the finest expressions of personal liberty and representative government the world has ever seen; many of them also held fellow humans in bondage. This paradox is only a problem if the goal is to view the founding fathers as faultless, perfect individuals. If multiple histories are embraced, no one needs to fear that one history will be lost.
  • History is not indoctrination. It is a wrestling match. For too long, the emphasis has been on pinning the opponent. It is time to shift the focus to the struggle itself
  • There is no better way to use the past to inform the present than by accepting the impossibility of a definitive history—and by ensuring that current students are equipped to grapple with the contested memories in their midst.

'Scorn for Tribalism Is an American Tradition' - The Atlantic - 1 views

  • Wittgenstein, of course, tried to prove that individual words had atomic, absolute meanings. Immediately after claiming to prove that, however, he realized that they don't. A few years ago a team at MIT looked at the matter in a technical, measurable way. The summary is that the most efficient and powerful language possible is one in which all of the words are ambiguous and the meaning lies only in the context.
  • Briefly, instead of having a separate word for every possible present and future meaning, you use combinations of words to define any given meaning. (The same thing is done with letters to make words.) That is basically what Wittgenstein came to understand; the new point is that this is not some inherent defect of language but the hallmark of an efficient and powerful language. As the study said:
  • "Basically, if you have any human language in your input or output, you are stuck with needing context to disambiguate.”

The advantage of ambiguity | MIT News - 1 views

  • Why did language evolve? While the answer might seem obvious — as a way for individuals to exchange information — linguists and other students of communication have debated this question for years. Many prominent linguists, including MIT’s Noam Chomsky, have argued that language is, in fact, poorly designed for communication. Such a use, they say, is merely a byproduct of a system that probably evolved for other reasons — perhaps for structuring our own private thoughts.
  • In a new theory, they claim that ambiguity actually makes language more efficient, by allowing for the reuse of short, efficient sounds that listeners can easily disambiguate with the help of context.
  • “Various people have said that ambiguity is a problem for communication,” says Ted Gibson, an MIT professor of cognitive science and senior author of a paper describing the research to appear in the journal Cognition. "But the fact that context disambiguates has important ramifications for the re-use of potentially ambiguous forms. Ambiguity is no longer a problem — it's something that you can take advantage of, because you can reuse easy [words] in different contexts over and over again."
  • ...5 more annotations...
  • virtually no speaker of English gets confused when he or she hears the word “mean.” That’s because the different senses of the word occur in such different contexts as to allow listeners to infer its meaning nearly automatically.
  • To understand why ambiguity makes a language more efficient rather than less so, think about the competing desires of the speaker and the listener. The speaker is interested in conveying as much as possible with the fewest possible words, while the listener is aiming to get a complete and specific understanding of what the speaker is trying to say.
  • it is “cognitively cheaper” to have the listener infer certain things from the context than to have the speaker spend time on longer and more complicated utterances. The result is a system that skews toward ambiguity, reusing the “easiest” words. Once context is considered, it’s clear that “ambiguity is actually something you would want in the communication system,” Piantadosi says.
  • “You would expect that since languages are constantly changing, they would evolve to get rid of ambiguity,” Wasow says. “But if you look at natural languages, they are massively ambiguous: Words have multiple meanings, there are multiple ways to parse strings of words. … This paper presents a really rigorous argument as to why that kind of ambiguity is actually functional for communicative purposes, rather than dysfunctional.”
  • “Ambiguity is only good for us [as humans] because we have these really sophisticated cognitive mechanisms for disambiguating,” he says. “It’s really difficult to work out the details of what those are, or even some sort of approximation that you could get a computer to use.”
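To make the "context disambiguates" idea concrete, here is a minimal sketch, assuming a hand-built table of sense-typical context words; the senses and vocabularies are invented for illustration and are not from the paper. The listener simply picks the sense of "mean" whose typical context overlaps the utterance most.

```python
# Toy word-sense disambiguation by context overlap (illustrative only).
senses = {
    "average": {"number", "median", "score", "value", "statistics"},
    "unkind":  {"rude", "bully", "nasty", "person", "kid"},
    "signify": {"word", "sentence", "say", "definition", "phrase"},
}

def disambiguate(utterance: str) -> str:
    context = set(utterance.lower().split())
    # Pick the sense whose context vocabulary overlaps the utterance most.
    return max(senses, key=lambda s: len(senses[s] & context))

print(disambiguate("what does that word mean in this sentence"))  # -> signify
print(disambiguate("the mean score was a high number"))           # -> average
```

Real disambiguation is of course far harder, as the last quotation notes; the sketch only illustrates why reusing a short, ambiguous form is cheap for the speaker when the listener has context to lean on.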