
Home / TOK@ISPrague / Group items matching "No" in title, tags, annotations or url

markfrankel18

Internet Makes Hypochondria Worse - 1 views

  • Thanks to the Internet, becoming a hypochondriac is much easier than it used to be. The easy availability of health information on the web has certainly helped countless people make educated decisions about their health and medical treatment, but it can be disastrous for people who are likely to worry. Hypochondriacs researching an illness used to have to scour books and ask doctors for information. Now a universe of information is available with a few mouse clicks. "For hypochondriacs, the Internet has absolutely changed things for the worse," says Brian Fallon, MD, professor of psychiatry at Columbia University and the co-author of Phantom Illness: Recognizing, Understanding and Overcoming Hypochondria (1996). So far, no studies have been conducted on just how hypochondriacs use the Internet, Fallon says. But the phenomenon is common enough to have a snappy name -- "cyberchondria."
Lawrence Hrubes

Remembering a Crime That You Didn't Commit - The New Yorker - 1 views

  • Earlier this year, two forensic psychologists—Julia Shaw, of the University of Bedfordshire, and Stephen Porter, of the University of British Columbia—upped the ante. Writing in the January issue of the journal Psychological Science, they described a method for implanting false memories, not of getting lost in childhood but of committing a crime in adolescence. They modelled their work on Loftus’s, sending questionnaires to each of their participant’s parents to gather background information. (Any past run-ins with the law would eliminate a student from the study.) Then they divided the students into two groups and told each a different kind of false story. One group was prompted to remember an emotional event, such as getting attacked by a dog. The other was prompted to remember a crime—an assault, for example—that led to an encounter with the police. At no time during the experiments were the participants allowed to communicate with their parents.
  • What Shaw and Porter found astonished them. “We thought we’d have something like a thirty-per-cent success rate, and we ended up having over seventy,” Shaw told me. “We only had a handful of people who didn’t believe us.” After three debriefing sessions, seventy-six per cent of the students claimed to remember the false emotional event; nearly the same amount—seventy per cent—remembered the fictional crime. Shaw and Porter hadn’t put undue stress on the students; in fact, they had treated them in a friendly way. All it took was a suggestion from an authoritative source, and the subjects’ imaginations did the rest. As Münsterberg observed of the farmer’s son, the students seemed almost eager to self-incriminate.
  • Kassin cited the example of Martin Tankleff, a high-school senior from Long Island who, in 1988, awoke to find his parents bleeding on the floor. Both had been repeatedly stabbed; his mother was dead and his father was dying. He called the police. Later, at the station, he was harshly interrogated. For five hours, Tankleff resisted. Finally, an officer told him that his father had regained consciousness at the hospital and named him as the killer. (In truth, the father died without ever waking.) Overwhelmed by the news, Tankleff took responsibility, saying that he must have blacked out and killed his parents unwittingly. A jury convicted him of murder. He spent seventeen years in prison before the real murderers were found. Kassin condemns the practice of lying to suspects, which is illegal in many countries but not here. The American court system, he said, should address it. “Lying puts innocent people at risk, and there’s a hundred years of psychology to show it,” he said.
Lawrence Hrubes

When Philosophy Lost Its Way - The New York Times - 0 views

  • Having adopted the same structural form as the sciences, it’s no wonder philosophy fell prey to physics envy and feelings of inadequacy. Philosophy adopted the scientific modus operandi of knowledge production, but failed to match the sciences in terms of making progress in describing the world. Much has been made of this inability of philosophy to match the cognitive success of the sciences. But what has passed unnoticed is philosophy’s all-too-successful aping of the institutional form of the sciences. We, too, produce research articles. We, too, are judged by the same coin of the realm: peer-reviewed products. We, too, develop sub-specializations far from the comprehension of the person on the street. In all of these ways we are so very “scientific.”
markfrankel18

A Cambridge professor on how to stop being so easily manipulated by misleading statistics - Quartz - 0 views

  • Graphs can be as manipulative as words. Using tricks such as cutting axes, rescaling things, changing data from positive to negative, etc. Sometimes putting zero on the y-axis is wrong. So to be sure that you are communicating the right things, you need to evaluate the message that people are taking away. There are no absolute rules. It all depends on what you want to communicate.
  • The bottom line is that humans are very bad at understanding probability. Everyone finds it difficult, even I do. We just have to get better at it. We need to learn to spot when we are being manipulated.
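The axis-cutting trick described above can be shown with simple arithmetic (a minimal sketch with hypothetical poll numbers, not figures from the article):

```python
# Two hypothetical survey results: 62% vs. 64% approval.
a, b = 62.0, 64.0

# Honest y-axis starting at 0: the taller bar is only ~3% taller.
honest_ratio = b / a

# Axis truncated to start at 60: the same data now draws one bar
# twice as tall as the other -- an apparent difference of 100%.
truncated_ratio = (b - 60) / (a - 60)

print(f"honest: {honest_ratio:.2f}x, truncated: {truncated_ratio:.2f}x")
# prints: honest: 1.03x, truncated: 2.00x
```

The numbers never change; only the baseline does — which is why, as the annotation notes, there is no absolute rule that zero must appear on the y-axis, only the duty to check what message readers actually take away.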
Lawrence Hrubes

Why 'Natural' Doesn't Mean Anything Anymore - NYTimes.com - 1 views

  • It seems that getting end-of-life patients and their families to endorse “do not resuscitate” orders has been challenging. To many ears, “D.N.R.” sounds a little too much like throwing Grandpa under the bus. But according to a paper in The Journal of Medical Ethics, when the orders are reworded to say “allow natural death,” patients and family members and even medical professionals are much more likely to give their consent to what amounts to exactly the same protocols.
  • So does this mean that, when it comes to saying what’s natural, anything goes? I don’t think so. In fact, I think there’s some philosophical wisdom we can harvest from, of all places, the Food and Drug Administration. When the federal judges couldn’t find a definition of “natural” to apply to the class-action suits before them, three of them wrote to the F.D.A., ordering the agency to define the word. But the F.D.A. had considered the question several times before, and refused to attempt a definition. The only advice the F.D.A. was willing to offer the jurists is that a food labeled “natural” should have “nothing artificial or synthetic” in it “that would not normally be expected in the food.” The F.D.A. states on its website that “it is difficult to define a food product as ‘natural’ because the food has probably been processed and is no longer the product of the earth,” suggesting that the industry might not want to press the point too hard, lest it discover that nothing it sells is natural.
Lawrence Hrubes

Why this man wants to take the words 'Allahu akbar' back from terrorists - Home | As It Happens | CBC Radio - 1 views

  • Extremists on all sides not only hijack religion and identity and narratives, they also hijack language to rationalize their violent ideology and their violent actions. I want to take it back and say, "No. Allahu akbar means God is great. I use it in prayer."
Lawrence Hrubes

The Great A.I. Awakening - The New York Times - 1 views

  • Translation, however, is an example of a field where this approach fails horribly, because words cannot be reduced to their dictionary definitions, and because languages tend to have as many exceptions as they have rules. More often than not, a system like this is liable to translate “minister of agriculture” as “priest of farming.” Still, for math and chess it worked great, and the proponents of symbolic A.I. took it for granted that no activities signaled “general intelligence” better than math and chess.
  • A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility. This notion is not new — a version of it dates to the earliest stages of modern computing, in the 1940s — but for much of its history most computer scientists saw it as vaguely disreputable, even mystical. Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.
Lawrence Hrubes

There's a morality test that evaluates utilitarianism better than the Trolley Problem - Quartz - 3 views

  • Everyone likes to think of themselves as moral. Objectively evaluating morality is decidedly tricky, though, not least because there’s no clear consensus on what it actually means to be moral. A group of philosophers and psychologists from Oxford University have created a scale to evaluate one of the most clear-cut and well-known theories of morality: utilitarianism. This theory, first put forward by 18th century British philosopher Jeremy Bentham, argues that action is moral when it creates the maximum happiness for the maximum number of people. Utilitarianism’s focus on consequences states that it’s morally acceptable to actively hurt someone if it means that, overall, more people will benefit as a result.
markfrankel18

The People Have Voted: Pluto is a Planet! | TIME - 2 views

  • That would be just too confusing, argued the second debater, astronomer Gareth Williams, associate director of the IAU’s Minor Planet Center. If you let Pluto stay, he said, you logically have to let the number of planets rise to 24 or 25, “with the possibility of 50 or 100 within the next decade” as more objects are found. “Do we want schoolchildren to have to remember so many? No, we want to keep the numbers low.”
  • David Aguilar, the Center’s director of public affairs, who set up the debate, wanted to look at the question not just from a scientific perspective, but also through the lens of history. The first speaker, therefore, was the eminent Harvard astronomer and historian of science Owen Gingerich. “Planet,” he pointed out, “is a culturally defined word that has changed its meaning over the ages.”
Lawrence Hrubes

Watching Them Turn Off the Rothkos - The New Yorker - 4 views

  • Mainly, I think, the restoration story gets people hooked because it raises ancient and endlessly fascinating philosophy-of-art questions. In this respect, the restored murals are really a new work, a work of conceptual art. To look at them is to have thoughts about the nature of art. When I was a student, I went to a class taught by the art historian Meyer Schapiro. There were lots of people in the room; I think it was supposed to be his last class. (This was at Columbia, where Schapiro had been, as a student and a professor, since 1920.) He devoted the entire opening lecture to forgeries. I couldn’t believe it. I wanted to hear him talk about paintings, not fakes. I didn’t go back.
  • Which shows how clueless I was, even then. Forgery is important because it exposes the ideological character of aesthetic experience. We’re actually not, or not only, or never entirely, responding to an art object via its physical attributes. What we’re seeing is not just what we see. We bring with us a lot of non-sensory values—one of which is authenticity.
  • We’re not absolutists about it. Authenticity is a relative term. Most people don’t undergo mild epistemological queasiness while they’re looking at a conventionally restored Rothko. We look at restored art in museums all the time, and we rarely worry that it’s insufficiently authentic. In the case of the Harvard Rothkos, though, the fact that the faded painting and the faked painting are in front of us at the same time somehow makes for a discordant aesthetic experience. It’s as though, at four o’clock every day, Andy Warhol’s Brillo Boxes turned into the ordinary Brillo cartons of which they were designed to be simulacra. You would no longer be sure what you were looking at.
Lawrence Hrubes

What Should I Do With Old Racist Memorabilia? - The New York Times - 4 views

  • The album was disintegrating, and we removed the cards. Over the years I forgot about them, but in getting ready to move, I came across them again. One in particular is offensive in its captioning and art to people of African descent. While I presume there is a market for this type of memorabilia, there is no way I would seek to profit from it. I offered it to the National Museum of African American History and Culture in Washington. I never heard from them, so it moved with us. My husband thinks I should throw it away, but that feels wrong. I feel it is history that we should acknowledge, however painful and wrong. Your thoughts?
daryashinwary

Colin Tudge: Microscopes have no morals | World news | The Guardian - 4 views

  • The article argues that the correlation between being a scientist and being an atheist is not a necessary one, and stresses the importance of religion when it comes to ethics.
Lawrence Hrubes

The Ethical Quandaries You Should Think About The Next Time You Look At Your Phone | Fast Company | Business + Innovation - 3 views

  • To what extent can we and should we aspire to create machines that can outthink us? For example, Netflix has an algorithm that can predict what movies you will like based on the ones you've already seen and rated. Suppose a dating site were to develop a similar algorithm—maybe even a more sophisticated one—and predict with some accuracy which partner would be the best match for you. Whose advice would you trust more? The advice of the smart dating app or the advice of your parents or your friends?
  • The question, it seems to me, is should we use new genetic technologies only to cure disease and repair injury, or also to make ourselves better-than-well. Should we aspire to become the masters of our natures to protect our children and improve their life prospects? This goes back to the role of accident. Is the unpredictability of the child an important precondition of the unconditional love of parents for children? My worry is that if we go beyond health, we run the risk of turning parenthood into an extension of the consumer society.
markfrankel18

Artificial intelligence's "paper-clip maximizer" metaphor can explain humanity's imminent doom - Quartz - 1 views

  • The thought experiment is meant to show how an optimization algorithm, even if designed with no malicious intent, could ultimately destroy the world.
markfrankel18

The Primitive Streak - Radiolab - 0 views

  • Last May, two research groups announced a breakthrough: they each grew human embryos, in the lab, longer than ever before. In doing so, they witnessed a period of human development no one had ever seen. But in the process, they crashed up against something called the '14-day rule,' a guideline set over 30 years ago that dictates what we do, and possibly how we feel, about human embryos in the lab. On this episode, join producer Molly Webster as she peers down at our very own origins, and wonders: what do we do now?
markfrankel18

Acupuncture Is Sham Medicine - But Has It Led Researchers to a Chronic Pain Treatment? | Big Think - 0 views

  • One of the more benign aspects of Chinese medicine is acupuncture, a practice deemed superstitious in China in the 17th century until Mao Zedong reemployed it for political purposes in the fifties. Two decades later it infiltrated the American imagination. A myth was reborn. As Jeneen Interlandi writes, research results have been murky at best—one 2013 report of over 3,000 studies showed acupuncture to be no more effective than placebos.
Lawrence Hrubes

Henry Marsh's "Do No Harm" - The New Yorker - 1 views

  • Marsh, who is now sixty-five, is one of Britain’s foremost neurosurgeons. He is a senior consultant at St. George’s Hospital, in London, and he helped to pioneer a kind of surgery in which patients are kept awake, under local anesthesia, so that they can converse with their surgeons while they operate, allowing them to avoid damaging what neurosurgeons call “eloquent,” or useful, parts of the brain. Marsh has been the subject of two documentary films. Still, he writes, “As I approach the end of my career I feel an increasing obligation to bear witness to past mistakes I have made.” A few years ago, he prepared a lecture called “All My Worst Mistakes.” For months, he lay awake in the mornings, remembering the patients he had failed. “The more I thought about the past,” he recalls in his book, “the more mistakes rose to the surface, like poisonous methane stirred up from a stagnant pond.”
Lawrence Hrubes

The Mystery of S., the Man with an Impossible Memory | The New Yorker - 0 views

  • The researcher who met with S. that day was twenty-seven-year-old Alexander Luria, whose fame as a founder of neuropsychology still lay before him. Luria began reeling off lists of random numbers and words and asking S. to repeat them, which he did, in ever-lengthening series. Even more remarkably, when Luria retested S. more than fifteen years later, he found those numbers and words still preserved in S.’s memory. “I simply had to admit that the capacity of his memory had no distinct limits,” Luria writes in his famous case study of S., “The Mind of a Mnemonist,” published in 1968 in both Russian and English.