
TOK Friends: Group items tagged Cognition


caelengrubb

Believing in Overcoming Cognitive Biases | Journal of Ethics | American Medical Associa... - 0 views

  • Cognitive biases contribute significantly to diagnostic and treatment errors
  • A 2016 review of their roles in decision making lists 4 domains of concern for physicians: gathering evidence, interpreting evidence, taking action, and evaluating decisions
  • Confirmation bias is the selective gathering and interpretation of evidence consistent with current beliefs and the neglect of evidence that contradicts them.
  • It can occur when a physician refuses to consider alternative diagnoses once an initial diagnosis has been established, despite contradicting data, such as lab results. This bias leads physicians to see what they want to see
  • Anchoring bias is closely related to confirmation bias and comes into play when interpreting evidence. It refers to physicians’ practices of prioritizing information and data that support their initial impressions, even when first impressions are wrong
  • When physicians move from deliberation to action, they are sometimes swayed by emotional reactions rather than rational deliberation about risks and benefits. This is called the affect heuristic, and, while heuristics can often serve as efficient approaches to problem solving, they can sometimes lead to bias
  • Further down the treatment pathway, outcomes bias can come into play. This bias refers to the practice of believing that good or bad results are always attributable to prior decisions, even when there is no valid reason to do so
  • The dual-process theory, a cognitive model of reasoning, can be particularly relevant in matters of clinical decision making
  • This theory is based on the argument that we use 2 different cognitive systems, intuitive and analytical, when reasoning. The former is quick and uses information that is readily available; the latter is slower and more deliberate.
  • Consideration should be given to the difficulty physicians face in employing analytical thinking exclusively. Beyond constraints of time, information, and resources, many physicians are also likely to be sleep deprived, work in an environment full of distractions, and be required to respond quickly while managing heavy cognitive loads
  • Simply increasing physicians’ familiarity with the many types of cognitive biases—and how to avoid them—may be one of the best strategies to decrease bias-related errors
  • The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes
  • Afterwards, the resident physicians were debriefed on both case-specific details and on cognitive forcing strategies, interviewed, and asked to complete a written survey. The results suggested that resident physicians further along in their training (ie, postgraduate year three) gained more awareness of cognitive strategies than resident physicians in earlier years of training, suggesting that this tool could be more useful after a certain level of training has been completed
  • A 2013 study examined the effect of a 3-part, 1-year curriculum on recognition and knowledge of cognitive biases and debiasing strategies in second-year residents
  • Cognitive biases in clinical practice have a significant impact on care, often in negative ways. They sometimes manifest as physicians seeing what they want to see rather than what is actually there. Or they come into play when physicians make snap decisions and then prioritize evidence that supports their conclusions, as opposed to drawing conclusions from evidence
  • Fortunately, cognitive psychology provides insight into how to prevent biases. Guided reflection and cognitive forcing strategies deflect bias through close examination of our own thinking processes.
  • During medical education and consistently thereafter, we must provide physicians with a full appreciation of the cost of biases and the potential benefits of combatting them.
katedriscoll

Frontiers | A Neural Network Framework for Cognitive Bias | Psychology - 0 views

  • Human decision-making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. To substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions, and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena. (A minimal code illustration of the Association principle follows these annotations.)
  • The cognitive-psychological (or heuristics and biases) perspective (Evans, 2008; Kahneman and Klein, 2009) attributes cognitive biases to limitations in the available data and in the human information processing capacity (Simon, 1955; Broadbent, 1958; Kahneman, 1973, 2003; Norman and Bobrow, 1975)
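A minimal sketch of the "Association" principle described in the abstract above: in a Hebbian-style network, units that are repeatedly co-activated strengthen their connection, so later activating one unit partially reactivates the other, whether or not the pairing was meaningful. The network size, stimuli, and learning rate are illustrative assumptions; the paper itself gives no implementation.

```python
import numpy as np

n_units = 4                          # a tiny network; units 0 and 1 will be co-activated
weights = np.zeros((n_units, n_units))
learning_rate = 0.1

for _ in range(50):                  # repeated pairing of units 0 and 1
    activity = np.zeros(n_units)
    activity[[0, 1]] = 1.0           # the two units fire together (an arbitrary pairing)
    weights += learning_rate * np.outer(activity, activity)   # Hebbian update
    np.fill_diagonal(weights, 0.0)   # no self-connections

# Probe with unit 0 alone: the learned association pulls unit 1 along with it.
probe = np.zeros(n_units)
probe[0] = 1.0
print(weights @ probe)               # unit 1 responds strongly, units 2 and 3 do not
```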
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable. (A short simulation of this point follows these annotations.)
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
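A short simulation of the batting-average puzzle Nisbett uses (see the law-of-large-numbers annotations above). The assumed "true ability" of .300 and the at-bat counts are illustrative choices, not figures from the article; the point is only that extreme averages like .450 are common in small samples and essentially never survive a full season.

```python
import random

random.seed(1)
TRUE_ABILITY = 0.300      # assumed hit probability per at bat
N_PLAYERS = 200

def batting_average(at_bats: int) -> float:
    hits = sum(random.random() < TRUE_ABILITY for _ in range(at_bats))
    return hits / at_bats

for at_bats in (20, 600):             # early season vs. a full season (assumed counts)
    averages = [batting_average(at_bats) for _ in range(N_PLAYERS)]
    n_hot = sum(avg >= 0.450 for avg in averages)
    print(f"{at_bats:>3} at bats: {n_hot} of {N_PLAYERS} players at .450 or better")
```

With 20 at bats, a sizeable handful of "phenoms" appear by chance; with 600 at bats, essentially none do.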
kushnerha

Ignore the GPS. That Ocean Is Not a Road. - The New York Times - 2 views

  • Faith is a concept that often enters the accounts of GPS-induced mishaps. “It kept saying it would navigate us a road,” said a Japanese tourist in Australia who, while attempting to reach North Stradbroke Island, drove into the Pacific Ocean. A man in West Yorkshire, England, who took his BMW off-road and nearly over a cliff, told authorities that his GPS “kept insisting the path was a road.” In perhaps the most infamous incident, a woman in Belgium asked GPS to take her to a destination less than two hours away. Two days later, she turned up in Croatia.
  • These episodes naturally inspire incredulity, if not outright mockery. After a couple of Swedes mistakenly followed their GPS to the city of Carpi (when they meant to visit Capri), an Italian tourism official dryly noted to the BBC that “Capri is an island. They did not even wonder why they didn’t cross any bridge or take any boat.” An Upper West Side blogger’s account of the man who interpreted “turn here” to mean onto a stairway in Riverside Park was headlined “GPS, Brain Fail Driver.”
  • several studies have demonstrated empirically what we already know instinctively. Cornell researchers who analyzed the behavior of drivers using GPS found drivers “detached” from the “environments that surround them.” Their conclusion: “GPS eliminated much of the need to pay attention.”
  • There is evidence that one’s cognitive map can deteriorate. A widely reported study published in 2006 demonstrated that the brains of London taxi drivers have larger than average amounts of gray matter in the area responsible for complex spatial relations. Brain scans of retired taxi drivers suggested that the volume of gray matter in those areas also decreases when that part of the brain is no longer being used as frequently. “I think it’s possible that if you went to someone doing a lot of active navigation, but just relying on GPS,” Hugo Spiers, one of the authors of the taxi study, hypothesized to me, “you’d actually get a reduction in that area.”
  • A consequence is a possible diminution of our “cognitive map,” a term introduced in 1948 by the psychologist Edward Tolman of the University of California, Berkeley. In a groundbreaking paper, Dr. Tolman analyzed several laboratory experiments involving rats and mazes. He argued that rats had the ability to develop not only cognitive “strip maps” — simple conceptions of the spatial relationship between two points — but also more comprehensive cognitive maps that encompassed the entire maze.
  • Could society’s embrace of GPS be eroding our cognitive maps? For Julia Frankenstein, a psychologist at the University of Freiburg’s Center for Cognitive Science, the danger of GPS is that “we are not forced to remember or process the information — as it is permanently ‘at hand,’ we need not think or decide for ourselves.” She has written that we “see the way from A to Z, but we don’t see the landmarks along the way.” In this sense, “developing a cognitive map from this reduced information is a bit like trying to get an entire musical piece from a few notes.” GPS abets a strip-map level of orientation with the world.
  • We seem driven (so to speak) to transform cars, conveyances that show us the world, into machines that also see the world for us.
  • For Dr. Tolman, the cognitive map was a fluid metaphor with myriad applications. He identified with his rats. Like them, a scientist runs the maze, turning strip maps into comprehensive maps — increasingly accurate models of the “great God-given maze which is our human world,” as he put it. The countless examples of “displaced aggression” he saw in that maze — “the poor Southern whites, who take it out on the Negros,” “we psychologists who criticize all other departments,” “Americans who criticize the Russians and the Russians who criticize us” — were all, to some degree, examples of strip-map comprehension, a blinkered view that failed to comprehend the big picture. “What in the name of Heaven and Psychology can we do about it?” he wrote. “My only answer is to preach again the virtues of reason — of, that is, broad cognitive maps.”
Javier E

How thinking hard makes the brain tired | The Economist - 0 views

  • Mental labour can also be exhausting. Even resisting that last glistening chocolate-chip cookie after a long day at a consuming desk job is difficult. Cognitive control, the umbrella term encompassing mental exertion, self-control and willpower, also fades with effort.
  • unlike the mechanism of physical fatigue, the cause of cognitive fatigue has been poorly understood.
  • One long-standing explanation posits that exerting cognitive control uses up energy in the form of glucose. At the end of a day spent intensely cogitating, the brain is metaphorically running on fumes. The problem with this version of events is that the energy cost associated with thinking is minimal.
  • To induce cognitive fatigue, a group of participants were asked to perform just over six hours of various tasks that involve thinking.
  • In other words, cognitive work results in chemical changes in the brain, which present behaviourally as fatigue. This, therefore, is a signal to stop working in order to restore balance to the brain.
  • a neurometabolic point of view. They hypothesise that cognitive fatigue results from an accumulation of a certain chemical in the region of the brain underpinning control. That substance, glutamate, is an excitatory neurotransmitter
  • Periodically, throughout the experiment, participants were asked to make decisions that could reveal their cognitive fatigue.
  • The time it takes for the pupil to subsequently dilate reflects the amount of mental effort exerted. The pupil-dilation times of participants assigned hard tasks fell off significantly as the experiment progressed.
  • During the experiment the scientists used a technique called magnetic-resonance spectroscopy to measure biochemical changes in the brain. In particular, they focused on the lateral prefrontal cortex, a region of the brain associated with cognitive control. If their hypothesis was to hold, there would be a measurable chemical difference between the brains of hard- and easy-task participants
  • Their analysis indicated higher concentrations of glutamate in the synapses of hard-task participants’ lateral prefrontal cortex, showing that cognitive fatigue is associated with increased glutamate in the prefrontal cortex
  • There may well be ways to reduce the glutamate levels, and no doubt some researchers will now be looking at potions that might hack the brain in a way to artificially speed up its recovery from fatigue. Meanwhile, the best solution is the natural one: sleep
katedriscoll

Cognitive Biases: What They Are and How They Affect People - Effectiviology - 0 views

  • A cognitive bias is a systematic pattern of deviation from rationality, which occurs due to the way our cognitive system works. Accordingly, cognitive biases cause us to be irrational in the way we search for, evaluate, interpret, judge, use, and remember information, as well as in the way we make decisions.
  • Cognitive biases affect every area of our life, from how we form our memories, to how we shape our beliefs, and to how we form relationships with other people. In doing so, they can lead to both relatively minor issues, such as forgetting a small detail from a past event, as well as to major ones, such as choosing to avoid an important medical treatment that could save our life. Because cognitive biases can have such a powerful and pervasive influence on ourselves and on others, it’s important to understand them. As such, in the following article you will learn more about cognitive biases, understand why we experience them, see what types of them exist, and find out what you can do in order to mitigate them successfully.
manhefnawi

Neuroscience: Overview, history, major branches - 0 views

  • Neuroscience has traditionally been classed as a subdivision of biology. These days, it is an interdisciplinary science that liaises closely with other disciplines, such as mathematics, linguistics, engineering, computer science, chemistry, philosophy, psychology, and medicine.
  • The ancient Egyptians thought the seat of intelligence was in the heart. Because of this belief, during the mummification process, they would remove the brain but leave the heart in the body.
  • Behavioral neuroscience - the study of the biological bases of behavior. Looking at how the brain affects behavior.
  • Cognitive neuroscience - the study of higher cognitive functions that exist in humans, and their underlying neural basis. Cognitive neuroscience draws from linguistics, psychology, and cognitive science. Cognitive neuroscientists can take two broad directions: behavioral/experimental or computational/modeling, the aim being to understand the nature of cognition from a neural point of view.
Javier E

Eric Kandel's Visions - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • Judith, "barely clothed and fresh from the seduction and slaying of Holofernes, glows in her voluptuousness. Her hair is a dark sky between the golden branches of Assyrian trees, fertility symbols that represent her eroticism. This young, ecstatic, extravagantly made-up woman confronts the viewer through half-closed eyes in what appears to be a reverie of orgasmic rapture," writes Eric Kandel in his new book, The Age of Insight. Wait a minute. Writes who? Eric Kandel, the Nobel-winning neuroscientist who's spent most of his career fixated on the generously sized neurons of sea snails
  • Kandel goes on to speculate, in a bravura paragraph a few hundred pages later, on the exact neurochemical cognitive circuitry of the painting's viewer:
  • "At a base level, the aesthetics of the image's luminous gold surface, the soft rendering of the body, and the overall harmonious combination of colors could activate the pleasure circuits, triggering the release of dopamine. If Judith's smooth skin and exposed breast trigger the release of endorphins, oxytocin, and vasopressin, one might feel sexual excitement. The latent violence of Holofernes's decapitated head, as well as Judith's own sadistic gaze and upturned lip, could cause the release of norepinephrine, resulting in increased heart rate and blood pressure and triggering the fight-or-flight response. In contrast, the soft brushwork and repetitive, almost meditative, patterning may stimulate the release of serotonin. As the beholder takes in the image and its multifaceted emotional content, the release of acetylcholine to the hippocampus contributes to the storing of the image in the viewer's memory. What ultimately makes an image like Klimt's 'Judith' so irresistible and dynamic is its complexity, the way it activates a number of distinct and often conflicting emotional signals in the brain and combines them to produce a staggeringly complex and fascinating swirl of emotions."
  • His key findings on the snail, for which he shared the 2000 Nobel Prize in Physiology or Medicine, showed that learning and memory change not the neuron's basic structure but rather the nature, strength, and number of its synaptic connections. Further, through focus on the molecular biology involved in a learned reflex like Aplysia's gill retraction, Kandel demonstrated that experience alters nerve cells' synapses by changing their pattern of gene expression. In other words, learning doesn't change what neurons are, but rather what they do.
  • In Search of Memory (Norton), Kandel offered what sounded at the time like a vague research agenda for future generations in the budding field of neuroaesthetics, saying that the science of memory storage lay "at the foothills of a great mountain range." Experts grasp the "cellular and molecular mechanisms," he wrote, but need to move to the level of neural circuits to answer the question, "How are internal representations of a face, a scene, a melody, or an experience encoded in the brain?
  • Since giving a talk on the matter in 2001, he has been piecing together his own thoughts in relation to his favorite European artists
  • The field of neuroaesthetics, says one of its founders, Semir Zeki, of University College London, is just 10 to 15 years old. Through brain imaging and other studies, scholars like Zeki have explored the cognitive responses to, say, color contrasts or ambiguities of line or perspective in works by Titian, Michelangelo, Cubists, and Abstract Expressionists. Researchers have also examined the brain's pleasure centers in response to appealing landscapes.
  • it is fundamental to an understanding of human cognition and motivation. Art isn't, as Kandel paraphrases a concept from the late philosopher of art Denis Dutton, "a byproduct of evolution, but rather an evolutionary adaptation—an instinctual trait—that helps us survive because it is crucial to our well-being." The arts encode information, stories, and perspectives that allow us to appraise courses of action and the feelings and motives of others in a palatable, low-risk way.
  • "as far as activity in the brain is concerned, there is a faculty of beauty that is not dependent on the modality through which it is conveyed but which can be activated by at least two sources—musical and visual—and probably by other sources as well." Specifically, in this "brain-based theory of beauty," the paper says, that faculty is associated with activity in the medial orbitofrontal cortex.
  • It also enables Kandel—building on the work of Gombrich and the psychoanalyst and art historian Ernst Kris, among others—to compare the painters' rendering of emotion, the unconscious, and the libido with contemporaneous psychological insights from Freud about latent aggression, pleasure and death instincts, and other primal drives.
  • Kandel views the Expressionists' art through the powerful multiple lenses of turn-of-the-century Vienna's cultural mores and psychological insights. But then he refracts them further, through later discoveries in cognitive science. He seeks to reassure those who fear that the empirical and chemical will diminish the paintings' poetic power. "In art, as in science," he writes, "reductionism does not trivialize our perception—of color, light, and perspective—but allows us to see each of these components in a new way. Indeed, artists, particularly modern artists, have intentionally limited the scope and vocabulary of their expression to convey, as Mark Rothko and Ad Reinhardt do, the most essential, even spiritual ideas of their art."
  • The author of a classic textbook on neuroscience, he seems here to have written a layman's cognition textbook wrapped within a work of art history.
  • "our initial response to the most salient features of the paintings of the Austrian Modernists, like our response to a dangerous animal, is automatic. ... The answer to James's question of how an object simply perceived turns into an object emotionally felt, then, is that the portraits are never objects simply perceived. They are more like the dangerous animal at a distance—both perceived and felt."
  • If imaging is key to gauging therapeutic practices, it will be key to neuroaesthetics as well, Kandel predicts—a broad, intense array of "imaging experiments to see what happens with exaggeration, distorted faces, in the human brain and the monkey brain," viewers' responses to "mixed eroticism and aggression," and the like.
  • while the visual-perception literature might be richer at the moment, there's no reason that neuroaesthetics should restrict its emphasis to the purely visual arts at the expense of music, dance, film, and theater.
  • although Kandel considers The Age of Insight to be more a work of intellectual history than of science, the book summarizes centuries of research on perception. And so you'll find, in those hundreds of pages between Kandel's introduction to Klimt's "Judith" and the neurochemical cadenza about the viewer's response to it, dossiers on vision as information processing; the brain's three-dimensional-space mapping and its interpretations of two-dimensional renderings; face recognition; the mirror neurons that enable us to empathize and physically reflect the affect and intentions we see in others; and many related topics. Kandel elsewhere describes the scientific evidence that creativity is nurtured by spells of relaxation, which foster a connection between conscious and unconscious cognition.
  • Zeki's message to art historians, aesthetic philosophers, and others who chafe at that idea is twofold. The more diplomatic pitch is that neuroaesthetics is different, complementary, and not oppositional to other forms of arts scholarship. But "the stick," as he puts it, is that if arts scholars "want to be taken seriously" by neurobiologists, they need to take advantage of the discoveries of the past half-century. If they don't, he says, "it's a bit like the guys who said to Galileo that we'd rather not look through your telescope."
  • Matthews, a co-author of The Bard on the Brain: Understanding the Mind Through the Art of Shakespeare and the Science of Brain Imaging (Dana Press, 2003), seems open to the elucidations that science and the humanities can cast on each other. The neural pathways of our aesthetic responses are "good explanations," he says. But "does one [type of] explanation supersede all the others? I would argue that they don't, because there's a fundamental disconnection still between ... explanations of neural correlates of conscious experience and conscious experience" itself.
  • There are, Matthews says, "certain kinds of problems that are fundamentally interesting to us as a species: What is love? What motivates us to anger?" Writers put their observations on such matters into idiosyncratic stories, psychologists conceive their observations in a more formalized framework, and neuroscientists like Zeki monitor them at the level of functional changes in the brain. All of those approaches to human experience "intersect," Matthews says, "but no one of them is the explanation."
  • "Conscious experience," he says, "is something we cannot even interrogate in ourselves adequately. What we're always trying to do in effect is capture the conscious experience of the last moment. ... As we think about it, we have no way of capturing more than one part of it."
  • Kandel sees art and art history as "parent disciplines" and psychology and brain science as "antidisciplines," to be drawn together in an E.O. Wilson-like synthesis toward "consilience as an attempt to open a discussion between restricted areas of knowledge." Kandel approvingly cites Stephen Jay Gould's wish for "the sciences and humanities to become the greatest of pals ... but to keep their ineluctably different aims and logics separate as they ply their joint projects and learn from each other."
Adam Clark

The 12 cognitive biases that prevent you from being rational - 0 views

  •  
    "The human brain is capable of 1016 processes per second, which makes it far more powerful than any computer currently in existence. But that doesn't mean our brains don't have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless - plus, we're subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about."
paisleyd

'Brain training' app may improve memory, daily functioning of people with schizophrenia... - 0 views

  • A 'brain training' iPad game developed and tested by researchers at the University of Cambridge may improve the memory of patients with schizophrenia
  • Schizophrenia is a long-term mental health condition that causes a range of psychological symptoms, ranging from changes in behaviour through to hallucinations and delusions
  • patients are still left with debilitating cognitive impairments, including in their memory
  • increasing evidence that computer-assisted training and rehabilitation can help people with schizophrenia overcome some of their symptoms
  • Schizophrenia is estimated to cost £13.1 billion per year in total in the UK, so even small improvements in cognitive functions could help patients make the transition to independent living
  • The game, Wizard, was the result of a nine-month collaboration between psychologists, neuroscientists, a professional game-developer and people with schizophrenia
  • The memory task was woven into a narrative in which the player was allowed to choose their own character and name; the game rewarded progress with additional in-game activities to provide the user with a sense of progression independent of the cognitive training process
  • Participants in the training group played the memory game for a total of eight hours over a four-week period; participants in the control group continued their treatment as usual. At the end of the four weeks, the researchers tested all participants' episodic memory using the Cambridge Neuropsychological Test Automated Battery (CANTAB) PAL, as well as their level of enjoyment and motivation, and their score on the Global Assessment of Functioning (GAF) scale
  • patients who had played the memory game made significantly fewer errors and needed significantly fewer attempts to remember the location of different patterns in the CANTAB PAL test relative to the control group. In addition, patients in the cognitive training group saw an increase in their score on the GAF scale
  • Because the game is interesting, even those patients with a general lack of motivation are spurred on to continue the training
  • used in conjunction with medication and current psychological therapies, this could help people with schizophrenia minimise the impact of their illness on everyday life
  • It is not clear exactly how the apps also improved the patients' daily functioning, but the researchers suggest it may be because improvements in memory had a direct impact on global functions or that the cognitive training may have had an indirect impact on functionality by improving general motivation and restoring self-esteem
  • This new app will allow the Wizard memory game to become widely available, inexpensively. State-of-the-art neuroscience at the University of Cambridge, combined with the innovative approach at Peak, will help bring the games industry to a new level and promote the benefits of cognitive enhancement
Javier E

The Startling Link Between Sugar and Alzheimer's - The Atlantic - 0 views

  • A longitudinal study, published Thursday in the journal Diabetologia, followed 5,189 people over 10 years and found that people with high blood sugar had a faster rate of cognitive decline than those with normal blood sugar
  • In other words, the higher the blood sugar, the faster the cognitive decline.
  • “Currently, dementia is not curable, which makes it very important to study risk factors.”
  • People who have type 2 diabetes are about twice as likely to get Alzheimer’s, and people who have diabetes and are treated with insulin are also more likely to get Alzheimer’s, suggesting elevated insulin plays a role in Alzheimer’s. In fact, many studies have found that elevated insulin, or “hyperinsulinemia,” significantly increases your risk of Alzheimer’s. On the other hand, people with type 1 diabetes, who don’t make insulin at all, are also thought to have a higher risk of Alzheimer’s. How could these both be true?
  • Schilling posits this happens because of the insulin-degrading enzyme, a product of insulin that breaks down both insulin and amyloid proteins in the brain—the same proteins that clump up and lead to Alzheimer’s disease. People who don’t have enough insulin, like those whose bodies’ ability to produce insulin has been tapped out by diabetes, aren’t going to make enough of this enzyme to break up those brain clumps. Meanwhile, in people who use insulin to treat their diabetes and end up with a surplus of insulin, most of this enzyme gets used up breaking that insulin down, leaving not enough enzyme to address those amyloid brain clumps.
  • this can happen even in people who don’t have diabetes yet—who are in a state known as “prediabetes.” It simply means your blood sugar is higher than normal, and it’s something that affects roughly 86 million Americans.
  • In a 2012 study, Roberts broke nearly 1,000 people down into four groups based on how much of their diet came from carbohydrates. The group that ate the most carbs had an 80 percent higher chance of developing mild cognitive impairment—a pit stop on the way to dementia—than those who ate the smallest amount of carbs.
  • “It’s hard to be sure at this stage, what an ‘ideal’ diet would look like,” she said. “There’s a suggestion that a Mediterranean diet, for example, may be good for brain health.”
  • there are several theories out there to explain the connection between high blood sugar and dementia. Diabetes can also weaken the blood vessels, which increases the likelihood that you’ll have ministrokes in the brain, causing various forms of dementia. A high intake of simple sugars can make cells, including those in the brain, insulin resistant, which could cause the brain cells to die. Meanwhile, eating too much in general can cause obesity. The extra fat in obese people releases cytokines, or inflammatory proteins that can also contribute to cognitive deterioration, Roberts said. In one study by Gottesman, obesity doubled a person’s risk of having elevated amyloid proteins in their brains later in life.
  • even people who don’t have any kind of diabetes should watch their sugar intake, she said.
  • as these and other researchers point out, decisions we make about food are one risk factor we can control. And it’s starting to look like decisions we make while we’re still relatively young can affect our future cognitive health.
  • “Alzheimer’s is like a slow-burning fire that you don’t see when it starts,” Schilling said. It takes time for clumps to form and for cognition to begin to deteriorate. “By the time you see the signs, it’s way too late to put out the fire.”
anonymous

Brain size mediates the association between height and cognitive ability -- ScienceDaily - 0 views

  • Reports from several studies have identified a link between height and general cognitive ability, or intelligence, but the mechanisms underlying this association are not well known.
  • The researchers examined the association between height and cognition through a model where the size of cortical grey matter was considered as a mediator. They found that greater height was associated with bigger cortex, which in turn was linked with better cognitive ability. (A minimal sketch of such a mediation analysis follows these annotations.)
  • In the study, cortical grey matter was measured with magnetic resonance imaging (MRI). The focus was on the total cortical surface area and mean cortical thickness. According to the findings, total surface area was bigger in taller persons, whereas height was not related to cortical thickness.
  • The researchers note that even though genetic effects accounted for most of the individual differences in height, cortical size and cognition, the contribution of environmental factors may be much larger in other populations.
  • In the study, cognitive ability was measured with a paper-and-pencil test consisting of items measuring verbal, mathematical, spatial and reasoning abilities.
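The mediation claim described above (height → total cortical surface area → cognitive ability) can be illustrated with a standard product-of-coefficients mediation analysis. The sketch below runs that analysis on simulated data with made-up effect sizes; it is not the study's actual model or data, just a minimal illustration of how a total association is decomposed into direct and indirect effects.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
height = rng.normal(0.0, 1.0, n)                       # standardized height
surface = 0.4 * height + rng.normal(0.0, 1.0, n)       # mediator: cortical surface area (assumed effect)
cognition = 0.5 * surface + 0.05 * height + rng.normal(0.0, 1.0, n)   # assumed effects

def ols_coefs(y, predictors):
    """Ordinary least-squares coefficients of y on the predictors (intercept dropped)."""
    design = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(design, y, rcond=None)[0][1:]

a = ols_coefs(surface, [height])[0]                    # path a: height -> surface area
b, direct = ols_coefs(cognition, [surface, height])    # path b and the direct effect of height
total = ols_coefs(cognition, [height])[0]              # total effect of height on cognition

print(f"total effect of height:  {total:.3f}")
print(f"indirect effect (a * b): {a * b:.3f}")
print(f"direct effect of height: {direct:.3f}")
```

If most of the total effect is carried by the indirect (a * b) term, the association is said to be mediated by cortical surface area, which is the pattern the study reports.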
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think - 2 views

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong.
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition can not be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true.
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning.
  • This matters for social animals like us, whose safety and very survival ultimately depend on our ability to coexist. Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict. Denial of scientifically established ‘truth’ causes all sorts of serious direct harms. Consider a few examples:
    • The widespread faith-based rejection of evolution feeds intense polarization.
    • Continued fear of vaccines is allowing nearly eradicated diseases to return.
    • Those who deny the evidence of the safety of genetically modified food are also denying the immense potential benefits of that technology to millions.
    • Denying the powerful evidence for climate change puts us all in serious jeopardy should that evidence prove to be true.
  • To address these harms, we need to understand why we often have trouble agreeing on what is true (what some have labeled science denialism). Social science has taught us that human cognition is innately, and inescapably, a process of interpreting the hard data about our world – its sights and sound and smells and facts and ideas - through subjective affective filters that help us turn those facts into the judgments and choices and behaviors that help us survive. The brain’s imperative, after all, is not to reason. It’s job is survival, and subjective cognitive biases and instincts have developed to help us make sense of information in the pursuit of safety, not so that we might come to know ‘THE universal absolute truth
  • This subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • But here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous.
  • It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • "Everyone engages in motivated reasoning, everyone screens out unwelcome evidence, no one is a fully rational actor. Sure. But when it comes to something with such enormous consequences to human welfare
  • I think it's fair to say we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
Javier E

You Think With the World, Not Just Your Brain - The Atlantic - 2 views

  • embodied or extended cognition: broadly, the theory that what we think of as brain processes can take place outside of the brain.
  • The octopus, for instance, has a bizarre and miraculous mind, sometimes inside its brain, sometimes extending beyond it in sucker-tipped trails. Neurons are spread throughout its body; the creature has more of them in its arms than in its brain itself. It’s possible that each arm might be, to some extent, an independently thinking creature, all of which are collapsed into an octopean superconsciousness in times of danger
  • Embodied cognition, though, tells us that we’re all more octopus-like than we realize. Our minds are not like the floating conceptual “I” imagined by Descartes. We’re always thinking with, and inseparable from, our bodies.
  • The body codes how the brain works, more than the brain controls the body. When we walk—whether taking a pleasant afternoon stroll, or storming off in tears, or trying to sneak into a stranger’s house late at night, with intentions that seem to have exploded into our minds from some distant elsewhere—the brain might be choosing where each foot lands, but the way in which it does so is always constrained by the shape of our legs
  • The way in which the brain approaches the task of walking is already coded by the physical layout of the body—and as such, wouldn’t it make sense to think of the body as being part of our decision-making apparatus? The mind is not simply the brain, as a generation of biological reductionists, clearing out the old wreckage of what had once been the soul, once insisted. It’s not a kind of software being run on the logical-processing unit of the brain. It’s bigger, and richer, and grosser, in every sense. It has joints and sinews. The rarefied rational mind sweats and shits; this body, this mound of eventually rotting flesh, is really you.
  • That’s embodied cognition.
  • Extended cognition is stranger.
  • The mind, they argue, has no reason to stop at the edges of the body, hemmed in by skin, flapping open and closed with mouths and anuses.
  • When we jot something down—a shopping list, maybe—on a piece of paper, aren’t we in effect remembering it outside our heads? Most of all, isn’t language itself something that’s always external to the individual mind?
  • Language sits hazy in the world, a symbolic and intersubjective ether, but at the same time it forms the substance of our thought and the structure of our understanding. Isn’t language thinking for us?
  • Writing, for Plato, is a pharmakon, a “remedy” for forgetfulness, but if taken in too strong a dose it becomes a poison: A person no longer remembers things for themselves; it’s the text that remembers, with an unholy autonomy. The same criticisms are now commonly made of smartphones. Not much changes.
caelengrubb

How Cognitive Bias Affects Your Business - 0 views

  • Human beings often act in irrational and unexpected ways when it comes to business decisions, money, and finance.
  • Behavioral finance tries to explain the difference between what economic theory predicts people will do and what they actually do in the heat of the moment. 
  • There are two main types of biases that people commit causing them to deviate from rational decision-making: cognitive and emotional.
  • Cognitive errors result from incomplete information or the inability to analyze the information that is available. These cognitive errors can be classified as either belief perseverance or processing errors
  • Processing errors occur when an individual fails to manage and organize information properly, which can be due in part to the mental effort required to compute and analyze data.
  • Conservatism bias, where people emphasize original, pre-existing information over new data.
  • Base rate neglect is the opposite effect, whereby people put too little emphasis on the original information. (A worked Bayes’ rule example follows these annotations.)
  • Confirmation bias, where people seek information that affirms existing beliefs while discounting or discarding information that might contradict them.
  • Anchoring and Adjustment happens when somebody fixates on a target number, such as the result of a calculation or valuation.
  • Hindsight bias occurs when people perceive actual outcomes as reasonable and expected, but only after the fact.
  • Sample size neglect is an error made when people infer too much from a too-small sample size.
  • Mental accounting is when people earmark certain funds for certain goals and keep them separate. When this happens, the risk and reward of projects undertaken to achieve these goals are not considered as an overall portfolio and the effect of one on another is ignored.
  • Availability bias, or recency bias skews perceived future probabilities based on memorable past events
  • Framing bias is when a person will process the same information differently depending on how it is presented and received.
  • Cognitive errors in the way people process and analyze information can lead to irrational choices that negatively impact business or investing decisions.
  • These information-processing errors could have arisen to help primitive humans survive in a time before money or finance came into existence.
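The interplay of conservatism bias and base rate neglect can be made concrete with Bayes' theorem. The sketch below uses hypothetical numbers (not from the article) to show how ignoring the original, base-rate information inflates an estimate once striking new evidence arrives.

```python
# Minimal sketch of base rate neglect, with hypothetical numbers.
# Scenario: a screening test for a rare condition (1% base rate) comes
# back positive. How likely is it that the condition is really present?

base_rate = 0.01        # P(condition): the "original information"
sensitivity = 0.90      # P(positive test | condition)
false_positive = 0.10   # P(positive test | no condition)

# Correct Bayesian update: weigh the new evidence against the base rate.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

print(f"Bayesian posterior: {posterior:.1%}")  # roughly 8%
# Base rate neglect: reasoning only from the test's 90% sensitivity makes
# the condition feel near-certain, an overestimate of roughly tenfold.
```

Conservatism bias is the mirror image: clinging to the 1% prior even after strong evidence has arrived, instead of updating toward the roughly 8% figure.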
anniina03

The Human Brain Evolved When Carbon Dioxide Was Lower - The Atlantic - 0 views

  • Kris Karnauskas, a professor of ocean sciences at the University of Colorado, has started walking around campus with a pocket-size carbon-dioxide detector. He’s not doing it to measure the amount of carbon pollution in the atmosphere. He’s interested in the amount of CO₂ in each room.
  • The indoor concentration of carbon dioxide concerns him—and not only for the usual reason. Karnauskas is worried that indoor CO₂ levels are getting so high that they are starting to impair human cognition.
  • Carbon dioxide, the same odorless and invisible gas that causes global warming, may be making us dumber.
  • ...11 more annotations...
  • “This is a hidden impact of climate change … that could actually impact our ability to solve the problem itself,” he said.
  • The science is, at first glance, surprisingly fundamental. Researchers have long believed that carbon dioxide harms the brain at very high concentrations. Anyone who’s seen the film Apollo 13 (or knows the real-life story behind it) may remember a moment when the mission’s three astronauts watch a gauge monitoring their cabin start to report dangerous levels of a gas. That gauge was measuring carbon dioxide. As one of the film’s NASA engineers remarks, if CO₂ levels rise too high, “you get impaired judgement, blackouts, the beginning of brain asphyxia.”
  • The same general principle, he argues, could soon affect people here on Earth. Two centuries of rampant fossil-fuel use have already spiked the amount of CO₂ in the atmosphere from about 280 parts per million before the Industrial Revolution to about 410 parts per million today. For Earth as a whole, that pollution traps heat in the atmosphere and causes climate change. But more locally, it also sets a baseline for indoor levels of carbon dioxide: You cannot ventilate a room’s carbon-dioxide levels below the global average.
  • In fact, many rooms have a much higher CO₂ level than the atmosphere, since ventilation systems don’t work perfectly.
  • On top of that, some rooms—in places such as offices, hospitals, and schools—are filled with many breathing people, that is, many people who are themselves exhaling carbon dioxide.
  • As the amount of atmospheric CO₂ keeps rising, indoor CO₂ will climb as well.
  • In one 2016 study, Danish scientists cranked up indoor carbon-dioxide levels to 3,000 parts per million—more than seven times outdoor levels today—and found that their 25 subjects suffered no cognitive impairment or health issues. Only when scientists infused that same air with other trace chemicals and organic compounds emitted by the human body did the subjects begin to struggle, reporting “headache, fatigue, sleepiness, and difficulty in thinking clearly.” The subjects also took longer to solve basic math problems. The same lab, in another study, found that indoor concentrations of pure CO₂ could get to 5,000 parts per million and still cause little difficulty, at least for college students.
  • But other research is not as optimistic. When scientists at NASA’s Johnson Space Center tested the effects of CO₂ on about two dozen “astronaut-like subjects,” they found that their advanced decision-making skills declined with CO₂ at 1,200 parts per million. But cognitive skills did not seem to worsen as CO₂ climbed past that mark, and the intensity of the effect seemed to vary from person to person.
  • There’s evidence that carbon-dioxide levels may impair only the most complex and challenging human cognitive tasks. And we still don’t know why.
  • No one has looked at the effects of indoor CO₂ on children, the elderly, or people with health problems. Likewise, studies have so far exposed people to very high carbon levels for only a few hours, leaving open the question of what days-long exposure could do.
  • Modern humans, as a species, are only about 300,000 years old, and the ambient CO₂ that we encountered for most of our evolutionary life—from the first breath of infants to the last rattle of a dying elder—was much lower than the ambient CO₂ today. I asked Gall: Has anyone looked to see if human cognition improves under lower carbon-dioxide levels? If you tested someone in a room that had only 250 parts per million of carbon dioxide—a level much closer to that of Earth’s atmosphere three centuries or three millennia ago—would their performance on tests improve? In other words, is it possible that human cognitive ability has already declined?
jaxredd10

What Is Cognitive Bias? - 0 views

  • Because of this, subtle biases can creep in and influence the way you see and think about the world. The concept of cognitive bias was first introduced by researchers Amos Tversky and Daniel Kahneman in 1972. Since then, researchers have described a number of different types of biases that affect decision-making in a wide range of areas including social behavior, cognition, behavioral economics, education, management, healthcare, business, and finance.
  • People sometimes confuse cognitive biases with logical fallacies, but the two are not the same. A logical fallacy stems from an error in a logical argument, while a cognitive bias is rooted in thought processing errors often arising from problems with memory, attention, attribution, and other mental mistakes.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level. [A toy illustration of such a read/write/address machine follows this list.]
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer: ...in engineering?] Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology in organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. [Interviewer: Well, they constrain the biology, sure.] Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
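Chomsky's gloss on Gallistel, that a computational account of the brain must find units with "read," "write," and "address" properties, can be illustrated with a toy machine. The sketch below is not from the interview; it is a hypothetical, minimal Turing-style device that increments a binary number on a tape, just to make concrete what read/write/addressable memory means in its most stripped-down form.

```python
# A toy Turing-style machine: the minimal "read / write / address" unit
# Gallistel argues a computational theory of the brain must account for.
# It increments a binary number written on the tape (e.g. 1011 -> 1100).

def increment(tape):
    tape = list(tape)
    head = len(tape) - 1      # "address": the cell the machine is pointing at
    while head >= 0:
        symbol = tape[head]   # "read" the symbol stored at that address
        if symbol == "1":
            tape[head] = "0"  # "write" a 0 and carry to the next address
            head -= 1
        else:
            tape[head] = "1"  # "write" a 1; the carry stops, so halt
            return "".join(tape)
    return "1" + "".join(tape)  # carried off the left edge: extend the tape

print(increment("1011"))  # -> 1100
print(increment("111"))   # -> 1000
```

The point of the toy is Gallistel's contrast: adjustable synaptic strengths resemble tunable weights, not anything that obviously plays the roles of reading, writing, and addressing stored symbols.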
Javier E

Kids' Cognition Is Changing-Education Will Have to Change With It - Megan Garber - Tech... - 1 views

  • Elon University and the Pew Internet and American Life Project released a report about the cognitive future of the millennial generation. Based on surveys with more than 1,000 thought leaders -- among them danah boyd, Clay Shirky, David Weinberger, and Alexandra Samuel -- the survey asked thinkers to consider how the Internet and its environment are changing, for better or worse, kids' cognitive capabilities.
  • The survey found, overall, what many others already have: that neuroplasticity is, indeed, a thing; that multitasking is, indeed, the new norm; that hyperconnectivity may be leading to a lack of patience and concentration; and that an "always on" ethos may be encouraging a culture of expectation and instant gratification.
  • also found, however, another matter of general consensus among the experts they surveyed: that our education systems will need to be updated, drastically, to suit the new realities of the intellectual environment. "There is a palpable concern among these experts," Rainie puts it, "that new social and economic divisions will emerge as those who are motivated and well-schooled reap rewards that are not matched by those who fail to master new media and tech literacies."
  • ...2 more annotations...
  • It also offers its experts' predictions about what the most-desired life skills (for young people, but ostensibly for everyone else, too) will be in the year 2020. Among the skills they highlighted: public problem-solving through cooperative work -- crowdsourcing and the like; the ability to search effectively for information online; the ability to distinguish the quality and veracity of online discoveries; the ability to synthesize, or combine facts and details from different sources into coherent narratives; the ability to concentrate; and the ability to distinguish between the signal and the noise as the information we're exposed to gets bigger, and broader, and more plentiful.
  • All these skills can be taught. The question is whether kids will learn them in school, or outside of it.
sissij

Scientists Figure Out When Different Cognitive Abilities Peak Throughout Life | Big Think - 0 views

  • Such skills come from accumulated knowledge which benefits from a lifetime of experience. 
  • Vocabulary, in fact, peaked even later, in the late 60s to early 70s. So now you know why grandpa is so good at crosswords.
  • And here’s a win for the 40+ folks - the below representation of a test of 10,000 visitors to TestMyBrain.org shows that older subjects did better than the young on the vocabulary test.
  • ...4 more annotations...
  • The under-30 group did much better on memory-related tasks, however.
  • Is there one age when all of your mental powers are at their maximum? The researchers don’t think so.  
  • In general, the researchers found 24 to be a key age, after which player abilities slowly declined, losing about 15% of processing speed every 15 years (a rough compounding illustration follows this item).
  • Older players did perform better in some aspects, making up for the slower brain processing by using simpler strategies and being more efficient. They were, in other words, wiser.
  •  
    It is really surprising to me that cognitive abilities are so directly related to age. But it is understandable, since there also seems to be a gulf between seniors and teenagers. There is always something we are especially good at at a certain age. I think this aligns with the logic of evolution: society consists of people of different ages, so they cooperate well and reach the maximum benefit by working together. Society is really diverse, and having people of different ages on the same team lets them cover for one another's cognitive disadvantages. --Sissi (4/4/2017)
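As a back-of-the-envelope illustration of the "about 15% of the speed every 15 years" figure quoted above, the sketch below compounds that decline from the peak at age 24. The compounding assumption is mine, not the researchers', so treat the numbers as rough.

```python
# Rough illustration of "losing about 15% of the speed every 15 years",
# assuming (my assumption, not the study's) that the decline compounds
# once per 15-year period after the peak at age 24.

PEAK_AGE = 24
RETENTION_PER_PERIOD = 0.85   # keep 85% of processing speed each 15 years

for age in (24, 39, 54, 69, 84):
    periods = (age - PEAK_AGE) / 15
    speed = RETENTION_PER_PERIOD ** periods
    print(f"age {age}: about {speed:.0%} of peak processing speed")
# age 39: ~85%, age 54: ~72%, age 69: ~61%, age 84: ~52%
```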