
TOK Friends: Group items tagged "infant"


Javier E

Our Biased Brains - NYTimes.com

  • The human brain seems to be wired so that it categorizes people by race in the first one-fifth of a second after seeing a face
  • Racial bias also begins astonishingly early: Even infants often show a preference for their own racial group. In one study, 3-month-old white infants were shown photos of faces of white adults and black adults; they preferred the faces of whites. For 3-month-old black infants living in Africa, it was the reverse.
  • in evolutionary times we became hard-wired to make instantaneous judgments about whether someone is in our “in group” or not — because that could be lifesaving. A child who didn’t prefer his or her own group might have been at risk of being clubbed to death.
  • I encourage you to test yourself at implicit.harvard.edu. It’s sobering to discover that whatever you believe intellectually, you’re biased about race, gender, age or disability.
  • unconscious racial bias turns up in children as soon as they have the verbal skills to be tested for it, at about age 4. The degree of unconscious bias then seems pretty constant: In tests, this unconscious bias turns out to be roughly the same for a 4- or 6-year-old as for a senior citizen who grew up in more racially oppressive times.
  • Many of these experiments on in-group bias have been conducted around the world, and almost every ethnic group shows a bias favoring its own. One exception: African-Americans.
  • in contrast to other groups, African-Americans do not have an unconscious bias toward their own. From young children to adults, they are essentially neutral and favor neither whites nor blacks.
  • even if we humans have evolved to have a penchant for racial preferences from a very young age, this is not destiny. We can resist the legacy that evolution has bequeathed us.
  • “We wouldn’t have survived if our ancestors hadn’t developed bodies that store sugar and fat,” Banaji says. “What made them survive is what kills us.” Yet we fight the battle of the bulge and sometimes win — and, likewise, we can resist a predisposition for bias against other groups.
  • Deep friendships, especially romantic relationships with someone of another race, also seem to mute bias
jlessner

Our Biased Brains - NYTimes.com

  • To better understand the roots of racial division in America, think about this: The human brain seems to be wired so that it categorizes people by race in the first one-fifth of a second after seeing a face. Brain scans show that even when people are told to sort people by gender, the brain still groups people by race.
  • “It’s a feature of evolution,” says Mahzarin Banaji, a Harvard psychology professor who co-developed tests of unconscious biases. These suggest that people turn out to have subterranean racial and gender biases that they are unaware of and even disapprove of.
  • What’s particularly dispiriting is that this unconscious bias among whites toward blacks seems just as great among preschoolers as among senior citizens.
Javier E

Hearing Bilingual - How Babies Tell Languages Apart - NYTimes.com

  • In one recent study, Dr. Werker and her collaborators showed that babies born to bilingual mothers not only prefer both of those languages over others — but are also able to register that the two languages are different. In addition to this ability to use rhythmic sound to discriminate between languages, Dr. Werker has studied other strategies that infants use as they grow, showing how their brains use different kinds of perception to learn languages, and also to keep them separate.
  • Over the past decade, Ellen Bialystok, a distinguished research professor of psychology at York University in Toronto, has shown that bilingual children develop crucial skills in addition to their double vocabularies, learning different ways to solve logic problems or to handle multitasking, skills that are often considered part of the brain’s so-called executive function. These higher-level cognitive abilities are localized to the frontal and prefrontal cortex in the brain. “Overwhelmingly, children who are bilingual from early on have precocious development of executive function,” Dr. Bialystok said. Dr. Kuhl calls bilingual babies “more cognitively flexible” than monolingual infants.
  • I had no idea that language could play such a huge role in the development of an infant! This makes me wonder what other external social factors might have similar effects, like music or visual perception.
knudsenlu

Study: Does Adult Neurogenesis Exist in Humans? - The Atlantic

  • In 1928, Santiago Ramón y Cajal, the father of modern neuroscience, proclaimed that the brains of adult humans never make new neurons. “Once development was ended,” he wrote, “the founts of growth and regeneration ... dried up irrevocably. In the adult centers the nerve paths are something fixed, ended and immutable. Everything must die, nothing may be regenerated.”
  • For decades, scientists believed that neurogenesis—the creation of new neurons—whirs along nicely in the brains of embryos and infants, but grinds to a halt by adulthood. But from the 1980s onward, this dogma started to falter. Researchers showed that neurogenesis does occur in the brains of various adult animals, and eventually found signs of newly formed neurons in the adult human brain.
  • Finally, Gage and others say that several other lines of evidence suggest that adult neurogenesis in humans is real. For example, in 1998, he and his colleagues studied the brains of five cancer patients who had been injected with BrdU—a chemical that gets incorporated into newly created DNA. They found traces of this substance in the hippocampus, which they took as a sign that the cells there are dividing and creating new neurons.
  • Greg Sutherland from the University of Sydney agrees. In 2016, he came to similar conclusions as Alvarez-Buylla’s team, using similar methods. “Depending on your inherent biases, two scientists can look at sparse events in the adult brain and come to different conclusions,” he says. “But when faced with the stark difference between infant and adult human brains, we can only conclude that [neurogenesis] is a vestigial process in the latter.”
  • Alvarez-Buylla agrees that there’s still plenty of work to do. Even if neurogenesis is a fiction in adult humans, it’s real in infants, and in other animals. If we really don’t make any new neurons as adults, how do we learn new things? And is there any way of restoring that lost ability to create new neurons in cases of stroke, Alzheimer’s, or other degenerative diseases? “Neurogenesis is precisely what we want to induce in cases of brain damage,” Alvarez-Buylla says. “If it isn’t there to begin with, how might you induce it?”
manhefnawi

Infants Can See Image Differences That Adults Cannot, Study Finds | Mental Floss

  • Babies may be able to see image details that are invisible or imperceptible to adults. According to a recent study [PDF] from Japanese scientists Jiale Yang, So Kanazawa, Masami K. Yamaguchi, and Isamu Motoyoshi, three- and four-month-old infants may view certain images differently because they lack perceptual constancy. That means they can see small image differences that are invisible to adults because of changes in lighting conditions.
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel, co-written with Adam King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
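The "read/write/address" picture Chomsky invokes can be made concrete with a toy Turing-machine sketch. Everything below (the tape contents, the bit-flipping transition table) is our own invented illustration, not anything from the interview; it only shows the three minimal operations he names: reading a symbol, writing a replacement, and updating the head's address.

```python
# Minimal Turing machine: a tape, a head position (the "address"),
# and transition rules that "read" a symbol and "write" a replacement.
# Toy example: flip every bit on the tape, then halt on the first blank.

def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)                    # read
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol                           # write
        head += {"R": 1, "L": -1}[move]                   # address update
    return "".join(tape[i] for i in sorted(tape))

# Transition table: in state "start", flip 0 <-> 1 and move right;
# on a blank cell, halt.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("0110", rules))  # -> 1001_
```

However simple, the machine is a complete computational unit in Gallistel's sense, which is exactly what Chomsky says you will not find by looking only at synaptic strengthening.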
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability in engineering, but away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
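The forecasting caricature in the quote, priors plus repeated correction with no theory of weather, can be sketched as a toy transition-counting predictor. The weather record below is made up for illustration; the point is only that the model approximates the data without any account of why weather changes.

```python
# Toy "statistical" forecaster in the spirit of the quote:
# predict tomorrow's weather from the empirical frequency of
# today -> tomorrow transitions in a past record, nothing more.
from collections import Counter

history = ["sun", "sun", "rain", "rain", "sun", "sun",
           "sun", "rain", "rain", "rain", "sun"]

# Count transitions: given today's weather, what usually follows?
transitions = Counter(zip(history, history[1:]))

def predict(today):
    # Pick the most frequent successor of `today` in the record.
    candidates = {nxt: n for (cur, nxt), n in transitions.items()
                  if cur == today}
    return max(candidates, key=candidates.get)

print(predict("sun"))   # most common follower of a sunny day
```

On this record the model simply learns persistence: a sunny day is most often followed by sun and a rainy day by rain. That may be a decent approximation, but it is exactly the approximation-without-understanding that Chomsky contrasts with what meteorologists actually do.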
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works; I don't think it's the way any biologist thinks it is. There are all kinds of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology of organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there; they work all the way through. Interviewer: Well, they constrain the biology, sure. Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
tornekm

Of bairns and brains | The Economist

  • especially given the steep price at which it was bought. Humans’ outsized, power-hungry brains suck up around a quarter of their body’s oxygen supplies.
  • It was simply humanity's good fortune that those big sexy brains turned out to be useful for lots of other things, from thinking up agriculture to building internal-combustion engines. Another idea is that human cleverness arose out of the mental demands of living in groups whose members are sometimes allies and sometimes rivals.
  • human infants take a year to learn even to walk, and need constant supervision for many years afterwards. That helplessness is thought to be one consequence of intelligence—or, at least, of brain size.
  • ever-more incompetent infants, requiring ever-brighter parents to ensure they survive childhood.
  • The self-reinforcing nature of the process would explain why intelligence is so strikingly overdeveloped in humans compared even with chimpanzees.
  • developed first in primates, a newish branch of the mammals, a group that is itself relatively young.
  • found that babies born to mothers with higher IQs had a better chance of surviving than those born to low-IQ women, which bolsters the idea that looking after human babies is indeed cognitively taxing.
  • none of this adds up to definitive proof.
  • Any such feedback loop would be a slow process (at least as reckoned by the humans themselves), most of which would have taken place in the distant past.
sissij

Pregnancy Changes the Brain in Ways That May Help Mothering - The New York Times

  • Pregnancy changes a woman’s brain, altering the size and structure of areas involved in perceiving the feelings and perspectives of others, according to a first-of-its-kind study published Monday.
  • The results were remarkable: loss of gray matter in several brain areas involved in a process called social cognition or “theory of mind,” the ability to register and consider how other people perceive things.
  • A third possibility is that the loss is “part of the brain’s program for dealing with the future,” he said. Hormone surges in pregnancy might cause “pruning or cellular adaptation that is helpful,” he said, streamlining certain brain areas to be more efficient at mothering skills “from nurturing to extra vigilance to teaching.”
  • Pregnancy, she explained, may help a woman’s brain specialize in “a mother’s ability to recognize the needs of her infant, to recognize social threats or to promote mother-infant bonding.”
  • Researchers wanted to see if the women’s brain changes affected anything related to mothering. They found that relevant brain regions in mothers showed more activity when women looked at photos of their own babies than with photos of other children.
  • During another period of roiling hormonal change — adolescence — gray matter decreases in several brain regions that are believed to provide fine-tuning for the social, emotional and cognitive territory of being a teenager.
  • evidence against the common myth of ‘mommy brain.’
  • Our brain changes during our lifetime to better fit our needs. The decrease in gray matter during pregnancy enables mothers to learn mothering skills faster and be more focused on their own child. This aligns with the logic of evolution, because newborns need a lot of attention and care from their mother. I am also very surprised to see that something similar happens to teenagers: the decrease in gray matter gives them the plasticity to absorb new knowledge. It's amazing that our brain actually adjusts itself in different stages of life. --Sissi (12/20/2016)
dicindioha

What’s at Stake in a Health Bill That Slashes the Safety Net - The New York Times

  • It is startling to realize just how much the social safety net expanded during Barack Obama’s presidency. In 2016, means-tested entitlements like Medicaid and food stamps absorbed 3.8 percent of the nation’s gross domestic product, almost a full percentage point more than in 2008
  • Public social spending writ large — including health care, pensions, unemployment insurance, poverty alleviation and the like — reached 19.3 percent of G.D.P.
  • Government in the United States still spends less than most of its peers across the industrialized world to support the general welfare of its citizens.
  • Last week, President Trump’s sketch of a budget underscored how little interest he has in the nation’s social insurance programs — proposing to shift $54 billion next year to the military
  • Republicans in the House plan to vote this week to undo the Affordable Care Act. That law was Mr. Obama’s singular contribution toward an American welfare state, the biggest expansion of the nation’s safety net in half a century.
  • Welfare reform did hurt many poor people by converting antipoverty funds into block grants to the states. But it was accompanied by a big increase in the earned-income tax credit, the nation’s most effective antipoverty tool today.
  • “No other Congress or administration has ever put forward a plan with the intention of having fewer people covered.”
  • Who knows where this retrenchment takes the country? Maybe attaching a work requirement to Medicaid, as conservatives propose, will prod the poor to get a job. Or perhaps it will just cut more people from Medicaid’s rolls. Further up the income ladder, losing a job will become more costly when it means losing health insurance, too.
  • Millions of Americans — poor ones, mainly — will use much less health care. They will make fewer outpatient visits, have fewer mammograms and cholesterol checks.
  • In any event, public health insurance will take a big hit.
  • Under the House Republican plan, 24 million more Americans will lack health insurance by 2026, according to the nonpartisan Congressional Budget Office.
  • Might depression and mental health problems destabilize families, feeding down into the health, education and well-being of the next generation?
  • Yet it is worth remembering that among advanced nations, the United States is a laggard in life expectancy and has one of the highest infant mortality rates.
  • If American history provides any sort of guidance, it is that continuing to shred the social safety net will definitely make things worse.
  • Directing spending away from the American people and their access to health care is a definite possibility under Trump. It will be interesting to see the effect this has on the health care market. This article suggests it will probably hurt many poor people and worsen their health.
Javier E

The psychology of hate: How we deny human beings their humanity - Salon.com

  • The cross-cultural psychologist Gustav Jahoda catalogued how Europeans since the time of the ancient Greeks viewed those living in relatively primitive cultures as lacking a mind in one of two ways: either lacking self-control and emotions, like an animal, or lacking reason and intellect, like a child. So foreign in appearance, language, and manner, “they” did not simply become other people, they became lesser people. More specifically, they were seen as having lesser minds, diminished capacities to either reason or feel.
  • In the early 1990s, California State Police commonly referred to crimes involving young black men as NHI—No Humans Involved.
  • The essence of dehumanization is, therefore, failing to recognize the fully human mind of another person. Those who fight against dehumanization typically deal with extreme cases that can make it seem like a relatively rare phenomenon. It is not. Subtle versions are all around us.
  • Even doctors—those whose business is to treat others humanely— can remain disengaged from the minds of their patients, particularly when those patients are easily seen as different from the doctors themselves. Until the early 1990s, for instance, it was routine practice for infants to undergo surgery without anesthesia. Why? Because at the time, doctors did not believe that infants were able to experience pain, a fundamental capacity of the human mind.
  • Your sixth sense functions only when you engage it. When you do not, you may fail to recognize a fully human mind that is right before your eyes.
  • Although it is indeed true that the ability to read the minds of others exists along a spectrum with stable individual differences, I believe that the more useful knowledge comes from understanding the moment-to-moment, situational influences that can lead even the most social person—yes, even you and me—to treat others as mindless animals or objects.
  • None of the cases described in this chapter so far involve people with chronic and stable personality disorders. Instead, they all come from predictable contexts in which people’s sixth sense remained disengaged for one fundamental reason: distance.
  • This three-part chain—sharing attention, imitating action, and imitation creating experience—shows one way in which your sixth sense works through your physical senses. More important, it also shows how your sixth sense could remain disengaged, leaving you disconnected from the minds of others. Close your eyes, look away, plug your ears, stand too far away to see or hear, or simply focus your attention elsewhere, and your sixth sense may not be triggered.
  • Distance keeps your sixth sense disengaged for at least two reasons. First, your ability to understand the minds of others can be triggered by your physical senses. When you’re too far away in physical space, those triggers do not get pulled. Second, your ability to understand the minds of others is also engaged by your cognitive inferences. Too far away in psychological space—too different, too foreign, too other—and those triggers, again, do not get pulled
  • For psychologists, distance is not just physical space. It is also psychological space, the degree to which you feel closely connected to someone else. You are describing psychological distance when you say that you feel “distant” from your spouse, “out of touch” with your kids’ lives, “worlds apart” from a neighbor’s politics, or “separated” from your employees. You don’t mean that you are physically distant from other people; you mean that you feel psychologically distant from them in some way
  • Interviews with U.S. soldiers in World War II found that only 15 to 20 percent were able to discharge their weapons at the enemy in close firefights. Even when they did shoot, soldiers found it hard to hit their human targets. In the U.S. Civil War, muskets were capable of hitting a pie plate at 70 yards and soldiers could typically reload anywhere from 4 to 5 times per minute. Theoretically, a regiment of 200 soldiers firing at a wall of enemy soldiers 100 feet wide should be able to kill 120 on the first volley. And yet the kill rate during the Civil War was closer to 1 to 2 men per minute, with the average distance of engagement being only 30 yards.
  • Modern armies now know that they have to overcome these empathic urges, so soldiers undergo relentless training that desensitizes them to close combat, so that they can do their jobs. Modern technology also allows armies to kill more easily because it enables killing at such a great physical distance. Much of the killing by U.S. soldiers now comes through the hands of drone pilots watching a screen from a trailer in Nevada, with their sixth sense almost completely disengaged.
  • Other people obviously do not need to be standing right in front of you for you to imagine what they are thinking or feeling or planning. You can simply close your eyes and imagine it.
  • The MPFC and a handful of other brain regions undergird the inferential component of your sixth sense. When this network of brain regions is engaged, you are thinking about others’ minds. Failing to engage this region when thinking about other people is then a solid indication that you’re overlooking their minds.
  • Research confirms that the MPFC is engaged more when you’re thinking about yourself, your close friends and family, and others who have beliefs similar to your own. It is activated when you care enough about others to care what they are thinking, and not when you are indifferent to others
  • As people become more and more different from us, or more distant from our immediate social networks, they become less and less likely to engage our MPFC. When we don’t engage this region, others appear relatively mindless, something less than fully human.
  • The mistake that can arise when you fail to engage with the minds of others is that you may come to think of them as relatively mindless. That is, you may come to think that these others have less going on between their ears than, say, you do.
  • It’s not only free will that other minds might seem to lack. This lesser minds effect has many manifestations, including what appears to be a universal tendency to assume that others’ minds are less sophisticated and more superficial than one’s own. Members of distant out-groups, ranging from terrorists to poor hurricane victims to political opponents, are also rated as less able to experience complicated emotions, such as shame, pride, embarrassment, and guilt than close members of one’s own group.
anonymous

Why Childhood Memories Disappear - The Atlantic

  • Most adults can’t remember much of what happened to them before age 3 or so. What happens to the memories formed in those earliest years?
  • When I talk about my first memory, what I really mean is my first retained memory. Carole Peterson, a professor of psychology at Memorial University of Newfoundland, studies children’s memories. Her research has found that small children can recall events from when they were as young as 20 months old, but these memories typically fade by the time they’re between 4 and 7 years old.
  • “People used to think that the reason that we didn’t have early memories was because children didn’t have a memory system or they were unable to remember things, but it turns out that’s not the case,” Peterson said. “Children have a very good memory system. But whether or not something hangs around long-term depends on several other factors.”
  • ...8 more annotations...
  • Two of the most important factors, Peterson explained, are whether the memory “has emotion infused in it,” and whether the memory is coherent: Does the story our memory tells us actually hang together and make sense when we recall it later?
  • A professor at the University of North Carolina-Chapel Hill, Reznick explained that shortly after birth, infants can start forming impressions of faces and react when they see those faces again; this is recognition memory. The ability to understand words and learn language relies on working memory, which kicks in at around six months old. More sophisticated forms of memory develop in the child’s second year, as semantic memory allows children to retain understanding of concepts and general knowledge about the world.
  • I formed earlier memories using more rudimentary, pre-verbal means, and that made those memories unreachable as the acquisition of language reshaped how my mind works, as it does for everyone.
  • False memories do exist, but their construction appears to begin much later in life
  • A study by Peterson presented young children with fictitious events to see if they could be misled into remembering these non-existent events, yet the children almost universally avoided the bait. As for why older children and adults begin to fill in gaps in their memories with invented details, she pointed out that memory is a fundamentally constructive activity: We use it to build understanding of the world, and that sometimes requires more complete narratives than our memories can recall by themselves.
  • as people get older, it becomes easier to conflate actual memories with other stimuli.
  • He explained that recognition memory is our most pervasive system, and that associations with my hometown I formed as an infant could well have endured more than 20 years later, however vaguely.
  • give me enough time, and I’m sure that detail will be added to my memory. It’s just too perfect a story.
  •  
    Alasdair Wilkins talks about his childhood memories, false memory, and how children remember moments of their early years.
knudsenlu

How badly do you want something? Babies can tell | MIT News - 0 views

  • Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.
  • This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.
  • “This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,”
  • ...4 more annotations...
  • “This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words
  • “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people's actions,” she says. 
  • In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable the expected reward is to her when she succeeds.
  • “We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.
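The “basic math” of rational choice that the researchers attribute to infants can be sketched as a tiny inverse-inference rule: a rational agent acts only when a goal’s value exceeds the effort cost, so an observer can bound how much the agent values the goal from the costs it did and did not pay. This is a minimal illustrative sketch, not the study’s actual model; the function name and numbers are hypothetical.

```python
def infer_value_bound(attempts):
    """Infer bounds on how much an agent values a goal, assuming the
    agent acts only when value > cost (naive utility calculus).

    attempts: list of (cost, acted) observations.
    Returns (lower_bound, upper_bound) on the goal's value.
    """
    lower, upper = 0.0, float("inf")
    for cost, acted in attempts:
        if acted:
            lower = max(lower, cost)   # agent paid this cost, so value > cost
        else:
            upper = min(upper, cost)   # agent refused this cost, so value < cost
    return lower, upper

# An agent clears a low ramp (cost 1) and a medium one (cost 3),
# but gives up at a high wall (cost 8): value lies between 3 and 8.
print(infer_value_bound([(1.0, True), (3.0, True), (8.0, False)]))  # → (3.0, 8.0)
```

On this view, each barrier the agent is willing to climb tightens the observer’s lower bound on the reward’s value, which matches the intuition the paper describes: the harder someone works for something, the more they must value it.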
Emily Horwitz

'I Wanna Eat You Up!' Why We Go Crazy for Cute | LiveScience - 1 views

  • NEW ORLEANS — Ever reacted to the sight of a cute puppy or darling infant by squealing, "I want to eat you up!"? Or maybe you can't help but want to pinch your grandbaby's adorable cheeks. You're not alone. New research finds that seemingly strange aggressive responses to cuteness are actually the norm.
  • In the study, presented Friday (Jan. 18) here at the annual meeting of the Society for Personality and Social Psychology, researchers found that people watching a slideshow of adorable pictures popped more bubbles on a sheet of bubble wrap than did people viewing funny or neutral pictures.
  • The participants rated the pictures on cuteness and funniness, as well as on how much they felt the pictures made them lose control — for example, if they agreed with statements such as "I can't handle it!" The participants also rated the extent to which the pictures made them "want to say something like 'grr!'" and "want to squeeze something." Sure enough, the cuter the animal, the less control and more desire to "grrr" and squeeze something that people felt. Cute animals produced this feeling significantly more strongly than did funny animals. The funny critters in turn produced the feeling more strongly than did neutral animals, perhaps because the funny animals were perceived as cute, too, Dyer said.
  • ...4 more annotations...
  • Dyer got interested in what she and her colleagues call "cute aggression" after chatting with a fellow student about how adorable Internet pictures often produce the desire to squish or squeeze the cute critter. All the existing research on cuteness suggests the reaction should be the opposite, she told LiveScience. People should want to treat a cute thing with gentleness and care.
  • That's exactly what happened. The people watching a cute slideshow popped 120 bubbles, on average, compared with 80 for the funny slideshow and just a hair over 100 for the neutral one.
  • It's possible that seeing a wide-eyed baby or roly-poly pup triggers our drive to care for that creature, Dyer said. But since the animal is just a picture, and since even in real life we might not be able to care for the creature as much as we want, this urge may be frustrated, she said. That frustration could lead to aggression.
  • Or the reason might not be specific to cuteness, Dyer said. Many overwhelmingly positive emotions look negative, as when Miss America sobs while receiving her crown. Such high levels of positive emotion may overwhelm people.
Javier E

The Science Behind 'They All Look Alike to Me' - The New York Times - 1 views

  • the “other-race effect,” a cognitive phenomenon that makes it harder for people of one race to readily recognize or identify individuals of another.
  • It is not bias or bigotry, the researchers say, that makes it difficult for people to distinguish between people of another race. It is the lack of early and meaningful exposure to other groups that often makes it easier for us to quickly identify and remember people of our own ethnicity or race while we often struggle to do the same for others.
  • The racially loaded phrase “they all look alike to me” turns out to be largely scientifically accurate.
  • ...4 more annotations...
  • Psychologists say that starting when they are infants and young children, people become attuned to the key facial features and characteristics of those around them. Whites often become accustomed to focusing on differences in hair color and eye color. African-Americans grow more familiar with subtle shadings of skin color.
  • Minorities tend to be better at cross-race identification than whites, Professor Meissner said, in part because they have more extensive and meaningful exposure to whites than the other way around.
  • Professor Malpass, who has trained police officers and border patrol agents, urges law enforcement agencies to make sure black or Hispanic officers are involved when creating lineups of black and Hispanic suspects. And he warns of the dangers of relying on cross-racial identifications from eyewitnesses, who can be fallible.
  • “I don’t think we should be offended,” he said. “This is really an ability issue.”
Dunia Tonob

Circumcision in Germany: Incisive arguments | The Economist - 0 views

  • The court decided that, although the doctor was innocent, circumcising an infant for non-medical reasons violates Germany's constitutional protection of every person's bodily integrity—and should thus be a crime.
  • As it happens, the movement against circumcision is spreading, from California, where “intactivists” have tried to ban it, to Israel, where some parents now opt for brit shalom (the “covenant of peace”) as a ritual alternative
  • Dieter Graumann, president of Germany’s Central Council of Jews, asserted that the verdict, if it is upheld, would make Jewish life in Germany, just as it is blooming again, practically impossible.
  • ...3 more annotations...
  • On one hand, Germany’s constitution, written after the second world war to prevent any repeat of Nazi horrors, assures the rights of parents and of religious freedom. But on the other hand, it guarantees the physical inviolability of every person.
  • The court felt that the boy's right to inviolability trumped the religious and parental rights of his mother and father.
  • it is wrong to make an exception for involuntary male circumcision when female circumcision is seen as barbaric. And he maintains that arguments which lean on tradition alone are inadequate, for the same reason that tradition cannot, nowadays, justify polygamy or footbinding.
oliviaodon

How Do We Learn Languages? | Brain Blogger - 0 views

  • The use of sound is one of the most common methods of communication both in the animal kingdom and between humans.
  • human speech is a very complex process and therefore needs intensive postnatal learning to be used effectively. Furthermore, to be effective the learning phase should happen very early in life, and it assumes normally functioning hearing and brain systems.
  • Nowadays, scientists and doctors are discovering the important brain zones involved in the processing of language information. Those zones are assembled into a number of language networks, including the Broca and Wernicke areas, the middle temporal and inferior parietal regions, and the angular gyrus. The variety of such brain zones clearly shows that language processing is a very complex task. On the functional level, decoding a language begins in the ear, where the incoming sounds are summed in the auditory nerve as an electrical signal and delivered to the auditory cortex, where neurons extract auditory objects from that signal.
  • ...6 more annotations...
  • The effectiveness of this process is so great that the human brain is able to accurately identify words and whole phrases from a noisy background. This power of analysis brings to mind the great similarity between the brain and powerful supercomputers.
  • Functional imaging of the brain revealed that activated brain parts are different between native and non-native speakers. The superior temporal gyrus is an important brain region involved in language learning. For a native speaker this part is responsible for automated processing of lexical retrieval and the build of phrase structure. In native speakers this zone is much more activated than in non-native ones.
  • infants begin their lives with a very flexible brain that allows them to acquire virtually any language they are exposed to. Moreover, they can learn a language’s words almost equally well by listening or by visual coding. This brain plasticity is what drives children’s capability of “cracking the speech code” of a language. With time, this ability decreases dramatically, and adults find it harder to acquire a new language.
  • clearly demonstrated that there are anatomical brain differences between fast and slow learners of foreign languages. By analyzing a group of people having a homogenous language background, scientists found that differences in specific brain regions can predict the capacity of a person to learn a second language.
  • Until the last decade few studies compared the language acquisition in adults and children. Thanks to modern imaging and electroencephalography we are now able to address this question.
  • Language acquisition is a long-term process by which information is stored in the brain unconsciously, making it available for oral and written use. In contrast, language learning is a conscious process of knowledge acquisition that requires supervision and control by the learner.
  •  
    Another cool article about how the brain works and language (inductive reasoning). 
nataliedepaulo1

Are Babies Wired to Understand the World From Birth? - The Atlantic - 0 views

  • Can Babies Understand the World From Birth?
  • Over the past couple of decades, researchers like Saxe have used functional MRI to study brain activity in adults and children. But fMRI, like a 19th-century daguerreotype, requires subjects to lie perfectly still lest the image become hopelessly blurred. Babies are jittering bundles of motion when not asleep, and they can’t be cajoled or bribed into stillness. The few fMRI studies done on babies to date mostly focused on playing sounds to them while they slept.
  • But Saxe wanted to understand how babies see the world when they’re awake; she wanted to image Arthur’s brain as he looked at video clips, the kind of thing that adult research subjects do easily.
  • ...2 more annotations...
  • Do babies’ brains work like miniature versions of adult brains, or are they completely different?
  • We’re only beginning to understand how babies’ brains are organized; it will require many more hours collecting data from a larger number of babies to have a fuller picture of how their brains work. But Saxe and her colleagues have shown that such a study can be done, which opens up new areas of investigation. “It is possible to get good fMRI data in awake babies—if you are extremely patient,” Saxe said. “Now let’s try to figure out what we can learn from it.”
sissij

How humans bond: The brain chemistry revealed: New research finds that dopamine is invo... - 0 views

  • Northeastern University psychology professor Lisa Feldman Barrett found, for the first time, that the neurotransmitter dopamine is involved in human bonding, bringing the brain's reward system into our understanding of how we form human attachments.
  • To conduct the study, the researchers turned to a novel technology: a machine capable of performing two types of brain scans simultaneously -- functional magnetic resonance imaging, or fMRI, and positron emission tomography, or PET.
  • Barrett's team focused on the neurotransmitter dopamine, a chemical that acts in various brain systems to spark the motivation necessary to work for a reward.
  • ...1 more annotation...
  • The mothers who were more synchronous with their own infants showed both an increased dopamine response when viewing their child at play and stronger connectivity within the medial amygdala network.
  •  
    I think this article is very interesting because it is trying to explain human social behaviors through chemistry and biology. Although there are a lot of factors in human science, by converting it to a natural science problem, we can make the question easier to answer. It also shows the interaction between different subfields of science. --Sissi (2/20/2017)
Javier E

Why It's OK to Let Apps Make You a Better Person - Evan Selinger - Technology - The Atl... - 0 views

  • one theme emerges from the media coverage of people's relationships with our current set of technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification--nudging, the quantified self, and gamification--and good old-fashioned financial incentive manipulation, are tackling weakness of will. They're harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.
  • the trend still has multiple interesting dimensions
  • Individuals are turning ever more aspects of their lives into managerial problems that require technological solutions. We have access to an ever-increasing array of free and inexpensive technologies that harness incredible computational power that effectively allows us to self-police behavior everywhere we go. As pervasiveness expands, so does trust.
  • ...20 more annotations...
  • Some embrace networked, data-driven lives and are comfortable volunteering embarrassing, real-time information about what we're doing, whom we're doing it with, and how we feel about our monitored activities.
  • Put it all together and we can see that our conception of what it means to be human has become "design space." We're now Humanity 2.0, primed for optimization through commercial upgrades. And today's apps are more harbinger than endpoint.
  • philosophers have had much to say about the enticing and seemingly inevitable dispersion of technological mental prosthetics that promise to substitute for or enhance some of our motivational powers.
  • beyond the practical issues lie a constellation of central ethical concerns.
  • they should cause us to pause as we think about a possible future that significantly increases the scale and effectiveness of willpower-enhancing apps. Let's call this hypothetical future Digital Willpower World and characterize the ethical traps we're about to discuss as potential general pitfalls
  • it is antithetical to the ideal of "resolute choice." Some may find the norm overly perfectionist, Spartan, or puritanical. However, it is not uncommon for folks to defend the idea that mature adults should strive to develop internal willpower strong enough to avoid external temptations, whatever they are, and wherever they are encountered.
  • In part, resolute choosing is prized out of concern for consistency, as some worry that lapse of willpower in any context indicates a generally weak character.
  • Fragmented selves behave one way while under the influence of digital willpower, but another when making decisions without such assistance. In these instances, inconsistent preferences are exhibited and we risk underestimating the extent of our technological dependency.
  • It simply means that when it comes to digital willpower, we should be on our guard to avoid confusing situational with integrated behaviors.
  • the problem of inauthenticity, a staple of the neuroethics debates, might arise. People might start asking themselves: Has the problem of fragmentation gone away only because devices are choreographing our behavior so powerfully that we are no longer in touch with our so-called real selves -- the selves who used to exist before Digital Willpower World was formed?
  • Infantilized subjects are morally lazy, quick to have others take responsibility for their welfare. They do not view the capacity to assume personal responsibility for selecting means and ends as a fundamental life goal that validates the effort required to remain committed to the ongoing project of maintaining willpower and self-control.
  • Michael Sandel's Atlantic essay, "The Case Against Perfection." He notes that technological enhancement can diminish people's sense of achievement when their accomplishments become attributable to human-technology systems and not an individual's use of human agency.
  • Borgmann worries that this environment, which habituates us to be on auto-pilot and delegate deliberation, threatens to harm the powers of reason, the most central component of willpower (according to the rationalist tradition).
  • In several books, including Technology and the Character of Contemporary Life, he expresses concern about technologies that seem to enhance willpower but only do so through distraction. Borgmann's paradigmatic example of the non-distracted, focally centered person is a serious runner. This person finds the practice of running maximally fulfilling, replete with the rewarding "flow" that can only come when mind/body and means/ends are unified, while skill gets pushed to the limit.
  • Perhaps the very conception of a resolute self was flawed. What if, as psychologist Roy Baumeister suggests, willpower is more a "staple of folk psychology" than a real way of thinking about our brain processes?
  • novel approaches suggest the will is a flexible mesh of different capacities and cognitive mechanisms that can expand and contract, depending on the agent's particular setting and needs. Contrary to the traditional view that identifies the unified and cognitively transparent self as the source of willed actions, the new picture embraces a rather diffused, extended, and opaque self who is often guided by irrational trains of thought. What actually keeps the self and its will together are the given boundaries offered by biology, a coherent self narrative created by shared memories and experiences, and society. If this view of the will as an expanding and contracting system with porous and dynamic boundaries is correct, then it might seem that the new motivating technologies and devices can only increase our reach and further empower our willing selves.
  • "It's a mistake to think of the will as some interior faculty that belongs to an individual--the thing that pushes the motor control processes that cause my action," Gallagher says. "Rather, the will is both embodied and embedded: social and physical environment enhance or impoverish our ability to decide and carry out our intentions; often our intentions themselves are shaped by social and physical aspects of the environment."
  • It makes perfect sense to think of the will as something that can be supported or assisted by technology. Technologies, like environments and institutions can facilitate action or block it. Imagine I have the inclination to go to a concert. If I can get my ticket by pressing some buttons on my iPhone, I find myself going to the concert. If I have to fill out an application form and carry it to a location several miles away and wait in line to pick up my ticket, then forget it.
  • Perhaps the best way forward is to put a digital spin on the Socratic dictum of knowing myself and submit to the new freedom: the freedom of consuming digital willpower to guide me past the sirens.
grayton downing

Newborn Immune Systems Suppressed | The Scientist Magazine® - 0 views

  • After the sterile world of the womb, at birth babies are thrust into an environment full of bacteria, viruses, and parasites. They are very vulnerable to these infections for their first months of life—a trait that has long been blamed on their immature immune systems.
  • “This more intricate regulation of immune responses makes more sense than immaturity,” said Sing Sing Way, who led the study, “because it allows a protective response to be mounted if needed.” This may explain why newborn immune responses, though generally weak, also vary wildly between different babies and across different studies.
  • The researchers injected 6-day-old mice with splenocytes (a type of white blood cell) from adults. Newborn mice are normally 1,000 times more susceptible to bacterial infections than adults, but despite receiving working immune cells, they became no less vulnerable.
  • ...3 more annotations...
  • Way added that there might be other reasons why newborns should carry immunosuppressive cells
  • Baby formulas contain small amounts of arginine. Sidney Morris, a biochemist from the University of Pittsburgh, said that it may be important to avoid fortifying them with extra arginine, lest it swamps the arginase activity of CD71+ cells, releases the immune system, and causes problems for the developing infants’ guts.
  • “Whether the precise mechanism of immunosuppression is the same or different in each of these circumstances remains to be determined,” he said.