
Instructional & Media Services at Dickinson College: Group items tagged neuroscience


Ed Webb

Mind - Research Upends Traditional Thinking on Study Habits - NYTimes.com - 1 views

  • instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing. “We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”
  • The brain makes subtle associations between what it is studying and the background sensations it has at the time, the authors say, regardless of whether those perceptions are conscious. It colors the terms of the Versailles Treaty with the wasted fluorescent glow of the dorm study room, say; or the elements of the Marshall Plan with the jade-curtain shade of the willow tree in the backyard. Forcing the brain to make multiple associations with the same material may, in effect, give that information more neural scaffolding.
  • Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn — it holds its new load for a while, then most everything falls out. “With many students, it’s not like they can’t remember the material” when they move to a more advanced class, said Henry L. Roediger III, a psychologist at Washington University in St. Louis. “It’s like they’ve never seen it before.”
  • An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.
  • “The idea is that forgetting is the friend of learning,” said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”
  • cognitive scientists see testing itself — or practice tests and quizzes — as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.
  • “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.
  • “Testing has such bad connotation; people think of standardized testing or teaching to the test,” Dr. Roediger said. “Maybe we need to call it something else, but this is one of the most powerful learning tools we have.”
  • The harder it is to remember something, the harder it is to later forget. Researchers call this effect “desirable difficulty.”
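The spacing and retrieval-practice findings highlighted above map naturally onto a simple scheduling rule: quiz yourself, and let the outcome of each self-test set the date of the next review. The sketch below is a minimal Leitner-style scheduler in Python; it is illustrative only and not drawn from the article, and the box count, interval lengths, and card fields are assumptions chosen for clarity.

```python
# Illustrative only: a minimal Leitner-style spacing scheduler.
# Box numbers and review intervals are assumptions, not values from the article.
from datetime import date, timedelta

# Each box maps to a review interval in days; a correct recall promotes the
# item to a longer interval, a failed recall demotes it back to daily review.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}


class Card:
    def __init__(self, prompt: str, answer: str):
        self.prompt = prompt
        self.answer = answer
        self.box = 1
        self.due = date.today()

    def review(self, recalled_correctly: bool) -> None:
        # Retrieval practice doubles as the scheduling signal: the outcome of
        # each self-test decides when the item is seen next.
        self.box = min(self.box + 1, 5) if recalled_correctly else 1
        self.due = date.today() + timedelta(days=INTERVALS[self.box])


def due_today(cards: list[Card]) -> list[Card]:
    """Return the cards whose spaced interval has elapsed."""
    return [c for c in cards if c.due <= date.today()]


# Usage: quiz only what is due, record the outcome, and let the intervals grow.
deck = [Card("Marshall Plan: year announced?", "1947")]
for card in due_today(deck):
    card.review(recalled_correctly=True)
```

The design choice mirrors the quotes above: forgetting is allowed to happen between reviews, and the test itself, not rereading, is what moves an item to a longer interval.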
Ed Webb

Top News - What educators can learn from brain research - 0 views

  • neuroplasticity, meaning that the brain can continue to learn new concepts at any age, and that every student can be taught in many different ways. In a sense, the brain can be rewired.
  • the best research is tied to classroom practice.
  • "Education is an applied field, like engineering," said Atherton. "If there's no connection to practice, then that research is best left to basic researchers in the cognitive neurosciences."
Ed Webb

The powerful and mysterious brain circuitry that makes us love Google, Twitter, and tex... - 0 views

  • For humans, this desire to search is not just about fulfilling our physical needs. Panksepp says that humans can get just as excited about abstract rewards as tangible ones. He says that when we get thrilled about the world of ideas, about making intellectual connections, about divining meaning, it is the seeking circuits that are firing.
  • Our internal sense of time is believed to be controlled by the dopamine system. People with hyperactivity disorder have a shortage of dopamine in their brains, which a recent study suggests may be at the root of the problem. For them, even small stretches of time seem to drag.
  • When we get the object of our desire (be it a Twinkie or a sexual partner), we engage in consummatory acts that Panksepp says reduce arousal in the brain and temporarily, at least, inhibit our urge to seek.
  • But our brains are designed to more easily be stimulated than satisfied. "The brain seems to be more stingy with mechanisms for pleasure than for desire," Berridge has said. This makes evolutionary sense. Creatures that lack motivation, that find it easy to slip into oblivious rapture, are likely to lead short (if happy) lives. So nature imbued us with an unquenchable drive to discover, to explore. Stanford University neuroscientist Brian Knutson has been putting people in MRI scanners and looking inside their brains as they play an investing game. He has consistently found that the pictures inside our skulls show that the possibility of a payoff is much more stimulating than actually getting one.
  • all our electronic communication devices—e-mail, Facebook feeds, texts, Twitter—are feeding the same drive as our searches. Since we're restless, easily bored creatures, our gadgets give us in abundance qualities the seeking/wanting system finds particularly exciting. Novelty is one. Panksepp says the dopamine system is activated by finding something unexpected or by the anticipation of something new. If the rewards come unpredictably—as e-mail, texts, updates do—we get even more carried away. No wonder we call it a "CrackBerry."
  • If humans are seeking machines, we've now created the perfect machines to allow us to seek endlessly. This perhaps should make us cautious. In Animals in Translation, Temple Grandin writes of driving two indoor cats crazy by flicking a laser pointer around the room. They wouldn't stop stalking and pouncing on this ungraspable dot of light—their dopamine system pumping. She writes that no wild cat would indulge in such useless behavior: "A cat wants to catch the mouse, not chase it in circles forever." She says "mindless chasing" makes an animal less likely to meet its real needs "because it short-circuits intelligent stalking behavior." For those of us chasing after flickering bits of information, it's a salutary warning.
Ed Webb

The Myth Of AI | Edge.org - 0 views

  • The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
  • Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
  • there's no way to tell where the border is between measurement and manipulation in these systems
  • It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
  • What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri
  • If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects.
  • In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
  • To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things.