
TOK Friends: Group items tagged visualization


caelengrubb

Why Is Memory So Good and So Bad? - Scientific American - 0 views

  • Memories of visual images (e.g., dinner plates) are stored in what is called visual memory.
  • Our minds use visual memory to perform even the simplest of computations, from remembering the face of someone we’ve just met to remembering what time it was when we last checked. Without visual memory, we wouldn’t be able to store—and later retrieve—anything we see.
  • Just as a computer’s memory capacity constrains its abilities, visual memory capacity has been correlated with a number of higher cognitive abilities, including academic success, fluid intelligence (the ability to solve novel problems), and general comprehension.
  • For many reasons, then, it would be very useful to understand how visual memory facilitates these mental operations, as well as constrains our ability to perform them
  • Visual working memory is where visual images are temporarily stored while your mind works away at other tasks—like a whiteboard on which things are briefly written and then wiped away. We rely on visual working memory when remembering things over brief intervals, such as when copying lecture notes to a notebook.
  • UC Davis psychologists Weiwei Zhang and Steven Luck have shed some light on this problem. In their experiment, participants briefly saw three colored squares flashed on a computer screen, and were asked to remember the colors of each square. Then, after 1, 4 or 10 seconds the squares re-appeared, except this time their colors were missing, so that all that was visible were black squares outlined in white.
  • The participants had a simple task: to recall the color of one particular square, not knowing in advance which square they would be asked to recall. The psychologists assumed that measuring how visual working memory behaves over increasing demands (i.e., the increasing durations of 1, 4 or 10 seconds) would reveal something about how the system works.
  • If short-term visual memories fade away—if they are gradually wiped away from the whiteboard—then after longer intervals participants’ accuracy in remembering the colors should still be high, deviating only slightly from the square’s original color. But if these memories are wiped out all at once—if the whiteboard is left untouched until, all at once, it is scrubbed clean—then participants should make either very precise responses (when the memories are still untouched) or, once the interval grows too long, completely random guesses. (A toy simulation contrasting these two possibilities appears after this list.)
  • Which is exactly what happened: Zhang & Luck found that participants were either very precise, or they completely guessed; that is, they either remembered the square’s color with great accuracy, or forgot it completely
  • But this, it turns out, is not true of all memories
  • In a recent paper, researchers at MIT and Harvard found that, if a memory can survive long enough to make it into what is called “visual long-term memory,” then it doesn’t have to be wiped out at all.
  • Talia Konkle and colleagues showed participants a stream of three thousand images of different scenes, such as ocean waves, golf courses or amusement parks. Then, participants were shown two hundred pairs of images—an old one they had seen in the first task, and a completely new one—and asked to indicate which was the old one.
  • Participants were remarkably accurate at spotting differences between the new and old images—96 percent
  • In a recent review, researchers at Harvard and MIT argue that the critical factor is how meaningful the remembered images are—whether the content of the images you see connects to pre-existing knowledge about them
  • This prior knowledge changes how these images are processed, allowing thousands of them to be transferred from the whiteboard of short-term memory into the bank vault of long-term memory, where they are stored with remarkable detail.
  • Together, these experiments suggest why memories are not eliminated equally— indeed, some don’t seem to be eliminated at all. This might also explain why we’re so hopeless at remembering some things, and yet so awesome at remembering others.
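
A rough way to see the difference between the two accounts in the Zhang & Luck annotations above is to simulate them. The sketch below is a toy model in Python, not the authors’ actual analysis; the parameters (base_sd, fade_rate, loss_rate) are invented for illustration. It contrasts “gradual fading,” where every remembered color just gets blurrier with delay, against “sudden death,” where a memory is either intact or replaced by a pure guess.

```python
import math
import random

def gradual_fade(true_color, delay, base_sd=10.0, fade_rate=8.0):
    """Every memory blurs: response error (in degrees) grows smoothly with delay."""
    sd = base_sd + fade_rate * delay
    return (true_color + random.gauss(0, sd)) % 360

def sudden_death(true_color, delay, precise_sd=10.0, loss_rate=0.15):
    """Memories are either intact (precise) or gone (a pure random guess)."""
    p_lost = 1 - math.exp(-loss_rate * delay)      # chance the memory has vanished
    if random.random() < p_lost:
        return random.uniform(0, 360)              # pure guess
    return (true_color + random.gauss(0, precise_sd)) % 360

def error(response, true_color):
    """Smallest angular distance between the response and the true color."""
    d = abs(response - true_color) % 360
    return min(d, 360 - d)

random.seed(1)
true_color = 120.0
for delay in (1, 4, 10):
    fade = [error(gradual_fade(true_color, delay), true_color) for _ in range(5000)]
    death = [error(sudden_death(true_color, delay), true_color) for _ in range(5000)]
    precise_share = sum(e < 30 for e in death) / len(death)
    print(f"delay {delay:2d}s | fading mean error {sum(fade)/len(fade):5.1f} | "
          f"sudden-death mean error {sum(death)/len(death):5.1f} | "
          f"precise reports under sudden death {precise_share:.2f}")
```

Under the first model, errors simply spread out as the delay grows; under the second, responses split into a precise cluster plus near-random guesses, which is the mixture pattern the study reports.
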
ilanaprincilus06

Animals could help reveal why humans fall for optical illusions | Laura and Jennifer Ke... - 0 views

  • they remind us of the discrepancy between perception and reality. But our knowledge of such illusions has been largely limited to studying humans.
  • Understanding whether these illusions arise in different brains could help us understand how evolution shapes visual perception.
  • illusions not only reveal how visual scenes are interpreted and mentally reconstructed, they also highlight constraints in our perception.
  • Some of the most common types of illusory percepts are those that affect the impression of size, length or distance.
  • As visual processing needs to be both rapid and generally accurate, the brain constantly uses shortcuts and makes assumptions about the world that can, in some cases, be misleading.
  • These illusions are the result of visual processes shaped by evolution. Using that process may have been once beneficial (or still is), but it also allows our brains to be tricked.
  • if animals are tricked by the same illusions, then perhaps revealing why a different evolutionary path leads to the same visual process might help us understand why evolution favours this development.
  • Great bowerbirds could be the ultimate illusory artists. For example, the males construct forced-perspective illusions to make themselves more attractive to mates.
  • When a male has two smaller-clawed males on either side of him, he is more attractive to a female (because he looks relatively larger) than if he were surrounded by two larger-clawed males.
  • This effect is known as the Ebbinghaus illusion (see image), and suggests that males may easily manipulate their perceived attractiveness by surrounding themselves with less attractive rivals.
  • Deceptions of the senses are the truths of perception.
  • Visual illusions (and those in the non-visual senses) are a crucial tool for determining what perceptual assumptions animals make about the world around them.
julia rhodes

University of California - Science Today | How the brain functions during visual searches - 0 views

  • task that took you a couple seconds to complete is a task that computers -- despite decades of advancement and intricate calculations -- still can't perform as efficiently as humans: the visual search.
  • "Our daily lives are comprised of little searches that are constantly changing, depending on what we need to do," said Miguel Eckstein, UC Santa Barbara professor of psychological and brain sciences. "So the idea is, where does that take place in the brain?"
  • What Eckstein and co-authors wanted to determine was how we decide whether the target object we are looking for is actually in the scene, how difficult the search is, and how we know we've found what we wanted.
  • In the parts of the human brain used earlier in the processing stream, regions stimulated by specific features like color, motion, and direction are a major part of the search. However, in the dorsal frontoparietal network, activity is not confined to any specific features of the object.
  • By watching the intraparietal sulcus (IPS), located within the dorsal frontoparietal network, the researchers were able to note not only whether their subjects found the objects, but also how confident they were in their finds.
  • The IPS region would be stimulated even if the object was not there, said Eckstein, but the pattern of activity would not be the same as it would had the object actually existed in the scene.
  • "As you go further up in processing, the neurons are less interested in a specific feature, but they're more interested in whatever is behaviorally relevant to you at the moment,"
  • Thus, a search for an apple, for instance, would make red, green, and rounded shapes relevant.
  • "For visual search to be efficient, we want those visual features related to what we are looking for to elicit strong responses in our brain and not others that are not related to our search, and are distracting,"
  • "What we're trying to really understand is what other mechanisms or strategies the brain has to make searches efficient and easy,
anonymous

Paying attention as the eyes move -- ScienceDaily - 0 views

  • The visual system optimally maintains attention on relevant objects even as eye movements are made, shows a study by the German Primate Center.
  • Their study shows that the rhesus macaque's brain quickly and efficiently shifts attention with each eye-movement in a well-synchronized manner. Since humans and monkeys exhibit very similar eye-movements and visual function, these findings are likely to generalize to the human brain. These results may help understand disorders like schizophrenia, visual neglect and other attention deficit disorders.
  • Since different locations on the retina stimulate different visual neurons in the brain, this means that one set of visual neurons responds to the child before the eye-movement, while a different second set of neurons responds to the child after the eye-movement. Thus, to optimally maintain attention on the child, the brain has to enhance the responses of the first set of neurons right until the eye-movement begins and then switch to enhance the responses of the second set of neurons right around when the eye-movement ends.
  • To measure the activity of single neurons, the scientists inserted electrodes thinner than a human hair into the monkey's brain and recorded the neurons' electrical activity. Because the brain is not pain-sensitive, this insertion of electrodes is painless for the animal. By recording from single neurons in an area of the monkey's brain known as area MT, the scientists were able to show that attentional enhancement indeed switches from the first set of neurons to the second set of neurons in a fast and saccade-synchronized manner. Attentional enhancement in the brain is therefore well-timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.
kushnerha

BBC - Future - Will emoji become a new language? - 2 views

  • Emoji are now used in around half of all sentences on sites like Instagram, and Facebook looks set to introduce them alongside the famous “like” button as a way of expressing your reaction to a post.
  • If you were to believe the headlines, this is just the tipping point: some outlets have claimed that emoji are an emerging language that could soon compete with English in global usage. To many, this would be an exciting evolution of the way we communicate; to others, it is linguistic Armageddon.
  • Do emoji show the same characteristics as other communicative systems and actual languages? And what do they help us to express that words alone can’t say? When emoji appear with text, they often supplement or enhance the writing. This is similar to gestures that appear along with speech. Over the past three decades, research has shown that our hands provide important information that often transcends and clarifies the message in speech. Emoji serve this function too – for instance, adding a kissy or winking face can disambiguate whether a statement is flirtatiously teasing or just plain mean.
  • This is a key point about language use: rarely is natural language ever limited to speech alone. When we are speaking, we constantly use gestures to illustrate what we mean. For this reason, linguists say that language is “multi-modal”. Writing takes away that extra non-verbal information, but emoji may allow us to re-incorporate it into our text.
  • Emoji are not always used as embellishments, however – sometimes, strings of the characters can themselves convey meaning in a longer sequence on their own. But to constitute their own language, they would need a key component: grammar.
  • A grammatical system is a set of constraints that governs how the meaning of an utterance is packaged in a coherent way. Natural language grammars have certain traits that distinguish them. For one, they have individual units that play different roles in the sequence – like nouns and verbs in a sentence. Also, grammar is different from meaning
  • When emoji are isolated, they are primarily governed by simple rules related to meaning alone, without these more complex rules. For instance, according to research by Tyler Schnoebelen, people often create strings of emoji that share a common meaning
  • This sequence has little internal structure; even when it is rearranged, it still conveys the same message. These images are connected solely by their broader meaning. We might consider them to be a visual list: “here are all things related to celebrations and birthdays.” Lists are certainly a conventionalised way of communicating, but they don’t have grammar the way that sentences do.
  • What if the order did matter though? What if they conveyed a temporal sequence of events? Consider this example, which means something like “a woman had a party where they drank, and then opened presents and then had cake”:
  • In all cases, the doer of the action (the agent) precedes the action. In fact, this pattern is commonly found in both full languages and simple communication systems. For example, the majority of the world’s languages place the subject before the verb of a sentence.
  • These rules may seem like the seeds of grammar, but psycholinguist Susan Goldin-Meadow and colleagues have found this order appears in many other systems that would not be considered a language. For example, this order appears when people arrange pictures to describe events from an animated cartoon, or when speaking adults communicate using only gestures. It also appears in the gesture systems created by deaf children who cannot hear spoken languages and are not exposed to sign languages.
  • describes the children as lacking exposure to a language and thus inventing their own manual systems to communicate, called “homesigns”. These systems are limited in the size of their vocabularies and the types of sequences they can create. For this reason, the agent-act order seems to stem not from a grammar, but from basic heuristics – practical workarounds – based on meaning alone. Emoji seem to tap into this same system.
  • Nevertheless, some may argue that despite emoji’s current simplicity, this may be the groundwork for emerging complexity – that although emoji do not constitute a language at the present time, they could develop into one over time.
  • Could an emerging “emoji visual language” be developing in a similar way, with actual grammatical structure? To answer that question, you need to consider the intrinsic constraints on the technology itself. Emoji are created by typing into a computer like text. But, unlike text, most emoji are provided as whole units, except for the limited set of emoticons which convert to emoji, like :) or ;). When writing text, we create the units (words) from building blocks (letters), rather than searching through a list of every whole word in the language.
  • emoji force us to convey information in a linear unit-unit string, which limits how complex expressions can be made. These constraints may mean that they will never be able to achieve even the most basic complexity that we can create with normal and natural drawings.
  • What’s more, these limits also prevent users from creating novel signs – a requisite for all languages, especially emerging ones. Users have no control over the development of the vocabulary. As the “vocab list” for emoji grows, it will become increasingly unwieldy: using them will require a conscious search process through an external list, not an easy generation from our own mental vocabulary, like the way we naturally speak or draw. This is a key point – it means that emoji lack the flexibility needed to create a new language.
  • we already have very robust visual languages, as can be seen in comics and graphic novels. As I argue in my book, The Visual Language of Comics, the drawings found in comics use a systematic visual vocabulary (such as stink lines to represent smell, or stars to represent dizziness). Importantly, the available vocabulary is not constrained by technology and has developed naturally over time, like spoken and written languages.
  • grammar of sequential images is more of a narrative structure – not of nouns and verbs. Yet, these sequences use principles of combination like any other grammar, including roles played by images, groupings of images, and hierarchic embedding.
  • measured participants’ brainwaves while they viewed sequences one image at a time where a disruption appeared either within the groupings of panels or at the natural break between groupings. The particular brainwave responses that we observed were similar to those that experimenters find when violating the syntax of sentences. That is, the brain responds the same way to violations of “grammar”, whether in sentences or sequential narrative images.
  • I would hypothesise that emoji can use a basic narrative structure to organise short stories (likely made up of agent-action sequences), but I highly doubt that they would be able to create embedded clauses like these. I would also doubt that you would see the same kinds of brain responses that we saw with the comic strip sequences.
margogramiak

What happens when your brain can't tell which way is up or down? Study shows that how w... - 0 views

  • What feels like up may actually be some other direction depending on how our brains process our orientation, according to psychology researchers at York University's Faculty of Health.
    • margogramiak
       
      Excited to get an explanation for this statement
  • an individual's interpretation of the direction of gravity can be altered by how their brain responds to visual information.
    • margogramiak
       
      So, that means that everyone's brain responds differently to visual information. What factor plays into this?
  • found, using virtual reality, that people differ in how much they are influenced by their visual environment
    • margogramiak
       
      oh, interesting.
  • "These findings may also help us to better understand and predict why astronauts may misestimate how far they have moved in a given situation, especially in the microgravity of space," says Harris.
    • margogramiak
       
      I didn't know this was an issue in the first place....
  • Not only did the VRI-vulnerable group rely more on vision to tell them how they were oriented, but they also found visual motion to be more powerful in evoking the sensation of moving through the scene,
    • margogramiak
       
      wow, that's interesting.
  • This decision is helped by the fact that we normally move at right angles to gravity.
    • margogramiak
       
      One of Newton's laws!!!!
  • But if a person's perception of gravity is altered by the visual environment or by removing gravity, this distinction becomes much harder."
    • margogramiak
       
      That makes sense.
  • The findings could also be helpful for virtual reality game designers, as certain virtual environments may lead to differences in how players interpret and move through the game.
    • margogramiak
       
      It's hard to imagine virtual reality getting more realistic than it is now.
mcginnisca

Why Do We Teach Girls That It's Cute to Be Scared? - The New York Times - 0 views

  • Why Do We Teach Girls That It’s Cute to Be Scared?
  • Apparently, fear is expected of women.
  • parents cautioned their daughters about the dangers of the fire pole significantly more than they did their sons and were much more likely to assist them
  • But both moms and dads directed their sons to face their fears, with instruction on how to complete the task on their own.
  • Misadventures meant that I should try again. With each triumph over fear and physical adversity, I gained confidence.
  • She said that her own mother had been very fearful, gasping at anything remotely rough-and-tumble. “I had been so discouraged from having adventures, and I wanted you to have a more exciting childhood,”
  • Parents are “four times more likely to tell girls than boys to be more careful”
  • “Girls may be less likely than boys to try challenging physical activities, which are important for developing new skills.” This study points to an uncomfortable truth: We think our daughters are more fragile, both physically and emotionally, than our sons.
  • Nobody is saying that injuries are good, or that girls should be reckless. But risk taking is important
  • It follows that by cautioning girls away from these experiences, we are not protecting them. We are failing to prepare them for life.
  • When a girl learns that the chance of skinning her knee is an acceptable reason not to attempt the fire pole, she learns to avoid activities outside her comfort zone.
  • Fear becomes a go-to feminine trait, something girls are expected to feel and express at will.
  • By the time a girl reaches her tweens no one bats an eye when she screams at the sight of an insect.
  • When girls become women, this fear manifests as deference and timid decision making
  • We must chuck the insidious language of fear (Be careful! That’s too scary!) and instead use the same terms we offer boys, of bravery and resilience. We need to embolden girls to master skills that at first appear difficult, even dangerous. And it’s not cute when a 10-year-old girl screeches, “I’m too scared.”
  • I was often scared. Of course I was. So were the men.
Javier E

The Positive Power of Negative Thinking - NYTimes.com - 0 views

  • visualizing a successful outcome, under certain conditions, can make people less likely to achieve it. She rendered her experimental participants dehydrated, then asked some of them to picture a refreshing glass of water. The water-visualizers experienced a marked decline in energy levels, compared with those participants who engaged in negative or neutral fantasies. Imagining their goal seemed to deprive the water-visualizers of their get-up-and-go, as if they’d already achieved their objective.
  • take affirmations, those cheery slogans intended to lift the user’s mood by repeating them: “I am a lovable person!” “My life is filled with joy!” Psychologists at the University of Waterloo concluded that such statements make people with low self-esteem feel worse
  • Ancient philosophers and spiritual teachers understood the need to balance the positive with the negative, optimism with pessimism, a striving for success and security with an openness to failure and uncertainty
  • Buddhist meditation, too, is arguably all about learning to resist the urge to think positively — to let emotions and sensations arise and pass, regardless of their content
  • Very brief training in meditation, according to a 2009 article in The Journal of Pain, brought significant reductions in pain
  • the relentless cheer of positive thinking begins to seem less like an expression of joy and more like a stressful effort to stamp out any trace of negativity.
grayton downing

Snakes on a Visual Plane | The Scientist Magazine® - 0 views

  • primates have a remarkable ability to detect snakes, even in a chaotic visual environment.
  • first evidence of snake-selective neurons in the primate brain that I’m aware of,
  • recorded pulvinar neuronal activity via electrodes implanted into the brains of two adult macaques—one male, one female—as they were shown images of monkey faces, monkey hands, geometric shapes, and snakes. The brains of both monkeys—which were raised at a national monkey farm in Amami Island, Japan, and had no known encounters with snakes before the experiment—showed preferential activity of neurons in the medial and dorsolateral pulvinar to images of snakes, as compared with the other stimuli. Further, snakes elicited the fastest and strongest responses from these neurons.
  • While the neurobiological evidence of pulvinar neuron responses to a potential predation threat is convincing, Dobson noted, more work is needed to support a role for snakes in primate evolution.
  • “fear module” in the primate brain—a construct that enables “quick responses to stimuli that signal danger, such as predators and threatening faces”
  • “Since they [the authors] haven’t—to my knowledge—tested the same stimuli on various other parts of the visual system, they don’t have evidence that these putatively selective cells are a specialization of the ‘fear module’ at all,”
jlessner

Why Facebook's News Experiment Matters to Readers - NYTimes.com - 0 views

  • Facebook’s new plan to host news publications’ stories directly is not only about page views, advertising revenue or the number of seconds it takes for an article to load. It is about who owns the relationship with readers.
  • It’s why Google, a search engine, started a social network and why Facebook, a social network, started a search engine. It’s why Amazon, a shopping site, made a phone and why Apple, a phone maker, got into shopping.
  • Facebook’s experiment, called instant articles, is small to start — just a few articles from nine media companies, including The New York Times. But it signals a major shift in the relationship between publications and their readers. If you want to read the news, Facebook is saying, come to Facebook, not to NBC News or The Atlantic or The Times — and when you come, don’t leave. (For now, these articles can be viewed on an iPhone running the Facebook app.)
  • The front page of a newspaper and the cover of a magazine lost their dominance long ago.
  • Facebook executives have insisted that they intend to exert no editorial control because they leave the makeup of the news feed to the algorithm. But an algorithm is not autonomous. It is written by humans and tweaked all the time.
  • “In digital, every story becomes unbundled from each other, so if you’re not thinking of each story as living on its own, it’s tying yourself back to an analog era,” Mr. Kim said.
  • But news reports, like albums before them, have not been created that way. One of the services that editors bring to readers has been to use their news judgment, considering a huge range of factors, when they decide how articles fit together and where they show up. The news judgment of The New York Times is distinct from that of The New York Post, and for generations readers appreciated that distinction.
  • That raises some journalistic questions. The news feed algorithm works, in part, by showing people more of what they have liked in the past. Some studies have suggested that means they might not see as wide a variety of news or points of view, though others, including one by Facebook researchers, have found they still do.
  • Tech companies, Facebook included, are notoriously fickle with their algorithms. Publications became so dependent on Facebook in the first place because of a change in its algorithm that sent more traffic their way. Later, another change demoted articles from sites that Facebook deemed to run click-bait headlines. Then last month, Facebook decided to prioritize some posts from friends over those from publications.
Javier E

The Science of Snobbery: How We're Duped Into Thinking Fancy Things Are Better - The At... - 0 views

  • Expert judges and amateurs alike claim to judge classical musicians based on sound. But Tsay’s research suggests that the original judges, despite their experience and expertise, judged the competition (which they heard and watched live) based on visual information, just as amateurs do.
  • just like with classical music, we do not appraise wine in the way that we expect. 
  • Priceonomics revisited this seemingly damning research: the lack of correlation between wine enjoyment and price in blind tastings, the oenology students tricked by red food dye into describing a white wine like a red, a distribution of medals at tastings equivalent to what one would expect from pure chance, the grand crus described like cheap wines and vice-versa when the bottles are switched.
  • Taste does not simply equal your taste buds. It draws on information from all our senses as well as context. As a result, food is susceptible to the same trickery as wine. Adding yellow food dye to vanilla pudding leads people to experience a lemony taste. Diners eating in the dark at a chic concept restaurant confuse veal for tuna. Branding, packaging, and price tags are equally important to enjoyment. Cheap fish is routinely passed off as its pricier cousins at seafood and sushi restaurants. 
  • Just like with wine and classical music, we often judge food based on very different criteria than what we claim. The result is that our perceptions are easily skewed in ways we don’t anticipate. 
  • What does it mean for wine that presentation so easily trumps the quality imbued by being grown on premium Napa land or years of fruitful aging? Is it comforting that the same phenomenon is found in food and classical music, or is it a strike against the authenticity of our enjoyment of them as well? How common must these manipulations be until we concede that the influence of the price tag of a bottle of wine or the visual appearance of a pianist is not a trick but actually part of the quality?
  • To answer these questions, we need to investigate the underlying mechanism that leads us to judge wine, food, and music by criteria other than what we claim to value. And that mechanism seems to be the quick, intuitive judgments our minds unconsciously make
  • this unknowability also makes it easy to be led astray when our intuition makes a mistake. We may often be able to count on the price tag or packaging of food and wine for accurate information about quality. But as we believe that we’re judging based on just the product, we fail to recognize when presentation manipulates our snap judgments.
  • Participants were just as effective when watching 6 second video clips and when comparing their ratings to ratings of teacher effectiveness as measured by actual student test performance. 
  • The power of intuitive first impressions has been demonstrated in a variety of other contexts. One experiment found that people predicted the outcome of political elections remarkably well based on silent 10 second video clips of debates - significantly outperforming political pundits and predictions made based on economic indicators.
  • In a real world case, a number of art experts successfully identified a 6th century Greek statue as a fraud. Although the statue had survived a 14 month investigation by a respected museum that included the probings of a geologist, they instantly recognized something was off. They just couldn’t explain how they knew.
  • Cases like this represent the canon behind the idea of the “adaptive unconscious,” a concept made famous by journalist Malcolm Gladwell in his book Blink. The basic idea is that we constantly, quickly, and unconsciously do the equivalent of judging a book by its cover. After all, a cover provides a lot of relevant information in a world in which we don’t have time to read every page.
  • Gladwell describes the adaptive unconscious as “a kind of giant computer that quickly and quietly processes a lot of the data we need in order to keep functioning as human beings.”
  • In a famous experiment, psychologist Nalini Ambady provided participants in an academic study with 30 second silent video clips of a college professor teaching a class and asked them to rate the effectiveness of the professor.
  • In follow up experiments, Chia-Jung Tsay found that those judging musicians’ auditions based on visual cues were not giving preference to attractive performers. Rather, they seemed to look for visual signs of relevant characteristics like passion, creativity, and uniqueness. Seeing signs of passion is valuable information. But in differentiating between elite performers, it gives an edge to someone who looks passionate over someone whose play is passionate
  • Outside of these more eccentric examples, it’s our reliance on quick judgments, and ignorance of their workings, that cause people to act on ugly, unconscious biases
  • It’s also why - from a business perspective - packaging and presentation is just as important as the good or service on offer. Why marketing is just as important as product. 
  • Gladwell ends Blink optimistically. By paying closer attention to our powers of rapid cognition, he argues, we can avoid its pitfalls and harness its powers. We can blindly audition musicians behind a screen, look at a piece of art devoid of other context, and pay particular attention to possible unconscious bias in our performance reports.
  • But Gladwell’s success in demonstrating how the many calculations our adaptive unconscious performs without our awareness undermines his hopeful message of consciously harnessing its power.
  • As a former world-class tennis player and coach of over 50 years, Braden is a perfect example of the ideas behind thin slicing. But if he can’t figure out what his unconscious is up to when he recognizes double faults, why should anyone else expect to be up to the task?
  • flawed judgment in fields like medicine and investing has more serious consequences. The fact that expertise is so tricky leads psychologist Daniel Kahneman to assert that most experts should seek the assistance of statistics and algorithms in making decisions.
  • In his book Thinking, Fast and Slow, he describes our two modes of thought: System 1, like the adaptive unconscious, is our “fast, instinctive, and emotional” intuition. System 2 is our “slower, more deliberative, and more logical” conscious thought. Kahneman believes that we often leave decisions up to System 1 and generally place far “too much confidence in human judgment” due to the pitfalls of our intuition described above.
  • Not every judgment will be made in a field that is stable and regular enough for an algorithm to help us make judgments or predictions. But in those cases, he notes, “Hundreds of studies have shown that wherever we have sufficient information to build a model, it will perform better than most people.” (A toy simulation of this point appears after this list.)
  • Experts can avoid the pitfalls of intuition more easily than laypeople. But they need help too, especially as our collective confidence in expertise leads us to overconfidence in their judgments. 
  • This article has referred to the influence of price tags and context on products and experiences like wine and classical music concerts as tricks that skew our perception. But maybe we should consider them a real, actual part of the quality.
  • Losing ourselves in a universe of relativism, however, will lead us to miss out on anything new or unique. Take the example of the song “Hey Ya!” by Outkast. When the music industry heard it, they felt sure it would be a hit. When it premiered on the radio, however, listeners changed the channel. The song sounded too dissimilar from songs people liked, so they responded negatively. 
  • It took time for people to get familiar with the song and realize that they enjoyed it. Eventually “Hey Ya!” became the hit of the summer.
  • Many boorish people talking about the ethereal qualities of great wine probably can't even identify cork taint because their impressions are dominated by the price tag and the wine label. But the classic defense of wine - that you need to study it to appreciate it - is also vindicated. The open question - which is both editorial and empiric - is what it means for the industry that constant vigilance and substantial study is needed to dependably appreciate wine for the product quality alone. But the question is relevant to the enjoyment of many other products and experiences that we enjoy in life.
  • Maybe the most important conclusion is to not only recognize the fallibility of our judgments and impressions, but to recognize when it matters, and when it doesn’t
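
As a concrete (and entirely invented) illustration of Kahneman’s point about models, the Python sketch below simulates a statistically regular environment in which the outcome really is a weighted sum of a few cues. None of the numbers come from the article or any study it cites; the functions and weights are assumptions made up for the example. It compares an equal-weight model applied consistently against a simulated judge who knows the right cues but applies them inconsistently.

```python
import random
import statistics

random.seed(0)

def true_outcome(cues):
    """The environment: the outcome really is a stable weighted sum of the cues."""
    w = [0.5, 0.3, 0.2]
    return sum(wi * c for wi, c in zip(w, cues)) + random.gauss(0, 0.5)

def simple_model(cues):
    """An 'improper' model: just average the cues with equal weights."""
    return sum(cues) / len(cues)

def noisy_expert(cues):
    """A judge who knows the right weights but applies them inconsistently."""
    w = [0.5, 0.3, 0.2]
    return sum((wi + random.gauss(0, 0.3)) * c for wi, c in zip(w, cues))

cases = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2000)]
truth = [true_outcome(c) for c in cases]

for name, judge in [("equal-weight model", simple_model), ("noisy expert", noisy_expert)]:
    errors = [abs(judge(c) - t) for c, t in zip(cases, truth)]
    print(f"{name:18}  mean absolute error: {statistics.mean(errors):.2f}")
```

The point is consistency rather than the particular weights: the model applies the same rule every time, while the simulated expert’s case-by-case noise inflates the error.
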
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (A toy version of this elevator model appears in the sketch after this list.)
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. (A brute-force sketch of this exhaustive-checking idea appears after this list.)
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
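
As a concrete illustration of the elevator example in the model-based-design annotations above, here is a minimal Python sketch in which the “model” is just a table of states and allowed transitions, and the running code only interprets that table. The state and event names are invented for illustration and are not taken from any real tool.

```python
# The "model": a table of states and allowed transitions for the elevator example.
ELEVATOR_MODEL = {
    # (current state,  event)     : next state
    ("door_open",   "close_door"): "door_closed",
    ("door_closed", "open_door"):  "door_open",
    ("door_closed", "start"):      "moving",
    ("moving",      "stop"):       "door_closed",
}

def step(state, event):
    """Apply an event; anything the model does not allow is simply rejected."""
    return ELEVATOR_MODEL.get((state, event), state)

state = "door_open"
for event in ["start", "close_door", "start", "open_door", "stop", "open_door"]:
    allowed = (state, event) in ELEVATOR_MODEL
    new_state = step(state, event)
    print(f"{state:12} --{event:>10}--> {new_state:12}" + ("" if allowed else " (rejected)"))
    state = new_state
```

Reading the table alone is enough to recover the two rules the article mentions: the only transition into "moving" starts from "door_closed", and the only way back to "door_open" is from "door_closed", i.e. a stopped elevator.
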
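The TLA+ annotations describe exhaustively verifying a design before any code is written. The sketch below is not TLA+; it is a brute-force Python stand-in for the same idea, enumerating every interleaving of a deliberately naive two-process “check-then-set” lock (an invented toy, not an example from the article) and searching for a reachable state that violates mutual exclusion.

```python
from collections import deque

def next_states(state):
    """All possible next states of a naive two-process "check-then-set" lock.
    state = (pc of process 0, pc of process 1, lock owner; 0 = free)."""
    pcs, lock = list(state[:2]), state[2]
    successors = []
    for i in (0, 1):
        pc = pcs[i]
        if pc == 0 and lock == 0:          # step 0: observe the lock as free
            new = pcs[:]; new[i] = 1
            successors.append((new[0], new[1], lock))
        elif pc == 1:                      # step 1: grab it (no re-check!)
            new = pcs[:]; new[i] = 2
            successors.append((new[0], new[1], i + 1))
        elif pc == 2:                      # step 2: leave the critical section
            new = pcs[:]; new[i] = 0
            successors.append((new[0], new[1], 0))
    return successors

def check(initial, invariant):
    """Breadth-first search over every reachable state, i.e. every interleaving."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state                   # counterexample found
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Invariant: the two processes are never both in the critical section (pc == 2).
bad = check((0, 0, 0), lambda s: not (s[0] == 2 and s[1] == 2))
print("invariant violated in state:", bad)
```

The search turns up the classic race: both processes observe the lock as free before either one grabs it, exactly the sort of “rare” interleaving the article says human intuition underestimates.
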
rachelramirez

Exercise May Aid Brain's 'Rewiring' - The New York Times - 0 views

  • Exercise May Aid Brain's 'Rewiring'
  • Moderate levels of exercise may increase the brain’s flexibility and improve learning
  • The visual cortex, the part of the brain that processes visual information, loses the ability to “rewire” itself with age,
  • ...3 more annotations...
  • Alessandro Sale asked 20 adults to watch a movie with one eye patched while relaxing in a chair. Later, the participants exercised on a stationary bike for 10-minute intervals while watching a movie.
  • When one eye is patched, the visual cortex compensates for the limited input by increasing its activity level
  • The differences in strength between the eyes were more pronounced after exercise
dicindioha

The Neuroscience of Illusion - Scientific American - 0 views

  • It is a fact of neuroscience that everything we experience is a figment of our imagination. Although our sensations feel accurate and truthful, they do not necessarily reproduce the physical reality of the outside world.
    • dicindioha
       
      I find it interesting that a branch of science that studies the brain says it is a fact that reality is not what we perceive. This is a science devoted to showing humans that the way we see the world comes from our own brains. Learning these things through neuroscience probably pushed other scientists to move biology and the other sciences even further forward, to try to gain a better look at the world. Perception seems to be yet another problem, on top of science never being able to arrive at absolute proof.
  • In other words, the real and the imagined share a physical source in the brain. So take a lesson from Socrates: “All I know is that I know nothing.”
    • dicindioha
       
      This is a humorous quote, and although maybe a bit extreme, it reflects something of the way I felt after our recent discussions about how what we know in science is just a theory that has not been disproved yet.
  • One of the most important tools used by neuroscientists to understand how the brain creates its sense of reality is the visual illusion. Historically, artists as well as researchers have used illusions to gain insights into the inner workings of the visual system.
    • dicindioha
       
      A way of studying the brain is tricking it.
  • ...1 more annotation...
  • Because of this disconnect between perception and reality, visual illusions demonstrate the ways in which the brain can fail to re-create the physical world.
    • dicindioha
       
      I like that the concepts we talk about in TOK, which I think of as a course about theories and knowledge, come up in areas of science. It ties into how much I trust science, and how that is okay, but I must remember that what is shown to the public is shaped by the scientific community.
Javier E

Eric Kandel's Visions - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • Judith, "barely clothed and fresh from the seduction and slaying of Holofernes, glows in her voluptuousness. Her hair is a dark sky between the golden branches of Assyrian trees, fertility symbols that represent her eroticism. This young, ecstatic, extravagantly made-up woman confronts the viewer through half-closed eyes in what appears to be a reverie of orgasmic rapture," writes Eric Kandel in his new book, The Age of Insight. Wait a minute. Writes who? Eric Kandel, the Nobel-winning neuroscientist who's spent most of his career fixated on the generously sized neurons of sea snails
  • Kandel goes on to speculate, in a bravura paragraph a few hundred pages later, on the exact neurochemical cognitive circuitry of the painting's viewer:
  • "At a base level, the aesthetics of the image's luminous gold surface, the soft rendering of the body, and the overall harmonious combination of colors could activate the pleasure circuits, triggering the release of dopamine. If Judith's smooth skin and exposed breast trigger the release of endorphins, oxytocin, and vasopressin, one might feel sexual excitement. The latent violence of Holofernes's decapitated head, as well as Judith's own sadistic gaze and upturned lip, could cause the release of norepinephrine, resulting in increased heart rate and blood pressure and triggering the fight-or-flight response. In contrast, the soft brushwork and repetitive, almost meditative, patterning may stimulate the release of serotonin. As the beholder takes in the image and its multifaceted emotional content, the release of acetylcholine to the hippocampus contributes to the storing of the image in the viewer's memory. What ultimately makes an image like Klimt's 'Judith' so irresistible and dynamic is its complexity, the way it activates a number of distinct and often conflicting emotional signals in the brain and combines them to produce a staggeringly complex and fascinating swirl of emotions."
  • ...18 more annotations...
  • His key findings on the snail, for which he shared the 2000 Nobel Prize in Physiology or Medicine, showed that learning and memory change not the neuron's basic structure but rather the nature, strength, and number of its synaptic connections. Further, through focus on the molecular biology involved in a learned reflex like Aplysia's gill retraction, Kandel demonstrated that experience alters nerve cells' synapses by changing their pattern of gene expression. In other words, learning doesn't change what neurons are, but rather what they do.
  • In Search of Memory (Norton), Kandel offered what sounded at the time like a vague research agenda for future generations in the budding field of neuroaesthetics, saying that the science of memory storage lay "at the foothills of a great mountain range." Experts grasp the "cellular and molecular mechanisms," he wrote, but need to move to the level of neural circuits to answer the question, "How are internal representations of a face, a scene, a melody, or an experience encoded in the brain?"
  • Since giving a talk on the matter in 2001, he has been piecing together his own thoughts in relation to his favorite European artists
  • The field of neuroaesthetics, says one of its founders, Semir Zeki, of University College London, is just 10 to 15 years old. Through brain imaging and other studies, scholars like Zeki have explored the cognitive responses to, say, color contrasts or ambiguities of line or perspective in works by Titian, Michelangelo, Cubists, and Abstract Expressionists. Researchers have also examined the brain's pleasure centers in response to appealing landscapes.
  • it is fundamental to an understanding of human cognition and motivation. Art isn't, as Kandel paraphrases a concept from the late philosopher of art Denis Dutton, "a byproduct of evolution, but rather an evolutionary adaptation—an instinctual trait—that helps us survive because it is crucial to our well-being." The arts encode information, stories, and perspectives that allow us to appraise courses of action and the feelings and motives of others in a palatable, low-risk way.
  • "as far as activity in the brain is concerned, there is a faculty of beauty that is not dependent on the modality through which it is conveyed but which can be activated by at least two sources—musical and visual—and probably by other sources as well." Specifically, in this "brain-based theory of beauty," the paper says, that faculty is associated with activity in the medial orbitofrontal cortex.
  • It also enables Kandel—building on the work of Gombrich and the psychoanalyst and art historian Ernst Kris, among others—to compare the painters' rendering of emotion, the unconscious, and the libido with contemporaneous psychological insights from Freud about latent aggression, pleasure and death instincts, and other primal drives.
  • Kandel views the Expressionists' art through the powerful multiple lenses of turn-of-the-century Vienna's cultural mores and psychological insights. But then he refracts them further, through later discoveries in cognitive science. He seeks to reassure those who fear that the empirical and chemical will diminish the paintings' poetic power. "In art, as in science," he writes, "reductionism does not trivialize our perception—of color, light, and perspective—but allows us to see each of these components in a new way. Indeed, artists, particularly modern artists, have intentionally limited the scope and vocabulary of their expression to convey, as Mark Rothko and Ad Reinhardt do, the most essential, even spiritual ideas of their art."
  • The author of a classic textbook on neuroscience, he seems here to have written a layman's cognition textbook wrapped within a work of art history.
  • "our initial response to the most salient features of the paintings of the Austrian Modernists, like our response to a dangerous animal, is automatic. ... The answer to James's question of how an object simply perceived turns into an object emotionally felt, then, is that the portraits are never objects simply perceived. They are more like the dangerous animal at a distance—both perceived and felt."
  • If imaging is key to gauging therapeutic practices, it will be key to neuroaesthetics as well, Kandel predicts—a broad, intense array of "imaging experiments to see what happens with exaggeration, distorted faces, in the human brain and the monkey brain," viewers' responses to "mixed eroticism and aggression," and the like.
  • while the visual-perception literature might be richer at the moment, there's no reason that neuroaesthetics should restrict its emphasis to the purely visual arts at the expense of music, dance, film, and theater.
  • although Kandel considers The Age of Insight to be more a work of intellectual history than of science, the book summarizes centuries of research on perception. And so you'll find, in those hundreds of pages between Kandel's introduction to Klimt's "Judith" and the neurochemical cadenza about the viewer's response to it, dossiers on vision as information processing; the brain's three-dimensional-space mapping and its interpretations of two-dimensional renderings; face recognition; the mirror neurons that enable us to empathize and physically reflect the affect and intentions we see in others; and many related topics. Kandel elsewhere describes the scientific evidence that creativity is nurtured by spells of relaxation, which foster a connection between conscious and unconscious cognition.
  • Zeki's message to art historians, aesthetic philosophers, and others who chafe at that idea is twofold. The more diplomatic pitch is that neuroaesthetics is different, complementary, and not oppositional to other forms of arts scholarship. But "the stick," as he puts it, is that if arts scholars "want to be taken seriously" by neurobiologists, they need to take advantage of the discoveries of the past half-century. If they don't, he says, "it's a bit like the guys who said to Galileo that we'd rather not look through your telescope."
  • Matthews, a co-author of The Bard on the Brain: Understanding the Mind Through the Art of Shakespeare and the Science of Brain Imaging (Dana Press, 2003), seems open to the elucidations that science and the humanities can cast on each other. The neural pathways of our aesthetic responses are "good explanations," he says. But "does one [type of] explanation supersede all the others? I would argue that they don't, because there's a fundamental disconnection still between ... explanations of neural correlates of conscious experience and conscious experience" itself.
  • There are, Matthews says, "certain kinds of problems that are fundamentally interesting to us as a species: What is love? What motivates us to anger?" Writers put their observations on such matters into idiosyncratic stories, psychologists conceive their observations in a more formalized framework, and neuroscientists like Zeki monitor them at the level of functional changes in the brain. All of those approaches to human experience "intersect," Matthews says, "but no one of them is the explanation."
  • "Conscious experience," he says, "is something we cannot even interrogate in ourselves adequately. What we're always trying to do in effect is capture the conscious experience of the last moment. ... As we think about it, we have no way of capturing more than one part of it."
  • Kandel sees art and art history as "parent disciplines" and psychology and brain science as "antidisciplines," to be drawn together in an E.O. Wilson-like synthesis toward "consilience as an attempt to open a discussion between restricted areas of knowledge." Kandel approvingly cites Stephen Jay Gould's wish for "the sciences and humanities to become the greatest of pals ... but to keep their ineluctably different aims and logics separate as they ply their joint projects and learn from each other."
Javier E

The Science Behind Dreaming: Scientific American - 0 views

  • these findings suggest that the neurophysiological mechanisms that we employ while dreaming (and recalling dreams) are the same as when we construct and retrieve memories while we are awake.
  • the researchers found that vivid, bizarre and emotionally intense dreams (the dreams that people usually remember) are linked to parts of the amygdala and hippocampus. While the amygdala plays a primary role in the processing and memory of emotional reactions, the hippocampus has been implicated in important memory functions, such as the consolidation of information from short-term to long-term memory.
  • it was not until a few years ago that a patient reported to have lost her ability to dream while having virtually no other permanent neurological symptoms. The patient suffered a lesion in a part of the brain known as the right inferior lingual gyrus (located in the visual cortex). Thus, we know that dreams are generated in, or transmitted through this particular area of the brain, which is associated with visual processing, emotion and visual memories.
  • ...3 more annotations...
  • a reduction in REM sleep (or less “dreaming”) influences our ability to understand complex emotions in daily life – an essential feature of human social functioning
  • Dreams seem to help us process emotions by encoding and constructing memories of them. What we see and experience in our dreams might not necessarily be real, but the emotions attached to these experiences certainly are. Our dream stories essentially try to strip the emotion out of a certain experience by creating a memory of it. This way, the emotion itself is no longer active.  This mechanism fulfils an important role because when we don’t process our emotions, especially negative ones, this increases personal worry and anxiety.
  • In short, dreams help regulate traffic on that fragile bridge which connects our experiences with our emotions and memories.
summertyler

Seeing Isn't Believing | The Scientist Magazine® - 0 views

  • Much of the early research on motion perception was performed on insects, but similar results have been found for a huge range of species, from fishes to birds to mammals
  • Correspondingly, prey animals would find color vision of little use, but they are extremely good at seeing the motion of an approaching predator.
  • Ambiguous illusions that can be interpreted in two different ways, but not both ways at the same time, can also shed light on how we perceive the world around us
  • ...3 more annotations...
  • But as good as animals are at detecting motion, they can also be fooled.
  • Visual movement can be thought of as a change in brightness, or luminance, over space and time (a brief formal note after this list spells out this idea)
  • Why does the visual system treat this jumping dot as a single object in motion, instead of seeing one spot disappear while an unrelated spot appears nearby at the same instant? First, the brain usually treats “suspicious coincidences” as being more than coincidences: it is more likely that this is a single spot in motion rather than two separate events. Second, the visual system is tolerant of brief gaps in stimuli, filling in those gaps when necessary. This perception of apparent motion is, of course, the basis of the entire movie and TV industries, as viewers see a smooth motion picture when in reality they are simply watching a series of stationary stills.
  •  
    The perception of illusions
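  • A brief formal note on the "change in luminance over space and time" idea above — this is the standard brightness-constancy formalization used in motion and optical-flow models, not a formula from the article itself. A moving patch is assumed to keep its luminance I as it shifts with velocity (u, v) over a short interval \Delta t,

      I(x + u\,\Delta t,\; y + v\,\Delta t,\; t + \Delta t) \approx I(x, y, t),

    and a first-order expansion gives the motion-constraint equation

      \frac{\partial I}{\partial x}\,u + \frac{\partial I}{\partial y}\,v + \frac{\partial I}{\partial t} = 0.

    A dot that vanishes in one place and reappears nearby an instant later is consistent with a single luminance patch satisfying this relation, which is why the visual system prefers "one spot in motion" over "two unrelated events."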
grayton downing

Language Makes the Invisible Visible | The Scientist Magazine® - 0 views

  • Language helps the human brain perceive obscured objects,
  • While some scientists have argued that vision is independent from outside factors, such as sounds or the brain’s accumulated knowledge, the study indicates that language influences perception at its most basic level.
  • “I think [the study] makes a really important contribution to the field of visual perception and cognition in general,” said Michael Spivey,
  • ...6 more annotations...
  • tested the effects of language on perception by either saying or not saying a word and showing study participants either an obscured image that matched the word, an obscured image that did not match the word, or no image at all. Images ranged from kangaroos to bananas to laundry baskets. The researchers then asked the participants if they had perceived any objects and, if so, ascertained what they had seen.
  • participants were more likely to perceive an image if they had been given an accurate verbal cue first than if they had been given no cue or an incorrect one. With an accurate cue, they identified the object correctly between 50 percent and 80 percent of the time
  •  By using continuous flash suppression, Lupyan has “done the best job yet of showing where in perception the interaction is happening,” Spivey said.
  • Lupyan said that his work could help researchers discern whether people who speak different languages perceive the world differently. For instance, if two people spoke different languages that either did or did not have words for a certain color or texture, the person lacking language to describe the color or texture might be less likely to perceive it.
  • “More and more what the field is finding is that any cognitive or perceptual capacity you find interesting is probably richly connected with other ones.” “The visual system—and perception in general—uses all the information it can get to make sense of the inputs,” said Lupyan. “Vision is not just about the photons hitting the eye.”
sissij

Gaymoji: A New Language for That Search - The New York Times - 1 views

  • You don’t need a degree in semiotics to read meaning into an eggplant balanced on a ruler or peach with an old-fashioned telephone receiver on top. That the former is the universally recognized internet symbol for a large male member and the latter visual shorthand for a booty call is something most any 16-year-old could all too readily explain.
  • And so, starting this week, Grindr will offer to users a set of trademarked emoji, called Gaymoji — 500 icons that function as visual shorthand for terms and acts and states of being that seem funnier, breezier and less freighted with complication when rendered in cartoon form in place of words.
  • That is, toward a visual language of rainbow unicorns, bears, otters and handcuffs — to cite some of the images available in the first set of 100 free Gaymoji symbols.
  • ...5 more annotations...
  • “Partly, this project started because the current set of emojis set by some international board were limited and not evolving fast enough for us,” said Mr. Simkhai, who in certain ways fits the stereotype of a gay man in West Hollywood: a lithe, gym-fit, hairless nonsmoker who enjoys dancing at gay circuit parties.
  • Like most every other human in the developed world, they had their heads buried in their screens.
  • “We’re all so attached to our phones that when people talk about the notion of the computer melding with the human and ask when that’s going to happen, I say it already has,” Mr. Simkhai said. He added that the prospect of being deprived of a phone for 20 minutes induced in him “the highest level of anxiety I can possibly have.”
  • Gaymoji, then, serve as both conversational and even existential placeholders, Ms. McCulloch said: “You’re using them to say, ‘I’m still here and I still want to be talking to you.’”
  • As if to emphasize that assertion, a reporter combing through the new set of Gaymoji in search of something that would symbolize a person of Mr. Simkhai’s vintage could find only one. It was an image of a gray-haired daddy holding aloft a credit card.
  •  
    Emoji are becoming more and more popular in people’s chats and comments on social media. People use emoji because they are faster, more convenient, and funnier. Now people can even design their own emoji to suit various situations. But can emoji really replace letters and language? I sometimes feel that emoji are too fast and cheap. It takes only a click to send an emoji, and people usually send one without any further thought because it is so quick and easy. Although emoji sometimes make comments seem cuter and funnier, they also make them less heartfelt. I think typing out letters in a comment does oblige us to think about what we are saying before we send it. --Sissi (3/15/2017)
katedriscoll

Confirmation Bias - an overview | ScienceDirect Topics - 0 views

  • Confirmation bias is a ubiquitous phenomenon, the effects of which have been traced as far back as Pythagoras’ studies of harmonic relationships in the 6th century B.C. (Nickerson, 1998), and is referenced in the writings of William Shakespeare and Francis Bacon (Risinger, Saks, Thompson, & Rosenthal, 2002). It is also a problematic phenomenon, having been implicated in “a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations” throughout human history, including the witch trials of Western Europe and New England, and the perpetuation of inaccurate medical diagnoses, ineffective medical treatments, and erroneous scientific theories (Nickerson, 1998, p. 175).
  • For over a century, psychologists have observed that people naturally favor information that is consistent with their beliefs or desires, and ignore or discount evidence to the contrary. In an article titled “The Mind’s Eye,” Jastrow (1899) was among the first to explain how the mind plays an active role in information processing, such that two individuals with different mindsets might interpret the same information in entirely different ways (see also Boring, 1930). Since then, a wealth of empirical research has demonstrated that confirmation bias affects how we perceive visual stimuli (e.g., Bruner & Potter, 1964; Leeper, 1935), how we gather and evaluate evidence (e.g., Lord, Ross, & Lepper, 1979; Wason, 1960), and how we judge—and behave toward—other people (e.g., Asch, 1946; Rosenthal & Jacobson, 1966; Snyder & Swann, 1978).