TOK Friends / Group items tagged "processing"

grayton downing

How the Brain Creates Personality: A New Theory - Stephen M. Kosslyn and G. Wayne Mille... - 0 views

  • It is possible to examine any object—including a brain—at different levels
  • if we want to know how the brain gives rise to thoughts, feelings, and behaviors, we want to focus on the bigger picture of how its structure allows it to store and process information—the architecture, as it were. To understand the brain at this level, we don’t have to know everything about the individual connections among brain cells or about any other biochemical process.
  • top parts and the bottom parts of the brain have different functions. The top brain formulates and executes plans (which often involve deciding where to move objects or how to move the body in space), whereas the bottom brain classifies and interprets incoming information about the world. The two halves always work together;
  • ...9 more annotations...
  • You have probably heard of this theory, in which the left and right halves of the brain are characterized, respectively, as logical versus intuitive, verbal versus perceptual, analytic versus synthetic, and so forth. The trouble is that none of these sweeping generalizations has stood up to careful scientific scrutiny. The differences between the left and right sides of the brain are nuanced, and simple, sweeping dichotomies do not in fact explain how the two sides function.
  • top and bottom portions of the brain have very different functions. This fact was first discovered in the context of visual perception, and it was supported in 1982 in a landmark report by National Medal of Science winner Mortimer Mishkin and Leslie G. Ungerleider, of the National Institute of Mental Health.
  • scientists trained monkeys to perform two tasks. In the first task, the monkeys had to learn to recognize which of two shapes concealed a bit of food.
  • These functions occur relatively close to where neural connections deliver inputs from the eyes and ears—but processing doesn’t just stop there.
  • top parts of our frontal lobe can take into account the confluence of information about “what’s out there,” our emotional reactions to it, and our goals.
  • Four distinct cognitive modes emerge from how the top-brain and bottom-brain systems can interact.
  • The two systems always work together. You use the top brain to decide to walk over to talk to your friend only after you know who she is (courtesy of the bottom brain). And after talking to her, you formulate another plan, to enter the date and time in your calendar, and then you need to monitor what happens (again using the bottom brain) as you try to carry out this plan (a top-brain activity).
  • speak of differences in the degree to which a person relies on the top-brain and bottom-brain systems, we are speaking of differences in this second kind of utilization, in the kind of processing that’s not simply dictated by a given situation. In this sense, you can rely on one or the other brain system to a greater or lesser degree.
  • The degree to which you tend to use each system will affect your thoughts, feelings, and behavior in profound ways. The notion that each system can be more or less highly utilized, in this sense, is the foundation of the Theory of Cognitive Modes.
grayton downing

Post-Publication Peer Review Mainstreamed | The Scientist Magazine® - 0 views

  • peer review. The process has been blamed for everything from slowing down the communication of new discoveries to introducing woeful biases to the literature
  • peer review does not elevate the quality of published science and that many published research findings are later shown to be false. In response, a growing number of scientists are working to impose a new vision of the scientific process through post-publication review,
  • organized post-publication peer review system could help “clarify experiments, suggest avenues for follow-up work and even catch errors.” If used by a critical mass of scientists, he added, “it could strengthen the scientific process.”  
  • ...2 more annotations...
  • allowing for anonymous comments, PubPeer aims to create an open, debate-friendly environment, while maintaining the rigor of the closed review process currently used by most journals. Its creators, who describe themselves as “early-stage scientists,” have also decided to remain anonymous, citing career concerns.
Javier E

Vitamins Hide the Low Quality of Our Food - NYTimes.com - 0 views

  • we fail to notice that food marketers use synthetic vitamins to sell unhealthful products. Not only have we become dependent on these synthetic vitamins to keep ourselves safe from deficiencies, but the eating habits they encourage are having disastrous consequences on our health.
  • vitamins spread from the labs of scientists to the offices of food marketers, and began to take on a life of their own.
  • Nutritionists are correct when they tell us that most of us don’t need to be taking multivitamins. But that’s only because multiple vitamins have already been added to our food.
  • ...11 more annotations...
  • Given the poor quality of the typical American diet, this fortification is far from superfluous. In fact, for products like milk and flour, where fortification and enrichment have occurred for so long that they’ve become invisible, it would be almost irresponsible not to add synthetic vitamins.
  • synthetic vitamins are as essential to food companies as they are to us. To be successful in today’s market, food manufacturers must create products that can be easily transported over long distances and stored for extended periods.
  • They also need to be sure that their products offer some nutritional value so that customers don’t have to go elsewhere to meet their vitamin needs. But the very processing that’s necessary to create long shelf lives destroys vitamins, among other important nutrients. It’s nearly impossible to create foods that can sit for months in a supermarket that are also naturally vitamin-rich.
  • Today, it would be easy to blame food marketers for using vitamins to deceive us into buying their products. But our blindness is largely our own fault.
  • we’ve entered into a complicit agreement with them: They depend on us to buy their products, and we depend on the synthetic vitamins they add to those products to support eating habits that might otherwise leave us deficient
  • extra vitamins do not protect us from the long-term “diseases of civilization” that are currently ravaging our country, including obesity, heart disease and Type 2 diabetes — many of which are strongly associated with diet.
  • natural foods contain potentially protective substances such as phytochemicals and polyunsaturated fat that also are affected by processing, but that are not usually replaced. If these turn out to be as important as many researchers suspect, then our exclusive focus on vitamins could mean we’re protecting ourselves against the wrong dangers. It’s as if we’re taking out earthquake insurance policies in an area more at risk for floods.
  • And adding back vitamins after the fact ignores the issue of synergy: how nutrients work naturally as opposed to when they are isolated. A 2011 study on broccoli, for example, found that giving subjects fresh broccoli florets led them to absorb and metabolize seven times more of the anticancer compounds known as glucosinolates, present in broccoli and other cruciferous vegetables
  • And yet we refuse to change our eating habits in the ways that would actually protect us, which would require refocusing our diets on minimally processed foods that are naturally nutrient-rich.
  • The popularity of dietary supplements and vitamin-enhanced processed “health” foods means that even those of us who try to do right by our health are often getting it wrong.
  • we mustn’t let it distract us from an even more fundamental question: how we’ve allowed the word “vitamin” to become synonymous with “health.”
anonymous

Magnetic brain stimulation alters negative emotion perception: A new study in Biologica... - 0 views

  • A new study published in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging reports that processing of negative emotion can be strengthened or weakened by tuning the excitability of the right frontal part of the brain. Using magnetic stimulation outside the brain, a technique called repetitive transcranial magnetic stimulation (rTMS), researchers at the University of Münster, Germany, show that, in contrast to the inhibitory stimulation currently used to treat depression, excitatory stimulation better reduced a person's response to fearful images.
  • "This study confirms that modulating the frontal region of the brain, in the right hemisphere, directly effects the regulation of processing of emotional information in the brain in a 'top-down' manner,"
  • In depression, processing of emotion is disrupted in the frontal region of both the left and right brain hemispheres (known as the dorsolateral prefrontal cortices, dlPFC). The disruptions are thought to be at the root of increased negative emotion and diminished positive emotion in the disorder. Reducing excitability of the right dlPFC using inhibitory magnetic stimulation has been shown to have antidepressant effects, even though it's based on an idea -- that this might reduce processing of negative emotion in depression -- that has yet to be fully tested in humans.
sandrine_h

Darwin's Influence on Modern Thought - Scientific American - 0 views

  • Great minds shape the thinking of successive historical periods. Luther and Calvin inspired the Reformation; Locke, Leibniz, Voltaire and Rousseau, the Enlightenment. Modern thought is most dependent on the influence of Charles Darwin
  • one needs schooling in the physicist’s style of thought and mathematical techniques to appreciate Einstein’s contributions in their fullness. Indeed, this limitation is true for all the extraordinary theories of modern physics, which have had little impact on the way the average person apprehends the world.
  • The situation differs dramatically with regard to concepts in biology.
  • ...10 more annotations...
  • Many biological ideas proposed during the past 150 years stood in stark conflict with what everybody assumed to be true. The acceptance of these ideas required an ideological revolution. And no biologist has been responsible for more—and for more drastic—modifications of the average person’s worldview than Charles Darwin
  • Evolutionary biology, in contrast with physics and chemistry, is a historical science—the evolutionist attempts to explain events and processes that have already taken place. Laws and experiments are inappropriate techniques for the explication of such events and processes. Instead one constructs a historical narrative, consisting of a tentative reconstruction of the particular scenario that led to the events one is trying to explain.
  • The discovery of natural selection, by Darwin and Alfred Russel Wallace, must itself be counted as an extraordinary philosophical advance
  • The concept of natural selection had remarkable power for explaining directional and adaptive changes. Its nature is simplicity itself. It is not a force like the forces described in the laws of physics; its mechanism is simply the elimination of inferior individuals
  • A diverse population is a necessity for the proper working of natural selection
  • Because of the importance of variation, natural selection should be considered a two-step process: the production of abundant variation is followed by the elimination of inferior individuals [a small simulation sketch of this two-step process follows these annotations]
  • By adopting natural selection, Darwin settled the several-thousand-year-old argument among philosophers over chance or necessity. Change on the earth is the result of both, the first step being dominated by randomness, the second by necessity
  • Another aspect of the new philosophy of biology concerns the role of laws. Laws give way to concepts in Darwinism. In the physical sciences, as a rule, theories are based on laws; for example, the laws of motion led to the theory of gravitation. In evolutionary biology, however, theories are largely based on concepts such as competition, female choice, selection, succession and dominance. These biological concepts, and the theories based on them, cannot be reduced to the laws and theories of the physical sciences
  • Despite the initial resistance by physicists and philosophers, the role of contingency and chance in natural processes is now almost universally acknowledged. Many biologists and philosophers deny the existence of universal laws in biology and suggest that all regularities be stated in probabilistic terms, as nearly all so-called biological laws have exceptions. Philosopher of science Karl Popper’s famous test of falsification therefore cannot be applied in these cases.
  • To borrow Darwin’s phrase, there is grandeur in this view of life. New modes of thinking have been, and are being, evolved. Almost every component in modern man’s belief system is somehow affected by Darwinian principles
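
A minimal Python sketch of the two-step process quoted in the annotations above (random production of variation, then non-random elimination). This is an illustration only, not from the article; the population size, mutation spread, survival fraction, and the use of the trait value itself as "fitness" are arbitrary assumptions.

```python
# Toy model of natural selection as a two-step process:
# step 1 (chance): abundant random variation; step 2 (necessity): elimination
# of inferior individuals. All parameters below are arbitrary assumptions.
import random

random.seed(1)

def evolve(generations=20, pop_size=200, mutation_sd=0.1):
    # A population of individuals, each carrying one heritable trait value.
    population = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Step 1: each parent produces two offspring that vary randomly around it.
        offspring = [parent + random.gauss(0.0, mutation_sd)
                     for parent in population
                     for _ in range(2)]
        # Step 2: the lower-valued half is eliminated ("fitness" here is simply
        # the trait value itself).
        offspring.sort(reverse=True)
        population = offspring[:pop_size]
    return sum(population) / len(population)

print("mean trait after selection:", round(evolve(), 3))
# The population mean rises even though every individual variation is random:
# directional change emerges from random variation plus non-random elimination.
```
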
katedriscoll

Sensory Perception - An Introduction to the Process of Perception - 0 views

  • An individual or organism capable of processing the stimuli in its environment is said to have sensory perception.
  • This processing is done through coordination between the sense organs and the brain. Hearing, vision, taste, smell, and touch are the five senses we possess. Sensory perception involves detecting, recognizing, characterizing and responding to stimuli.
  • The process of sensory perception begins when something in the real world stimulates our sense organs. For instance, light reflecting from a surface stimulates our eyes. The warmth of a hot beverage stimulates our sense of touch.
Javier E

The Irrational Consumer: Why Economics Is Dead Wrong About How We Make Choices - Derek ... - 4 views

  • Derek Thompson is a senior editor at The Atlantic, where he oversees business coverage for the website. He has also written for Slate, BusinessWeek, and the Daily Beast.
  • First, making a choice is physically exhausting, literally, so that somebody forced to make a number of decisions in a row is likely to get lazy and dumb.
  • Second, having too many choices can make us less likely to come to a conclusion. In a famous study of the so-called "paradox of choice", psychologists Mark Lepper and Sheena Iyengar found that customers presented with six jam varieties were more likely to buy one than customers offered a choice of 24.
  • ...7 more annotations...
  • Many of our mistakes stem from a central "availability bias." Our brains are computers, and we like to access recently opened files, even though many decisions require a deep body of information that might require some searching. Cheap example: We remember the first, last, and peak moments of certain experiences.
  • The third check against the theory of the rational consumer is the fact that we're social animals. We let our friends and family and tribes do our thinking for us
  • neurologists are finding that many of the biases behavioral economists perceive in decision-making start in our brains. "Brain studies indicate that organisms seem to be on a hedonic treadmill, quickly habituating to homeostasis," McFadden writes. In other words, perhaps our preference for the status quo isn't just figuratively in our heads, but also literally sculpted by the hand of evolution inside of our brains.
  • The popular psychological theory of "hyperbolic discounting" says people don't properly evaluate rewards over time. The theory seeks to explain why many groups -- nappers, procrastinators, Congress -- take rewards now and pain later, over and over again. But neurology suggests that it hardly makes sense to speak of "the brain," in the singular, because it's two very different parts of the brain that process choices for now and later. The choice to delay gratification is mostly processed in the frontal system. But studies show that the choice to do something immediately gratifying is processed in a different system, the limbic system, which is more viscerally connected to our behavior, our "reward pathways," and our feelings of pain and pleasure. [a toy numerical sketch of hyperbolic discounting follows these annotations]
  • the final message is that neither the physiology of pleasure nor the methods we use to make choices are as simple or as single-minded as the classical economists thought. A lot of behavior is consistent with pursuit of self-interest, but in novel or ambiguous decision-making environments there is a good chance that our habits will fail us and inconsistencies in the way we process information will undo us.
  • Our brains seem to operate like committees, assigning some tasks to the limbic system, others to the frontal system. The "switchboard" does not seem to achieve complete, consistent communication between different parts of the brain. Pleasure and pain are experienced in the limbic system, but not on one fixed "utility" or "self-interest" scale. Pleasure and pain have distinct neural pathways, and these pathways adapt quickly to homeostasis, with sensation coming from changes rather than levels
  • Social networks are sources of information, on what products are available, what their features are, and how your friends like them. If the information is accurate, this should help you make better choices. On the other hand, it also makes it easier for you to follow the crowd rather than engaging in the due diligence of collecting and evaluating your own information and playing it against your own preferences
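
The hyperbolic-discounting annotation above has a simple quantitative core: under hyperbolic discounting, subjective value falls off as V = A / (1 + k*D), whereas a classical exponential discounter uses V = A * e^(-r*D), and only the hyperbolic curve produces the "rewards now, pain later" preference reversals the excerpt describes. The Python sketch below is a toy illustration; the discount rates, dollar amounts, and delays are made-up values, not figures from the article.

```python
# Toy comparison of hyperbolic vs. exponential discounting of a $50 sooner
# reward against a $100 reward thirty days later. All parameter values
# (k, r, amounts, delays) are illustrative assumptions.
import math

def hyperbolic(amount, delay, k=0.1):
    return amount / (1.0 + k * delay)        # V = A / (1 + k*D)

def exponential(amount, delay, r=0.05):
    return amount * math.exp(-r * delay)     # V = A * e^(-r*D)

for days_until_small in (30, 0):             # both rewards far off, then "now"
    d_small, d_large = days_until_small, days_until_small + 30
    for name, value in (("hyperbolic", hyperbolic), ("exponential", exponential)):
        choice = "small-sooner" if value(50, d_small) > value(100, d_large) else "large-later"
        print(f"{name:11s} | $50 at day {d_small:2d} vs $100 at day {d_large:2d} -> {choice}")

# The hyperbolic chooser prefers the larger-later reward while both are distant,
# then reverses to the immediate $50 once it is available today. The exponential
# chooser's preference ratio is constant over time, so it never reverses.
```
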
huffem4

Dual Process Theory - Explanation and examples - Conceptually - 1 views

  • When we’re making decisions, we use two different systems of thinking. System 1 is our intuition or gut-feeling: fast, automatic, emotional, and subconscious. System 2 is slower and more deliberate: consciously working through different considerations, applying different concepts and models and weighing them all up.
  • One takeaway from the psychological research on dual process theory is that our System 1 (intuition) is more accurate in areas where we’ve gathered a lot of data with reliable and fast feedback, like social dynamics.
  • our System 2 tends to be better for decisions where we don’t have a lot of experience; involving numbers, statistics, logic, abstractions, or models; and phenomena our ancestors never dealt with.
  • ...1 more annotation...
  • You can also use both systems, acknowledging that you have an intuition, and feeding it into your System 2 model.
ilanaprincilus06

Animals could help reveal why humans fall for optical illusions | Laura and Jennifer Ke... - 0 views

  • they remind us of the discrepancy between perception and reality. But our knowledge of such illusions has been largely limited to studying humans.
  • Understanding whether these illusions arise in different brains could help us understand how evolution shapes visual perception.
  • illusions not only reveal how visual scenes are interpreted and mentally reconstructed, they also highlight constraints in our perception.
  • ...9 more annotations...
  • Some of the most common types of illusory percepts are those that affect the impression of size, length or distance.
  • As visual processing needs to be both rapid and generally accurate, the brain constantly uses shortcuts and makes assumptions about the world that can, in some cases, be misleading.
  • These illusions are the result of visual processes shaped by evolution. Using that process may have been once beneficial (or still is), but it also allows our brains to be tricked.
  • if animals are tricked by the same illusions, then perhaps revealing why a different evolutionary path leads to the same visual process might help us understand why evolution favours this development.
  • Great bowerbirds could be the ultimate illusory artists. For example, their males construct forced perspective illusions to make them more attractive to mates.
  • When a male has two smaller-clawed males on either side of him he is more attractive to a female (because he looks relatively larger) than if he was surrounded by two larger-clawed males.
  • This effect is known as the Ebbinghaus illusion (see image), and suggests that males may easily manipulate their perceived attractiveness by surrounding themselves with less attractive rivals.
  • Deceptions of the senses are the truths of perception.
  • Visual illusions (and those in the non-visual senses) are a crucial tool for determining what perceptual assumptions animals make about the world around them.
Javier E

Opinion | Imagination Is More Important Than You Think - The New York Times - 0 views

  • Plato and Aristotle disagreed about the imagination
  • Plato gave the impression that imagination is a somewhat airy-fairy luxury good. It deals with illusions and make-believe and distracts us from reality and our capacity to coolly reason about it. Aristotle countered that imagination is one of the foundations of all knowledge.
  • What is imagination?
  • ...14 more annotations...
  • Imagination is the capacity to make associations among all these bits of information and to synthesize them into patterns and concepts.
  • When you walk, say, into a coffee shop you don’t see an array of surfaces, lights and angles. Your imagination instantly coalesces all that into an image: “coffee shop.”
  • Neuroscientists have come to appreciate how fantastically complicated and subjective this process of creating mental images really is. You may think perception is a simple “objective” process of taking in the world and cognition is a complicated process of thinking about it. But that’s wrong.
  • Perception — the fast process of selecting, putting together, interpreting and experiencing facts, thoughts and emotions — is the essential poetic act that makes you you.
  • For example, you don’t see the naked concept “coffee shop.” The image you create is coated with personal feelings, memories and evaluations. You see: “slightly upscale suburban coffee shop trying and failing to send off a hipster vibe.” The imagination, Charles Darwin wrote, “unites former images and ideas, independently of the will, and thus creates brilliant and novel results.”
  • Imagination helps you perceive reality, try on other realities, predict possible futures, experience other viewpoints. And yet how much do schools prioritize the cultivation of this essential ability?
  • “A fool sees not the same tree that a wise man sees,” William Blake observed.
  • Can you improve your imagination? Yes. By creating complex and varied lenses through which to see the world
  • A person who feeds his or her imagination with a fuller repertoire of thoughts and experiences has the ability not only to see reality more richly but also — even more rare — to imagine the world through the imaginations of others.
  • This is the skill we see in Shakespeare to such a miraculous degree — his ability to disappear into his characters and inhabit their points of view without ever pretending to explain them.
  • Different people have different kinds of imagination. Some people mainly focus on the parts of the world that can be quantified.
  • it often doesn’t see the subjective way people coat the world with values and emotions and aspirations, which is exactly what we want to see if we want to glimpse how they experience their experience.
  • Furthermore, imagination can get richer over time. When you go to Thanksgiving dinner, your image of Uncle Frank contains the memories of past Thanksgivings, the arguments and the jokes, and the whole sum of your common experiences. The guy you once saw as an insufferable blowhard you now see — as your range of associations has widened and deepened — as a decent soul struggling with his wounds.
  • What happens to a society that lets so much of its imaginative capacity lie fallow? Perhaps you wind up in a society in which people are strangers to one another and themselves.
Javier E

You Have Permission to Be a Smartphone Skeptic - The Bulwark - 0 views

  • the brief return of one of my favorite discursive topics—are the kids all right?—in one of my least-favorite variations: why shouldn’t each of them have a smartphone and tablet?
  • One camp says yes, the kids are fine
  • complaints about screen time merely conceal a desire to punish hard-working parents for marginally benefiting from climbing luxury standards, provide examples of the moral panic occasioned by all new technologies, or mistakenly blame screens for ill effects caused by the general political situation.
  • ...38 more annotations...
  • No, says the other camp, led by Jonathan Haidt; the kids are not all right, their devices are partly to blame, and here are the studies showing why.
  • we should not wait for the replication crisis in the social sciences to resolve itself before we consider the question of whether the naysayers are on to something. And normal powers of observation and imagination should be sufficient to make us at least wary of smartphones.
  • These powerful instruments represent a technological advance on par with that of the power loom or the automobile
  • The achievement can be difficult to properly appreciate because instead of exerting power over physical processes and raw materials, they operate on social processes and the human psyche: They are designed to maximize attention, to make it as difficult as possible to look away.
  • they have transformed the qualitative experience of existing in the world. They give a person’s sociality the appearance and feeling of a theoretically endless open network, while in reality, algorithms quietly sort users into ideological, aesthetic, memetic cattle chutes of content.
  • Importantly, the process by which smartphones change us requires no agency or judgment on the part of a teen user, and yet that process is designed to provide what feels like a perfectly natural, inevitable, and complete experience of the world.
  • Smartphones offer a tactile portal to a novel digital environment, and this environment is not the kind of space you enter and leave
  • One reason commonly offered for maintaining our socio-technological status quo is that nothing really has changed with the advent of the internet, of Instagram, of Tiktok and Youtube and 4Chan
  • It is instead a complete shadow world of endless images; disembodied, manipulable personas; and the ever-present gaze of others. It lives in your pocket and in your mind.
  • The price you pay for its availability—and the engine of its functioning—is that you are always available to it, as well. Unless you have a strength of will that eludes most adults, its emissaries can find you at any hour and in any place to issue your summons to the grim pleasure palace.
  • the self-restraint and self-discipline required to use a smartphone well—that is, to treat it purely as an occasional tool rather than as a totalizing way of life—are unreasonable things to demand of teenagers
  • these are unreasonable things to demand of me, a fully adult woman
  • To enjoy the conveniences that a smartphone offers, I must struggle against the lure of the permanent scroll, the notification, the urge to fix my eyes on the circle of light and keep them fixed. I must resist the default pseudo-activity the smartphone always calls its user back to, if I want to have any hope of filling the moments of my day with the real activity I believe is actually valuable.
  • for a child or teen still learning the rudiments of self-control, still learning what is valuable and fulfilling, still learning how to prioritize what is good over the impulse of the moment, it is an absurd bar to be asked to clear
  • The expectation that children and adolescents will navigate new technologies with fully formed and muscular capacities for reason and responsibility often seems to go along with a larger abdication of responsibility on the part of the adults involved.
  • adults have frequently given in to a Faustian temptation: offering up their children’s generation to be used as guinea pigs in a mass longitudinal study in exchange for a bit more room to breathe in their own undeniably difficult roles as educators, caretakers, and parents.
  • It is not a particular activity that you start and stop and resume, and it is not a social scene that you might abandon when it suits you.
  • And this we must do without waiting for social science to hand us a comprehensive mandate it is fundamentally unable to provide; without cowering in panic over moral panics
  • The pre-internet advertising world was vicious, to be sure, but when the “pre-” came off, its vices were moved into a compound interest account. In the world of online advertising, at any moment, in any place, a user engaged in an infinite scroll might be presented with native content about how one Instagram model learned to accept her chunky (size 4) thighs, while in the next clip, another model relates how a local dermatologist saved her from becoming an unlovable crone at the age of 25
  • developing pathological interests and capacities used to take a lot more work than it does now
  • You had to seek it out, as you once had to seek out pornography and look someone in the eye while paying for it. You were not funneled into it by an omnipresent stream of algorithmically curated content—the ambience of digital life, so easily mistaken by the person experiencing it as fundamentally similar to the non-purposive ambience of the natural world.
  • And when interpersonal relations between teens become sour, nasty, or abusive, as they often do and always have, the unbalancing effects of transposing social life to the internet become quite clear
  • For both young men and young women, the pornographic scenario—dominance and degradation, exposure and monetization—creates an experiential framework for desires that they are barely experienced enough to understand.
  • This is not a world I want to live in. I think it hurts everyone; but I especially think it hurts those young enough to receive it as a natural state of affairs rather than as a profound innovation.
  • so I am baffled by the most routine objection to any blaming of smartphones for our society-wide implosion of teenagers’ mental health,
  • In short, and inevitably, today’s teenagers are suffering from capitalism—specifically “late capitalism,
  • what shocks me about this rhetorical approach is the rush to play defense for Apple and its peers, the impulse to wield the abstract concept of capitalism as a shield for actually existing, extremely powerful, demonstrably ruthless capitalist actors.
  • This motley alliance of left-coded theory about the evils of business and right-coded praxis in defense of a particular evil business can be explained, I think, by a deeper desire than overthrowing capitalism. It is the desire not to be a prude or hysteric or bumpkin
  • No one wants to come down on the side of tamping off pleasures and suppressing teen activity.
  • No one wants to be the shrill or leaden antagonist of a thousand beloved movies, inciting moral panics, scheming about how to stop the youths from dancing on Sunday.
  • But commercial pioneers are only just beginning to explore new frontiers in the profit-driven, smartphone-enabled weaponization of our own pleasures against us
  • To limit your moral imagination to the archetypes of the fun-loving rebel versus the stodgy enforcers in response to this emerging reality is to choose to navigate it with blinders on, to be a useful idiot for the robber barons of online life rather than a challenger to the corrupt order they maintain.
  • The very basic question that needs to be asked with every product rollout and implementation is what technologies enable a good human life?
  • this question is not, ultimately, the province of social scientists, notwithstanding how useful their work may be on the narrower questions involved. It is the free privilege, it is the heavy burden, for all of us, to think—to deliberate and make judgments about human good, about what kind of world we want to live in, and to take action according to that thought.
  • I am not sure how to build a world in which children and adolescents, at least, do not feel they need to live their whole lives online.
  • whatever particular solutions emerge from our negotiations with each other and our reckonings with the force of cultural momentum, they will remain unavailable until we give ourselves permission to set the terms of our common life.
  • But the environments in which humans find themselves vary significantly, and in ways that have equally significant downstream effects on the particular expression of human nature in that context.
  • most of all, without affording Apple, Facebook, Google, and their ilk the defensive allegiance we should reserve for each other.
Javier E

Will ChatGPT Kill the Student Essay? - The Atlantic - 0 views

  • Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: ​​“I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.
  • The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up
  • “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.”
  • ...18 more annotations...
  • In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before.
  • The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.
  • These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences
  • Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.
  • As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide.
  • As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone
  • the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.
  • Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine.
  • the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations.
  • The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?
  • Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.
  • now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems
  • Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?
  • despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
  • The humanists will need to understand natural-language processing because it’s the future of language
  • that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.
  • But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.
karenmcgregor

A Comprehensive Guide to Initiating Network Administration Assignment Writing Help on c... - 0 views

Embarking on the journey of mastering Network Administration assignments? Look no further than https://www.computernetworkassignmenthelp.com, your dedicated partner in providing specialized Network...

#networkadministrationassignmentwritinghelp #networkadministration #placeanorder #student #education education

started by karenmcgregor on 10 Jan 24 no follow-up yet
Javier E

Jonathan Haidt: Reasons Do Matter - NYTimes.com - 0 views

  • I never said that reason plays no role in judgment. Rather, I urged that we be realistic about reasoning and recognize that reasons persuade others on moral and political issues only under very special circumstances.
  • two basic kinds of cognitive events are “seeing-that” and “reasoning-why.” (These terms correspond roughly to what the psychologist Daniel Kahneman and others call “System 1” and “System 2” and that I call the “elephant” and the “rider.”)
  • We effortlessly and intuitively “see that” something is true, and then we work to find justifications, or “reasons why,” which we can give to others.  Both processes are crucial for understanding belief and persuasion. Both are needed for the kind of democratic deliberation that Lynch (and I) want to promote.
  • ...13 more annotations...
  • as an intuitionist, I see hope in an approach to deliberative democracy that uses social psychology to calm the passions and fears that make horizontal movement so difficult.
  • if your opponent succeeds in defeating your reasons, you are unlikely to change your judgment. You’ve been dragged into the upper-left quadrant, but you still feel, intuitively, that it’s wrong
  • This, I suggest, is how moral arguments proceed when people have strong intuitions anchoring their beliefs. And intuitions are rarely stronger than when they are part of our partisan identities. So I’m not saying that reasons “play no role in moral judgment.” In fact, four of the six links in my Social Intuitionist Model are reasoning links. Most of what’s going on during an argument is reasoning.
  • I’m saying that reason is far less powerful than intuition, so if you’re arguing (or deliberating) with a partner who lives on the other side of the political spectrum from you, and you approach issues such as abortion, gay marriage or income inequality with powerfully different intuitive reactions, you are unlikely to effect any persuasion no matter how good your arguments and no matter how much time you give your opponent to reflect upon your logic.
  • According to Margolis, people don’t change their minds unless they move along the horizontal dimension. Intuition is what most matters for belief. Yet a moral argument generally consists of round after round of reasoning. Each person tries to pull the other along the vertical dimension.
  • One of the issues I am most passionate about is political civility. I co-run a site at www.CivilPolitics.org where we define civility as “the ability to disagree with others while respecting their sincerity and decency.” We explain our goals like this: “We believe this ability [civility] is best fostered by indirect methods (changing contexts, payoffs and institutions) rather than by direct methods (such as pleading with people to be more civil, or asking people to sign civility pledges).” In other words, we hope to open up space for civil disagreement by creating contexts in which elephants (automatic processes and intuitions) are calmer, rather than by asking riders (controlled processes, including reasoning) to try harder.
  • We are particularly interested in organizations that try to create a sense of community and camaraderie as a precondition for political discussions.
  • if you want to persuade someone, talk to the elephant first. Trigger the right intuitions first.
  • This is why there has been such rapid movement on gay marriage and gay rights. It’s not because good arguments have suddenly appeared, which nobody thought of in the 1990s
  • younger people, who grew up knowing gay people and seeing gay couples on television, have no such disgust. For them, the arguments are much more persuasive.
  • I love Aristotle’s emphasis on habit — and I had a long section on virtue ethics in Chapter 6 that got cut at the last minute, but which I have just now posted online here
  • philosophers have the best norms for good thinking that I have ever encountered. When my work is critiqued by a philosopher I can be certain that he or she has read me carefully, including the footnotes, and will not turn me into a straw man. More than any other subculture I know, the philosophical community embodies the kinds of normative pressures for reason-giving and responsiveness to reasons that Allan Gibbard describes in “Wise Choices, Apt Feelings.”
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” [a toy simulation of this selective-reporting pattern follows these annotations]
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study (in which genetically identical mice, run through scrupulously standardized tests in three different labs, responded to cocaine in wildly different ways) is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
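A minimal simulation sketch (not from the article; the 0.1 true effect, the 20-subject samples, and the 1,000 labs are illustrative assumptions) of the mechanism Ioannidis and Schooler describe: when many underpowered studies chase the p < 0.05 boundary and only the "significant" ones are published, the published effect sizes overshoot the truth, and later, larger replications drift back down, which reads as a decline effect.

```python
# Illustrative sketch only: how selecting results at p < 0.05 inflates published effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.1    # small real group difference, in standard-deviation units (assumed)
N_PER_GROUP = 20     # underpowered initial studies (assumed)
N_LABS = 1000        # independent labs running the same experiment (assumed)
ALPHA = 0.05         # the conventional significance boundary discussed above

published = []
for _ in range(N_LABS):
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    _, p_value = stats.ttest_ind(treated, control)
    effect = treated.mean() - control.mean()
    if p_value < ALPHA and effect > 0:   # publication bias: only positive, "significant" results
        published.append(effect)

print(f"true effect:           {TRUE_EFFECT:.2f}")
print(f"mean published effect: {np.mean(published):.2f}")
print(f"labs that 'found' it:  {len(published) / N_LABS:.1%}")
# The published record overstates the effect several-fold; bigger follow-up
# studies regress toward 0.1, which looks like a mysterious decline.
```

Raising N_PER_GROUP, i.e. pre-committing to the larger samples Schooler argues for, closes most of the gap between the published average and the true effect.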
Javier E

untitled - 0 views

  • Scientists at Stanford University and the J. Craig Venter Institute have developed the first software simulation of an entire organism, a humble single-cell bacterium that lives in the human genital and respiratory tracts.
  • the work was a giant step toward developing computerized laboratories that could carry out many thousands of experiments much faster than is possible now, helping scientists penetrate the mysteries of diseases like cancer and Alzheimer’s.
  • “cancer is not a one-gene problem; it’s a many-thousands-of-factors problem.”
  • ...7 more annotations...
  • This kind of modeling is already in use to study individual cellular processes like metabolism. But Dr. Covert said: “Where I think our work is different is that we explicitly include all of the genes and every known gene function. There’s no one else out there who has been able to include more than a handful of functions or more than, say, one-third of the genes.”
  • The simulation, which runs on a cluster of 128 computers, models the complete life span of the cell at the molecular level, charting the interactions of 28 categories of molecules — including DNA, RNA, proteins and small molecules known as metabolites, which are generated by cell processes.
  • They called the simulation an important advance in the new field of computational biology, which has recently yielded such achievements as the creation of a synthetic life form — an entire bacterial genome created by a team led by the genome pioneer J. Craig Venter. The scientists used it to take over an existing cell.
  • A decade ago, scientists developed simulations of metabolism that are now being used to study a wide array of cells, including bacteria, yeast and photosynthetic organisms. Other models exist for processes like protein synthesis.
  • “Right now, running a simulation for a single cell to divide only one time takes around 10 hours and generates half a gigabyte of data,” Dr. Covert wrote. “I find this fact completely fascinating, because I don’t know that anyone has ever asked how much data a living thing truly holds. We often think of the DNA as the storage medium, but clearly there is more to it than that.”
  • scientists chose an approach called object-oriented programming, which parallels the design of modern software systems. Software designers organize their programs in modules, which communicate with one another by passing data and instructions back and forth.
  • “The major modeling insight we had a few years ago was to break up the functionality of the cell into subgroups, which we could model individually, each with its own mathematics, and then to integrate these submodels together into a whole.” A minimal sketch of that submodel-integration pattern follows below.
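A minimal, hypothetical sketch of the submodel-integration pattern described in that quote (the module names, toy quantities, and update rules here are invented for illustration and are not the Covert lab's actual code): each cellular process is its own class with its own update rule, and a driver loop integrates all of them against one shared cell state at every time step.

```python
# Hypothetical illustration of the submodel-integration idea, not the real whole-cell model.
from dataclasses import dataclass


@dataclass
class CellState:
    """Shared state that every submodel reads from and writes to."""
    metabolites: float = 100.0
    rna: float = 0.0
    protein: float = 0.0
    steps: int = 0


class Metabolism:
    def step(self, cell: CellState) -> None:
        cell.metabolites += 5.0              # toy nutrient uptake


class Transcription:
    def step(self, cell: CellState) -> None:
        if cell.metabolites >= 1.0:          # spend metabolites to make RNA
            cell.metabolites -= 1.0
            cell.rna += 1.0


class Translation:
    def step(self, cell: CellState) -> None:
        if cell.rna >= 1.0 and cell.metabolites >= 2.0:
            cell.metabolites -= 2.0          # spend metabolites (reusing RNA) to make protein
            cell.protein += 1.0


def simulate(submodels, n_steps: int) -> CellState:
    cell = CellState()
    for _ in range(n_steps):
        for model in submodels:              # integrate the submodels into a whole, each step
            model.step(cell)
        cell.steps += 1
    return cell


print(simulate([Metabolism(), Transcription(), Translation()], n_steps=100))
```

The real model reportedly couples 28 categories of molecules and runs for hours on a 128-machine cluster; the point of the sketch is only the shape of the architecture: independent mathematics per module, one shared state, one integration loop.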
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.
  • Just as it took us until 2025 to fill up Lake Michigan in the article's thought experiment (doubling a tiny amount of water added every 18 months, starting in 1940), the simple exponential curve of Moore's Law suggests it's going to take us until 2025 to build a computer with the processing power of the human brain. And it's going to happen the same way: For the first 70 years, it will seem as if nothing is happening, even though we're doubling our progress every 18 months. Then, in the final 15 years, seemingly out of nowhere, we'll finish the job.
  • And that's exactly where we are. We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence. That's because even a thousandth of the power of a human brain is—let's be honest—a bit of a joke. (The arithmetic sketch after this list puts rough numbers on how few doublings remain.)
  • ...4 more annotations...
  • But there's another reason as well: Every time computers break some new barrier, we decide—or maybe just finally get it through our thick skulls—that we set the bar too low.
  • the best estimates of the human brain suggest that our own processing power is about equivalent to 10 petaflops. ("Peta" comes after giga and tera.) That's a lot of flops, but last year an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory was clocked at 16.3 petaflops.
  • in Lake Michigan terms, we finally have a few inches of water in the lake bed, and we can see it rising. All those milestones along the way—playing chess, translating web pages, winning at Jeopardy!, driving a car—aren't just stunts. They're precisely the kinds of things you'd expect as we struggle along with platforms that aren't quite powerful enough—yet. True artificial intelligence will very likely be here within a couple of decades. Making it small, cheap, and ubiquitous might take a decade more.
  • In other words, by about 2040 our robot paradise awaits.
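A back-of-the-envelope sketch of the doubling arithmetic behind the Lake Michigan analogy (the roughly 10-petaflop brain estimate is the article's; treating progress as a clean doubling every 18 months is an idealization): it counts how many doublings separate each fraction-of-a-brain milestone from the finish line.

```python
# Rough arithmetic only: doublings remaining at each fraction-of-a-brain milestone.
import math

BRAIN_FLOPS = 10e15      # ~10 petaflops, the estimate quoted above
DOUBLING_YEARS = 1.5     # one Moore's-Law-style doubling every 18 months

for fraction in (1e-12, 1e-9, 1e-6, 1e-3):   # trillionth, billionth, millionth, thousandth
    doublings = math.log2(1 / fraction)      # doublings still needed to reach a whole brain
    years_left = doublings * DOUBLING_YEARS
    print(f"{fraction * BRAIN_FLOPS:9.0e} flops ({fraction:g} of a brain): "
          f"~{doublings:.0f} doublings, ~{years_left:.0f} years to go")
# A thousandth of a brain still looks like a joke, yet it is only about ten
# doublings -- roughly fifteen years -- from the whole lake.
```

That is the whole force of the exponential framing: the last three orders of magnitude cost no more time than any earlier three.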
Javier E

The Deepest Self - NYTimes.com - 0 views

  • Deep in the core of our being there are the unconscious natural processes built in by evolution. These deep unconscious processes propel us to procreate or strut or think in certain ways, often impulsively. Then, at the top, we have our conscious, rational processes. This top layer does its best to exercise some restraint and executive function. This evolutionary description has become the primary way we understand ourselves.
  • Yet in conversation when we say someone is deep, that they have a deep mind or a deep heart, we don’t mean that they are animalistic or impulsive. We mean the opposite. When we say that someone is a deep person, we mean they have achieved a quiet, dependable mind by being rooted in something spiritual and permanent.
  • depth, the core of our being, is something we cultivate over time. We form relationships that either turn the core piece of ourselves into something more stable and disciplined or something more fragmented and disorderly. We begin with our natural biases but carve out depths according to the quality of the commitments we make. Our origins are natural; our depths are man-made — engraved by thought and action.
  • ...6 more annotations...
  • There’s great wisdom embedded in this conversational understanding of depth, and it should cause us to amend the System 1/System 2 image of human nature that we are getting from evolutionary biology. Specifically, it should cause us to make a sharp distinction between origins and depth.
  • A person of deep character has certain qualities: in the realm of intellect, she has permanent convictions about fundamental things; in the realm of emotions, she has a web of unconditional loves; in the realm of action, she has permanent commitments to transcendent projects that cannot be completed in a single lifetime.
  • the strictly evolutionary view of human nature sells humanity short. It leaves the impression that we are just slightly higher animals
  • While we start with and are influenced by evolutionary forces, people also have the chance to make themselves deep in a way not explicable in strictly evolutionary terms.
  • So much of what we call depth is built through freely chosen suffering. People make commitments — to a nation, faith, calling or loved ones — and endure the sacrifices those commitments demand.
  • The people we admire are rooted in nature but have surpassed nature. Often they grew up in cultures that encouraged them to take a loftier view of their possibilities than we do today.
julia rhodes

How people learn - The Week - 0 views

  • In a traditional classroom, the teacher stands at the front of the class explaining what is clear in their mind to a group of passive students. Yet this pedagogical strategy doesn't positively impact retention of information from the lecture, improve understanding of basic concepts, or affect beliefs (that is, whether new information changes your beliefs about how something works).
  • Everything that constitutes "understanding" science and "thinking scientifically" resides in the long-term memory, which is developed via the construction and assembly of component proteins.
  • The research tells us that the human brain can hold a maximum of about seven different items in its short-term working memory and can process no more than about four ideas at once. Exactly what an "item" means when translated from the cognitive science lab into the classroom is a bit fuzzy.
  • ...13 more annotations...
  • The results were similarly disturbing when students were tested to determine understanding of basic concepts. More instruction wasn't helping students advance from novice to expert. In fact, the data indicated the opposite: students had more novice-like beliefs after they completed a course than they had when they started.
  • But in addition, experts have a mental organizational structure that facilitates the retrieval and effective application of their knowledge.
  • experts have an ability to monitor their own thinking ("metacognition"), at least in their discipline of expertise. They are able to ask themselves, "Do I understand this? How can I check my understanding?"
  • But that is not what cognitive science tells us. It tells us instead that students need to develop these different ways of thinking by means of extended, focused mental effort.
  • new ways of thinking are always built on the prior thinking of the individual, so if the educational process is to be successful, it is essential to take that prior thinking into account.
  • Given that lectures were devised as a means of transferring knowledge from one to many, it seems obvious that we would ensure that people retain the information they are consuming.
  • What is elementary, worldly wisdom? Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form.
  • "So it makes perfect sense," Wieman writes, "that they are not learning to think like experts, even though they are passing science courses by memorizing facts and problem-solving recipes."
  • Anything one can do to reduce cognitive load improves learning.
  • A second way teachers can improve instruction is by recognizing the importance of student beliefs about science
  • My third example of how teaching and learning can be improved is by implementing the principle that effective teaching consists of engaging students, monitoring their thinking, and providing feedback.
  • I assign students to groups the first day of class (typically three to four students in adjacent seats) and design each lecture around a series of seven to 10 clicker questions that cover the key learning goals for that day.
  • The process of critiquing each other's ideas in order to arrive at a consensus also enormously improves their ability both to carry on scientific discourse and to test their own understanding.
Javier E

Why We Make Bad Decisions - NYTimes.com - 1 views

  • SIX years ago I was struck down with a mystery illness.
  • I was offered a vast range of potential diagnoses.
  • Faced with all these confusing and conflicting opinions, I had to work out which expert to trust, whom to believe and whose advice to follow. As an economist specializing in the global economy, international trade and debt, I have spent most of my career helping others make big decisions
  • ...12 more annotations...
  • up until then I hadn’t thought much about the process of decision making. So in between M.R.I.’s, CT scans and spinal taps, I dove into the academic literature on decision making. Not just in my field but also in neuroscience, psychology, sociology, information science, political science and history.
  • Physicians do get things wrong, remarkably often. Studies have shown that up to one in five patients are misdiagnosed. In the United States and Canada it is estimated that 50,000 hospital deaths each year could have been prevented if the real cause of illness had been correctly identified.
  • Yet people are loath to challenge experts.
  • when confronted with the expert, it was as if the independent decision-making parts of many subjects’ brains pretty much switched off. They simply ceded their power to decide to the expert.
  • If we are to control our own destinies, we have to switch our brains back on and come to our medical consultations with plenty of research done, able to use the relevant jargon. If we can’t do this ourselves we need to identify someone in our social or family network who can do so on our behalf.
  • Anxiety, stress and fear — emotions that are part and parcel of serious illness — can distort our choices. Stress makes us prone to tunnel vision, less likely to take in the information we need. Anxiety makes us more risk-averse than we would ordinarily be, and more deferential.
  • It’s not that we can’t be anxious, it’s that we need to acknowledge to ourselves that we are.
  • It is also crucial to ask probing questions not only of the experts but of ourselves.
  • we bring into our decision-making process flaws and errors of our own. All of us show bias when it comes to what information we take in. We typically focus on anything that agrees with the outcome we want.
  • We need to be aware of our natural born optimism, for that harms good decision making, too.
  • We need to acknowledge our tendency to incorrectly process challenging news and actively push ourselves to hear the bad as well as the good. It felt great when I stumbled across information that implied I didn’t need any serious treatment at all. When we find data that supports our hopes we appear to get a dopamine rush similar to the one we get if we eat chocolate, have sex or fall in love
  • But it’s often information that challenges our existing opinions or wishful desires that yields the greatest insights