
Home / TOK Friends / Group items tagged Complexity


Javier E

How Do You Know When Society Is About to Fall Apart? - The New York Times - 1 views

  • Tainter seemed calm. He walked me through the arguments of the book that made his reputation, “The Collapse of Complex Societies,” which has for years been the seminal text in the study of societal collapse, an academic subdiscipline that arguably was born with its publication in 1988
  • It is only a mild overstatement to suggest that before Tainter, collapse was simply not a thing.
  • His own research has moved on; these days, he focuses on “sustainability.”
  • ...53 more annotations...
  • He writes with disarming composure about the factors that have led to the disintegration of empires and the abandonment of cities and about the mechanism that, in his view, makes it nearly certain that all states that rise will one day fall
  • societal collapse and its associated terms — “fragility” and “resilience,” “risk” and “sustainability” — have become the objects of extensive scholarly inquiry and infrastructure.
  • Princeton has a research program in Global Systemic Risk, Cambridge a Center for the Study of Existential Risk
  • even Tainter, for all his caution and reserve, was willing to allow that contemporary society has built-in vulnerabilities that could allow things to go very badly indeed — probably not right now, maybe not for a few decades still, but possibly sooner. In fact, he worried, it could begin before the year was over.
  • Plato, in “The Republic,” compared cities to animals and plants, subject to growth and senescence like any living thing. The metaphor would hold: In the early 20th century, the German historian Oswald Spengler proposed that all cultures have souls, vital essences that begin falling into decay the moment they adopt the trappings of civilization.
  • that theory, which became the heart of “The Collapse of Complex Societies.” Tainter’s argument rests on two proposals. The first is that human societies develop complexity, i.e. specialized roles and the institutional structures that coordinate them, in order to solve problems
  • All history since then has been “characterized by a seemingly inexorable trend toward higher levels of complexity, specialization and sociopolitical control.”
  • Eventually, societies we would recognize as similar to our own would emerge, “large, heterogeneous, internally differentiated, class structured, controlled societies in which the resources that sustain life are not equally available to all.”
  • Something more than the threat of violence would be necessary to hold them together, a delicate balance of symbolic and material benefits that Tainter calls “legitimacy,” the maintenance of which would itself require ever more complex structures, which would become ever less flexible, and more vulnerable, the more they piled up.
  • Social complexity, he argues, is inevitably subject to diminishing marginal returns. It costs more and more, in other words, while producing smaller and smaller profits.
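Tainter gives no equations for this curve, but the shape of the claim is easy to sketch. The toy model below is entirely an assumption of mine (benefits growing sublinearly with complexity, costs growing linearly); it just makes visible how marginal returns shrink and eventually turn negative:

```python
import math

# Illustrative only: assume the benefit of complexity grows like sqrt(c)
# while its cost grows linearly in c. Tainter's argument is qualitative;
# these functional forms are not his.
def net_return(c, benefit_scale=10.0, unit_cost=1.0):
    return benefit_scale * math.sqrt(c) - unit_cost * c

levels = [0, 10, 20, 30, 40, 50]
returns = [net_return(c) for c in levels]
# Marginal return of each additional increment of complexity:
marginal = [b - a for a, b in zip(returns, returns[1:])]
print([round(m, 2) for m in marginal])
# Each step of added complexity pays less than the last,
# and the late steps pay less than nothing.
```

Under these assumptions the first increment of complexity is hugely profitable and the last ones are outright losses, which is the "curve of diminishing returns" the excerpts below keep referring back to.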
  • Take Rome, which, in Tainter’s telling, was able to win significant wealth by sacking its neighbors but was thereafter required to maintain an ever larger and more expensive military just to keep the imperial machine from stalling — until it couldn’t anymore.
  • This is how it goes. As the benefits of ever-increasing complexity — the loot shipped home by the Roman armies or the gentler agricultural symbiosis of the San Juan Basin — begin to dwindle, Tainter writes, societies “become vulnerable to collapse.”
  • haven’t countless societies weathered military defeats, invasions, even occupations and lengthy civil wars, or rebuilt themselves after earthquakes, floods and famines?
  • Only complexity, Tainter argues, provides an explanation that applies in every instance of collapse.
  • Complexity builds and builds, usually incrementally, without anyone noticing how brittle it has all become. Then some little push arrives, and the society begins to fracture.
  • A disaster — even a severe one like a deadly pandemic, mass social unrest or a rapidly changing climate — can, in Tainter’s view, never be enough by itself to cause collapse
  • The only precedent Tainter could think of, in which pandemic coincided with mass social unrest, was the Black Death of the 14th century. That crisis reduced the population of Europe by as much as 60 percent.
  • Whether any existing society is close to collapsing depends on where it falls on the curve of diminishing returns.
  • The United States hardly feels like a confident empire on the rise these days. But how far along are we?
  • Scholars of collapse tend to fall into two loose camps. The first, dominated by Tainter, looks for grand narratives and one-size-fits-all explanations
  • The second is more interested in the particulars of the societies they study
  • Patricia McAnany, who teaches at the University of North Carolina at Chapel Hill, has questioned the usefulness of the very concept of collapse — she was an editor of a 2010 volume titled “Questioning Collapse” — but admits to being “very, very worried” about the lack, in the United States, of the “nimbleness” that crises require of governments.
  • “We’re too vested and tied to places.” Without the possibility of dispersal, or of real structural change to more equitably distribute resources, “at some point the whole thing blows. It has to.”
  • In Turchin’s case the key is the loss of “social resilience,” a society’s ability to cooperate and act collectively for common goals. By that measure, Turchin judges that the United States was collapsing well before Covid-19 hit. For the last 40 years, he argues, the population has been growing poorer and more unhealthy as elites accumulate more and more wealth and institutional legitimacy founders. “The United States is basically eating itself from the inside out,
  • Inequality and “popular immiseration” have left the country extremely vulnerable to external shocks like the pandemic, and to internal triggers like the killings of George Floyd
  • Societies evolve complexity, he argues, precisely to meet such challenges.
  • Eric H. Cline, who teaches at the George Washington University, argued in “1177 B.C.: The Year Civilization Collapsed” that Late Bronze Age societies across Europe and western Asia crumbled under a concatenation of stresses, including natural disasters — earthquakes and drought — famine, political strife, mass migration and the closure of trade routes. On their own, none of those factors would have been capable of causing such widespread disintegration, but together they formed a “perfect storm” capable of toppling multiple societies all at once.
  • Collapse “really is a matter of when,” he told me, “and I’m concerned that this may be the time.”
  • In “The Collapse of Complex Societies,” Tainter makes a point that echoes the concern that Patricia McAnany raised. “The world today is full,” Tainter writes. Complex societies occupy every inhabitable region of the planet. There is no escaping. This also means, he writes, that collapse, “if and when it comes again, will this time be global.” Our fates are interlinked. “No longer can any individual nation collapse. World civilization will disintegrate as a whole.”
  • If it happens, he says, it would be “the worst catastrophe in history.”
  • The quest for efficiency, he wrote recently, has brought on unprecedented levels of complexity: “an elaborate global system of production, shipping, manufacturing and retailing” in which goods are manufactured in one part of the world to meet immediate demands in another, and delivered only when they’re needed. The system’s speed is dizzying, but so are its vulnerabilities.
  • A more comprehensive failure of fragile supply chains could mean that fuel, food and other essentials would no longer flow to cities. “There would be billions of deaths within a very short period,” Tainter says.
  • If we sink “into a severe recession or a depression,” Tainter says, “then it will probably cascade. It will simply reinforce itself.”
  • Tainter tells me, he has seen “a definite uptick” in calls from journalists: The study of societal collapse suddenly no longer seems like a purely academic pursuit
  • Turchin is keenly aware of the essential instability of even the sturdiest-seeming systems. “Very severe events, while not terribly likely, are quite possible,” he says. When he emigrated from the U.S.S.R. in 1977, he adds, no one imagined the country would splinter into its constituent parts. “But it did.”
  • He writes of visions of “bloated bureaucracies” becoming the basis of “entire political careers.” Arms races, he observes, presented a “classic example” of spiraling complexity that provides “no tangible benefit for much of the population” and “usually no competitive advantage” either.
  • It is hard not to read the book through the lens of the last 40 years of American history, as a prediction of how the country might deteriorate if resources continued to be slashed from nearly every sector but the military, prisons and police.
  • The more a population is squeezed, Tainter warns, the larger the share that “must be allocated to legitimization or coercion.”
  • And so it was: As U.S. military spending skyrocketed — to, by some estimates, a total of more than $1 trillion today from $138 billion in 1980 — the government would try both tactics, ingratiating itself with the wealthy by cutting taxes while dismantling public-assistance programs and incarcerating the poor in ever-greater numbers.
  • “As resources committed to benefits decline,” Tainter wrote in 1988, “resources committed to control must increase.”
  • The overall picture drawn by Tainter’s work is a tragic one. It is our very creativity, our extraordinary ability as a species to organize ourselves to solve problems collectively, that leads us into a trap from which there is no escaping
  • Complexity is “insidious,” in Tainter’s words. “It grows by small steps, each of which seems reasonable at the time.” And then the world starts to fall apart, and you wonder how you got there.
  • Perhaps collapse is not, actually, a thing. Perhaps, as an idea, it was a product of its time, a Cold War hangover that has outlived its usefulness, or an academic ripple effect of climate-change anxiety, or a feedback loop produced by some combination of the two
  • if you pay attention to people’s lived experience, and not just to the abstractions imposed by a highly fragmented archaeological record, a different kind of picture emerges.
  • Tainter’s understanding of societies as problem-solving entities can obscure as much as it reveals
  • Plantation slavery arose in order to solve a problem faced by the white landowning class: The production of agricultural commodities like sugar and cotton requires a great deal of backbreaking labor. That problem, however, has nothing to do with the problems of the people they enslaved. Which of them counts as “society”?
  • Since the beginning of the pandemic, the total net worth of America’s billionaires, all 686 of them, has jumped by close to a trillion dollars.
  • If societies are not in fact unitary, problem-solving entities but heaving contradictions and sites of constant struggle, then their existence is not an all-or-nothing game.
  • Collapse appears not as an ending, but a reality that some have already suffered — in the hold of a slave ship, say, or on a long, forced march from their ancestral lands to reservations far away — and survived.
  • The current pandemic has already given many of us a taste of what happens when a society fails to meet the challenges that face it, when the factions that rule over it tend solely to their own problems
  • the real danger comes from imagining that we can keep living the way we always have, and that the past is any more stable than the present.
  • If you close your eyes and open them again, the periodic disintegrations that punctuate our history — all those crumbling ruins — begin to fade, and something else comes into focus: wiliness, stubbornness and, perhaps the strongest and most essential human trait, adaptability.
  • When one system fails, we build another. We struggle to do things differently, and we push on. As always, we have no other choice.
Javier E

The Lasting Lessons of John Conway's Game of Life - The New York Times - 0 views

  • “Because of its analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called ‘simulation games,’” Mr. Gardner wrote when he introduced Life to the world 50 years ago with his October 1970 column.
  • The Game of Life motivated the use of cellular automata in the rich field of complexity science, with simulations modeling everything from ants to traffic, clouds to galaxies. More trivially, the game attracted a cult of “Lifenthusiasts,” programmers who spent a lot of time hacking Life — that is, constructing patterns in hopes of spotting new Life-forms.
  • The tree of Life also includes oscillators, such as the blinker, and spaceships of various sizes (the glider being the smallest).
  • ...24 more annotations...
  • Patterns that didn’t change one generation to the next, Dr. Conway called still lifes — such as the four-celled block, the six-celled beehive or the eight-celled pond. Patterns that took a long time to stabilize, he called methuselahs.
  • The second thing Life shows us is something that Darwin hit upon when he was looking at Life, the organic version. Complexity arises from simplicity!
  • I first encountered Life at the Exploratorium in San Francisco in 1978. I was hooked immediately by the thing that has always hooked me — watching complexity arise out of simplicity.
  • Life shows you two things. The first is sensitivity to initial conditions. A tiny change in the rules can produce a huge difference in the output, ranging from complete destruction (no dots) through stasis (a frozen pattern) to patterns that keep changing as they unfold.
  • Life shows us complex virtual “organisms” arising out of the interaction of a few simple rules — so goodbye “Intelligent Design.”
  • I’ve wondered for decades what one could learn from all that Life hacking. I recently realized it’s a great place to try to develop “meta-engineering” — to see if there are general principles that govern the advance of engineering and help us predict the overall future trajectory of technology.
  • Melanie Mitchell — Professor of complexity, Santa Fe Institute
  • Given Conway’s proof that the Game of Life can be made to simulate a Universal Computer — that is, it could be “programmed” to carry out any computation that a traditional computer can do — the extremely simple rules can give rise to the most complex and most unpredictable behavior possible. This means that there are certain properties of the Game of Life that can never be predicted, even in principle!
  • I use the Game of Life to make vivid for my students the ideas of determinism, higher-order patterns and information. One of its great features is that nothing is hidden; there are no black boxes in Life, so you know from the outset that anything that you can get to happen in the Life world is completely unmysterious and explicable in terms of a very large number of simple steps by small items.
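Those "simple steps by small items" fit in a few lines. A minimal sketch of Life's standard rule (birth on exactly 3 live neighbors, survival on 2 or 3), using a sparse set-of-live-cells representation — my own implementation choice, not anything from the column:

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Life (rule B3/S23).
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors each candidate cell has.
    neighbors = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 live neighbors,
    # or has 2 and is already alive. Nothing is hidden: this is the
    # entire physics of the Life universe.
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # the simplest oscillator
assert step(step(blinker)) == blinker  # period 2
```

Everything that happens in Life — gliders, still lifes, even the universal computers mentioned above — is produced by repeated application of this one function, which is what makes it such a clean classroom model of determinism and higher-order pattern.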
  • In Thomas Pynchon’s novel “Gravity’s Rainbow,” a character says, “But you had taken on a greater and more harmful illusion. The illusion of control. That A could do B. But that was false. Completely. No one can do. Things only happen.” This is compelling but wrong, and Life is a great way of showing this.
  • In Life, we might say, things only happen at the pixel level; nothing controls anything, nothing does anything. But that doesn’t mean that there is no such thing as action, as control; it means that these are higher-level phenomena composed (entirely, with no magic) from things that only happen.
  • Stephen Wolfram — Scientist and C.E.O., Wolfram Research
  • Brian Eno — Musician, London
  • Bert Chan — Artificial-life researcher and creator of the continuous cellular automaton “Lenia,” Hong Kong
  • it did have a big impact on beginner programmers, like me in the ’90s, giving them a sense of wonder and a kind of confidence that some easy-to-code math models can produce complex and beautiful results. It’s like a starter kit for future software engineers and hackers, together with the Mandelbrot set, the Lorenz attractor, et cetera.
  • if we think about our everyday life, about corporations and governments, the cultural and technical infrastructures humans built for thousands of years, they are not unlike the incredible machines that are engineered in Life.
  • In normal times, they are stable and we can keep building stuff one component upon another, but in harder times like this pandemic or a new Cold War, we need something that is more resilient and can prepare for the unpreparable. That would need changes in our “rules of life,” which we take for granted.
  • Rudy Rucker — Mathematician and author of “Ware Tetralogy,” Los Gatos, Calif.
  • That’s what chaos is about. The Game of Life, or a kinky dynamical system like a pair of pendulums, or a candle flame, or an ocean wave, or the growth of a plant — they aren’t readily predictable. But they are not random. They do obey laws, and there are certain kinds of patterns — chaotic attractors — that they tend to produce. But again, unpredictable is not random. An important and subtle distinction which changed my whole view of the world.
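Rucker's distinction — lawful and deterministic, yet unpredictable — is easy to demonstrate. The logistic map below is my example, not his: two starting points differing by one part in two million stay bounded on the same attractor, but their trajectories diverge completely within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One iteration of the logistic map; fully chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two almost identical starting points:
a, b = 0.2, 0.2000001
gaps = []
for _ in range(50):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))

# Lawful, not random: both orbits stay bounded in [0, 1] forever ...
assert 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0
# ... yet the microscopic initial difference has grown by orders of
# magnitude, which is why the system is unpredictable in practice.
assert max(gaps) > 1e-3
```

The same qualitative behavior — bounded, rule-governed, yet exquisitely sensitive to initial conditions — is what the pendulums, candle flames, and ocean waves in the passage share.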
  • William Poundstone — Author of “The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge,” Los Angeles, Calif.
  • The Game of Life’s pulsing, pyrotechnic constellations are classic examples of emergent phenomena, introduced decades before that adjective became a buzzword.
  • Fifty years later, the misfortunes of 2020 are the stuff of memes. The biggest challenges facing us today are emergent: viruses leaping from species to species; the abrupt onset of wildfires and tropical storms as a consequence of a small rise in temperature; economies in which billions of free transactions lead to staggering concentrations of wealth; an internet that becomes more fraught with hazard each year
  • Looming behind it all is our collective vision of an artificial intelligence-fueled future that is certain to come with surprises, not all of them pleasant.
  • The name Conway chose — the Game of Life — frames his invention as a metaphor. But I’m not sure that even he anticipated how relevant Life would become, and that in 50 years we’d all be playing an emergent game of life and death.
Javier E

Global Warming Denial Explained by Rebecca Costa - The Daily Beast - 3 views

  • While railing about how difficult it’s become for the man on the street to separate facts from beliefs, he brought up his favorite global impasse again: climate change. Despite scientific evidence that stacks higher than the Egyptian pyramids, Maher lamented that there are still Americans walking around who “don’t think the sun is hot.”
  • Maher asks why facts are becoming marginalized. The answer is right under his nose. When Darwin discovered the slow pace of evolutionary change (millions of years), he also explained what happens to us when the complexity of our problems exceeds the capabilities our brains have evolved to this point. It’s simple: when facts become incomprehensible, we switch to beliefs. In other words, all societies eventually become irrational when confronted with problems that are too complex, too large, too messy to solve.
  • Thankfully, we have two weapons earlier civilizations didn’t have: models for high failure rates and neuroscience. Take the venture capital model for example. No matter how much due diligence venture capitalists perform, they can’t pick a winner from a loser more than 20 percent of the time. But the enormous success of those winners overshadows the failures, so venture capitalists are successful in spite of themselves.
  • ...1 more annotation...
  • Secondly, we can turn to neuroscience. Until recently we haven’t been able to look under the skull and see what the brain does when a problem is highly complex. The good news? The brain has a secret weapon against complexity, a process neuroscientists are now calling “insight.” We are learning more every day about insight’s ability to catch the brain up to complexity—the real antidote to reverting to beliefs as a default.
margogramiak

Scientists uncover new path toward treating a rare but deadly neurologic condition: Com... - 0 views

  • Molybdenum cofactor (Moco) is a compound that is little known but is essential for life.
    • margogramiak
       
      Interested in learning more... how can something be essential but unknown?
  • Children born without the ability to synthesize Moco die young.
    • margogramiak
       
      Oh wow, it really is important.
  • ...6 more annotations...
  • Studies with a popular laboratory model, the nematode Caenorhabditis elegans, have revealed a possible therapeutic avenue for a rare but deadly condition in which children are born without the ability to make molybdenum cofactor (Moco) on their own
    • margogramiak
       
      How have I never heard of this?
  • Moco is essential for life
    • margogramiak
       
      But what is it?
  • This suggests that such protein-Moco complexes could be used as a treatment for Moco deficiency in people.
    • margogramiak
       
      Nice! Maybe we have a fix!
  • C. elegans
    • margogramiak
       
      This seems really familiar... I think I had an ACT passage that talked about it.
  • The researchers found that the worms could take in Moco as a range of purified Moco-protein complexes. These included complexes with proteins from bacteria, bread mold, green algae and cow's milk. Ingesting these complexes saved the Moco-deficient worms.
    • margogramiak
       
      That's super interesting. I wonder how they figured that out.
  • "We do not want to overstate our findings, especially as they relate to patients, but we are extremely excited about the therapeutic and fundamental implications of this work."
    • margogramiak
       
      That seems very promising! I'm glad I got to learn about something I was so uninformed about!
Javier E

The "missing law" of nature was here all along | Salon.com - 0 views

  • A recently published scientific article proposes a sweeping new law of nature, approaching the matter with dry, clinical efficiency that still reads like poetry.
  • “Evolving systems are asymmetrical with respect to time; they display temporal increases in diversity, distribution, and/or patterned behavior,” they continue, mounting their case from the shoulders of Charles Darwin, extending it toward all things living and not. 
  • To join the known physics laws of thermodynamics, electromagnetism and Newton’s laws of motion and gravity, the nine scientists and philosophers behind the paper propose their “law of increasing functional information.”
  • ...27 more annotations...
  • In short, a complex and evolving system — whether that’s a flock of gold finches or a nebula or the English language — will produce ever more diverse and intricately detailed states and configurations of itself.
  • Some of these more diverse and intricate configurations, the scientists write, are shed and forgotten over time. The configurations that persist are ones that find some utility or novel function in a process akin to natural selection, but a selection process driven by the passing-on of information rather than just the sowing of biological genes
  • Have they finally glimpsed, I wonder, the connectedness and symbiotic co-evolution of their own scientific ideas with those of the world’s writers
  • Have they learned to describe in their own quantifying language that cradle from which both our disciplines have emerged and the firmament on which they both stand — the hearing and telling of stories in order to exist?
  • Have they quantified the quality of all existent matter, living and not: that all things inherit a story in data to tell, and that our stories are told by the very forms we take to tell them? 
  • “Is there a universal basis for selection? Is there a more quantitative formalism underlying this conjectured conceptual equivalence—a formalism rooted in the transfer of information?,” they ask of the world’s disparate phenomena. “The answer to both questions is yes.”
  • Yes. They’ve glimpsed it, whether they know it or not. Sing to me, O Muse, of functional information and its complex diversity.
  • The principle of complexity evolving at its own pace when left to its own devices, independent of time but certainly in a dance with it, is nothing new. Not in science, nor in its closest humanities kin, science and nature writing. Give things time and nourishing environs, protect them from your own intrusions and — living organisms or not — they will produce abundant enlacement of forms.
  • This is how poetry was born from the same larynxes and phalanges that tendered nuclear equations: We featherless bipeds gave language our time and delighted attendance until its forms were so multivariate that they overflowed with inevitable utility.
  • In her Pulitzer-winning “Pilgrim at Tinker Creek,” nature writer Annie Dillard explains plainly that evolution is the vehicle of such intricacy in the natural world, as much as it is in our own thoughts and actions. 
  • “The stability of simple forms is the sturdy base from which more complex, stable forms might arise, forming in turn more complex forms,” she explains, drawing on the undercap frills of mushrooms and filament-fine filtering tubes inside human kidneys to illustrate her point. 
  • “Utility to the creature is evolution’s only aesthetic consideration. Form follows function in the created world, so far as I know, and the creature that functions, however bizarre, survives to perpetuate its form,” writes Dillard.
  • “Of the multiplicity of forms, I know nothing. Except that, apparently, anything goes. This holds for forms of behavior as well as design — the mantis munching her mate, the frog wintering in mud.” 
  • She notes that, of all forms of life we’ve ever known to exist, only about 10% are still alive. What extravagant multiplicity. 
  • “Intricacy is that which is given from the beginning, the birthright, and in the intricacy is the hardiness of complexity that ensures against the failures of all life,” Dillard writes. “The wonder is — given the errant nature of freedom and the burgeoning texture of time — the wonder is that all the forms are not monsters, that there is beauty at all, grace gratuitous.”
  • “This paper, and the reason why I'm so proud of it, is because it really represents a connection between science and the philosophy of science that perhaps offers a new lens into why we see everything that we see in the universe,” lead scientist Michael Wong told Motherboard in a recent interview. 
  • Wong is an astrobiologist and planetary scientist at the Carnegie Institution for Science. In his team’s paper, that bridge toward scientific philosophy is not only preceded by a long history of literary creativity but directly theorizes about the creative act itself.  
  • “The creation of art and music may seem to have very little to do with the maintenance of society, but their origins may stem from the need to transmit information and create bonds among communities, and to this day, they enrich life in innumerable ways,” Wong’s team writes.  
  • “Perhaps, like eddies swirling off of a primary flow field, selection pressures for ancillary functions can become so distant from the core functions of their host systems that they can effectively be treated as independently evolving systems,” the authors add, pointing toward the elaborate mating dance culture observed in birds of paradise.
  • “Perhaps it will be humanity’s ability to learn, invent, and adopt new collective modes of being that will lead to its long-term persistence as a planetary phenomenon. In light of these considerations, we suspect that the general principles of selection and function discussed here may also apply to the evolution of symbolic and social systems.”
  • The Mekhilta teaches that all Ten Commandments were pronounced in a single utterance. Similarly, the Maharsha says the Torah’s 613 mitzvoth are only perceived as a plurality because we’re time-bound humans, even though they together form a singular truth which is indivisible from He who expressed it. 
  • Or, as the Mishna would have it, “the creations were all made in generic form, and they gradually expanded.” 
  • Like swirling eddies off of a primary flow field.
  • “O Lord, how manifold are thy works!” cried out David in his psalm. “In wisdom hast thou made them all: the earth is full of thy riches. So is this great and wide sea, wherein are things creeping innumerable, both small and great beasts.” 
  • In all things, then — from poetic inventions, to rare biodiverse ecosystems, to the charted history of our interstellar equations — it is best if we conserve our world’s intellectual and physical diversity, for both the study and testimony of its immeasurable multiplicity.
  • Because, whether wittingly or not, science is singing the tune of the humanities. And whether expressed in algebraic logic or ancient Greek hymn, its chorus is the same throughout the universe: Be fruitful and multiply. 
  • Both intricate configurations of art and matter arise and fade according to their shared characteristic, long-known by students of the humanities: each have been graced with enough time to attend to the necessary affairs of their most enduring pleasures. 
Javier E

An Existential Problem in the Search for Alien Life - The Atlantic - 0 views

  • The fact is, we still don’t know what life is.
  • since the days of Aristotle, scientists and philosophers have struggled to draw a precise line between what is living and what is not, often returning to criteria such as self-organization, metabolism, and reproduction but never finding a definition that includes, and excludes, all the right things.
  • If you say life consumes fuel to sustain itself with energy, you risk including fire; if you demand the ability to reproduce, you exclude mules. NASA hasn’t been able to do better than a working definition: “Life is a self-sustaining chemical system capable of Darwinian evolution.”
  • ...20 more annotations...
  • it lacks practical application. If humans found something on another planet that seemed to be alive, how much time would we have to sit around and wait for it to evolve?
  • The only life we know is life on Earth. Some scientists call this the n=1 problem, where n is the number of examples from which we can generalize.
  • Cronin studies the origin of life, also a major interest of Walker’s, and it turned out that, when expressed in math, their ideas were essentially the same. They had both zeroed in on complexity as a hallmark of life. Cronin is devising a way to systematize and measure complexity, which he calls Assembly Theory.
  • What we really want is more than a definition of life. We want to know what life, fundamentally, is. For that kind of understanding, scientists turn to theories. A theory is a scientific fundamental. It not only answers questions, but frames them, opening new lines of inquiry. It explains our observations and yields predictions for future experiments to test.
  • Consider the difference between defining gravity as “the force that makes an apple fall to the ground” and explaining it, as Newton did, as the universal attraction between all particles in the universe, proportional to the product of their masses and so on. A definition tells us what we already know; a theory changes how we understand things.
  • the potential rewards of unlocking a theory of life have captivated a clutch of researchers from a diverse set of disciplines. “There are certain things in life that seem very hard to explain,” Sara Imari Walker, a physicist at Arizona State University who has been at the vanguard of this work, told me. “If you scratch under the surface, I think there is some structure that suggests formalization and mathematical laws.”
  • Walker doesn’t think about life as a biologist—or an astrobiologist—does. When she talks about signs of life, she doesn’t talk about carbon, or water, or RNA, or phosphine. She reaches for different examples: a cup, a cellphone, a chair. These objects are not alive, of course, but they’re clearly products of life. In Walker’s view, this is because of their complexity. Life brings complexity into the universe, she says, in its own being and in its products, because it has memory: in DNA, in repeating molecular reactions, in the instructions for making a chair.
  • He measures the complexity of an object—say, a molecule—by calculating the number of steps necessary to put the object’s smallest building blocks together in that certain way. His lab has found, for example, when testing a wide range of molecules, that those with an “assembly number” above 15 were exclusively the products of life. Life makes some simpler molecules, too, but only life seems to make molecules that are so complex.
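Cronin's published method operates on molecular bond graphs, but the core idea can be loosely illustrated on strings: count the fewest join steps needed to build a target when anything already built can be reused. This is a toy Python sketch only — the function name and the search strategy are invented for illustration, not Cronin's algorithm.

```python
def assembly_index(target, max_steps=10):
    """Toy assembly index: fewest join steps to build `target`,
    starting from its individual characters, where each step may
    join any two objects built so far (reuse is allowed)."""
    basis = frozenset(target)  # single characters are the free building blocks
    best = [None]

    def search(built, steps):
        # prune branches that cannot beat the best answer found so far
        if best[0] is not None and steps >= best[0]:
            return
        if target in built:
            best[0] = steps
            return
        if steps >= max_steps:
            return
        parts = list(built)
        for a in parts:
            for b in parts:
                new = a + b
                # only pursue joins that appear inside the target
                if new in target and new not in built:
                    search(built | {new}, steps + 1)

    search(basis, 0)
    return best[0]
```

Note the pattern the theory predicts: a repetitive string like "ABAB" assembles in 2 steps (build "AB", then reuse it), while "ABCD", the same length but with nothing to reuse, needs 3. Reuse — memory — is what lowers the count.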
  • I reach for the theory of gravity as a familiar parallel. Someone might ask, “Okay, so in terms of gravity, where are we in terms of our understanding of life? Like, Newton?” Further back, further back, I say. Walker compares us to pre-Copernican astronomers, reliant on epicycles, little orbits within orbits, to make sense of the motion we observe in the sky. Cleland has put it in terms of chemistry, in which case we’re alchemists, not even true chemists yet
  • Walker’s whole notion is that it’s not only theoretically possible but genuinely achievable to identify something smaller—much smaller—that still nonetheless simply must be the result of life. The model would, in a sense, function like biosignatures as an indication of life that could be searched for. But it would drastically improve and expand the targets.
  • Walker would use the theory to predict what life on a given planet might look like. It would require knowing a lot about the planet—information we might have about Venus, but not yet about a distant exoplanet—but, crucially, would not depend at all on how life on Earth works, what life on Earth might do with those materials.
  • Without the ability to divorce the search for alien life from the example of life we know, Walker thinks, a search is almost pointless. “Any small fluctuations in simple chemistry can actually drive you down really radically different evolutionary pathways,” she told me. “I can’t imagine [life] inventing the same biochemistry on two worlds.”
  • Walker’s approach is grounded in the work of, among others, the philosopher of science Carol Cleland, who wrote The Quest for a Universal Theory of Life.
  • she warns that any theory of life, just like a definition, cannot be constrained by the one example of life we currently know. “It’s a mistake to start theorizing on the basis of a single example, even if you’re trying hard not to be Earth-centric. Because you’re going to be Earth-centric,” Cleland told me. In other words, until we find other examples of life, we won’t have enough data from which to devise a theory. Abstracting away from Earthliness isn’t a way to be agnostic, Cleland argues. It’s a way to be too abstract.
  • Cleland calls for a more flexible search guided by what she calls “tentative criteria.” Such a search would have a sense of what we’re looking for, but also be open to anomalies that challenge our preconceptions, detections that aren’t life as we expected but aren’t familiar not-life either—neither a flower nor a rock
  • it speaks to the hope that exploration and discovery might truly expand our understanding of the cosmos and our own world.
  • The astrobiologist Kimberley Warren-Rhodes studies life on Earth that lives at the borders of known habitability, such as in Chile’s Atacama Desert. The point of her experiments is to better understand how life might persist—and how it might be found—on Mars. “Biology follows some rules,” she told me. The more of those rules you observe, the better sense you have of where to look on other worlds.
  • In this light, the most immediate concern in our search for extraterrestrial life might be less that we only know about life on Earth, and more that we don’t even know that much about life on Earth in the first place. “I would say we understand about 5 percent,” Warren-Rhodes estimates of our cumulative knowledge. N=1 is a problem, and we might be at more like n=.05.
  • who knows how strange life on another world might be? What if life as we know it is the wrong life to be looking for?
  • We understand so little, and we think we’re ready to find other life?
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, the code that came before
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
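Barr's line-by-line findings aren't public, but the failure mode he describes — a single flipped bit forging a valid-looking state that a value-comparison fail-safe cannot catch — can be sketched in a few lines of Python. All names and encodings below are invented for illustration, not Toyota's actual code; the complement-storage hardening at the end is one common generic mitigation.

```python
THROTTLE_IDLE = 0b0000   # hypothetical encoding: engine at idle
THROTTLE_OPEN = 0b0100   # hypothetical encoding: command acceleration

def command(state):
    # naive fail-safe: accelerate only when state equals THROTTLE_OPEN
    return "accelerate" if state == THROTTLE_OPEN else "idle"

state = THROTTLE_IDLE
corrupted = state ^ (1 << 2)   # a single bit flip in memory
assert command(state) == "idle"
assert command(corrupted) == "accelerate"   # the flip forged a valid command

# one common hardening: store the bitwise complement alongside the value,
# so any single-bit corruption breaks the pair's consistency
def store(value):
    return (value, ~value & 0b1111)

def load(pair):
    value, comp = pair
    if value != (~comp & 0b1111):
        raise RuntimeError("memory corruption detected")
    return value
```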
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
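The article's elevator diagram can be written down as a minimal Python sketch: the model is just a transition table, and the safety properties the diagram makes "obvious" become checks a machine can run. The table encoding and event names here are assumptions for illustration, not any particular tool's format.

```python
# transition table: state -> {event: next_state}
TRANSITIONS = {
    "door_open":   {"close_door": "door_closed"},
    "door_closed": {"open_door": "door_open", "move": "moving"},
    "moving":      {"stop": "door_closed"},
}

def step(state, event):
    """Advance the model by one event; illegal events are rejected."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

# the diagram's safety property, checked mechanically:
# no event ever takes "door_open" straight to "moving"
assert all(next_state != "moving"
           for next_state in TRANSITIONS["door_open"].values())
```

Just by reading the table you can see what the prose says: the only way to get the elevator moving is to close the door first, and the only way to open the door is to stop.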
  • . In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
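Newcombe's point about intuition is easy to verify with back-of-envelope arithmetic: at web scale, a "one in a billion" per-request event is an everyday occurrence. The rates below are round illustrative numbers, not figures from his paper.

```python
# A "one in a billion" per-request bug, at 1 million requests per second:
p_fail = 1e-9
requests_per_day = 1_000_000 * 86_400          # 86.4 billion requests/day

expected_failures = p_fail * requests_per_day   # ~86 failures every day
p_at_least_one = 1 - (1 - p_fail) ** requests_per_day  # effectively certain

print(round(expected_failures, 1), p_at_least_one)
```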
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
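TLA+ has its own mathematical notation, but the exhaustive checking it enables can be sketched in Python: enumerate every reachable state of a small model by breadth-first search and test an invariant in each one, which is roughly what its TLC model checker does. The protocol below is invented and deliberately broken — each process checks the other's flag before raising its own — so the search finds a counterexample.

```python
from collections import deque

# States: ((pc0, pc1), (flag0, flag1)) for two processes.
def successors(state):
    pcs, flags = state
    for i in (0, 1):
        other = 1 - i
        if pcs[i] == "idle" and flags[other] == 0:
            # check-then-act bug: decide to enter before raising our flag
            yield (pcs[:i] + ("want",) + pcs[i+1:], flags)
        elif pcs[i] == "want":
            yield (pcs[:i] + ("cs",) + pcs[i+1:],
                   flags[:i] + (1,) + flags[i+1:])
        elif pcs[i] == "cs":
            yield (pcs[:i] + ("idle",) + pcs[i+1:],
                   flags[:i] + (0,) + flags[i+1:])

def check(invariant, initial):
    """Exhaustively explore every reachable state (TLC-style BFS)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # invariant holds in all reachable states

def mutual_exclusion(state):
    return state[0] != ("cs", "cs")  # never both in the critical section

bad = check(mutual_exclusion, (("idle", "idle"), (0, 0)))
```

Because every interleaving is explored, the checker is guaranteed to surface the state where both processes sit in the critical section — the kind of "rare" combination testing almost never hits.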
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

Google Alters Search to Handle More Complex Queries - NYTimes.com - 0 views

  • Google on Thursday announced one of the biggest changes to its search engine, a rewriting of its algorithm to handle more complex queries that affects 90 percent of all searches.
  • Google originally matched keywords in a search query to the same words on Web pages. Hummingbird is the culmination of a shift to understanding the meaning of phrases in a query and displaying Web pages that more accurately match that meaning
  • “They said, ‘Let’s go back and basically replace the engine of a 1950s car,’ ” said Danny Sullivan, founding editor of Search Engine Land, an industry blog. “It’s fair to say the general public seemed not to have noticed that Google ripped out its engine while driving down the road and replaced it with something else.”
  • ...3 more annotations...
  • The company made the changes, executives said, because Google users are asking increasingly long and complex questions and are searching Google more often on mobile phones with voice search.
  • The algorithm also builds on work Google has done to understand conversational language, like interpreting what pronouns in a search query refer to. Hummingbird extends that to all Web searches, not just results related to entities included in the Knowledge Graph. It tries to connect phrases and understand concepts in a long query.
  • The outcome is not a change in how Google searches the Web, but in the results that it shows. Unlike some of its other algorithm changes, including one that pushed down so-called content farms in search results, Hummingbird is unlikely to noticeably affect certain categories of Web businesses, Mr. Sullivan said. Instead, Google says it believes that users will see more precise results
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 1 views

  • Skinner's approach stressed the historical associations between a stimulus and the animal's response -- an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past.
  • Chomsky's conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations.
  • Chomsky acknowledged that the statistical approach might have practical value, just as in the example of a useful search engine, and is enabled by the advent of fast computers capable of processing massive data. But as far as a science goes, Chomsky would argue it is inadequate, or more harshly, kind of shallow
  • ...17 more annotations...
  • David Marr, a neuroscientist colleague of Chomsky's at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision,
  • a complex biological system can be understood at three distinct levels. The first level ("computational level") describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain's identification of the objects present in the image we had observed. The second level ("algorithmic level") describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level ("implementation level") describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
  • The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the "black box" that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.
  • As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner's behaviorist paradigm -- an achievement commonly referred to as the "cognitive revolution,"
  • While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists
  • Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field's heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is unlikely to yield general principles about the nature of intelligent beings or about cognition.
  • Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment.
  • it has been argued in my view rather plausibly, though neuroscientists don't like it -- that neuroscience for the last couple hundred years has been on the wrong track.
  • Implicit in this endeavor is the assumption that with enough sophisticated statistical tools and a large enough collection of data, signals of interest can be weeded it out from the noise in large and poorly understood biological systems.
  • Brenner, a contemporary of Chomsky who also participated in the same symposium on AI, was equally skeptical about new systems approaches to understanding the brain. When describing an up-and-coming systems approach to mapping brain circuits called Connectomics, which seeks to map the wiring of all neurons in the brain (i.e. diagramming which nerve cells are connected to others), Brenner called it a "form of insanity."
  • These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?
  • Ever since Isaiah Berlin's famous essay, it has become a favorite pastime of academics to place various thinkers and scientists on the "Hedgehog-Fox" continuum: the Hedgehog, a meticulous and specialized worker, driven by incremental progress in a clearly defined field versus the Fox, a flashier, ideas-driven thinker who jumps from question to question, ignoring field boundaries and applying his or her skills where they seem applicable.
  • Chomsky's work has had tremendous influence on a variety of fields outside his own, including computer science and philosophy, and he has not shied away from discussing and critiquing the influence of these ideas, making him a particularly interesting person to interview.
  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery
  • neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
kushnerha

BBC - Future - Will emoji become a new language? - 2 views

  • Emoji are now used in around half of every sentence on sites like Instagram, and Facebook looks set to introduce them alongside the famous “like” button as a way of expressing your reaction to a post.
  • If you were to believe the headlines, this is just the tipping point: some outlets have claimed that emoji are an emerging language that could soon compete with English in global usage. To many, this would be an exciting evolution of the way we communicate; to others, it is linguistic Armageddon.
  • Do emoji show the same characteristics as other communicative systems and actual languages? And what do they help us to express that words alone can’t say? When emoji appear with text, they often supplement or enhance the writing. This is similar to gestures that appear along with speech. Over the past three decades, research has shown that our hands provide important information that often transcends and clarifies the message in speech. Emoji serve this function too – for instance, adding a kissy or winking face can disambiguate whether a statement is flirtatiously teasing or just plain mean.
  • ...17 more annotations...
  • This is a key point about language use: rarely is natural language ever limited to speech alone. When we are speaking, we constantly use gestures to illustrate what we mean. For this reason, linguists say that language is “multi-modal”. Writing takes away that extra non-verbal information, but emoji may allow us to re-incorporate it into our text.
  • Emoji are not always used as embellishments, however – sometimes, strings of the characters can themselves convey meaning in a longer sequence on their own. But to constitute their own language, they would need a key component: grammar.
  • A grammatical system is a set of constraints that governs how the meaning of an utterance is packaged in a coherent way. Natural language grammars have certain traits that distinguish them. For one, they have individual units that play different roles in the sequence – like nouns and verbs in a sentence. Also, grammar is different from meaning
  • When emoji are isolated, they are primarily governed by simple rules related to meaning alone, without these more complex rules. For instance, according to research by Tyler Schnoebelen, people often create strings of emoji that share a common meaning
  • This sequence has little internal structure; even when it is rearranged, it still conveys the same message. These images are connected solely by their broader meaning. We might consider them to be a visual list: “here are all things related to celebrations and birthdays.” Lists are certainly a conventionalised way of communicating, but they don’t have grammar the way that sentences do.
  • What if the order did matter though? What if they conveyed a temporal sequence of events? Consider this example, which means something like “a woman had a party where they drank, and then opened presents and then had cake”:
  • In all cases, the doer of the action (the agent) precedes the action. In fact, this pattern is commonly found in both full languages and simple communication systems. For example, the majority of the world’s languages place the subject before the verb of a sentence.
  • These rules may seem like the seeds of grammar, but psycholinguist Susan Goldin-Meadow and colleagues have found this order appears in many other systems that would not be considered a language. For example, this order appears when people arrange pictures to describe events from an animated cartoon, or when speaking adults communicate using only gestures. It also appears in the gesture systems created by deaf children who cannot hear spoken languages and are not exposed to sign languages.
  • describes the children as lacking exposure to a language and thus inventing their own manual systems to communicate, called “homesigns”. These systems are limited in the size of their vocabularies and the types of sequences they can create. For this reason, the agent-act order seems not to be due to a grammar, but to arise from basic heuristics – practical workarounds – based on meaning alone. Emoji seem to tap into this same system.
  • Nevertheless, some may argue that despite emoji’s current simplicity, this may be the groundwork for emerging complexity – that although emoji do not constitute a language at the present time, they could develop into one over time.
  • Could an emerging “emoji visual language” be developing in a similar way, with actual grammatical structure? To answer that question, you need to consider the intrinsic constraints on the technology itself. Emoji are created by typing into a computer like text. But, unlike text, most emoji are provided as whole units, except for the limited set of emoticons which convert to emoji, like :) or ;). When writing text, we use the building blocks (letters) to create the units (words), not by searching through a list of every whole word in the language.
  • emoji force us to convey information in a linear unit-unit string, which limits how complex expressions can be made. These constraints may mean that they will never be able to achieve even the most basic complexity that we can create with normal and natural drawings.
  • What’s more, these limits also prevent users from creating novel signs – a requisite for all languages, especially emerging ones. Users have no control over the development of the vocabulary. As the “vocab list” for emoji grows, it will become increasingly unwieldy: using them will require a conscious search process through an external list, not an easy generation from our own mental vocabulary, like the way we naturally speak or draw. This is a key point – it means that emoji lack the flexibility needed to create a new language.
  • we already have very robust visual languages, as can be seen in comics and graphic novels. As I argue in my book, The Visual Language of Comics, the drawings found in comics use a systematic visual vocabulary (such as stink lines to represent smell, or stars to represent dizziness). Importantly, the available vocabulary is not constrained by technology and has developed naturally over time, like spoken and written languages.
  • grammar of sequential images is more of a narrative structure – not of nouns and verbs. Yet, these sequences use principles of combination like any other grammar, including roles played by images, groupings of images, and hierarchic embedding.
  • measured participants’ brainwaves while they viewed sequences one image at a time where a disruption appeared either within the groupings of panels or at the natural break between groupings. The particular brainwave responses that we observed were similar to those that experimenters find when violating the syntax of sentences. That is, the brain responds the same way to violations of “grammar”, whether in sentences or sequential narrative images.
  • I would hypothesise that emoji can use a basic narrative structure to organise short stories (likely made up of agent-action sequences), but I highly doubt that they would be able to create embedded clauses like these. I would also doubt that you would see the same kinds of brain responses that we saw with the comic strip sequences.
caelengrubb

5 key facts about language and the brain - 0 views

  • Language is a complex topic, interwoven with issues of identity, rhetoric, and ar
  • While other animals do have their own codes for communication — to indicate, for instance, the presence of danger, a willingness to mate, or the presence of food — such communications are typically “repetitive instrumental acts” that lack a formal structure of the kind that humans use when they utter sentences
  • As Homo sapiens, we have the necessary biological tools to utter the complex constructions that constitute language, the vocal apparatus, and a brain structure complex and well-developed enough to create a varied vocabulary and strict sets of rules on how to use it.
  • ...7 more annotations...
  • Though it remains unclear at what point the ancestors of modern humans first started to develop spoken language, we know that our Homo sapiens predecessors emerged around 150,000–200,000 years ago. So, Prof. Pagel explains, complex speech is likely at least as old as that
  • A study led by researchers from Lund University in Sweden found that committed language students experienced growth in the hippocampus, a brain region associated with learning and spatial navigation, as well as in parts of the cerebral cortex, or the outermost layer of the brain.
  • In fact, researchers have drawn many connections between bilingualism or multilingualism and the maintenance of brain health
  • Multiple studies, for instance, have found that bilingualism can protect the brain against Alzheimer’s disease and other forms of dementia.
  • Being bilingual has other benefits, too, such as training the brain to process information efficiently while expending only the necessary resources on the tasks at hand.
  • Research now shows that her assessment was absolutely correct — the language that we use does change not only the way we think and express ourselves, but also how we perceive and interact with the world.
  • Language holds such power over our minds, decision-making processes, and lives, so Boroditsky concludes by encouraging us to consider how we might use it to shape the way we think about ourselves and the world.
anniina03

This Strange Microbe May Mark One of Life's Great Leaps - The New York Times - 0 views

  • A bizarre tentacled microbe discovered on the floor of the Pacific Ocean may help explain the origins of complex life on this planet and solve one of the deepest mysteries in biology, scientists reported on Wednesday. Two billion years ago, simple cells gave rise to far more complex cells. Biologists have struggled for decades to learn how it happened.
  • The new species, called Prometheoarchaeum, turns out to be just such a transitional form, helping to explain the origins of all animals, plants, fungi — and, of course, humans. The research was reported in the journal Nature.
  • Species that share these complex cells are known as eukaryotes, and they all descend from a common ancestor that lived an estimated two billion years ago. Before then, the world was home only to bacteria and a group of small, simple organisms called archaea. Bacteria and archaea have no nuclei, lysosomes, mitochondria or skeletons
  • ...6 more annotations...
  • In the late 1900s, researchers discovered that mitochondria had been free-living bacteria at some point in the past. Somehow they were drawn inside another cell, providing new fuel for their host. In 2015, Thijs Ettema of Uppsala University in Sweden and his colleagues discovered fragments of DNA in sediments retrieved from the Arctic Ocean. The fragments contained genes from a species of archaea that seemed to be closely related to eukaryotes. Dr. Ettema and his colleagues named them Asgard archaea. (Asgard is the home of the Norse gods.) DNA from these mystery microbes turned up in a river in North Carolina, hot springs in New Zealand and other places around the world.
  • Masaru K. Nobu, a microbiologist at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, and his colleagues managed to grow these organisms in a lab. The effort took more than a decade. The microbes, which are adapted to life in the cold seafloor, have a slow-motion existence. Prometheoarchaeum can take as long as 25 days to divide. By contrast, E. coli divides once every 20 minutes.
  • In the lab, the researchers mimicked the conditions in the seafloor by putting the sediment in a chamber without any oxygen. They pumped in methane and extracted deadly waste gases that might kill the resident microbes. The mud contained many kinds of microbes. But by 2015, the researchers had isolated an intriguing new species of archaea. And when Dr. Ettema and colleagues announced the discovery of Asgard archaea DNA, the Japanese researchers were shocked. Their new, living microbe belonged to that group. The researchers then undertook more painstaking research to understand the new species and link it to the evolution of eukaryotes. The researchers named the microbe Prometheoarchaeum syntrophicum, in honor of Prometheus, the Greek god who gave humans fire — after fashioning them from clay.
  • This finding suggests that the proteins that eukaryotes used to build complex cells started out doing other things, and only later were assigned new jobs.Dr. Nobu and his colleagues are now trying to figure out what those original jobs were. It’s possible, he said, that Prometheoarchaeum creates its tentacles with genes later used by eukaryotes to build cellular skeletons.
  • Before the discovery of Prometheoarchaeum, some researchers suspected that the ancestors of eukaryotes lived as predators, swallowing up smaller microbes. They might have engulfed the first mitochondria this way.
  • Instead of hunting prey, Prometheoarchaeum seems to make its living by slurping up fragments of proteins floating by. Its partners feed on its waste. They, in turn, provide Prometheoarchaeum with vitamins and other essential compounds.
Javier E

Quantum Computing Advance Begins New Era, IBM Says - The New York Times - 0 views

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously.
  • ...15 more annotations...
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation. “We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.” In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation.
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate,”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that could match or surpass the performance of the quantum calculations.
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
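The error-mitigation procedure quoted above (deliberately amplify the noise, run the same circuit at several noise levels, then extrapolate the results back to zero noise) can be sketched numerically. The numbers below are invented for illustration, and this toy polynomial fit is only a caricature of IBM's actual zero-noise extrapolation:

```python
import numpy as np

# Hypothetical noisy estimates of a magnetization value at several
# deliberately amplified noise levels (all numbers are made up).
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])   # noise amplification factors
measured = np.array([0.42, 0.36, 0.31, 0.22])  # noisy estimates at each level

# Fit a low-order polynomial to the results as a function of noise level,
# then evaluate the fit at zero noise: the "noise-subtracted" answer.
coeffs = np.polyfit(noise_scale, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
```

The extrapolated value lies above every measured point, which is the whole trick: an answer that was never measured directly is recovered from the trend in the answers that were.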
Javier E

Wine-tasting: it's junk science | Life and style | The Observer - 0 views

  • Hodgson approached the organisers of the California State Fair wine competition, the oldest contest of its kind in North America, and proposed an experiment for their annual June tasting sessions. Each panel of four judges would be presented with their usual "flight" of samples to sniff, sip and slurp. But some wines would be presented to the panel three times, poured from the same bottle each time. The results would be compiled and analysed to see whether wine tasting really is scientific.
  • Results from the first four years of the experiment, published in the Journal of Wine Economics, showed a typical judge's scores varied by plus or minus four points over the three blind tastings. A wine deemed to be a good 90 would be rated as an acceptable 86 by the same judge minutes later and then an excellent 94.
  • ...9 more annotations...
  • Hodgson's findings have stunned the wine industry. Over the years he has shown again and again that even trained, professional palates are terrible at judging wine. "The results are disturbing," says Hodgson from the Fieldbrook Winery in Humboldt County, described by its owner as a rural paradise. "Only about 10% of judges are consistent and those judges who were consistent one year were ordinary the next year." "Chance has a great deal to do with the awards that wines win."
  • French academic Frédéric Brochet tested the effect of labels in 2001. He presented the same Bordeaux superior wine to 57 volunteers a week apart and in two different bottles – one for a table wine, the other for a grand cru. The tasters were fooled. When tasting a supposedly superior wine, their language was more positive – describing it as complex, balanced, long and woody. When the same wine was presented as plonk, the critics were more likely to use negatives such as weak, light and flat.
  • In 2011 Professor Richard Wiseman, a psychologist (and former professional magician) at Hertfordshire University invited 578 people to comment on a range of red and white wines, varying from £3.49 for a claret to £30 for champagne, and tasted blind. People could tell the difference between wines under £5 and those above £10 only 53% of the time for whites and only 47% of the time for reds. Overall they would have been just as successful flipping a coin to guess.
  • why are ordinary drinkers and the experts so poor at tasting blind? Part of the answer lies in the sheer complexity of wine. For a drink made by fermenting fruit juice, wine is a remarkably sophisticated chemical cocktail. Dr Bryce Rankine, an Australian wine scientist, identified 27 distinct organic acids in wine, 23 varieties of alcohol in addition to the common ethanol, more than 80 esters and aldehydes, 16 sugars, plus a long list of assorted vitamins and minerals that wouldn't look out of place on the ingredients list of a cereal pack. There are even harmless traces of lead and arsenic that come from the soil.
  • "People underestimate how clever the olfactory system is at detecting aromas and our brain is at interpreting them," says Hutchinson."The olfactory system has the complexity in terms of its protein receptors to detect all the different aromas, but the brain response isn't always up to it. But I'm a believer that everyone has the same equipment and it comes down to learning how to interpret it." Within eight tastings, most people can learn to detect and name a reasonable range of aromas in wine
  • People struggle with assessing wine because the brain's interpretation of aroma and bouquet is based on far more than the chemicals found in the drink. Temperature plays a big part. Volatiles in wine are more active when wine is warmer. Serve a New World chardonnay too cold and you'll only taste the overpowering oak. Serve a red too warm and the heady boozy qualities will be overpowering.
  • Colour affects our perceptions too. In 2001 Frédéric Brochet of the University of Bordeaux asked 54 wine experts to test two glasses of wine – one red, one white. Using the typical language of tasters, the panel described the red as "jammy" and commented on its crushed red fruit. The critics failed to spot that both wines were from the same bottle. The only difference was that one had been coloured red with a flavourless dye
  • Other environmental factors play a role. A judge's palate is affected by what she or he had earlier, the time of day, their tiredness, their health – even the weather.
  • Robert Hodgson is determined to improve the quality of judging. He has developed a test that will determine whether a judge's assessment of a blind-tasted glass in a medal competition is better than chance. The research will be presented at a conference in Cape Town this year. But the early findings are not promising. "So far I've yet to find someone who passes," he says.
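Hodgson's statistics lend themselves to a quick sanity check. A toy Monte Carlo (the 88-92 "medal band" and the four-point standard deviation are assumptions for illustration, not Hodgson's exact protocol) shows how often pure scoring noise pushes an honestly 90-point wine out of its band:

```python
import random

random.seed(0)
TRUE_QUALITY = 90     # the wine "really is" a 90-point wine
SCORE_SD = 4          # per-tasting noise, roughly Hodgson's +/- 4 points
trials = 100_000

off_band = 0
for _ in range(trials):
    score = TRUE_QUALITY + random.gauss(0, SCORE_SD)
    if not (88 <= score < 92):   # a narrow medal band centred on 90
        off_band += 1

fraction_off = off_band / trials  # roughly 0.62: most tastings miss the band
```

Even with a perfectly unbiased judge, about three tastings in five land outside a four-point medal band, consistent with Hodgson's conclusion that chance dominates the awards.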
sissij

Color of 2017? Pantone Picks a Spring Shade - The New York Times - 0 views

  • Not just any old green, of course: Pantone 15-0343, colloquially known as greenery, which is to say a “yellow-green shade that evokes the first days of spring.”
  • That is, the Color of the Year for 2017.
  • “This is the color of hopefulness, and of our connection to nature. It speaks to what we call the ‘re’ words: regenerate, refresh, revitalize, renew. Every spring we enter a new cycle and new shoots come from the ground. It is something life affirming to look forward to.”
  • ...3 more annotations...
  • Certainly the psychology of color ranges from the obvious (red represents aggression; pink is swaddling and calms people) to the chiaroscuro.
  • a combination of yellow and blue, or warmth and a certain cool,” she said. “It’s a complex marriage.”
  • You could argue that the selection is something of a self-fulfilling prophecy, except the point is that the products are already there (otherwise they couldn’t be marketed so immediately), which supports Pantone’s contention that it has identified a burgeoning trend.
  •  
    I found this article very interesting because it talks about the different impressions color can give us. Take the example in this article: the color of the year is greenery because it gives people a refreshing feeling. This is related to how we assign different meanings to things based on our pattern recognition. Since we always see the color green in nature, it is very easy for us to relate green with new life and energy. The way Ms. Eiseman talked about color is also very interesting. She saw colors as having significant meaning and thought green is the kind of complex marriage between blue and yellow. She even personified the colors. This shows how we tend to assign human properties to completely lifeless objects. --Sissi (12/9/2016)
Javier E

How Did Consciousness Evolve? - The Atlantic - 0 views

  • Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?
  • The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions.
  • The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence
  • ...23 more annotations...
  • Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition.
  • It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.
  • Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life
  • The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum
  • At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
  • All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates
  • According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.
  • The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning.
  • The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement
  • In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
  • With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously
  • The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.
  • The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it
  • The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are
  • Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence
  • this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description.
  • I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.
  • In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.
  • If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us.
  • The cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.
  • Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes
  • Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD
  • The HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.
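The trade-off behind the HADD can be made concrete with a toy signal-detection sketch (entirely my construction, not Barrett's model; the agent rate, cue noise, and thresholds are arbitrary illustrative numbers): a detector flags "agent" whenever a noisy cue crosses a threshold, and a hyperactive low-threshold detector misses fewer real agents at the cost of far more false positives.

```python
import random

random.seed(1)

def run(threshold, trials=100_000):
    """Simulate a threshold-based agency detector on noisy cues."""
    hits = misses = false_alarms = 0
    for _ in range(trials):
        agent = random.random() < 0.05                 # real agents are rare
        cue = random.gauss(1.0 if agent else 0.0, 1.0)  # noisy evidence
        if cue > threshold:
            if agent:
                hits += 1
            else:
                false_alarms += 1                      # a "ghost"
        elif agent:
            misses += 1                                # a missed predator
    return hits, misses, false_alarms

cautious = run(threshold=1.5)
hyperactive = run(threshold=0.0)

print(hyperactive[1] < cautious[1])   # fewer missed agents...
print(hyperactive[2] > cautious[2])   # ...but many more ghosts
```

With agents this rare and cues this noisy, the only way to miss fewer predators is to accept a flood of false alarms, which is the evolutionary bargain the excerpt describes.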
Javier E

Teach the Books, Touch the Heart - NYTimes.com - 0 views

  • It is ironic, then, that English Language Arts exams are designed for “cultural neutrality.” This is supposed to give students a level playing field on the exams, but what it does is bleed our English classes dry. We are trying to teach students to read increasingly complex texts, but they are complex only on the sentence level — not because the ideas they present are complex, not because they are symbolic, allusive or ambiguous. These are literary qualities, and they are more or less absent from testing materials.
  • Of course no teacher disputes the necessity of being able to read for information. But if literature has no place in these tests, and if preparation for the tests becomes the sole goal of education, then the reading of literature will go out of fashion in our schools.
  • we should abandon altogether the multiple-choice tests, which are in vogue not because they are an effective tool for judging teachers or students but because they are an efficient means of producing data. Instead, we should move toward extensive written exams, in which students could grapple with literary passages and books they have read in class, along with assessments of students’ reports and projects from throughout the year. This kind of system would be less objective and probably more time-consuming for administrators, but it would also free teachers from endless test preparation and let students focus on real learning.
  • We cannot enrich the minds of our students by testing them on texts that purposely ignore their hearts. By doing so, we are withholding from our neediest students any reason to read at all. We are teaching them that words do not dazzle but confound. We may succeed in raising test scores by relying on these methods, but we will fail to teach them that reading can be transformative and that it belongs to them.
Javier E

This Sociological Theory Explains Why Wall Street Is Rigged for Crisis - Bill Davidow -... - 0 views

  • A near brush with nuclear catastrophe, brought on by a single foraging bear, is an example of what sociologist Charles Perrow calls a “normal accident.” These frightening incidents are “normal” not because they happen often, but because they are almost certain to occur in any tightly connected complex system.
  • Perrow had a fairly simple solution for the problem. High-risk systems, such as nuclear power plants, should be built only as a last resort.
  • Perrow stresses the role that human error and mismanagement play in these scenarios. The important lesson: failures in complex systems are caused not only by hardware and software problems but also by people and their motivations.
  • Normal accidents, like these, occur because two or more independent failures happen and interact in unpredictable ways. After studying calamities such as the Three Mile Island meltdown, explosions at chemical plants, and ships colliding in the open sea, Perrow observed that safety mechanisms put in place to make the systems safer in fact frequently trigger the final failure.
  • That solution won’t work for financial markets. We need currency hedges, futures markets, and derivatives to keep our economic systems functioning. But we also have to realize tweaking the current system will not fix the problem. Most of the supposedly strong cures implemented by legislators to date, such as prohibiting bank holding companies from proprietary trading, are inadequate as well.
  • How do we make our markets less danger prone? A good place to start would be to reduce the excessive trading volumes that lie at the root of accidents like the Flash Freeze, Flash Crash, and Goldman debacle. There is no valid reason for high frequency trading to make up more than 50 percent of all stock trades, and there is no pressing need for some $4 trillion in daily foreign currency transactions. A Tobin tax on transactions, first suggested by Nobel laureate James Tobin in 1972, of as little as 0.1 percent, would significantly reduce these volumes. Smaller transaction volumes would reduce the size of accidents and possibly their frequency.
  • But tinkering with the current system and looking for easy ways out, as we are now, is bound to fail. We’re in danger of letting normal accidents in the financial system become all too normal.
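A back-of-the-envelope check on the magnitudes quoted above (the $4 trillion daily volume and the 0.1 percent rate come from the excerpt; the 0.05 percent high-frequency margin is my illustrative assumption):

```python
daily_fx_volume = 4e12   # dollars of daily FX turnover (from the excerpt)
tobin_rate = 0.001       # 0.1% tax per transaction

# Naive revenue if volume were unchanged; the tax's real aim is to shrink
# that volume, so this is an upper bound, not a forecast:
naive_daily_revenue = daily_fx_volume * tobin_rate
print(f"${naive_daily_revenue / 1e9:.0f} billion per day")  # $4 billion per day

# A round trip taxed 0.1% on each leg costs 0.2%, which swamps a
# hypothetical high-frequency margin of 0.05%:
hft_margin = 0.0005
round_trip_tax = 2 * tobin_rate
print(round_trip_tax > hft_margin)  # True: the strategy stops paying
```

The point of the sketch is that even a tiny rate is large relative to the razor-thin margins that sustain high-frequency volume, which is why the tax is expected to reduce trading rather than merely raise revenue.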
Javier E

E. O. Wilson's Theory of Everything - Magazine - The Atlantic - 0 views

  • Wilson told me the new proposed evolutionary model pulls the field “out of the fever swamp of kin selection,” and he confidently predicted a coming paradigm shift that would promote genetic research to identify the “trigger” genes that have enabled a tiny number of cases, such as the ant family, to achieve complex forms of cooperation.
  • In the book, he proposes a theory to answer what he calls “the great unsolved problem of biology,” namely how roughly two dozen known examples in the history of life—humans, wasps, termites, platypodid ambrosia beetles, bathyergid mole rats, gall-making aphids, one type of snapping shrimp, and others—made the breakthrough to life in highly social, complex societies. Eusocial species, Wilson noted, are by far “the most successful species in the history of life.”
  • Summarizing parts of it for me, Wilson was particularly unsparing of organized religion, likening the Book of Revelation, for example, to the ranting of “a paranoid schizophrenic who was allowed to write down everything that came to him.” Toward philosophy, he was only slightly kinder. Generation after generation of students have suffered trying to “puzzle out” what great thinkers like Socrates, Plato, and Descartes had to say on the great questions of man’s nature, Wilson said, but this was of little use, because philosophy has been based on “failed models of the brain.”
  • His theory draws upon many of the most prominent views of how humans emerged. These range from our evolution of the ability to run long distances to our development of the earliest weapons, which involved the improvement of hand-eye coordination. Dramatic climate change in Africa over the course of a few tens of thousands of years also may have forced Australopithecus and Homo to adapt rapidly. And over roughly the same span, humans became cooperative hunters and serious meat eaters, vastly enriching our diet and favoring the development of more-robust brains. By themselves, Wilson says, none of these theories is satisfying. Taken together, though, all of these factors pushed our immediate prehuman ancestors toward what he called a huge pre-adaptive step: the formation of the earliest communities around fixed camps.
  • “When humans started having a camp—and we know that Homo erectus had campsites—then we know they were heading somewhere,” he told me. “They were a group progressively provisioned, sending out some individuals to hunt and some individuals to stay back and guard the valuable campsite. They were no longer just wandering through territory, emitting calls. They were on long-term campsites, maybe changing from time to time, but they had come together. They began to read intentions in each other’s behavior, what each other are doing. They started to learn social connections more solidly.”
  • “The humans become consistent with all the others,” he said, and the evolutionary steps were likely similar—beginning with the formation of groups within a freely mixing population, followed by the accumulation of pre-adaptations that make eusociality more likely, such as the invention of campsites. Finally comes the rise to prevalence of eusocial alleles—one of two or more alternative forms of a gene that arise by mutation, and are found at the same place on a chromosome—which promote novel behaviors (like communal child care) or suppress old, asocial traits. Now it is up to geneticists, he adds, to “determine how many genes are involved in crossing the eusociality threshold, and to go find those genes.”
  • Wilson posits that two rival forces drive human behavior: group selection and what he calls “individual selection”—competition at the level of the individual to pass along one’s genes—with both operating simultaneously. “Group selection,” he said, “brings about virtue, and—this is an oversimplification, but—individual selection, which is competing with it, creates sin. That, in a nutshell, is an explanation of the human condition.
  • “Within groups, the selfish are more likely to succeed,” Wilson told me in a telephone conversation. “But in competition between groups, groups of altruists are more likely to succeed. In addition, it is clear that groups of humans proselytize other groups and accept them as allies, and that that tendency is much favored by group selection.” Taking in newcomers and forming alliances had become a fundamental human trait, he added, because “it is a good way to win.”
  • If Wilson is right, the human impulse toward racism and tribalism could come to be seen as a reflection of our genetic nature as much as anything else—but so could the human capacity for altruism, and for coalition- and alliance-building. These latter possibilities may help explain Wilson’s abiding optimism—about the environment and many other matters. If these traits are indeed deeply written into our genetic codes, we might hope that we can find ways to emphasize and reinforce them, to build problem-solving coalitions that can endure, and to identify with progressively larger and more-inclusive groups over time.
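Wilson's two-level tug-of-war can be sketched as a toy calculation (my construction, not Wilson's model; the 1.1 within-group advantage for selfish individuals and the 1 + a group-level bonus are arbitrary illustrative numbers): within each group the altruist fraction erodes, yet the altruist-rich group still outgrows the selfish one.

```python
def next_generation(a, size):
    """One generation of a two-level selection toy model.

    a    -- fraction of altruists in the group
    size -- current group size
    """
    altruists = a * size * 1.0          # altruists reproduce at base rate
    selfish = (1 - a) * size * 1.1      # selfish individuals do 10% better
    growth = 1 + a                      # group-level benefit of altruism
    new_size = (altruists + selfish) * growth
    new_a = altruists / (altruists + selfish)
    return new_a, new_size

a1, n1 = 0.9, 100.0   # altruist-rich group
a2, n2 = 0.1, 100.0   # selfish-rich group
for _ in range(20):
    a1, n1 = next_generation(a1, n1)
    a2, n2 = next_generation(a2, n2)

# Individual selection wins within groups (both altruist fractions fall),
# but group selection wins between them (the altruist-rich group dominates):
print(a1 < 0.9 and a2 < 0.1)   # True
print(n1 > n2)                  # True
```

The two print lines are the "sin" and "virtue" of Wilson's summary in miniature: selfishness gains ground inside every group, while groups of altruists carry the day overall.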
julia rhodes

How our brain assess bargains - 0 views

  • The 'supermarket shoppers' were brain-scanned to test their reactions to promotions and special offers in a major cutting-edge project by UK-based SBXL, one of Europe's leading shopping-behaviour specialists, and Bangor University's respected School of Psychology.
  • We know from other research that people are not as good at making rational decisions as they might expect, often using "rules of thumb" and educated guesses to evaluate decisions. Using brain imaging techniques we hope to get a better understanding of how the brain responds to special offers and how this may influence the decisions we make. This also gives us the chance to do some research on how we make decisions in a real world context."
  • Our data also agrees with previous research suggesting that as offers, or decisions get more complex, instead of working things out, our brains take shortcuts, and may guess that an offer is good. Interestingly, in our study people were just as good at selecting good complex offers from bad as they were for less complex ones, suggesting this guessing method may be as good in some cases as "working it out"."
  • "It turns out we are not as good at picking good offers as you might expect, with the average shopper in our experiment only picking 60% of good offers compared to bad. We also found that age had a strong negative effect on the ability to choose good offers, with older people less able to choose good offers over bad ones. We find this latter effect very interesting and would like to do some more research to find out why this may be the case."
  • The advantage of using fMRI to image the brain while actively making shopping decisions is that it enables us to see how the whole brain responds, including the 'deeper' areas of the brain, such as those associated with emotion and desire. This lets us understand more about what makes an offer appealing: in some cases the choice appears to be more rational, and in other cases we can see emotional circuitry getting involved in the decision-making process".
  • In particular we are interested in how factors we are unconsciously aware of can override what might be considered the optimal choice based on conscious judgements. We hope this partnership with SBXL will lead to further research in this area."
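The "working it out" that shoppers' brains reportedly shortcut amounts to a unit-price comparison. A minimal sketch of that computation (the offers themselves are invented examples, not stimuli from the study):

```python
def unit_price(total_price, quantity):
    """Price per item: the quantity shoppers would need to compute."""
    return total_price / quantity

# Two invented offers for the same product:
offers = [
    ("3 for £2.40", 2.40, 3),                         # £0.80 each
    ("buy 2 get 1 free at £0.95 each", 2 * 0.95, 3),  # £0.633 each
]

best = min(offers, key=lambda o: unit_price(o[1], o[2]))
print(best[0])  # buy 2 get 1 free at £0.95 each
```

Offers like "buy 2 get 1 free" obscure the unit price behind an extra arithmetic step, which is consistent with the study's finding that shoppers fall back on guessing as offer structure gets more complex.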