TOK Friends: Group items tagged computing

qkirkpatrick

Sebastian Seung's Quest to Map the Human Brain - NYTimes.com - 0 views

  • What the field needed, Tank said, was a computer program that could trace them automatically — a way to map the brain’s connections by the millions, opening a new area of scientific discovery.
  • What has made the early 21st century a particularly giddy moment for scientific mapmakers, though, is the precipitous rise of information technology. Advances in computers have provided a cheap means to collect and analyze huge volumes of data, and Moore’s Law, which predicts regular doublings in computing power, has shown little sign of flagging.
  • The Brain Initiative, the United States government’s 12-year, $4.5 billion brain-mapping effort, is a conscious echo of the genome project, but neuroscientists find themselves in a far more tenuous position at the outset.
  • Mapping the Brain
Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. (A toy version of this scoring is sketched at the end of this entry.)
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
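The "coefficient" annotations above amount to a weighted sum over logged interactions. Here is a minimal sketch of that idea in Python; the weights and event names are invented for illustration, and the only detail taken from the article is that messaging counts as the strongest signal (Facebook's real features and weights are not public).

```python
from collections import defaultdict

# Hypothetical weights: messaging strongest, per the article; values invented.
WEIGHTS = {"message": 5.0, "comment": 2.0, "like": 1.0, "profile_view": 0.5}

def coefficient(interactions):
    """Score tie strength per friend as a weighted sum of logged events.

    `interactions` is a list of (friend, event_type) pairs, standing in for
    the record Facebook keeps of every like, view and message.
    """
    scores = defaultdict(float)
    for friend, kind in interactions:
        scores[friend] += WEIGHTS.get(kind, 0.0)
    return dict(scores)

log = [("ana", "message"), ("ana", "like"), ("ben", "profile_view")]
print(coefficient(log))  # {'ana': 6.0, 'ben': 0.5} -> ana ranks higher
```

A score like this, recomputed as events stream in, is the kind of signal that can order a feed and sharpen ad targeting, which is why "time well spent" interactions are also the data-rich ones.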
Javier E

Opinion | Is Computer Code a Foreign Language? - The New York Times - 1 views

  • the proposal that foreign language learning can be replaced by computer coding knowledge is misguided:
  • Our profound and impressive ability to create complex tools with which to manipulate our environments is secondary to our ability to conceptualize and communicate about those environments in natural languages.
  • more urgent is my alarm at the growing tendency to accept and even foster the decline of the sort of interpersonal human contact that learning languages both requires and cultivates.
  • Language is an essential — perhaps the essential — marker of our species. We learn in and through natural languages; we develop our most fundamental cognitive skills by speaking and hearing languages; and we ultimately assume our identities as human beings and members of communities by exercising those languages
  • It stems from a widely held but mistaken belief that science and technology education should take precedence over subjects like English, history and foreign languages.
  • Natural languages aren’t just more complex versions of the algorithms with which we teach machines to do tasks; they are also the living embodiments of our essence as social animals.
  • We express our love and our losses, explore beauty, justice and the meaning of our existence, and even come to know ourselves all through natural languages.
  • we are fundamentally limited in how much we can know about another’s thoughts and feelings, and that this limitation and the desire to transcend it is essential to our humanity
  • For us humans, communication is about much more than getting information or following instructions; it’s about learning who we are by interacting with others.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

While You Were Sleeping - The New York Times - 0 views

  • look at where we are today thanks to artificial intelligence from digital computers — and the amount of middle-skill and even high-skill work they’re supplanting — and then factor in how all of this could be supercharged in a decade by quantum computing.
  • In December 2016, Amazon announced plans for the Amazon Go automated grocery store, in which a combination of computer vision and deep-learning technologies tracks items and charges customers only when they remove the items from the store. In February 2017, Bank of America began testing three ‘employee-less’ branch locations that offer full-service banking automatically, with access to a human, when necessary, via video teleconference.
  • This will be a challenge for developed countries, but even more so for countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work that’s now being automated.
  • “Some jobs will be displaced, but 100 percent of jobs will be augmented by A.I.,” added Rometty. Technology companies “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.”
  • Each time work gets outsourced or tasks get handed off to a machine, “we must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential,” McGowan said.
  • Therefore, education needs to shift “from education as a content transfer to learning as a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill.”
  • “Instructors/teachers move from guiding and assessing that transfer process to providing social and emotional support to the individual as they move into the role of driving their own continuous learning.”
Javier E

Opinion | A.I. Is Harder Than You Think - The New York Times - 1 views

  • The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.
  • The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules.
  • A genuine, human-level A.I. will need to be able to cope with all of those possible sentences, not just a small fragment of them.
  • No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
  • Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs.
  • That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
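The combinatorial claim above (a modest set of words plus a modest set of rules yields infinitely many sentences) can be made concrete with a toy context-free grammar. The grammar below is invented for illustration; the recursion in the NP rule is what makes the sentence set unbounded.

```python
import random

# Toy grammar: "the N that VP" lets noun phrases nest inside noun phrases,
# so there is no longest sentence.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"], ["idea"]],
    "V":  [["chased"], ["saw"], ["slept"]],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its rules at random."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the idea that saw the dog slept"
```

A human-level system has to cope with the whole unbounded space this process samples from, which is the article's point about pattern-matching over finite data.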
Javier E

The Lasting Lessons of John Conway's Game of Life - The New York Times - 0 views

  • “Because of its analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called ‘simulation games,’” Mr. Gardner wrote when he introduced Life to the world 50 years ago with his October 1970 column.
  • The Game of Life motivated the use of cellular automata in the rich field of complexity science, with simulations modeling everything from ants to traffic, clouds to galaxies. More trivially, the game attracted a cult of “Lifenthusiasts,” programmers who spent a lot of time hacking Life — that is, constructing patterns in hopes of spotting new Life-forms.
  • The tree of Life also includes oscillators, such as the blinker, and spaceships of various sizes (the glider being the smallest).
  • Patterns that didn’t change one generation to the next, Dr. Conway called still lifes — such as the four-celled block, the six-celled beehive or the eight-celled pond. Patterns that took a long time to stabilize, he called methuselahs.
  • Life shows us complex virtual “organisms” arising out of the interaction of a few simple rules — so goodbye “Intelligent Design.”
  • Brian Eno — Musician, London
  • I first encountered Life at the Exploratorium in San Francisco in 1978. I was hooked immediately by the thing that has always hooked me — watching complexity arise out of simplicity.
  • Life shows you two things. The first is sensitivity to initial conditions. A tiny change in the rules can produce a huge difference in the output, ranging from complete destruction (no dots) through stasis (a frozen pattern) to patterns that keep changing as they unfold.
  • The second thing Life shows us is something that Darwin hit upon when he was looking at Life, the organic version. Complexity arises from simplicity!
  • Stephen Wolfram — Scientist and C.E.O., Wolfram Research
  • I’ve wondered for decades what one could learn from all that Life hacking. I recently realized it’s a great place to try to develop “meta-engineering” — to see if there are general principles that govern the advance of engineering and help us predict the overall future trajectory of technology.
  • Melanie Mitchell — Professor of complexity, Santa Fe Institute
  • Given Conway’s proof that the Game of Life can be made to simulate a Universal Computer — that is, it could be “programmed” to carry out any computation that a traditional computer can do — the extremely simple rules can give rise to the most complex and most unpredictable behavior possible. This means that there are certain properties of the Game of Life that can never be predicted, even in principle!
  • I use the Game of Life to make vivid for my students the ideas of determinism, higher-order patterns and information. One of its great features is that nothing is hidden; there are no black boxes in Life, so you know from the outset that anything that you can get to happen in the Life world is completely unmysterious and explicable in terms of a very large number of simple steps by small items.
  • In Thomas Pynchon’s novel “Gravity’s Rainbow,” a character says, “But you had taken on a greater and more harmful illusion. The illusion of control. That A could do B. But that was false. Completely. No one can do. Things only happen.” This is compelling but wrong, and Life is a great way of showing this.
  • In Life, we might say, things only happen at the pixel level; nothing controls anything, nothing does anything. But that doesn’t mean that there is no such thing as action, as control; it means that these are higher-level phenomena composed (entirely, with no magic) from things that only happen.
  • Bert Chan— Artificial-life researcher and creator of the continuous cellular automaton “Lenia,” Hong Kong
  • it did have a big impact on beginner programmers, like me in the 90s, giving them a sense of wonder and a kind of confidence that some easy-to-code math models can produce complex and beautiful results. It’s like a starter kit for future software engineers and hackers, together with Mandelbrot Set, Lorenz Attractor, et cetera.
  • if we think about our everyday life, about corporations and governments, the cultural and technical infrastructures humans built for thousands of years, they are not unlike the incredible machines that are engineered in Life.
  • In normal times, they are stable and we can keep building stuff one component upon another, but in harder times like this pandemic or a new Cold War, we need something that is more resilient and can prepare for the unpreparable. That would need changes in our “rules of life,” which we take for granted.
  • Rudy Rucker— Mathematician and author of “Ware Tetralogy,” Los Gatos, Calif.
  • That’s what chaos is about. The Game of Life, or a kinky dynamical system like a pair of pendulums, or a candle flame, or an ocean wave, or the growth of a plant — they aren’t readily predictable. But they are not random. They do obey laws, and there are certain kinds of patterns — chaotic attractors — that they tend to produce. But again, unpredictable is not random. An important and subtle distinction which changed my whole view of the world.
  • William Poundstone— Author of “The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge,” Los Angeles, Calif.
  • The Game of Life’s pulsing, pyrotechnic constellations are classic examples of emergent phenomena, introduced decades before that adjective became a buzzword.
  • Fifty years later, the misfortunes of 2020 are the stuff of memes. The biggest challenges facing us today are emergent: viruses leaping from species to species; the abrupt onset of wildfires and tropical storms as a consequence of a small rise in temperature; economies in which billions of free transactions lead to staggering concentrations of wealth; an internet that becomes more fraught with hazard each year
  • Looming behind it all is our collective vision of an artificial intelligence-fueled future that is certain to come with surprises, not all of them pleasant.
  • The name Conway chose — the Game of Life — frames his invention as a metaphor. But I’m not sure that even he anticipated how relevant Life would become, and that in 50 years we’d all be playing an emergent game of life and death.
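Since everything in this entry follows from Conway's rules, here is a minimal sketch of them, using a standard set-of-live-cells implementation (not any particular program quoted above):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life: a live cell survives with 2 or 3
    live neighbours; an empty cell with exactly 3 live neighbours is born."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider, the smallest spaceship mentioned above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After four generations the same shape reappears, shifted one cell diagonally.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```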
margogramiak

How To Fight Deforestation In The Amazon From Your Couch | HuffPost - 0 views

  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
  • Some 15% of the Amazon, the world’s largest rainforest and a crucial carbon repository, has been cut or burned down. Around two-thirds of the Amazon lie within Brazil’s borders, where almost 157 square miles of forest were cleared in April alone. In addition to storing billions of tons of carbon, the Amazon is home to tens of millions of people and some 10% of the Earth’s biodiversity.
    • margogramiak: all horrifying stats.
  • you just have to be a citizen that is concerned about the issue of deforestation,
    • margogramiak: that's me!
  • to build an artificial intelligence model that can recognize signs of deforestation. That data can be used to alert governments and conservation organizations where intervention is needed and to inform policies that protect vital ecosystems. It may even one day predict where deforestation is likely to happen next.
    • margogramiak: That sounds super cool, and definitely useful.
  • To monitor deforestation, conservation organizations need an eye in the sky.
    • margogramiak: bird's eye view pictures of deforestation are always super impactful.
  • WRI’s Global Forest Watch online tracking system receives images of the world’s forests taken every few days by NASA satellites. A simple computer algorithm scans the images, flagging instances where before there were trees and now there are not. But slight disturbances, such as clouds, can trip up the computer, so experts are increasingly interested in using artificial intelligence. (A toy version of such a before/after rule is sketched at the end of this entry.)
    • margogramiak: that's so cool.
  • Inman was surprised how willing people have been to spend their time clicking on abstract-looking pictures of the Amazon.
    • margogramiak: I'm glad so many people want to help.
  • “Look at these nine blocks and make a judgment about each one. Does that satellite image look like a situation where human beings have transformed the landscape in some way?” Inman explained.
    • margogramiak: seems simple enough
  • It’s not always easy; that’s the point. For example, a brown patch in the trees could be the result of burning to clear land for agriculture (earning a check mark for human impact), or it could be the result of a natural forest fire (no check mark). Keen users might be able to spot subtle signs of intervention the computer would miss, like the thin yellow line of a dirt road running through the clearing. 
    • margogramiak: I was thinking about this issue... that's a hard problem to solve.
  • SAS’s website offers a handful of examples comparing natural forest features and manmade changes. 
    • margogramiak: I guess that would be helpful. What happens if someone messes up though?
  • users have analyzed almost 41,000 images, covering an area of rainforest nearly the size of the state of Montana. Deforestation caused by human activity is evident in almost 2 in 5 photos.
    • margogramiak: wow.
  • The researchers hope to use historical images of these new geographies to create a predictive model that could identify areas most at risk of future deforestation. If they can show that their AI model is successful, it could be useful for NGOs, governments and forest monitoring bodies, enabling them to carefully track forest changes and respond by sending park rangers and conservation teams to threatened areas. In the meantime, it’s a great educational tool for the citizen scientists who use the app
    • margogramiak: But then what do they do with this data? How do they use it to make a difference?
  • Users simply select the squares in which they’ve spotted some indication of human impact: the tell-tale quilt of farm plots, a highway, a suspiciously straight edge of tree line. 
    • margogramiak: I could do that!
  • we have still had people from 80 different countries come onto the app and make literally hundreds of judgments that enabled us to resolve 40,000 images,
    • margogramiak: I like how in a sense it makes all the users one big community because of their common goal of wanting to help the earth.
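The "before there were trees and now there are not" rule described in this entry can be sketched directly; the NDVI thresholds below are invented for illustration. Note how a cloud in the later image also lowers the vegetation index, producing exactly the false positives that push the project toward a trained model.

```python
import numpy as np

def flag_tree_loss(ndvi_before, ndvi_after, forest_thresh=0.6, drop_thresh=0.3):
    """Mask pixels that looked like dense forest earlier but show a large
    vegetation drop later. NDVI (normalised difference vegetation index)
    is high over healthy forest; thresholds here are illustrative only."""
    was_forest = ndvi_before > forest_thresh
    big_drop = (ndvi_before - ndvi_after) > drop_thresh
    return was_forest & big_drop

before = np.array([[0.8, 0.7], [0.2, 0.9]])
after = np.array([[0.8, 0.2], [0.2, 0.9]])
print(flag_tree_loss(before, after))  # flags only the cleared (or clouded) pixel
```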
Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
Javier E

Campaigns Mine Personal Lives to Get Out Vote - NYTimes.com - 0 views

  • Strategists affiliated with the Obama and Romney campaigns say they have access to information about the personal lives of voters at a scale never before imagined. And they are using that data to try to influence voting habits — in effect, to train voters to go to the polls through subtle cues, rewards and threats in a manner akin to the marketing efforts of credit card companies and big-box retailers.
  • In the weeks before Election Day, millions of voters will hear from callers with surprisingly detailed knowledge of their lives. These callers — friends of friends or long-lost work colleagues — will identify themselves as volunteers for the campaigns or independent political groups. The callers will be guided by scripts and call lists compiled by people — or computers — with access to details like whether voters may have visited pornography Web sites, have homes in foreclosure, are more prone to drink Michelob Ultra than Corona or have gay friends or enjoy expensive vacations.
  • “You don’t want your analytical efforts to be obvious because voters get creeped out,” said a Romney campaign official who was not authorized to speak to a reporter. “A lot of what we’re doing is behind the scenes.”
  • however, consultants to both campaigns said they had bought demographic data from companies that study details like voters’ shopping histories, gambling tendencies, interest in get-rich-quick schemes, dating preferences and financial problems. The campaigns themselves, according to campaign employees, have examined voters’ online exchanges and social networks to see what they care about and whom they know. They have also authorized tests to see if, say, a phone call from a distant cousin or a new friend would be more likely to prompt the urge to cast a ballot.
  • The campaigns have planted software known as cookies on voters’ computers to see if they frequent evangelical or erotic Web sites for clues to their moral perspectives. Voters who visit religious Web sites might be greeted with religion-friendly messages when they return to mittromney.com or barackobama.com. The campaigns’ consultants have run experiments to determine if embarrassing someone for not voting by sending letters to their neighbors or posting their voting histories online is effective.
  • “I’ve had half-a-dozen conversations with third parties who are wondering if this is the year to start shaming,” said one consultant who works closely with Democratic organizations. “Obama can’t do it. But the ‘super PACs’ are anonymous. They don’t have to put anything on the flier to let the voter know who to blame.”
  • Officials at both campaigns say the most insightful data remains the basics: a voter’s party affiliation, voting history, basic information like age and race, and preferences gleaned from one-on-one conversations with volunteers. But more subtle data mining has helped the Obama campaign learn that their supporters often eat at Red Lobster, shop at Burlington Coat Factory and listen to smooth jazz. Romney backers are more likely to drink Samuel Adams beer, eat at Olive Garden and watch college football.
Javier E

Moral code | Rough Type - 0 views

  • So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
  • As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them?
  • Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.
  • We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.
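To make the annotation's question concrete ("What are the criteria, and how do you weigh them?"), here is a deliberately crude sketch. Every number in it is an invented assumption, which is the point: any runnable "lesser of two evils" routine forces someone to write the moral weights down.

```python
# Invented harm weights and probabilities; not any real vehicle's logic.
HARM_WEIGHTS = {"occupant": 1.0, "pedestrian": 1.0}  # who gets to set these?

def expected_cost(outcome):
    """Weighted sum of each party's probability of fatal harm."""
    return sum(HARM_WEIGHTS[party] * p for party, p in outcome.items())

def choose_action(options):
    """Pick the action with the lowest expected cost: a moral code hiding
    inside a min() over constants somebody had to choose."""
    return min(options, key=lambda name: expected_cost(options[name]))

options = {
    "swerve_off_bridge": {"occupant": 0.4, "pedestrian": 0.0},
    "stay_on_course":    {"occupant": 0.0, "pedestrian": 0.9},
}
print(choose_action(options))  # swerve_off_bridge, given these made-up numbers
```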
Emily Horwitz

Paralyzed Mom Controls Robotic Arm Using Her Thoughts - Yahoo! News - 0 views

  • After years of paralysis, the one thing Jan Scheuermann wanted was to feed herself. Now, thanks to a mind-controlled robotic arm, Scheuermann has done just that.
  • By implanting two quarter-inch-by-quarter-inch electrodes in her brain and connecting them to a sophisticated robotic arm, researchers at the University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center have allowed the mother of two to manipulate objects by using only her thoughts through a brain-computer interface, or BCI.
  • "They asked me if there was something special I wanted to do," Sheuermann said. "And I said my goal is to feed myself a bar of chocolate. And I did that today."
  • Quadriplegics like Scheuermann have manipulated robotic arms using BCI before.
  • "With three degrees of control, you can do things like manipulate a computer screen and that gentleman was able to reach out and touch his daughter
  • But to actually manipulate objects, to feed yourself for example, you need more than those three dimensions of control. That's what makes Jan so remarkable
  • "The biggest change," Boninger said, "is the sophistication with which we've learned to interpret electrical activity in the brain."
  • "I wouldn't say we have decoded the brain," Boninger said. "But we are getting closer. We can't read emotions but we can interpret motions the brain wants the body to make."
  • "For me, it's been one of the most exciting endeavors I have ever undertaken," Sheuermann wrote on the University of Pittsburgh Medical Center blog. "Being with a team of scientists and using cutting-edge technology that makes me the only person in the world who can scratch her nose with a robotic arm, well, that's thrilling."
Lindsay Lyon

Largest Prime Discovered | Mathematics | LiveScience - 0 views

  • The largest prime number yet has been discovered — and it's 17,425,170 digits long. The new prime number crushes the last one discovered in 2008, which was a paltry 12,978,189 digits long.
  • The number — 2 raised to the 57,885,161 power minus 1 — was discovered by University of Central Missouri mathematician Curtis Cooper as part of a giant network of volunteer computers
  • "It's analogous to climbing Mt. Everest," said George Woltman, the retired, Orlando, Fla.-based computer scientist who created GIMPS. "People enjoy it for the challenge of the discovery of finding something that's never been known before."
  • mathematicians have devised a much cleverer strategy that dramatically reduces the time to find primes. That method uses a formula to check far fewer numbers. (The test used for Mersenne candidates is sketched at the end of this list.)
  • the number is the 48th example of a rare class of primes called Mersenne Primes. Mersenne primes take the form of 2 raised to the power of a prime number minus 1. Since they were first described by French monk Marin Mersenne 350 years ago, only 48 of these elusive numbers have been found, including the most recent discovery.
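The "cleverer strategy" mentioned above is, for numbers of the form 2^p − 1, the Lucas-Lehmer test that the GIMPS volunteer network runs; the test itself is standard number theory, and the code below is a compact illustrative sketch of it.

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test: for an odd prime p, M = 2**p - 1 is prime exactly
    when s == 0 after p - 2 iterations of s -> s*s - 2 (mod M), starting
    from s = 4. Far cheaper than trial division for huge M."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if is_mersenne_prime(p)])
# -> [3, 5, 7, 13, 17, 19]; 2**11 - 1 = 2047 = 23 * 89 is composite
```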
  • An interesting article that reminded me of the discussions we had last year on whether math was invented or discovered. With regard to prime numbers, could we ever stop finding "new" ones? Can we ever find a formula to pinpoint every single prime number without dividing it by other numbers?
Emily Horwitz

Ah, Wilderness! Nature Hike Could Unlock Your Imagination : Shots - Health News : NPR - 0 views

  • Want to be more creative? Drop that iPad and head to the great outdoors.
  • David Strayer, a cognitive neuroscientist in Utah, sent students out into nature with computers, to test their attention spans. "It was an abysmal failure,
  • students didn't want to be anywhere near the computers
  • pencil-and-paper creativity quiz, the so-called Remote Associates Test, which asks people to identify word associations that aren't immediately obvious. Four days into the trip, they took the test again
  • 45 percent improvement.
  • hardly scientific.
  • took the test four days into the wilderness did 50 percent better than those who were still immersed in modern life
  • Outward Bound is notoriously strict about bringing artifacts of modern life into the wilderness.
  • Half of the 56 hikers took the test before going backpacking in the wilderness, and the other half took the RAT test on the fourth day of their trip. The groups went into the wild in Alaska, Colorado, Maine and Washington.
  • researchers had already taken the test once.
  • exposure to nature over a number of days, which has been shown in other studies to improve thinking
  • exercise
  • abandoning electronic devices
  • constant texting and checking in on Facebook are not making us think more clearly.
  •  
    An interesting connection between being in nature and being creative and mentally present.
Javier E

Facebook Has 50 Minutes of Your Time Each Day. It Wants More. - The New York Times - 0 views

  • Fifty minutes. That's the average amount of time, the company said, that users spend each day on its Facebook, Instagram and Messenger platforms
  • there are only 24 hours in a day, and the average person sleeps for 8.8 of them. That means about one-eighteenth (roughly 5.5 percent) of the average user's waking time is spent on Facebook; a quick check of that arithmetic follows these excerpts.
  • That’s more than any other leisure activity surveyed by the Bureau of Labor Statistics, with the exception of watching television programs and movies (an average per day of 2.8 hours)
  • ...19 more annotations...
  • It’s more time than people spend reading (19 minutes); participating in sports or exercise (17 minutes); or social events (four minutes). It’s almost as much time as people spend eating and drinking (1.07 hours).
  • the average time people spend on Facebook has gone up — from around 40 minutes in 2014 — even as the number of monthly active users has surged. And that's just the average. Some users must be spending many hours a day on the site.
  • time has become the holy grail of digital media.
  • Time is the best measure of engagement, and engagement correlates with advertising effectiveness. Time also increases the supply of impressions that Facebook can sell, which brings in more revenue (a 52 percent increase last quarter to $5.4 billion).
  • And time enables Facebook to learn more about its users — their habits and interests — and thus better target its ads. The result is a powerful network effect that competitors will be hard pressed to match.
  • the only one that comes close is Alphabet’s YouTube, where users spent an average of 17 minutes a day on the site. That’s less than half the 35 minutes a day users spent on Facebook
  • ComScore reported that television viewing (both live and recorded) dropped 2 percent last year, and it said younger viewers in particular are abandoning traditional live television. People ages 18-34 spent just 47 percent of their viewing time on television screens, and 40 percent on mobile devices.
  • People spending the most time on Facebook also tend to fall into the prized 18-to-34 demographic sought by advertisers.
  • Users spent an average of nine minutes on all of Yahoo’s sites, two minutes on LinkedIn and just one minute on Twitter
  • What aren’t Facebook users doing during the 50 minutes they spend there? Is it possibly interfering with work (and productivity), or, in the case of young people, studying and reading?
  • While the Bureau of Labor Statistics surveys nearly every conceivable time-occupying activity (even fencing and spelunking), it doesn’t specifically tally the time spent on social media, both because the activity may have multiple purposes — both work and leisure — and because people often do it at the same time they are ostensibly engaged in other activities
  • The closest category would be “computer use for leisure,” which has grown from eight minutes in 2006, when the bureau began collecting the data, to 14 minutes in 2014, the most recent survey. Or perhaps it would be “socializing and communicating with others,” which slipped from 40 minutes to 38 minutes.
  • But time spent on most leisure activities hasn’t changed much in those eight years of the bureau’s surveys. Time spent reading dropped from an average of 22 minutes to 19 minutes. Watching television and movies increased from 2.57 hours to 2.8. Average time spent working declined from 3.4 hours to 3.25. (Those hours seem low because much of the population, which includes both young people and the elderly, does not work.)
  • The bureau’s numbers, since they cover the entire population, may be too broad to capture important shifts among important demographic groups
  • “You hear a narrative that young people are fleeing Facebook. The data show that’s just not true. Younger users have a wider appetite for social media, and they spend a lot of time on multiple networks. But they spend more time on Facebook by a wide margin.”
  • Among those 55 and older, 70 percent of their viewing time was on television, according to comScore. So among young people, much social media time may be coming at the expense of traditional television.
  • comScore's data suggests that people are spending on average just six to seven minutes a day using social media on their work computers. "I don't think Facebook is displacing other activity," he said. "People use it during downtime during the course of their day, in the elevator, or while commuting, or waiting."
  • Facebook, naturally, is busy cooking up ways to get us to spend even more time on the platform
  • A crucial initiative is improving its News Feed, tailoring it more precisely to the needs and interests of its users, based on how long people spend reading particular posts. For people who demonstrate a preference for video, more video will appear near the top of their news feed. The more time people spend on Facebook, the more data they will generate about themselves, and the better the company will get at the task. (A toy sketch of this kind of time-based ranking follows these excerpts.)
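  •  
    The News Feed excerpt above describes ranking posts by how long a user dwells on each kind of content. Here is a deliberately hypothetical Python sketch of that idea; the per-type dwell times and posts are invented, and Facebook's actual ranking system is not public.

        # Assumed dwell history for one user (seconds per post of each type)
        dwell_seconds = {"video": 45, "photo": 12, "link": 8}

        posts = [
            {"id": 1, "type": "link"},
            {"id": 2, "type": "video"},
            {"id": 3, "type": "photo"},
        ]

        # Rank posts by the user's demonstrated preference for their content type
        ranked = sorted(posts, key=lambda p: dwell_seconds[p["type"]], reverse=True)
        print([p["id"] for p in ranked])  # -> [2, 3, 1]: video first for this user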
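  •  
    And a quick check of the waking-time fraction quoted at the top of this entry, using only the figures in the excerpts:

        waking_minutes = (24 - 8.8) * 60   # 912 minutes awake per day
        facebook_minutes = 50              # average daily time across the platforms
        share = facebook_minutes / waking_minutes
        print(f"{share:.1%}, about 1/{round(1 / share)}")  # -> 5.5%, about 1/18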
Javier E

How Did Consciousness Evolve? - The Atlantic - 0 views

  • Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?
  • The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions.
  • The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence
  • ...23 more annotations...
  • Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition.
  • At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal's behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing (a toy sketch follows these excerpts).
  • Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life
  • The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum
  • The tectum coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.
  • With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously
  • According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.
  • The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. (A toy predictor-corrector sketch of this idea also follows these excerpts.)
  • The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement
  • In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
  • All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates
  • The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.
  • The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it
  • The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are
  • Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence
  • this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description.
  • I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.
  • In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.
  • If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us.
  • the cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.
  • Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes
  • Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD
  • the HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.
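  •  
    A toy illustration of the selective-signal-enhancement idea from the excerpts above: each unit is excited by its own input and inhibited by the combined activity of the others, so after a few rounds only the strongest signal survives. The inputs, gains, and round count are illustrative assumptions, not biology.

        def selective_enhancement(signals, rounds=15, drive=0.1, inhibition=0.2):
            """Winner-take-all dynamics via mutual (lateral) inhibition."""
            acts = list(signals)
            for _ in range(rounds):
                total = sum(acts)
                acts = [max(0.0, a + drive * s - inhibition * (total - a))
                        for a, s in zip(acts, signals)]
            return [round(a, 2) for a in acts]

        print(selective_enhancement([0.2, 1.5, 0.7, 1.4]))
        # weaker inputs fade to zero; the strongest input wins the competition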
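  •  
    And a toy predictor-corrector loop in the internal-model spirit of the tectum excerpt: the controller plans against its own simulation of the controlled variable and corrects that simulation with noisy sensory feedback. The one-dimensional "eye position," gains, and noise level are all invented for illustration.

        import random

        def track_target(steps=40, target=1.0, noise=0.05):
            true_pos = 0.0    # the actual controlled variable (the "plant")
            model_pos = 0.0   # the controller's internal simulation of it
            for _ in range(steps):
                command = 0.3 * (target - model_pos)  # plan with the model, not raw input
                true_pos += command                   # the real system responds
                model_pos += command                  # the model predicts the consequence
                sensed = true_pos + random.gauss(0.0, noise)
                model_pos += 0.5 * (sensed - model_pos)  # correct prediction with feedback
            return true_pos, model_pos

        print(track_target())  # both values end near the target despite sensory noise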
Javier E

How Stanford Took On the Giants of Economics - The New York Times - 1 views

  • it is a reflection of a broader shift in the study of economics, in which the most cutting-edge work increasingly relies less on a big-brained individual scholar developing mathematical theories, and more on the ability to crunch extensive sets of data to glean insights about topics as varied as how incomes differ across society and how industries organize themselves.
  • “Who wouldn’t want to be where the future of the world is being made?” said Tyler Cowen, an economist at George Mason University (and regular contributor to The New York Times) who often blogs about trends in academic economics. Stanford’s economics department, he said, “has an excitement about it which Boston and Cambridge can’t touch.”
  • In economics, Stanford has frequently been ranked just behind Harvard, M.I.T., Princeton and the University of Chicago, including in the most recent U.S. News & World Report survey
  • ...6 more annotations...
  • In the last four years, Stanford has increased the number of senior faculty by 25 percent, and 11 scholars with millions in cumulative salary have either been recruited from other top programs or resisted poaching attempts by those programs.
  • The specialties of the new recruits vary, but they are all examples of how the momentum in economics has shifted away from theoretical modeling and toward “empirical microeconomics,” the analysis of how things work in the real world, often arranging complex experiments or exploiting large sets of data. That kind of work requires lots of research assistants, work across disciplines including fields like sociology and computer science, and the use of advanced computational techniques unavailable a generation ago.
  • the scholars who have newly signed on with Stanford described a university particularly well suited to research in that vein, with a combination of lab space, strong budgets for research support and proximity to engineering talent.
  • The Chicago School, under the intellectual imprint of Milton Friedman, was a leader in neoclassical thought that emphasizes the efficiency of markets and the risks of government intervention. M.I.T.’s economics department has a long record of economic thought in the Keynesian tradition, and it produced several of the top policy makers who have guided the world economy through the tumultuous last several years.
  • “There isn’t a Stanford school of thought,” said B. Douglas Bernheim, chairman of the university’s economics department. “This isn’t a doctrinaire place. Generally doctrine involves simplification, and increasingly we recognize that these social issues we’re trying to figure out are phenomenally complicated. The consensus at Stanford has focused around the idea that you have to be open to a lot of approaches and ways of thinking about things, and to be rigorous, thorough and careful in bringing the highest standard of craft to bear on your research.”
  • “My sense is this is a good development for economics,” Mr. Chetty said. “I think Stanford is going to be another great department at the level of Harvard and M.I.T. doing this type of work, which is an example of economics becoming a deeper field. It’s a great thing for all the universities — I don’t think it’s a zero-sum game.”
johnsonle1

Scientists Find First Observed Evidence That Our Universe May Be a Hologram | Big Think - 1 views

  • all the information in our 3-dimensional reality may actually be included in the 2-dimensional surface of its boundaries. It's like watching a 3D show on a 2D television. (A worked illustration of the area scaling follows the comments below.)
  • the team found that the observational data was largely predicted by the math of holographic theory. 
  • After this holographic phase comes to a close, the Universe goes into a geometric phase, which can be described by Einstein's equations.
  • ...1 more annotation...
  • It's a new paradigm for a physical reality.
  •  
    As we saw in the video "Spooky Science" in TOK, the 2D and 3D worlds are very distinct, but in this article the author discusses another theory: that our 3D reality may actually be encoded in the 2D surface of its boundaries. This theory is a rival to cosmic inflation. The holographic theory not only explains the anomalies, it is also a simpler theory of the early universe. Now scientists find that the math of holographic theory predicts the data very well, so it has the potential to become a new paradigm for physical reality. --Sissi (2/6/2017)
  •  
    What is the holographic universe idea? It's not exactly that we are living in some kind of Star Trekky computer simulation. Rather, the idea was first proposed in the 1990s by Leonard Susskind and Gerard 't Hooft.
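  •  
    As a concrete handle on "information lives on the boundary": the standard holographic (Bekenstein-Hawking) bound says the maximum entropy of a region scales with its surface area in Planck units, not its volume. The sketch below only illustrates that area scaling; it is not a calculation from the article.

        import math

        L_PLANCK = 1.616e-35  # Planck length in meters

        def max_entropy(radius_m):
            """Holographic bound S_max = A / (4 * l_p**2), in natural units."""
            area = 4 * math.pi * radius_m ** 2
            return area / (4 * L_PLANCK ** 2)

        for r in (1.0, 2.0):
            print(f"r = {r} m: S_max ~ {max_entropy(r):.2e}")
        # doubling the radius quadruples the bound: capacity tracks area, not volume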
sissij

Elon Musk's New Company to Merge Human Brains with Machines | Big Think - 1 views

  • His new company Neuralink will work on linking human brains with computers, utilizing “neural lace” technology.
  • Musk talked recently about this kind of technology, seeing it as a way for humans to interact with machines and superintelligences.
  • What's next? We'll wait for the details. Elon Musk's influence on modern life certainly continues to grow, especially if he delivers on the promises of his various enterprises.
  •  
    My mom had a little research project on Tesla and assigned it to me, so I know some of Tesla's strategies and ideas, though not in depth. I found that Tesla and Elon Musk have very innovative ideas for their products. The electric car is Tesla's signature, and the design of the car and its green ethos are friendly to Earth's environment. Now they are talking about the new idea of merging human intelligence with machines. --Sissi (4/2/2017)
Javier E

Clouds' Effect on Climate Change Is Last Bastion for Dissenters - NYTimes.com - 0 views

  • For decades, a small group of scientific dissenters has been trying to shoot holes in the prevailing science of climate change, offering one reason after another why the outlook simply must be wrong. Over time, nearly every one of their arguments has been knocked down by accumulating evidence, and polls say 97 percent of working climate scientists now see global warming as a serious risk.
  • They acknowledge that the human release of greenhouse gases will cause the planet to warm. But they assert that clouds — which can either warm or cool the earth, depending on the type and location — will shift in such a way as to counter much of the expected temperature rise and preserve the equable climate on which civilization depends.
  • At gatherings of climate change skeptics on both sides of the Atlantic, Dr. Lindzen has been treated as a star. During a debate in Australia over carbon taxes, his work was cited repeatedly. When he appears at conferences of the Heartland Institute, the primary American organization pushing climate change skepticism, he is greeted by thunderous applause.
  • ...13 more annotations...
  • His idea has drawn withering criticism from other scientists, who cite errors in his papers and say proof is lacking. Enough evidence is already in hand, they say, to rule out the powerful cooling effect from clouds that would be needed to offset the increase of greenhouse gases.
  • “If you listen to the credible climate skeptics, they’ve really pushed all their chips onto clouds.”
  • Dr. Lindzen is “feeding upon an audience that wants to hear a certain message, and wants to hear it put forth by people with enough scientific reputation that it can be sustained for a while, even if it’s wrong science,” said Christopher S. Bretherton, an atmospheric researcher at the University of Washington. “I don’t think it’s intellectually honest at all.”
  • With climate policy nearly paralyzed in the United States, many other governments have also declined to take action, and worldwide emissions of greenhouse gases are soaring.
  • The most elaborate computer programs have agreed on a broad conclusion: clouds are not likely to change enough to offset the bulk of the human-caused warming. Some of the analyses predict that clouds could actually amplify the warming trend sharply through several mechanisms, including a reduction of some of the low clouds that reflect a lot of sunlight back to space. Other computer analyses foresee a largely neutral effect. The result is a big spread in forecasts of future temperature, one that scientists have not been able to narrow much in 30 years of effort. (A back-of-the-envelope illustration of this spread follows these excerpts.)
  • The earth’s surface has already warmed about 1.4 degrees Fahrenheit since the Industrial Revolution, most of that in the last 40 years. Modest as it sounds, it is an average for the whole planet, representing an enormous addition of heat. An even larger amount is being absorbed by the oceans. The increase has caused some of the world’s land ice to melt and the oceans to rise.
  • Even in the low projection, many scientists say, the damage could be substantial. In the high projection, some polar regions could heat up by 20 or 25 degrees Fahrenheit — more than enough, over centuries or longer, to melt the Greenland ice sheet, raising sea level by a catastrophic 20 feet or more. Vast changes in  rainfall, heat waves and other weather patterns would most likely accompany such a large warming. “The big damages come if the climate sensitivity to greenhouse gases turns out to be high,” said Raymond T. Pierrehumbert, a climate scientist at the University of Chicago. “Then it’s not a bullet headed at us, but a thermonuclear warhead.”
  • But the problem of how clouds will behave in a future climate is not yet solved — making the unheralded field of cloud research one of the most important pursuits of modern science.
  • for more than a decade, Dr. Lindzen has said that when surface temperature increases, the columns of moist air rising in the tropics will rain out more of their moisture, leaving less available to be thrown off as ice, which forms the thin, high clouds known as cirrus. Just like greenhouse gases, these cirrus clouds act to reduce the cooling of the earth, and a decrease of them would counteract the increase of greenhouse gases. Dr. Lindzen calls his mechanism the iris effect, after the iris of the eye, which opens at night to let in more light. In this case, the earth’s “iris” of high clouds would be opening to let more heat escape.
  • Dr. Lindzen acknowledged that the 2009 paper contained “some stupid mistakes” in his handling of the satellite data. “It was just embarrassing,” he said in an interview. “The technical details of satellite measurements are really sort of grotesque.” Last year, he tried offering more evidence for his case, but after reviewers for a prestigious American journal criticized the paper, Dr. Lindzen published it in a little-known Korean journal. Dr. Lindzen blames groupthink among climate scientists for his publication difficulties, saying the majority is determined to suppress any dissenting views. They, in turn, contend that he routinely misrepresents the work of other researchers.
  • Ultimately, as the climate continues warming and more data accumulate, it will become obvious how clouds are reacting. But that could take decades, scientists say, and if the answer turns out to be that catastrophe looms, it would most likely be too late. By then, they say, the atmosphere would contain so much carbon dioxide as to make a substantial warming inevitable, and the gas would not return to a normal level for thousands of years.
  • In his Congressional appearances, speeches and popular writings, Dr. Lindzen offers little hint of how thin the published science supporting his position is. Instead, starting from his disputed iris mechanism, he makes what many of his colleagues see as an unwarranted leap of logic, professing near-certainty that climate change is not a problem society needs to worry about.
  • “Even if there were no political implications, it just seems deeply unprofessional and irresponsible to look at this and say, ‘We’re sure it’s not a problem,’ ” said Kerry A. Emanuel, another M.I.T. scientist. “It’s a special kind of risk, because it’s a risk to the collective civilization.”
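  •  
    To see why clouds dominate the spread in forecasts, here is a back-of-the-envelope equilibrium calculation. The forcing for doubled CO2 (about 3.7 W/m^2) is standard; the net feedback parameters below are illustrative assumptions, not numbers from the article.

        F_2XCO2 = 3.7  # W/m^2, canonical radiative forcing for doubled CO2

        def equilibrium_warming(lambda_net):
            """Equilibrium warming dT = F / lambda, lambda in W/m^2 per kelvin."""
            return F_2XCO2 / lambda_net

        scenarios = {
            "Planck response only, no feedbacks": 3.2,
            "consensus-range positive feedbacks": 1.2,
            "clouds amplify the warming": 0.9,
            "strong iris-style cloud offset": 3.0,
        }
        for label, lam in scenarios.items():
            print(f"{label}: dT = {equilibrium_warming(lam):.1f} K")
        # the same forcing yields roughly 1 K to 4 K depending on assumed cloud behavior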