
TOK Friends / Group items tagged: system


Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ... - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • ...43 more annotations...
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
    Automation increases efficiency and speed of tasks, but decreases the individual's knowledge of a task and decreases a human's ability to learn.
Javier E

Revolving Door: Former Federal Reserve Chairman Ben Bernanke Takes Job With Hedge Fund ... - 0 views

  • This is, of course, how systemic problems work—few individual cases are obviously unacceptable, but the whole is horrifying. In this case, it's the "revolving door" of movement between government positions and the financial sector—that is to say, from modestly paying positions in the public sector, overseeing financial firms, to higher-paying jobs in the private sector.
  • Bernanke is going to work for Citadel, a $25 billion hedge fund that is one of the country's largest. While Bernanke is a talented economist, he has also never worked in the industry, so it's fairly clear that what Citadel wants is inside information
  • this is just the latest in a stream of prominent government officials:
  • ...3 more annotations...
  • Perhaps what makes Bernanke's case so worrisome is that he has an almost universal reputation for probity. If the revolving-door system is so powerful that it can make even him look suspect, is it beyond redemption?
  • That doesn't even include non-hedge-fund and private-equity moves. Peter Orszag, who led President Obama's Office of Management and Budget, took a job with Citigroup when he left. The Obama administration had been closely involved with Citi in the aftermath of the financial collapse, and the bank received nearly $500 billion in bailouts. Orszag is not allowed to deal with the federal government directly, but as The Times noted, perhaps wryly, "Mr. Orszag’s actual duties are murky at best. He is expected to draw on his deep knowledge of public sector financial issues and his experience overseeing the federal budget to counsel Citi’s clients on various policy actions."
Javier E

How To Look Smart, Ctd - The Daily Dish | By Andrew Sullivan - 0 views

  • these questions tend to overlook the way IQ tests are designed. As a neuropsychologist who has administered hundreds of these measures, I can tell you that their structures reflect a deeply embedded bias toward intelligence as a function of reading skills
Javier E

The Threat of Motivated Reasoning In-and To-The Legal System - 1 views

  • It does three things—1) explains what motivated reasoning is; 2) explains how it’s threatening to the legal system (because motivated/biased interpretations of court findings and opinions by opposed groups of citizens threaten the very idea of court neutrality); and 3) takes a look at the Supreme Court’s 2010 term (and one bizarre Scalia dissent in particular) in this context.
  • Kahan suggests it undermines the justice system if battling groups of advocates (say, the American Constitution Society and the Federalist Society) are constantly blasting court opinions from diametrically opposed perspectives, and using motivated reasoning to do so. At some point, as this continues, you wind up legitimating the claim that really, courts don’t know anything special or have any particular expertise—it’s all just opinion, and biased opinion at that. This precipitates a “neutrality crisis” over whether courts can really judge fairly.
  • There is thus an inherent risk that citizens will perceive decisions that threaten their group commitments to be a product of judicial bias. The outcomes might strike them as so patently inconsistent with the facts, or with controlling legal principles, that they are impelled to infer bad faith.
  • ...1 more annotation...
  • Scalia’s opinion thus makes the neutrality problem even worse—because it suggests it’s all bias, all the way down, even for the most professional of us. Is that true? And how do you preserve neutral courts–or, to switch to another sector, trust in the findings of the scientific community–if that is indeed the case?
Javier E

Opinion | The Strange Failure of the Educated Elite - The New York Times - 0 views

  • We replaced a system based on birth with a fairer system based on talent. We opened up the universities and the workplace to Jews, women and minorities. University attendance surged, creating the most educated generation in history. We created a new boomer ethos, which was egalitarian (bluejeans everywhere!), socially conscious (recycling!) and deeply committed to ending bigotry.
  • The older establishment won World War II and built the American Century. We, on the other hand, led to Donald Trump. The chief accomplishment of the current educated elite is that it has produced a bipartisan revolt against itself.
  • the new meritocratic aristocracy has come to look like every other aristocracy. The members of the educated class use their intellectual, financial and social advantages to pass down privilege to their children, creating a hereditary elite that is ever more insulated from the rest of society. We need to build a meritocracy that is true to its values, truly open to all.
  • ...17 more annotations...
  • But the narrative is insufficient. The real problem with the modern meritocracy can be found in the ideology of meritocracy itself. Meritocracy is a system built on the maximization of individual talent, and that system unwittingly encourages several ruinous beliefs:
  • Exaggerated faith in intelligence.
  • Many of the great failures of the last 50 years, from Vietnam to Watergate to the financial crisis, were caused by extremely intelligent people who didn’t care about the civic consequences of their actions.
  • Misplaced faith in autonomy
  • The meritocracy is based on the metaphor that life is a journey. On graduation days, members of the educated class give their young Dr. Seuss’ “Oh, the Places You’ll Go!” which shows a main character, “you,” who goes on a solitary, unencumbered journey through life toward success. If you build a society upon this metaphor you will wind up with a society high in narcissism and low in social connection
  • Life is not really an individual journey. Life is more like settling a sequence of villages. You help build a community at home, at work, in your town and then you go off and settle more villages.
  • Instead of seeing the self as the seat of the soul, the meritocracy sees the self as a vessel of human capital, a series of talents to be cultivated and accomplishments to be celebrated.
  • Misplaced notion of the self
  • If you base a society on a conception of self that is about achievement, not character, you will wind up with a society that is demoralized; that puts little emphasis on the sorts of moral systems that create harmony within people, harmony between people and harmony between people and their ultimate purpose.
  • Inability to think institutionally.
  • Previous elites poured themselves into institutions and were pretty good at maintaining existing institutions, like the U.S. Congress, and building new ones, like the postwar global order.
  • The current generation sees institutions as things they pass through on the way to individual success. Some institutions, like Congress and the political parties, have decayed to the point of uselessness, while others, like corporations, lose their generational consciousness
  • Misplaced idolization of diversity
  • But diversity is a midpoint, not an endpoint. Just as a mind has to be opened so that it can close on something, an organization has to be diverse so that different perspectives can serve some end.
  • Diversity for its own sake, without a common telos, is infinitely centrifugal, and leads to social fragmentation.
  • The essential point is this: Those dimwitted, stuck up blue bloods in the old establishment had something we meritocrats lack — a civic consciousness, a sense that we live life embedded in community and nation, that we owe a debt to community and nation and that the essence of the admirable life is community before self.
  • The meritocracy is here to stay, thank goodness, but we probably need a new ethos to reconfigure it — to redefine how people are seen, how applicants are selected, how social roles are understood and how we narrate a common national purpose
Javier E

Climatologist Michael E Mann: 'Good people fall victim to doomism. I do too sometimes' ... - 0 views

  • the “inactivists”, as I call them, haven’t given up; they have simply shifted from hard denial to a new array of tactics that I describe in the book as the new climate war.
  • Who is the enemy in the new climate war? It is fossil fuel interests, climate change deniers, conservative media tycoons, working together with petrostate actors like Saudi Arabia and Russia. I call this the coalition of the unwilling.
  • Today Russia uses cyberwarfare – bot armies and trolls – to get climate activists to fight one another and to seed arguments on social media. Russian trolls have attempted to undermine carbon pricing in Canada and Australia, and Russian fingerprints have been detected in the yellow-vest protests in France.
  • ...22 more annotations...
  • I am optimistic about a favourable shift in the political wind. The youth climate movement has galvanised attention and re-centred the debate on intergenerational ethics. We are seeing a tipping point in public consciousness. That bodes well. There is still a viable way forward to avoid climate catastrophe.
  • You can see from the talking points of inactivists that they are really in retreat. Republican pollsters like Frank Luntz have advised clients in the fossil fuel industry and the politicians who carry water for them that you can’t get away with denying climate change any more.
  • Let’s dig into deniers’ tactics. One that you mention is deflection. What are the telltale signs? Any time you are told a problem is your fault because you are not behaving responsibly, there is a good chance that you are being deflected from systemic solutions and policies
  • Blaming the individual is a tried and trusted playbook that we have seen in the past with other industries. In the 1970s, Coca Cola and the beverage industry did this very effectively to convince us we don’t need regulations on waste disposal. Because of that we now have a global plastic crisis. The same tactics are evident in the gun lobby’s motto, “guns don’t kill people, people kill people”, which is classic deflection
  • look at BP, which gave us the world’s first individual carbon footprint calculator. Why did they do that? Because BP wanted us looking at our carbon footprint not theirs.
  • Of course lifestyle changes are necessary but they alone won’t get us where we need to be. They make us more healthy, save money and set a good example for others.
  • But we can’t allow the forces of inaction to convince us these actions alone are the solution and that we don’t need systemic changes
  • I don’t eat meat. We get power from renewable energy. I have a plug-in hybrid vehicle. I do those things and encourage others to do them. But I don’t think it is helpful to shame people who are not as far along as you.
  • Instead, let’s help everybody to move in that direction. That is what policy and system change is about: creating incentives so even those who don’t think about their environmental footprint are still led in that direction.
  • Another new front in the new climate war is what you call “doomism”. What do you mean by that? Doom-mongering has overtaken denial as a threat and as a tactic. Inactivists know that if people believe there is nothing you can do, they are led down a path of disengagement
  • They unwittingly do the bidding of fossil fuel interests by giving up. What is so pernicious about this is that it seeks to weaponise environmental progressives who would otherwise be on the frontline demanding change. These are folk of good intentions and good will, but they become disillusioned or depressed and they fall into despair.
  • Many of the prominent doomist narratives – [Jonathan] Franzen, David Wallace-Wells, the Deep Adaptation movement – can be traced back to a false notion that an Arctic methane bomb will cause runaway warming and extinguish all life on earth within 10 years. This is completely wrong. There is no science to support that.
  • Good people fall victim to doomism. I do too sometimes. It can be enabling and empowering as long as you don’t get stuck there. It is up to others to help ensure that experience can be cathartic.
  • the entry of new participants. Bill Gates is perhaps the most prominent. His new book, How to Prevent a Climate Disaster, offers a systems analyst approach to the problem, a kind of operating system upgrade for the planet. What do you make of his take? I want to thank him for using his platform to raise awareness of the climate crisis
  • I disagree with him quite sharply on the prescription. His view is overly technocratic and premised on an underestimate of the role that renewable energy can play in decarbonising our civilisation
  • If you understate that potential, you are forced to make other risky choices, such as geoengineering and carbon capture and sequestration. Investment in those unproven options would crowd out investment in better solutions.
  • Gates writes that he doesn’t know the political solution to climate change. But the politics are the problem, buddy. If you don’t have a prescription of how to solve that, then you don’t have a solution and perhaps your solution might be taking us down the wrong path.
  • What are the prospects for political change with Joe Biden in the White House? Breathtaking. Biden has surprised even the most ardent climate hawks in the boldness of his first 100-day agenda, which goes well beyond any previous president, including Obama when it comes to use of executive actions. He has incorporated climate policy into every single government agency and we have seen massive investments in renewable energy infrastructure, cuts in subsidies for fossil fuels, and the cancellation of the Keystone XL pipeline.
  • On the international front, the appointment of John Kerry, who helped negotiate the Paris Accord, has telegraphed to the rest of the world that the US is back and ready to lead again
  • That is huge and puts pressure on intransigent state actors like [Australian prime minister] Scott Morrison, who has been a friend of the fossil fuel industry in Australia. Morrison has changed his rhetoric dramatically since Biden became president. I think that creates an opportunity like no other.
  • Have the prospects for that been helped or hindered by Covid? I see a perfect storm of climate opportunity. Terrible as the pandemic has been, this tragedy can also provide lessons, particularly on the importance of listening to the word of science when facing risks
  • Out of this crisis can come a collective reconsideration of our priorities. How to live sustainably on a finite planet with finite space, food and water. A year from now, memories and impacts of coronavirus will still feel painful, but the crisis itself will be in the rear-view mirror thanks to vaccines. What will loom larger will be the greater crisis we face – the climate crisis.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  • Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  • Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the court
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
caelengrubb

Why language might be the optimal self-regulating system | Aeon Essays - 0 views

  • Language changes all the time. Some changes really are chaotic, and disruptive.
  • Descriptivists – that is, virtually all academic linguists – will point out that semantic creep is how languages work. It’s just something words do: look up virtually any nontechnical word in the great historical Oxford English Dictionary (OED), which lists a word’s senses in historical order
  • here is another fact to bear in mind: no language has fallen apart from lack of care
  • ...9 more annotations...
  • Prescriptivists cannot point to a single language that became unusable or inexpressive as a result of people’s failure to uphold traditional vocabulary and grammar. Every language existing today is fantastically expressive
  • Nonetheless, despite potential harm done by an individual word’s change in meaning, cultures tend to have all the words they need for all the things they want to talk about.
  • Every language has a characteristic inventory of contrasting sounds, called phonemes.
  • The answer is that language is a system. Sounds, words and grammar do not exist in isolation: each of these three levels of language constitutes a system in itself.
  • During the Great Vowel Shift, ee and oo started to move towards the sounds they have today. Nobody knows why
  • Words also weaken with frequent use
  • At the level of grammar, change might seem the most unsettling, threatening a deeper kind of harm than a simple mispronunciation or new use for an old word
  • what are the objects without those crucial case endings? The answer is boring: word order
  • Language is self-regulating. It’s a genius system – with no genius
caelengrubb

How Did Language Begin? | Linguistic Society of America - 0 views

  • The question is not how languages gradually developed over time into the languages of the world today. Rather, it is how the human species developed over time so that we - and not our closest relatives, the chimpanzees and bonobos - became capable of using language.
  • Human language can express thoughts on an unlimited number of topics (the weather, the war, the past, the future, mathematics, gossip, fairy tales, how to fix the sink...). It can be used not just to convey information, but to solicit information (questions) and to give orders.
  • Every human language has a vocabulary of tens of thousands of words, built up from several dozen speech sounds
  • ...14 more annotations...
  • Animal communication systems, in contrast, typically have at most a few dozen distinct calls, and they are used only to communicate immediate issues such as food, danger, threat, or reconciliation. Many of the sorts of meanings conveyed by chimpanzee communication have counterparts in human 'body language'.
  • The basic difficulty with studying the evolution of language is that the evidence is so sparse. Spoken languages don't leave fossils, and fossil skulls only tell us the overall shape and size of hominid brains, not what the brains could do
  • All present-day languages, including those of hunter-gatherer cultures, have lots of words, can be used to talk about anything under the sun, and can express negation. As far back as we have written records of human language - 5000 years or so - things look basically the same.
  • According to current thinking, the changes crucial for language were not just in the size of the brain, but in its character: the kinds of tasks it is suited to do - as it were, the 'software' it comes furnished with.
  • So the properties of human language are unique in the natural world.
  • About the only definitive evidence we have is the shape of the vocal tract (the mouth, tongue, and throat): Until anatomically modern humans, about 100,000 years ago, the shape of hominid vocal tracts didn't permit the modern range of speech sounds. But that doesn't mean that language necessarily began then.
  • Some researchers even propose that language began as sign language, then (gradually or suddenly) switched to the vocal modality, leaving modern gesture as a residue.
  • In an early stage, sounds would have been used to name a wide range of objects and actions in the environment, and individuals would be able to invent new vocabulary items to talk about new things
  • In order to achieve a large vocabulary, an important advance would have been the ability to 'digitize' signals into sequences of discrete speech sounds - consonants and vowels - rather than unstructured calls.
  • These two changes alone would yield a communication system of single signals - better than the chimpanzee system but far from modern language. A next plausible step would be the ability to string together several such 'words' to create a message built out of the meanings of its parts.
  • This has led some researchers to propose that the system of 'protolanguage' is still present in modern human brains, hidden under the modern system except when the latter is impaired or not yet developed.
  • Again, it's very hard to tell. We do know that something important happened in the human line between 100,000 and 50,000 years ago: This is when we start to find cultural artifacts such as art and ritual objects, evidence of what we would call civilization.
  • One tantalizing source of evidence has emerged recently. A mutation in a gene called FOXP2 has been shown to lead to deficits in language as well as in control of the face and mouth. This gene is a slightly altered version of a gene found in apes, and it seems to have achieved its present form between 200,000 and 100,000 years ago.
  • Nevertheless, if we are ever going to learn more about how the human language ability evolved, the most promising evidence will probably come from the human genome, which preserves so much of our species' history. The challenge for the future will be to decode it.
pier-paolo

Computers Already Learn From Us. But Can They Teach Themselves? - The New York Times - 0 views

  • We teach computers to see patterns, much as we teach children to read. But the future of A.I. depends on computer systems that learn on their own, without supervision, researchers say.
  • When a mother points to a dog and tells her baby, “Look at the doggy,” the child learns what to call the furry four-legged friends. That is supervised learning. But when that baby stands and stumbles, again and again, until she can walk, that is something else. Computers are the same.
  • Even if a supervised learning system read all the books in the world, he noted, it would still lack human-level intelligence because so much of our knowledge is never written down.
  • ...9 more annotations...
  • Supervised learning depends on annotated data: images, audio or text that is painstakingly labeled by hordes of workers. They circle people or outline bicycles on pictures of street traffic. The labeled data is fed to computer algorithms, teaching the algorithms what to look for. After ingesting millions of labeled images, the algorithms become expert at recognizing what they have been taught to see.
  • There is also reinforcement learning, with very limited supervision that does not rely on training data. Reinforcement learning in computer science,
  • is modeled after reward-driven learning in the brain: Think of a rat learning to push a lever to receive a pellet of food. The strategy has been developed to teach computer systems to take actions.
  • “My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.
  • A more inclusive term for the future of A.I., he said, is “predictive learning,” meaning systems that not only recognize patterns but also predict outcomes and choose a course of action. “Everybody agrees we need predictive learning, but we disagree about how to get there,”
  • “A huge fraction of what we do in our day-to-day jobs is constantly refining our mental models of the world and then using those mental models to solve problems,” he said. “That encapsulates an awful lot of what we’d like A.I. to do.”
  • Currently, robots can operate only in well-defined environments with little variation.
  • “Our working assumption is that if we build sufficiently general algorithms, then all we really have to do, once that’s done, is to put them in robots that are out there in the real world doing real things,”
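The contrast the notes above draw between supervised, reinforcement and self-supervised learning can be made concrete with a toy example. Below is a minimal sketch in Python (the corpus, names and function are invented for illustration, not taken from the article): a next-word predictor whose only training signal is the unlabeled text itself, which is the core idea behind self-supervised learning.

from collections import Counter, defaultdict

# Unlabeled "training data": no human ever annotates anything here.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Each word's own context supplies the supervision: predict the word that follows it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # 'cat' or 'dog', whichever the toy corpus favors
print(predict_next("sat"))   # 'on'

Feeding it more unlabeled text would sharpen the predictions without anyone labeling a single example, which is what makes the approach attractive to the researchers quoted above.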
blythewallick

Facial Recognition Moves Into a New Front: Schools - The New York Times - 0 views

  • Jim Shultz tried everything he could think of to stop facial recognition technology from entering the public schools in Lockport, a small city 20 miles east of Niagara Falls. He posted about the issue in a Facebook group called Lockportians. He wrote an Op-Ed in The New York Times. He filed a petition with the superintendent of the district, where his daughter is in high school. But a few weeks ago, he lost. The Lockport City School District turned on the technology to monitor who’s on the property at its eight schools, becoming the first known public school district in New York to adopt facial recognition, and one of the first in the nation.
  • Proponents call it a crucial crime-fighting tool, to help prevent mass shootings and stop sexual predators. Robert LiPuma, the Lockport City School District’s director of technology, said he believed that if the technology had been in place at Marjory Stoneman Douglas High School in Parkland, Fla., the deadly 2018 attack there may never have happened.
  • “You had an expelled student that would have been put into the system, because they were not supposed to be on school grounds,” Mr. LiPuma said. “They snuck in through an open door. The minute they snuck in, the system would have identified that person.”
  • ...7 more annotations...
  • “Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” said Stefanie Coyle, deputy director of the Education Policy Center for the New York Civil Liberties Union.
  • When the system is on, Mr. LiPuma said, the software looks at the faces captured by the hundreds of cameras and calculates whether those faces match a “persons of interest” list made by school administrators. (A rough sketch of this matching step follows these notes.)
  • Jayde McDonald, a political science major at Buffalo State College, grew up as one of the few black students in Lockport public schools. She said she thought it was too risky for the school to install a facial recognition system that could automatically call the police.
  • “I’m not sure where they are in the school or even think I’ve seen them,” said Brooke Cox, 14, a freshman at Lockport High School. “I don’t fully know why we have the cameras. I haven’t been told what their purpose is.”
  • “If suspended students are put on the watch list, they are going to be scrutinized more heavily,” he said, which could lead to a higher likelihood that they could enter into the criminal justice system.
  • Days after the district announced that the technology had been turned on, some students said they had been told very little about how it worked.
  • “We all want to keep our children safe in school,” she said. “But there are more effective, proven ways to do so that are less costly.”
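Mechanically, the watchlist check Mr. LiPuma describes is usually a nearest-neighbor comparison of face "embeddings" (numeric vectors produced by a recognition model). A rough sketch in Python, with invented vectors, names and threshold; this is not Lockport's actual system:

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical reference vectors for people on the "persons of interest" list.
watchlist = {
    "person_A": [0.12, 0.85, 0.33, 0.41],
    "person_B": [0.90, 0.10, 0.05, 0.42],
}

MATCH_THRESHOLD = 0.95  # hypothetical; real deployments tune this to trade off false alarms

def check_face(embedding):
    """Return the best watchlist match at or above the threshold, or None."""
    best_name, best_score = None, -1.0
    for name, reference in watchlist.items():
        score = cosine_similarity(embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, round(best_score, 3)) if best_score >= MATCH_THRESHOLD else None

print(check_face([0.11, 0.86, 0.35, 0.40]))  # close to person_A, so it gets flagged

The threshold is where the policy questions live: set it low and innocent students and visitors get flagged; set it high and the system misses the people it was bought to catch.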
Javier E

Law professor Kim Wehle's latest book is 'How To Think Like a Lawyer - and Why' : NPR - 0 views

  • a five-step process she calls the BICAT method - BICAT.
  • KIM WEHLE: B is to break a problem down into smaller pieces
  • I is to identify our values. A lot of people think lawyers are really about winning all the time. But the law is based on a value system. And I suggest that people be very deliberate about what matters to them with whatever decision there is
  • ...19 more annotations...
  • C is to collect a lot of information. Thirty years ago, the challenge was finding information in a card catalog at the library. Now it's, how do we separate the good stuff from the bad stuff?
  • A is to analyze both sides. Lawyers have to turn the coin over and exhaust counterarguments or we'll lose in court.
  • So lawyers are trained to look for the gray areas, to look for the questions and not the answers. And if we kind of orient our thinking that way, I think we're less likely to shut down competing points of view.
  • My argument in the book is, we can feel good about a decision even if we don't get everything that we want. We have to make compromises.
  • I tell my students, you'll get through the bar. The key is to look for questions and not answers. If you could answer every legal question with a Wikipedia search, there would be no reason to hire lawyers.
  • Lawyers are hired because there are arguments on both sides, you know? Every Supreme Court decision that is split 6-3, 5-4, that means there were really strong arguments on both sides.
  • T is, tolerate the fact that you won't get everything you want every time
  • So we have to be very careful about the source of what you're getting, OK? Is this source neutral? Does this source really care about facts and not so much about an agenda?
  • Step 3, the collecting information piece. I think it's a new skill for all of us that we are overloaded with information into our phones. We have algorithms that somebody else developed that tailor the information that comes into our phones based on what the computer thinks we already believe
  • No. 2 - this is the beauty of social media and the internet - you can pull original sources. We can click on the indictment. Click on the new bill that has been proposed in the United States Congress.
  • then the book explains ways that you can then sort through that information for yourself. Skills are empowering.
  • Maybe as a replacement for sort of being empowered by being part of a team - a red team versus a blue team - that's been corrosive, I think, in American politics and American society. But arming ourselves with good facts, that leads to self-determination.
  • MARTINEZ: Now, you've written two other books - "How To Read The Constitution" and "What You Need To Know About Voting" - along with this one, "How To Think Like A Lawyer - And Why."
  • It kind of makes me think, Kim, that you feel that Americans might be lacking a basic level of civics education or understanding. So what is lacking when it comes to teaching civics or in civics discourse today?
  • studies have shown that around a third of Americans can't name the three branches of government. But if we don't understand our government, we don't know how to hold our government accountable
  • Democracies can't stay open if we've got elected leaders that are caring more about entrenching their own power and misinformation than actually preserving democracy by the people. I think that's No. 1.
  • No. 2 has to do with a value system. We talk about American values - reward for hard work, integrity, honesty. The same value system should apply to who we hire for government positions. And I think Americans have lost that.
  • in my own life, I'm very careful about who gets to be part of the inner circle because I have a strong value system. Bring that same sense to bear at the voting booth. Don't vote for red versus blue. Vote for people that live your value system
  • just like the Ukrainians are fighting for their children's democracy, we need to do that as well. And we do that through informing ourselves with good information, tolerating competing points of view and voting - voting, voting, voting - to hold elected leaders accountable if they cross boundaries that matter to us in our own lives.
Javier E

Climate Change: The Technologies That Could Make All the Difference - WSJ - 0 views

  • The modularity of DAC systems implies that costs for CO2 removal might drop 90% to 95% over a couple of decades, just like the recent cost declines for other modular solutions such as wind turbines, solar panels and lithium-ion batteries. (A back-of-the-envelope learning-curve sketch follows these notes.)
  • Unlike other pollutants, what matters with carbon dioxide isn’t the location of its release but the total atmospheric accumulation. Releasing greenhouse gases in industrial corridors and then removing them from the atmosphere in remote locations has essentially the same net effect as if the carbon wasn’t emitted in the first place. That means we can deploy DAC systems wherever the energy for their operation is cheapest, ecosystem impacts are lowest, and the economic activity would be welcome.
  • A solar microgrid, which generates, stores and distributes clean energy to homes and facilities in a local network, provides a strong answer to these needs and wants. It can integrate with the main electric grid or disconnect and operate autonomously if the main grid is stressed or goes down. The physical pieces—solar panels, batteries and inverters—have been improving for a while. What’s new and coming, though, is the ability to orchestrate these different pieces into agile electric grids.
  • ...18 more annotations...
  • With digital tools and data science, demand for energy can now be sculpted locally to match available resources, reducing the number of power plants that utilities need to keep in reserve.
  • The key is giving consumers the ability to separate flexible energy uses—say, operating a Jacuzzi—from essential needs, which can now be done with phone apps for smart appliances and service panels.
  • Meanwhile, connections between groups of customers can be opened and closed as needed with modern, communicating circuit switchgear.
  • The good news is that record amounts of batteries are being installed in U.S. homes and on the electric grid, despite supply-chain bottlenecks.
  • The bad news is that current battery technology only offers a few hours of storage. What’s needed are more-powerful battery systems that can extend the duration or scale of storage, which could be even more enabling to sun and wind power.
  • Two such solutions are on the horizon. Stationary metal-air batteries, such as iron-air batteries, don’t hold as much energy per kilogram as lithium-ion batteries, so it takes a larger, heavier battery to do the job. But they are cheaper, iron is a plentiful metal, and the batteries, whose chemistry works via interaction of the metal with air, can be sized and installed to store and discharge a large amount of electricity over days or weeks.
  • Improved large iron-air batteries are poised to become a great new backup for renewable energy within the next couple of years to address those times of year when drops in renewable energy production can last for days and not hours.
  • For households, a battery configuration called a virtual power plant also holds huge potential to extend the use of renewables. These systems allow a local utility or electricity distributor to collect excess energy stored in multiple households’ battery systems and feed it back to the grid when there is a surge in demand or generation shortfall.
  • Electrification is a good choice for smaller vessels on short voyages, like the world’s largest all-electric ferry launched in Norway in 2021. It isn’t yet a viable option for ships on transoceanic voyages because batteries are still too large and heavy, though innovative approaches for battery swapping are being explored.
  • Greener Shipping
  • For longer voyages, e-ammonia, a hydrogen-derived fuel made with renewable energy, has been identified as a prime candidate, although work is needed to ensure safety. Ammonia has a higher energy density than some other options, making it a more economical option for powering large ships across oceans.
  • Among ammonia projects in the works, MAN Energy Solutions is developing engines that can run on conventional fossil fuel or ammonia, a coalition of Nordic partners is designing the world’s first ammonia-powered vessel, and Singapore is evaluating how to bunker the fuel.
  • Some EV makers such as Tesla are now embracing an older, less-expensive battery technology known as lithium-iron-phosphate, or LFP, used originally in scooters and small EVs in China. It draws entirely on cheap and abundant minerals and is less flammable. The power density of LFP is less than NMC, but that disadvantage can be overcome by advances in vehicle design.
  • One approach being tested eliminates the outer packaging of the batteries altogether and directly installs cells, packed in layers, into a cavity in the EV’s body chassis.
  • Eventually, solid-state batteries—with a solid electrolyte made from common minerals like glass or ceramics—could become a key EV battery technology.
  • The solid electrolyte is more chemically stable, lighter, recharges faster and has many more lifetime recharging cycles than lithium-ion batteries, which depend on heavy liquid electrolytes.
  • these innovations are good news for those hoping to speed up EV adoption. They also suggest that batteries, far from becoming a standardized commodity, are going to be customized as auto makers create their own vehicle designs and battery makers develop proprietary platforms.
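The 90% to 95% cost-decline claim for direct air capture rests on learning-curve arithmetic: modular technologies tend to get cheaper by a roughly fixed fraction with every doubling of cumulative deployment. A back-of-the-envelope sketch in Python, using assumed learning rates (the article does not give one):

import math

def doublings_needed(cost_decline, learning_rate):
    """Doublings of cumulative capacity required to reach a given cost decline."""
    remaining_cost = 1.0 - cost_decline      # a 90% decline leaves 10% of today's cost
    per_doubling = 1.0 - learning_rate       # cost multiplier after each doubling
    return math.log(remaining_cost) / math.log(per_doubling)

for lr in (0.10, 0.20, 0.30):                # hypothetical learning rates
    print(f"learning rate {lr:.0%}: "
          f"~{doublings_needed(0.90, lr):.1f} doublings for a 90% drop, "
          f"~{doublings_needed(0.95, lr):.1f} for 95%")

At a 20% learning rate, roughly ten doublings of deployed capacity are needed for a 90% decline, which is why the claim hinges on DAC scaling the way wind turbines, solar panels and batteries did.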
Javier E

The "missing law" of nature was here all along | Salon.com - 0 views

  • A recently published scientific article proposes a sweeping new law of nature, approaching the matter with dry, clinical efficiency that still reads like poetry.
  • “Evolving systems are asymmetrical with respect to time; they display temporal increases in diversity, distribution, and/or patterned behavior,” they continue, mounting their case from the shoulders of Charles Darwin, extending it toward all things living and not.
  • To join the known physics laws of thermodynamics, electromagnetism and Newton’s laws of motion and gravity, the nine scientists and philosophers behind the paper propose their “law of increasing functional information.”
  • ...27 more annotations...
  • In short, a complex and evolving system — whether that’s a flock of gold finches or a nebula or the English language — will produce ever more diverse and intricately detailed states and configurations of itself.
  • Some of these more diverse and intricate configurations, the scientists write, are shed and forgotten over time. The configurations that persist are ones that find some utility or novel function in a process akin to natural selection, but a selection process driven by the passing-on of information rather than just the sowing of biological genes
  • Have they finally glimpsed, I wonder, the connectedness and symbiotic co-evolution of their own scientific ideas with those of the world’s writers
  • Have they learned to describe in their own quantifying language that cradle from which both our disciplines have emerged and the firmament on which they both stand — the hearing and telling of stories in order to exist?
  • Have they quantified the quality of all existent matter, living and not: that all things inherit a story in data to tell, and that our stories are told by the very forms we take to tell them?
  • “Is there a universal basis for selection? Is there a more quantitative formalism underlying this conjectured conceptual equivalence—a formalism rooted in the transfer of information?,” they ask of the world’s disparate phenomena. “The answer to both questions is yes.”
  • Yes. They’ve glimpsed it, whether they know it or not. Sing to me, O Muse, of functional information and its complex diversity.
  • The principle of complexity evolving at its own pace when left to its own devices, independent of time but certainly in a dance with it, is nothing new. Not in science, nor in its closest humanities kin, science and nature writing. Give things time and nourishing environs, protect them from your own intrusions and — living organisms or not — they will produce abundant enlacement of forms.
  • This is how poetry was born from the same larynxes and phalanges that tendered nuclear equations: We featherless bipeds gave language our time and delighted attendance until its forms were so multivariate that they overflowed with inevitable utility.
  • In her Pulitzer-winning “Pilgrim at Tinker Creek,” nature writer Annie Dillard explains plainly that evolution is the vehicle of such intricacy in the natural world, as much as it is in our own thoughts and actions.
  • “The stability of simple forms is the sturdy base from which more complex, stable forms might arise, forming in turn more complex forms,” she explains, drawing on the undercap frills of mushrooms and filament-fine filtering tubes inside human kidneys to illustrate her point.
  • “Utility to the creature is evolution’s only aesthetic consideration. Form follows function in the created world, so far as I know, and the creature that functions, however bizarre, survives to perpetuate its form,” writes Dillard.
  • “Of the multiplicity of forms, I know nothing. Except that, apparently, anything goes. This holds for forms of behavior as well as design — the mantis munching her mate, the frog wintering in mud.”
  • She notes that, of all forms of life we’ve ever known to exist, only about 10% are still alive. What extravagant multiplicity.
  • “Intricacy is that which is given from the beginning, the birthright, and in the intricacy is the hardiness of complexity that ensures against the failures of all life,” Dillard writes. “The wonder is — given the errant nature of freedom and the burgeoning texture of time — the wonder is that all the forms are not monsters, that there is beauty at all, grace gratuitous.”
  • “This paper, and the reason why I'm so proud of it, is because it really represents a connection between science and the philosophy of science that perhaps offers a new lens into why we see everything that we see in the universe,” lead scientist Michael Wong told Motherboard in a recent interview.
  • Wong is an astrobiologist and planetary scientist at the Carnegie Institute for Science. In his team’s paper, that bridge toward scientific philosophy is not only preceded by a long history of literary creativity but directly theorizes about the creative act itself.
  • “The creation of art and music may seem to have very little to do with the maintenance of society, but their origins may stem from the need to transmit information and create bonds among communities, and to this day, they enrich life in innumerable ways,” Wong’s team writes.
  • “Perhaps, like eddies swirling off of a primary flow field, selection pressures for ancillary functions can become so distant from the core functions of their host systems that they can effectively be treated as independently evolving systems,” the authors add, pointing toward the elaborate mating dance culture observed in birds of paradise.
  • “Perhaps it will be humanity’s ability to learn, invent, and adopt new collective modes of being that will lead to its long-term persistence as a planetary phenomenon. In light of these considerations, we suspect that the general principles of selection and function discussed here may also apply to the evolution of symbolic and social systems.”
  • The Mekhilta teaches that all Ten Commandments were pronounced in a single utterance. Similarly, the Maharsha says the Torah’s 613 mitzvoth are only perceived as a plurality because we’re time-bound humans, even though they together form a singular truth which is indivisible from He who expressed it.
  • Or, as the Mishna would have it, “the creations were all made in generic form, and they gradually expanded.”
  • Like swirling eddies off of a primary flow field.
  • “O Lord, how manifold are thy works!,” cried out David in his psalm. “In wisdom hast thou made them all: the earth is full of thy riches. So is this great and wide sea, wherein are things creeping innumerable, both small and great beasts.”
  • In all things, then — from poetic inventions, to rare biodiverse ecosystems, to the charted history of our interstellar equations — it is best if we conserve our world’s intellectual and physical diversity, for both the study and testimony of its immeasurable multiplicity.
  • Because, whether wittingly or not, science is singing the tune of the humanities. And whether expressed in algebraic logic or ancient Greek hymn, its chorus is the same throughout the universe: Be fruitful and multiply.
  • Both intricate configurations of art and matter arise and fade according to their shared characteristic, long-known by students of the humanities: each have been graced with enough time to attend to the necessary affairs of their most enduring pleasures.
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • ...39 more annotations...
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages.
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them.
  • Meta’s own statistics suggested that big problems didn’t exist.
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules.
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group.
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. (A toy comparison of per-view and per-person rates follows these notes.)
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days.
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working.
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data.
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short.
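Part of the gap Bejar documented is arithmetic: prevalence is measured per content view, while his surveys asked each teenager whether anything bad had happened to them over a whole week. A toy calculation in Python with invented numbers shows how a tiny per-view rate can still mean a large share of users having a bad week:

views_per_user_per_week = 500        # assumed number of posts and comments a teen sees weekly
violating_share_of_views = 0.0008    # "eight of 10,000 views", the prevalence-style figure

# Probability a user sees no violating content all week, treating views as independent
# (a deliberate oversimplification, used only to show the difference in scale).
p_clean_week = (1 - violating_share_of_views) ** views_per_user_per_week
p_bad_week = 1 - p_clean_week

print(f"prevalence, per view: {violating_share_of_views:.2%}")
print(f"share of users exposed per week: {p_bad_week:.0%}")

The particular numbers are made up; the point is that the two metrics answer different questions, which is how a very low prevalence figure and a high rate of reported bad experiences can both be true at once.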
Javier E

If We Knew Then What We Know Now About Covid, What Would We Have Done Differently? - WSJ - 0 views

  • For much of 2020, doctors and public-health officials thought the virus was transmitted through droplets emitted from one person’s mouth and touched or inhaled by another person nearby. We were advised to stay at least 6 feet away from each other to avoid the droplets
  • A small cadre of aerosol scientists had a different theory. They suspected that Covid-19 was transmitted not so much by droplets but by smaller infectious aerosol particles that could travel on air currents way farther than 6 feet and linger in the air for hours. Some of the aerosol particles, they believed, were small enough to penetrate the cloth masks widely used at the time.
  • The group had a hard time getting public-health officials to embrace their theory. For one thing, many of them were engineers, not doctors.
  • ...37 more annotations...
  • “My first and biggest wish is that we had known early that Covid-19 was airborne,”
  • “Once you’ve realized that, it informs an entirely different strategy for protection.” Masking, ventilation and air cleaning become key, as well as avoiding high-risk encounters with strangers, he says.
  • Instead of washing our produce and wearing hand-sewn cloth masks, we could have made sure to avoid superspreader events and worn more-effective N95 masks or their equivalent. “We could have made more of an effort to develop and distribute N95s to everyone,” says Dr. Volckens. “We could have had an Operation Warp Speed for masks.”
  • We didn’t realize how important clear, straight talk would be to maintaining public trust. If we had, we could have explained the biological nature of a virus and warned that Covid-19 would change in unpredictable ways.
  • We didn’t know how difficult it would be to get the basic data needed to make good public-health and medical decisions. If we’d had the data, we could have more effectively allocated scarce resources
  • In the face of a pandemic, he says, the public needs an early basic and blunt lesson in virology
  • and mutates, and since we’ve never seen this particular virus before, we will need to take unprecedented actions and we will make mistakes, he says.
  • Since the public wasn’t prepared, “people weren’t able to pivot when the knowledge changed,”
  • By the time the vaccines became available, public trust had been eroded by myriad contradictory messages—about the usefulness of masks, the ways in which the virus could be spread, and whether the virus would have an end date.
  • The absence of a single, trusted source of clear information meant that many people gave up on trying to stay current or dismissed the different points of advice as partisan and untrustworthy.
  • “The science is really important, but if you don’t get the trust and communication right, it can only take you so far,”
  • people didn’t know whether it was OK to visit elderly relatives or go to a dinner party.
  • Doctors didn’t know what medicines worked. Governors and mayors didn’t have the information they needed to know whether to require masks. School officials lacked the information needed to know whether it was safe to open schools.
  • Had we known that even a mild case of Covid-19 could result in long Covid and other serious chronic health problems, we might have calculated our own personal risk differently and taken more care.
  • just months before the outbreak of the pandemic, the Council of State and Territorial Epidemiologists released a white paper detailing the urgent need to modernize the nation’s public-health system still reliant on manual data collection methods—paper records, phone calls, spreadsheets and faxes.
  • While the U.K. and Israel were collecting and disseminating Covid case data promptly, in the U.S. the CDC couldn’t. It didn’t have a centralized health-data collection system like those countries did, but rather relied on voluntary reporting by underfunded state and local public-health systems and hospitals.
  • doctors and scientists say they had to depend on information from Israel, the U.K. and South Africa to understand the nature of new variants and the effectiveness of treatments and vaccines. They relied heavily on private data collection efforts such as a dashboard at Johns Hopkins University’s Coronavirus Resource Center that tallied cases, deaths and vaccine rates globally.
  • For much of the pandemic, doctors, epidemiologists, and state and local governments had no way to find out in real time how many people were contracting Covid-19, getting hospitalized and dying
  • To solve the data problem, Dr. Ranney says, we need to build a public-health system that can collect and disseminate data and acts like an electrical grid. The power company sees a storm coming and lines up repair crews.
  • If we’d known how damaging lockdowns would be to mental health, physical health and the economy, we could have taken a more strategic approach to closing businesses and keeping people at home.
  • But many doctors say they were crucial at the start of the pandemic to give doctors and hospitals a chance to figure out how to accommodate and treat the avalanche of very sick patients.
  • The measures reduced deaths, according to many studies—but at a steep cost.
  • The lockdowns didn’t have to be so harmful, some scientists say. They could have been more carefully tailored to protect the most vulnerable, such as those in nursing homes and retirement communities, and to minimize widespread disruption.
  • Lockdowns could, during Covid-19 surges, close places such as bars and restaurants where the virus is most likely to spread, while allowing other businesses to stay open with safety precautions like masking and ventilation in place.
  • The key isn’t to have the lockdowns last a long time, but that they are deployed earlier,
  • If England’s March 23, 2020, lockdown had begun one week earlier, the measure would have nearly halved the estimated 48,600 deaths in the first wave of England’s pandemic
  • If the lockdown had begun a week later, deaths in the same period would have more than doubled (a back-of-the-envelope version of this timing arithmetic follows these notes)
  • It is possible to avoid lockdowns altogether. Taiwan, South Korea and Hong Kong—all countries experienced at handling disease outbreaks such as SARS in 2003 and MERS—avoided lockdowns by widespread masking, tracking the spread of the virus through testing and contact tracing and quarantining infected individuals.
  • With good data, Dr. Ranney says, she could have better managed staffing and taken steps to alleviate the strain on doctors and nurses by arranging child care for them.
  • Early in the pandemic, public-health officials were clear: The people at increased risk for severe Covid-19 illness were older, immunocompromised, had chronic kidney disease, Type 2 diabetes or serious heart conditions
  • It had the unfortunate effect of giving a false sense of security to people who weren’t in those high-risk categories. Once case rates dropped, vaccines became available and fear of the virus wore off, many people let their guard down, ditching masks, spending time in crowded indoor places.
  • it has become clear that even people with mild cases of Covid-19 can develop long-term serious and debilitating diseases. Long Covid, whose symptoms include months of persistent fatigue, shortness of breath, muscle aches and brain fog, hasn’t been the virus’s only nasty surprise
  • In February 2022, a study found that, for at least a year, people who had Covid-19 had a substantially increased risk of heart disease—even people who were younger and had not been hospitalized
  • respiratory conditions.
  • Some scientists now suspect that Covid-19 might be capable of affecting nearly every organ system in the body. It may play a role in the activation of dormant viruses and latent autoimmune conditions people didn’t know they had
  • A blood test, he says, would tell people if they are at higher risk of long Covid and whether they should have antivirals on hand to take right away should they contract Covid-19.
  • If the risks of long Covid had been known, would people have reacted differently, especially given the confusion over masks and lockdowns and variants? Perhaps. At the least, many people might not have assumed they were out of the woods just because they didn’t have any of the risk factors.
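The lockdown-timing estimates above follow from the arithmetic of exponential growth: shifting the same intervention by a week scales the first wave by roughly two raised to the power of seven days divided by the doubling time. A crude sketch in Python; this is a simplification for illustration, not the model used in the study the article cites:

first_wave_deaths = 48_600    # England's estimated first-wave deaths, cited above

for doubling_time_days in (3.5, 5.0, 7.0):    # assumed doubling times at the moment of lockdown
    factor_per_week = 2 ** (7 / doubling_time_days)
    earlier = first_wave_deaths / factor_per_week
    later = first_wave_deaths * factor_per_week
    print(f"doubling time {doubling_time_days} days: "
          f"one week earlier ≈ {earlier:,.0f} deaths, one week later ≈ {later:,.0f}")

With a doubling time near a week, acting seven days sooner roughly halves the toll and acting seven days later roughly doubles it, which matches the direction of the published estimate.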
aliciathompson1

The Limits of Human Reason, in One Dramatic Video | Psychology Today - 1 views

  • Hello to the real world of human perception, the product of a system of physical and psychological processes which blend facts and feelings, intellect and instinct, and, when the two conflict, a system which gives the upper hand not to conscious evidence-based reason but to instinctive and subconscious gut reaction.
  • Just like balance, our risk perception system employs several distinct components; one is purposeful conscious reasoning about the facts (think of that as the vision of visitors to Demon Hill), and one is a set of psychological processes and instincts and emotions that help us make quick subconscious judgments about how those facts feel
  • Just as visual and vestibular information conflicts in visitors to Demon Hill, in risk perception, when reason and evidence clash with emotion and instinct, no matter how clear and compelling the evidence, we are ‘cognitively impenetrable’ to just the facts, and our brain literally denies that evidence if it conflicts with how our instincts – the subconscious part of the risk perception system that is beyond our control - make that evidence feel
  • ...1 more annotation...
  • Perception, informed not just by the facts but our instinctive interpretations of how those facts feel, IS reality…a potentially dangerous reality
Javier E

Robert Reich: A single-payer health care system is inevitable - Salon.com - 1 views

  • In a nutshell, the more sick people and the fewer healthy people a private for-profit insurer attracts, the less competitive that insurer becomes relative to other insurers that don’t attract as high a percentage of the sick but a higher percentage of the healthy. (A toy simulation of this dynamic follows these notes.)
  • If insurers had no idea who’d be sick and who’d be healthy when they sign up for insurance (and keep them insured at the same price even after they become sick), this wouldn’t be a problem. But they do know — and they’re developing more and more sophisticated ways of finding out.
  • Health insurers spend lots of time, effort and money trying to attract people who have high odds of staying healthy (the young and the fit) while doing whatever they can to fend off those who have high odds of getting sick (the older, infirm and the unfit).
  • ...6 more annotations...
  • As a result we end up with the most bizarre health-insurance system imaginable: One ever better designed to avoid sick people.
  • America’s giant health insurers are now busily consolidating into ever-larger behemoths.
  • In reality, they’re becoming huge to get more bargaining leverage over everyone they do business with — hospitals, doctors, employers, the government and consumers. That way they make even bigger profits.
  • researchers found, for example, that after Aetna merged with Prudential HealthCare in 1999, premiums rose 7 percent higher than had the merger not occurred.
  • The real choice in the future is either a hugely expensive for-profit oligopoly with the market power to charge high prices even to healthy people and stop insuring sick people.
  • Or else a government-run single payer system — such as is in place in almost every other advanced economy — dedicated to lower premiums and better care for everyone.
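The dynamic Reich describes is the classic adverse-selection spiral, and a toy loop makes it visible: an insurer that prices at the average cost of its pool loses its healthiest members each round, which pushes the average cost, and therefore the next premium, higher still. Every figure in this Python sketch is invented for illustration:

# Each member of the pool: (expected annual medical cost, maximum premium they will pay).
members = [(500, 2_000), (1_000, 2_500), (3_000, 4_000),
           (6_000, 8_000), (12_000, 20_000)]

for year in range(1, 6):
    if not members:
        print(f"year {year}: the pool has emptied out")
        break
    premium = sum(cost for cost, _ in members) / len(members)  # break-even pricing
    print(f"year {year}: premium ${premium:,.0f}, pool size {len(members)}")
    # Members whose willingness to pay falls below the premium drop their coverage.
    members = [(c, w) for c, w in members if w >= premium]

A single-payer system short-circuits the loop by putting everyone into one pool, which is the outcome the column argues is inevitable.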
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • How do you make a search engine that understands if you don’t know how you understand?
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • A project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. (A drastically simplified sketch of this idea follows these notes.)
  • Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
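Candide's statistical bet can be caricatured in a few lines: count which French words co-occur with which English words across aligned sentence pairs, then pick, for each English word, the French word most strongly associated with it. The Python sketch below is a drastic simplification with toy data; real systems, from Candide to Google Translate, also model word alignment, reordering and whole-sentence probabilities:

from collections import Counter, defaultdict

# Toy stand-in for the aligned bilingual corpus described above.
sentence_pairs = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the blue car", "la voiture bleue"),
    ("the car", "la voiture"),
]

cooccurrence = defaultdict(Counter)   # English word -> French word -> co-occurrence count
french_totals = Counter()             # how often each French word co-occurs with anything
for english, french in sentence_pairs:
    for e_word in english.split():
        for f_word in french.split():
            cooccurrence[e_word][f_word] += 1
            french_totals[f_word] += 1

def translate_word(e_word):
    """Pick the French word most strongly associated with e_word, not merely the most common."""
    candidates = cooccurrence.get(e_word)
    if not candidates:
        return e_word
    return max(candidates, key=lambda f: candidates[f] / french_totals[f])

# Word-by-word output only; no reordering, no grammar, no understanding.
print(" ".join(translate_word(w) for w in "the blue house".split()))  # e.g. 'la bleue maison'

The output gets the words roughly right and the word order wrong, which is a fair miniature of Hofstadter's complaint: the statistics capture products of human intelligence without being intelligent themselves.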
Javier E

Minsky's moment | The Economist - 0 views

  • Minsky started with an explanation of investment. It is, in essence, an exchange of money today for money tomorrow. A firm pays now for the construction of a factory; profits from running the facility will, all going well, translate into money for it in coming years.
  • Put crudely, money today can come from one of two sources: the firm’s own cash or that of others (for example, if the firm borrows from a bank). The balance between the two is the key question for the financial system.
  • Minsky distinguished between three kinds of financing. The first, which he called “hedge financing”, is the safest: firms rely on their future cashflow to repay all their borrowings. For this to work, they need to have very limited borrowings and healthy profits. The second, speculative financing, is a bit riskier: firms rely on their cashflow to repay the interest on their borrowings but must roll over their debt to repay the principal. This should be manageable as long as the economy functions smoothly, but a downturn could cause distress. The third, Ponzi financing, is the most dangerous. Cashflow covers neither principal nor interest; firms are betting only that the underlying asset will appreciate by enough to cover their liabilities. If that fails to happen, they will be left exposed. (A minimal classification sketch follows these notes.)
  • ...10 more annotations...
  • Economies dominated by hedge financing—that is, those with strong cashflows and low debt levels—are the most stable. When speculative and, especially, Ponzi financing come to the fore, financial systems are more vulnerable. If asset values start to fall, either because of monetary tightening or some external shock, the most overstretched firms will be forced to sell their positions. This further undermines asset values, causing pain for even more firms. They could avoid this trouble by restricting themselves to hedge financing. But over time, particularly when the economy is in fine fettle, the temptation to take on debt is irresistible. When growth looks assured, why not borrow more? Banks add to the dynamic, lowering their credit standards the longer booms last. If defaults are minimal, why not lend more? Minsky’s conclusion was unsettling. Economic stability breeds instability. Periods of prosperity give way to financial fragility.
  • Minsky’s insight might sound obvious. Of course, debt and finance matter. But for decades the study of economics paid little heed to the former and relegated the latter to a sub-discipline, not an essential element in broader theories.
  • Minsky was a maverick. He challenged both the Keynesian backbone of macroeconomics and a prevailing belief in efficient markets.
  • But Messrs Hicks and Hansen largely left the financial sector out of the picture, even though Keynes was keenly aware of the importance of markets. To Minsky, this was an “unfair and naive representation of Keynes’s subtle and sophisticated views”. Minsky’s financial-instability hypothesis helped fill in the holes.
  • His challenge to the prophets of efficient markets was even more acute. Eugene Fama and Robert Lucas, among others, persuaded most of academia and policymaking circles that markets tended towards equilibrium as people digested all available information. The structure of the financial system was treated as almost irrelevant
  • In recent years, behavioural economists have attacked one plank of efficient-market theory: people, far from being rational actors who maximise their gains, are often clueless about what they want and make the wrong decisions.
  • But years earlier Minsky had attacked another: deep-seated forces in financial systems propel them towards trouble, he argued, with stability only ever a fleeting illusion.
  • Investors were faster than professors to latch onto his views. More than anyone else it was Paul McCulley of PIMCO, a fund-management group, who popularised his ideas. He coined the term “Minsky moment” to describe a situation when debt levels reach breaking-point and asset prices across the board start plunging. Mr McCulley initially used the term in explaining the Russian financial crisis of 1998. Since the global turmoil of 2008, it has become ubiquitous. For investment analysts and fund managers, a “Minsky moment” is now virtually synonymous with a financial crisis.
  • it would be a stretch to expect the financial-instability hypothesis to become a new foundation for economic theory. Minsky’s legacy has more to do with focusing on the right things than correctly structuring quantifiable models. It is enough to observe that debt and financial instability, his main preoccupations, have become some of the principal topics of inquiry for economists today
  • As Mr Krugman has quipped: “We are all Minskyites now.”