
Home/ TOK Friends/ Group items tagged chip


Javier E

Why The CHIPS and Science Act Is a Climate Bill - The Atlantic - 0 views

  • Over the next five years, the CHIPS Act will direct an estimated $67 billion, or roughly a quarter of its total funding, toward accelerating the growth of zero-carbon industries and conducting climate-relevant research, according to an analysis from RMI, a nonpartisan energy think tank based in Colorado.
  • That means that the CHIPS Act is one of the largest climate bills ever passed by Congress. It exceeds the total amount of money that the government spent on renewable-energy tax credits from 2005 to 2019
  • And it’s more than half the size of the climate spending in President Barack Obama’s 2009 stimulus bill. That’s all the more remarkable because the CHIPS Act was passed by large bipartisan majorities, with 41 Republicans and nearly all Democrats supporting it in the House and the Senate.
  • ...15 more annotations...
  • When viewed with the Inflation Reduction Act, which the House is poised to pass later this week, and last year’s bipartisan infrastructure law, a major shift in congressional climate spending comes into focus. According to the RMI analysis, these three laws are set to more than triple the federal government’s average annual spending on climate and clean energy this decade, compared with the 2010s.
  • Within a few years, when the funding has fully ramped up, the government will spend roughly $80 billion a year on accelerating the development and deployment of zero-carbon energy and preparing for the impacts of climate change. That exceeds the GDP of about 120 of the 192 countries that have signed the Paris Agreement on Climate Change
  • By the end of the decade, the federal government will have spent more than $521 billion
  • the bill’s programs focus on the bleeding edge of the decarbonization problem, investing money in technology that should lower emissions in the 2030s and beyond.
  • The International Energy Agency has estimated that almost half of global emissions reductions by 2050 will come from technologies that exist only as prototypes or demonstration projects today.
  • To get those technologies ready in time, we need to deploy those new ideas as fast as we can, then rapidly get them to commercial scale, Carey said. “What used to take two decades now needs to take six to 10 years.” That’s what the CHIPS Act is supposed to do
  • The law, for instance, establishes a new $20 billion Directorate for Technology, which will specialize in pushing new technologies from the prototype stage into the mass market. It is meant to prevent what happened with the solar industry—where America invented a new technology, only to lose out on commercializing it—from happening again
  • Congress has explicitly tasked the new office with studying “natural and anthropogenic disaster prevention or mitigation” as well as “advanced energy and industrial efficiency technologies,” including next-generation nuclear reactors.
  • The bill also directs about $12 billion in new research, development, and demonstration funding to the Department of Energy, according to RMI’s estimate. That includes doubling the budget for ARPA-E, the department’s advanced-energy-projects skunk works.
  • it allocates billions to upgrade facilities at the government’s in-house defense and energy research institutes, including the National Renewable Energy Laboratory, the Princeton Plasma Physics Laboratory, and Berkeley Lab, which conducts environmental-science research.
  • RMI’s estimate of the climate spending in the CHIPS bill should be understood as just that: an estimate. The bill text rarely specifies how much of its new funding should go to climate issues.
  • When you add CHIPS, the IRA, and the infrastructure law together, Washington appears to be unifying behind a new industrial policy, focused not only on semiconductors and defense technology but also on clean energy.
  • The three bills combine to form “a coordinated, strategic policy for accelerating the transition to the technologies that are going to define the 21st century,”
  • scholars and experts have speculated about whether industrial policy—the intentional use of law to nurture and grow certain industries—might make a comeback to help fight climate change. Industrial policy was central to some of the Green New Deal’s original pitch, and it has helped China develop a commanding lead in the global solar industry.
  • “Industrial policy,” he said, “is back.”
Javier E

As ARM Chief Steps Down, Successor Talks About 'Body Computing' - NYTimes.com - 0 views

  • ARM was originally a project inside Acorn Computer, a personal computer maker long since broken up. From relative obscurity, ARM’s chip designs now make up nearly one-third of new chip consumption, hurting companies like Intel.
  • The big coming focus, Mr. Segars said, will be deploying chips into a sensor-rich world. “Low-cost microcontrollers with a wireless interface,” he said. “There will be billions of these.” The sensor data will be processed either locally, on millions of small computers with the capability to make decisions on the spot, or collected and passed along to even bigger computer systems. “The systems will go through different aggregation points,” Mr. Segars said. “If an aggregator in the home can tell a fridge is using too much power, maybe it needs servicing.”
  • “The car is ripe for a revolution. It will evolve into a consumer electronics device, paying for parking as you pull up to the curb.” Eventually, said Mr. East, “it’s getting into people’s bodies. Over the next several years, semiconductors will be so small and use so little power that they’ll run inside us as systems.”
runlai_jiang

How Cellphone Chips Became a National-Security Concern - WSJ - 0 views

  • The U.S. made clear this week that containing China’s growing clout in wireless technology is now a national-security priority. Telecommunications-industry leaders say such fears are justified—but question whether the government’s extraordinary intervention in a corporate takeover battle that doesn’t even involve a Chinese company will make a difference.
  • Those worries are rooted in how modern communication works. Cellular-tower radios, internet routers and related electronics use increasingly complex hardware and software, with millions of lines of code
  • Hackers can potentially control the equipment through intentional or inadvertent security flaws, such as the recently disclosed “Meltdown” and “Spectre” flaws that could have affected most of the world’s computer chips.
  • ...4 more annotations...
  • Qualcomm is one of the few American leaders in developing standards and patents for 5G, the next generation of wireless technology that should be fast enough to enable self-driving cars and other innovations. The CFIUS letter said a weakened Qualcomm could strengthen Chinese rivals, specifically Huawei Technologies Co., the world’s top cellular-equipment maker and a leading smartphone brand.
  • Washington has taken unusual steps to hinder Huawei’s business in the U.S., concerned that Beijing could force the company to exploit its understanding of the equipment to spy or disable telecom networks.
  • Many European wireless carriers, including British-based Vodafone Group PLC, praise Huawei’s equipment, saying it is often cheaper and more advanced than that of its competitors. That is another big worry for Washington.
  • board and senior management team are American. “It’s barely a foreign company now, but politics and logic aren’t often friends,” said Stacy Rasgon, a Bernstein Research analyst. “I’m just not convinced that Qualcomm’s going to slash and burn the 5G roadmap and leave it open to Huawei” if Broadcom buys it.
anonymous

Controversial Quantum Machine Tested by NASA and Google Shows Promise | MIT Technology ... - 0 views

  • artificial-intelligence software.
  • Google says it has proof that a controversial machine it bought in 2013 really can use quantum physics to work through a type of math that’s crucial to artificial intelligence much faster than a conventional computer.
  • “It is a truly disruptive technology that could change how we do everything,” said Rupak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.
  • ...7 more annotations...
  • An alternative algorithm is known that could have let the conventional computer be more competitive, or even win, by exploiting what Neven called a “bug” in D-Wave’s design. Neven said the test his group staged is still important because that shortcut won’t be available to regular computers when they compete with future quantum annealers capable of working on larger amounts of data.
  • “For a specific, carefully crafted proof-of-concept problem we achieve a 100-million-fold speed-up,” said Neven.
  • “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer.
  • Google is competing with D-Wave to make a quantum annealer that could do useful work.
  • Martinis is also working on quantum hardware that would not be limited to optimization problems, as annealers are.
  • Government and university labs, Microsoft (see “Microsoft’s Quantum Mechanics”), and IBM (see “IBM Shows Off a Quantum Computing Chip”) are also working on that technology.
  • “it may be several years before this research makes a difference to Google products.”
aqconces

BBC - Future - The man who gets drunk on chips - 0 views

  • “Every day for a year I would wake up and vomit,” he says. “Sometimes it would come on over the course of a few days, sometimes it was just like ‘bam! I’m drunk’.”
  • No alcohol had passed his lips, but not everyone believed him. At one point, his wife searched the house from top to bottom for hidden bottles of booze. “I thought everyone was just giving me a rough time, until my wife filmed me and then I saw it – I looked drunk.”
  • ...3 more annotations...
  • he suffers from “auto-brewery syndrome”
  • “The problem arises when the yeast in our gut gets out of hand. Bacteria normally keep the yeast in check, but sometimes the yeast takes over.”
Javier E

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post - 0 views

  • GPT-4, in contrast, is a state-of-the-art system capable of creating not just words but describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things
  • ...22 more annotations...
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • it could lead to business models and creative ventures no one can predict.
  • sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • the company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and the AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what it’s saying or when it’s wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique in 2017 known as the transformer that rapidly advanced how AI systems can analyze patterns in human speech and imagery.
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time. [A minimal code sketch of this next-word idea appears after these annotations.]
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, the humans themselves.
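
A note on the “which words tended to follow each other” annotation above: the toy sketch below (Python, with an invented two-sentence corpus) simply counts word-to-word transitions and samples a likely next word. It is a bare-bones illustration of statistical next-word prediction, not the transformer training the article describes; every name and string in it is made up for illustration.

    from collections import Counter, defaultdict
    import random

    # Invented toy corpus standing in for "trillions of words taken from across the internet".
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the ball ."
    )

    # Count how often each word follows each other word (a simple bigram table).
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for current_word, next_word in zip(tokens, tokens[1:]):
        follows[current_word][next_word] += 1

    def generate(start, length=8):
        """Mimic the learned pattern: repeatedly sample a likely next word."""
        word, output = start, [start]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:
                break
            words, counts = zip(*candidates.items())
            word = random.choices(words, weights=counts, k=1)[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the dog chased the ball . the cat sat"

Real systems replace the bigram table with a transformer trained on vastly more data, but the underlying move, predicting the next token from learned statistics, is the same.
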
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • How do you make a search engine that understands if you don’t know how you understand?
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. [A minimal code sketch of this sentence-pair idea appears after these annotations.]
  • Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
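
The Candide annotations above describe learning translation from aligned sentence pairs. Below is a minimal, hedged sketch in that spirit: a bare-bones version of the IBM-style word-alignment idea (expectation-maximization over co-occurring words), run on four invented sentence pairs. It illustrates the statistical approach only; it is not Candide’s or Google Translate’s actual code.

    from collections import defaultdict

    # Four invented sentence pairs standing in for Candide's 2.2 million.
    pairs = [
        ("the house", "la maison"),
        ("the book", "le livre"),
        ("a house", "une maison"),
        ("a book", "un livre"),
    ]
    pairs = [(e.split(), f.split()) for e, f in pairs]

    en_vocab = {w for e, _ in pairs for w in e}
    fr_vocab = {w for _, f in pairs for w in f}

    # t[f][e]: current guess of how strongly English word e translates to French word f.
    t = {f: {e: 1.0 / len(fr_vocab) for e in en_vocab} for f in fr_vocab}

    # A few rounds of expectation-maximization, the core of the IBM Model 1 idea.
    for _ in range(10):
        count = defaultdict(float)   # expected co-occurrence counts of (f, e)
        total = defaultdict(float)   # expected counts of e
        for e_words, f_words in pairs:
            for f in f_words:
                norm = sum(t[f][e] for e in e_words)
                for e in e_words:
                    count[(f, e)] += t[f][e] / norm
                    total[e] += t[f][e] / norm
        for f, e in count:
            t[f][e] = count[(f, e)] / total[e]

    def best_translation(en_word):
        """Pick the French word the model now most strongly associates with en_word."""
        return max(fr_vocab, key=lambda f: t[f].get(en_word, 0.0))

    print({e: best_translation(e) for e in sorted(en_vocab)})
    # After a few iterations: 'house' -> 'maison', 'book' -> 'livre'.

Even at this toy scale the counts converge so that “house” lines up with “maison” and “book” with “livre” without anyone writing a dictionary, which is the trade of linguistic understanding for statistics that the article describes.
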
Javier E

The Choice Explosion - The New York Times - 0 views

  • the social psychologist Sheena Iyengar asked 100 American and Japanese college students to take a piece of paper. On one side, she had them write down the decisions in life they would like to make for themselves. On the other, they wrote the decisions they would like to pass on to others.
  • The Americans desired choice in four times more domains than the Japanese.
  • Americans now have more choices over more things than any other culture in human history. We can choose between a broader array of foods, media sources, lifestyles and identities. We have more freedom to live out our own sexual identities and more religious and nonreligious options to express our spiritual natures.
  • ...15 more annotations...
  • But making decisions well is incredibly difficult, even for highly educated professional decision makers. As Chip Heath and Dan Heath point out in their book “Decisive,” 83 percent of corporate mergers and acquisitions do not increase shareholder value, 40 percent of senior hires do not last 18 months in their new position, 44 percent of lawyers would recommend that a young person not follow them into the law.
  • It’s becoming incredibly important to learn to decide well, to develop the techniques of self-distancing to counteract the flaws in our own mental machinery. The Heath book is a very good compilation of those techniques.
  • assume positive intent. When in the midst of some conflict, start with the belief that others are well intentioned. It makes it easier to absorb information from people you’d rather not listen to.
  • Suzy Welch’s 10-10-10 rule. When you’re about to make a decision, ask yourself how you will feel about it 10 minutes from now, 10 months from now and 10 years from now. People are overly biased by the immediate pain of some choice, but they can put the short-term pain in long-term perspective by asking these questions.
  • An "explosion" that may also be a "dissolution" or "disintegration," in my view. Unlimited choices. Conduct without boundaries. All of which may be viewed as either "great" or "terrible." The poor suffer when they have no means to pursue choices, which is terrible. The rich seem only to want more and more, wealth without boundaries, which is great for those so able to do. Yes, we need a new decision-making tool, but perhaps one that is also very old: simplify, simplify, simplify by setting moral boundaries that apply to all and which define concisely what our life together ought to be.
  • our tendency to narrow-frame, to see every decision as a binary “whether or not” alternative. Whenever you find yourself asking “whether or not,” it’s best to step back and ask, “How can I widen my options?”
  • deliberate mistakes. A survey of new brides found that 20 percent were not initially attracted to the man they ended up marrying. Sometimes it’s useful to make a deliberate “mistake” — agreeing to dinner with a guy who is not your normal type. Sometimes you don’t really know what you want and the filters you apply are hurting you.
  • It makes you think that we should have explicit decision-making curriculums in all schools. Maybe there should be a common course publicizing the work of Daniel Kahneman, Cass Sunstein, Dan Ariely and others who study the way we mess up and the techniques we can adopt to prevent error.
  • The explosion of choice places extra burdens on the individual. Poorer Americans have fewer resources to master decision-making techniques, less social support to guide their decision-making and less of a safety net to catch them when they err.
  • the stress of scarcity itself can distort decision-making. Those who experienced stress as children often perceive threat more acutely and live more defensively.
  • The explosion of choice means we all need more help understanding the anatomy of decision-making.
  • living in an area of concentrated poverty can close down your perceived options, and comfortably “relieve you of the burden of choosing life.” It’s hard to maintain a feeling of agency when you see no chance of opportunity.
  • In this way the choice explosion has contributed to widening inequality.
  • The relentless all-hour reruns of "Law and Order" in 100 channel cable markets provide direct rebuff to the touted but hollow promise/premise of wider "choice." The small group of personalities debating a pre-framed trivial point of view, over and over, nightly/daily (in video clips), without data, global comparison, historic reference, regional content, or a deep commitment to truth or knowledge of facts has resulted in many choosing narrower limits: streaming music, coffee shops, Facebook--now a "choice" of 1.65 billion users.
  • It’s important to offer opportunity and incentives. But we also need lessons in self-awareness — on exactly how our decision-making tool is fundamentally flawed, and on mental frameworks we can adopt to avoid messing up even more than we do.
sissij

The Economics of Obesity: Why Are Poor People Fat? - 0 views

  • This is what poverty looked like in the Great Depression…
  • This is what poverty looks like today…
  • For most of recorded history, fat was revered as a sign of health and prosperity. Plumpness was a status symbol. It showed that you did not have to engage in manual labor for your sustenance. And it meant that you could afford plentiful quantities of food.
  • ...5 more annotations...
  • The constant struggle to hunt and harvest ensured that we stayed active. And for those with little money, the supply of calories was meager. This ensured that most of the working class stayed slim.
  • Rich people were fat. Poor people were thin.
  • What he found is that he could buy well over 1,000 calories of cookies or potato chips. But his dollar would only buy 250 calories of carrots. He could buy almost 900 calories of soda… but only 170 calories of orange juice.
  • The primary reason that lower-income people are more overweight is because the unhealthiest and most fattening foods are the cheapest.
  • Within the current system, the best we can hope for is a situation where public funds are diverted from the corporate Agri-Giants (which is nothing more than welfare for the wealthy) to family farms and fruit and vegetable growers. Currently, almost 70% of farmers receive no subsidies at all, while the biggest and strongest take the bulk of public funds.
  •  This article shows a very interesting stereotype: that rich people ought to be fat and poor people ought to be thin. It reminded me of a video I just saw, in which a poor but fat woman tries to explain why people in poverty are now more likely to be fat. She shows some of the comments people make when they hear that she is very poor. The vehement reactions and the bad language they use show how deep this stereotype runs in our society. However, times are very different now. Food is not as expensive as we think; what is expensive is healthy food, and that is why poor people tend to be fat. My grandpa once told me that when he was young, he was confused about why poor people in Hong Kong movies were eating chicken legs. This is the result of the transformation of society.--Sissi (2/8/2017)
Javier E

Clouds' Effect on Climate Change Is Last Bastion for Dissenters - NYTimes.com - 0 views

  • For decades, a small group of scientific dissenters has been trying to shoot holes in the prevailing science of climate change, offering one reason after another why the outlook simply must be wrong. Over time, nearly every one of their arguments has been knocked down by accumulating evidence, and polls say 97 percent of working climate scientists now see global warming as a serious risk.
  • They acknowledge that the human release of greenhouse gases will cause the planet to warm. But they assert that clouds — which can either warm or cool the earth, depending on the type and location — will shift in such a way as to counter much of the expected temperature rise and preserve the equable climate on which civilization depends.
  • At gatherings of climate change skeptics on both sides of the Atlantic, Dr. Lindzen has been treated as a star. During a debate in Australia over carbon taxes, his work was cited repeatedly. When he appears at conferences of the Heartland Institute, the primary American organization pushing climate change skepticism, he is greeted by thunderous applause.
  • ...13 more annotations...
  • His idea has drawn withering criticism from other scientists, who cite errors in his papers and say proof is lacking. Enough evidence is already in hand, they say, to rule out the powerful cooling effect from clouds that would be needed to offset the increase of greenhouse gases.
  • “If you listen to the credible climate skeptics, they’ve really pushed all their chips onto clouds.”
  • Dr. Lindzen is “feeding upon an audience that wants to hear a certain message, and wants to hear it put forth by people with enough scientific reputation that it can be sustained for a while, even if it’s wrong science,” said Christopher S. Bretherton, an atmospheric researcher at the University of Washington. “I don’t think it’s intellectually honest at all.”
  • With climate policy nearly paralyzed in the United States, many other governments have also declined to take action, and worldwide emissions of greenhouse gases are soaring.
  • The most elaborate computer programs have agreed on a broad conclusion: clouds are not likely to change enough to offset the bulk of the human-caused warming. Some of the analyses predict that clouds could actually amplify the warming trend sharply through several mechanisms, including a reduction of some of the low clouds that reflect a lot of sunlight back to space. Other computer analyses foresee a largely neutral effect. The result is a big spread in forecasts of future temperature, one that scientists have not been able to narrow much in 30 years of effort.
  • The earth’s surface has already warmed about 1.4 degrees Fahrenheit since the Industrial Revolution, most of that in the last 40 years. Modest as it sounds, it is an average for the whole planet, representing an enormous addition of heat. An even larger amount is being absorbed by the oceans. The increase has caused some of the world’s land ice to melt and the oceans to rise.
  • Even in the low projection, many scientists say, the damage could be substantial. In the high projection, some polar regions could heat up by 20 or 25 degrees Fahrenheit — more than enough, over centuries or longer, to melt the Greenland ice sheet, raising sea level by a catastrophic 20 feet or more. Vast changes in  rainfall, heat waves and other weather patterns would most likely accompany such a large warming. “The big damages come if the climate sensitivity to greenhouse gases turns out to be high,” said Raymond T. Pierrehumbert, a climate scientist at the University of Chicago. “Then it’s not a bullet headed at us, but a thermonuclear warhead.”
  • But the problem of how clouds will behave in a future climate is not yet solved — making the unheralded field of cloud research one of the most important pursuits of modern science.
  • for more than a decade, Dr. Lindzen has said that when surface temperature increases, the columns of moist air rising in the tropics will rain out more of their moisture, leaving less available to be thrown off as ice, which forms the thin, high clouds known as cirrus. Just like greenhouse gases, these cirrus clouds act to reduce the cooling of the earth, and a decrease of them would counteract the increase of greenhouse gases. Dr. Lindzen calls his mechanism the iris effect, after the iris of the eye, which opens at night to let in more light. In this case, the earth’s “iris” of high clouds would be opening to let more heat escape.
  • Dr. Lindzen acknowledged that the 2009 paper contained “some stupid mistakes” in his handling of the satellite data. “It was just embarrassing,” he said in an interview. “The technical details of satellite measurements are really sort of grotesque.” Last year, he tried offering more evidence for his case, but after reviewers for a prestigious American journal criticized the paper, Dr. Lindzen published it in a little-known Korean journal. Dr. Lindzen blames groupthink among climate scientists for his publication difficulties, saying the majority is determined to suppress any dissenting views. They, in turn, contend that he routinely misrepresents the work of other researchers.
  • Ultimately, as the climate continues warming and more data accumulate, it will become obvious how clouds are reacting. But that could take decades, scientists say, and if the answer turns out to be that catastrophe looms, it would most likely be too late. By then, they say, the atmosphere would contain so much carbon dioxide as to make a substantial warming inevitable, and the gas would not return to a normal level for thousands of years.
  • In his Congressional appearances, speeches and popular writings, Dr. Lindzen offers little hint of how thin the published science supporting his position is. Instead, starting from his disputed iris mechanism, he makes what many of his colleagues see as an unwarranted leap of logic, professing near-certainty that climate change is not a problem society needs to worry about.
  • “Even if there were no political implications, it just seems deeply unprofessional and irresponsible to look at this and say, ‘We’re sure it’s not a problem,’ ” said Kerry A. Emanuel, another M.I.T. scientist. “It’s a special kind of risk, because it’s a risk to the collective civilization.”
Emily Horwitz

Will Science Someday Rule Out the Possibility of God? | Physics vs. God | LiveScience - 0 views

  • Over the past few centuries, science can be said to have gradually chipped away at the traditional grounds for believing in God. Much of what once seemed mysterious — the existence of humanity, the life-bearing perfection of Earth, the workings of the universe — can now be explained by biology, astronomy, physics and other domains of science. 
  • good reason to think science will ultimately arrive at a complete understanding of the universe that leaves no grounds for God whatsoever.
  • Psychology research suggests that belief in the supernatural acts as societal glue and motivates people to follow the rules; further, belief in the afterlife helps people grieve and staves off fears of death.
Javier E

Armies of Expensive Lawyers, Replaced by Cheaper Software - NYTimes.com - 0 views

  • thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost.
  • Computers are getting better at mimicking human reasoning — as viewers of “Jeopardy!” found out when they saw Watson beat its human opponents — and they are claiming work once done by people in high-paying professions. The number of computer chip designers, for example, has largely stagnated because powerful software programs replace the work once done by legions of logic designers and draftsmen.
  • Software is also making its way into tasks that were the exclusive province of human decision makers, like loan and mortgage officers and tax accountants.
  • ...4 more annotations...
  • “We’re at the beginning of a 10-year period where we’re going to transition from computers that can’t understand language to a point where computers can understand quite a bit about language.”
  • E-discovery technologies generally fall into two broad categories that can be described as “linguistic” and “sociological.”
  • The most basic linguistic approach uses specific search words to find and sort relevant documents. More advanced programs filter documents through a large web of word and phrase definitions. [A minimal code sketch of this keyword approach appears after these annotations.]
  • The sociological approach adds an inferential layer of analysis, mimicking the deductive powers of a human Sherlock Holmes
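
To make the “basic linguistic approach” annotation above concrete, here is a small hedged sketch: hypothetical documents and search terms (none of them from any real e-discovery product) are scored by how many query terms each document contains, and the likeliest candidates are sorted to the top for human review.

    import re

    # Hypothetical documents and search terms; a real e-discovery run covers millions of files.
    documents = {
        "memo_001.txt": "Please shred the Q3 loan files before the audit.",
        "memo_002.txt": "Lunch order for Friday: sandwiches and coffee.",
        "memo_003.txt": "The mortgage approval was backdated per the VP's request.",
    }

    search_terms = ["shred", "backdated", "audit", "loan", "mortgage"]

    def score(text):
        """Count how many search terms appear in the document."""
        words = set(re.findall(r"[a-z']+", text.lower()))
        return sum(term in words for term in search_terms)

    # Sort so the documents matching the most terms are reviewed first.
    for name, text in sorted(documents.items(), key=lambda item: score(item[1]), reverse=True):
        print(score(text), name)

The “sociological” layer the article mentions would add an inferential step on top of this, for example who wrote to whom and when, which this sketch does not attempt.
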
julia rhodes

Brainlike Computers, Learning From Experience - NYTimes.com - 0 views

  • Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
  • Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.
  • The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information.
  • ...6 more annotations...
  • In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control.
  • “We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr
  • The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.
  • They are not “programmed.” Rather, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions. [A minimal code sketch of this weighting-and-spiking idea appears after these annotations.]
  • Traditional computers are also remarkably energy inefficient, especially when compared to actual brains, which the new neurons are built to mimic. I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain.
  • Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.
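
As a rough illustration of the annotation above about weighted connections that “spike” and rewire themselves, here is a toy sketch, assuming a single neuron with two input lines: weighted inputs accumulate on a leaky membrane value, a spike fires when a threshold is crossed, and connections that were active during the spike are strengthened (a crude Hebbian rule). It is not IBM’s or Stanford’s actual neuromorphic design, just the general idea of learning by adjusting weights instead of writing a program.

    # Toy spiking neuron with Hebbian-style weight updates (illustration only).
    weights = [0.3, 0.3]      # strengths of two input connections
    threshold = 1.0           # membrane potential needed to spike
    learning_rate = 0.05
    membrane = 0.0

    # Each time step lists which input lines are active (1) or silent (0).
    input_stream = [
        (1, 0), (1, 0), (1, 1), (0, 1), (1, 0),
        (1, 1), (1, 0), (0, 0), (1, 0), (1, 1),
    ]

    for step, inputs in enumerate(input_stream):
        # Accumulate weighted input on the membrane, with a small leak.
        membrane = 0.9 * membrane + sum(w * x for w, x in zip(weights, inputs))

        if membrane >= threshold:
            # Spike: reset the membrane and strengthen the connections that drove it,
            # so correlations in the data, not a program, reshape the network.
            membrane = 0.0
            weights = [w + learning_rate * x for w, x in zip(weights, inputs)]
            print(f"step {step}: spike, weights now {[round(w, 2) for w in weights]}")
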
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve. (A minimal state-machine sketch of this idea appears after this list.)
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • For this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • He presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says. (A small sketch of this exhaustive-checking idea, in ordinary Python rather than TLA+, appears after this list.)
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
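The elevator rules described above lend themselves to a small illustration. The following Python snippet is only a sketch of the model-based idea, not the industrial tooling Bantegnie’s company sells: the rules live in a declarative transition table (the “model”), ordinary code is derived from that table rather than written by hand, and a property such as “the lift only moves with the door closed” can be checked mechanically against it. All state, event, and function names here are invented for the example.

```python
# A minimal sketch of model-based design (illustrative only, not any vendor's tool):
# the rules are data (a transition table), not hand-written control flow.
from itertools import product

STATES = {"door_open", "door_closed", "moving"}
TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "start"): "moving",
    ("moving", "stop"): "door_closed",
}

def step(state: str, event: str) -> str:
    """Apply one event to the current state; events with no rule leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

def violates_safety(trace) -> bool:
    """The article's rule: the elevator must never start moving while the door is open."""
    return any(prev == "door_open" and cur == "moving"
               for prev, cur in zip(trace, trace[1:]))

# Exhaustively try every short event sequence and confirm the transition table
# itself enforces the rule -- "correct by construction" in miniature.
events = ["open_door", "close_door", "start", "stop"]
for seq in product(events, repeat=4):
    trace = ["door_open"]
    for e in seq:
        trace.append(step(trace[-1], e))
    assert not violates_safety(trace), f"unsafe trace: {trace}"

print("All", len(events) ** 4, "event sequences respect the door-closed rule.")
```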
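In the same spirit, the exhaustive checking that drew Newcombe to TLA+ can be gestured at in plain Python. The sketch below is not TLA+ or its model checker; it simply enumerates every reachable state of a toy design (two workers performing a non-atomic read-then-write increment of a shared counter) and tests an invariant in each state, which is how a “rare” interleaving bug surfaces without ever being run in production. The state encoding and names are assumptions made for illustration.

```python
# A toy illustration of exhaustive state-space checking (the idea behind TLA+'s
# checker, written here as ordinary Python rather than TLA+ itself).
from collections import deque

# A state: (shared counter, worker 1's (pc, tmp), worker 2's (pc, tmp)).
INITIAL = (0, ("read", None), ("read", None))

def next_states(state):
    counter, *workers = state
    for i, (pc, tmp) in enumerate(workers):
        if pc == "read":                 # read the shared counter into tmp
            new = list(workers)
            new[i] = ("write", counter)
            yield (counter, *new)
        elif pc == "write":              # write back tmp + 1, then finish
            new = list(workers)
            new[i] = ("done", None)
            yield (tmp + 1, *new)

def invariant(state):
    counter, w1, w2 = state
    # The designer's intent: once both workers finish, the counter must equal 2.
    return not (w1[0] == "done" and w2[0] == "done") or counter == 2

# Breadth-first exploration of every reachable state.
seen, queue, violations = {INITIAL}, deque([INITIAL]), []
while queue:
    s = queue.popleft()
    if not invariant(s):
        violations.append(s)
    for t in next_states(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)

print(f"Explored {len(seen)} states; invariant violations: {violations}")
# The checker finds the lost-update interleaving: (1, ('done', None), ('done', None)).
```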
anonymous

This Is Your Brain on Junk Food: In 'Hooked,' Michael Moss Explores Addiction - The New... - 0 views

  • This Is Your Brain on Junk Food
  • Yet after writing the book, Mr. Moss was not convinced that processed foods could be addictive.
  • In a legal proceeding two decades ago, Michael Szymanczyk, the chief executive of the tobacco giant Philip Morris, was asked to define addiction.
  • ...30 more annotations...
  • “My definition of addiction is a repetitive behavior that some people find difficult to quit,”
  • Mr. Szymanczyk was speaking in the context of smoking. But a fascinating new book by Michael Moss, an investigative journalist and best-selling author, argues that the tobacco executive’s definition of addiction could apply to our relationship with another group of products that Philip Morris sold and manufactured for decades: highly processed foods.
  • In his new book, “Hooked,” Mr. Moss explores the science behind addiction and builds a case that food companies have painstakingly engineered processed foods to hijack the reward circuitry in our brains, causing us to overeat and helping to fuel a global epidemic of obesity and chronic disease.
  • Mr. Moss suggests that processed foods like cheeseburgers, potato chips and ice cream are not only addictive, but that they can be even more addictive than alcohol, tobacco and drugs.
  • In another cynical move, Mr. Moss writes, food companies beginning in the late 1970s started buying a slew of popular diet companies, allowing them to profit off our attempts to lose the weight we gained from eating their products.
  • Heinz, the processed food giant, bought Weight Watchers in 1978 for $72 million. Unilever, which sells Klondike bars and Ben & Jerry’s ice cream, paid $2.3 billion for SlimFast in 2000. Nestle, which makes chocolate bars and Hot Pockets, purchased Jenny Craig in 2006 for $600 million. And in 2010 the private equity firm that owns Cinnabon and Carvel ice cream purchased Atkins Nutritionals, the company that sells low-carb bars, shakes and snacks. Most of these diet brands were later sold to other parent companies.
  • “The food industry blocked us in the courts from filing lawsuits claiming addiction; they started controlling the science in problematic ways, and they took control of the diet industry,”
  • “I’ve been crawling through the underbelly of the processed food industry for 10 years and I continue to be stunned by the depths of the deviousness of their strategy to not just tap into our basic instincts, but to exploit our attempts to gain control of our habits.”
  • The book explained how companies formulate junk foods to achieve a “bliss point” that makes them irresistible and market those products using tactics borrowed from the tobacco industry.
  • In the 1980s, Philip Morris acquired Kraft and General Foods, making it the largest manufacturer of processed foods in the country, with products like Kool-Aid, Cocoa Pebbles, Capri Sun and Oreo cookies.
  • “I had tried to avoid the word addiction when I was writing ‘Salt Sugar Fat,’” he said. “I thought it was totally ludicrous. How anyone could compare Twinkies to crack cocaine was beyond me.”
  • But as he dug into the science that shows how processed foods affect the brain, he was swayed
  • In “Hooked,” Michael Moss explores how no addictive drug can fire up the reward circuitry in our brains as rapidly as our favorite foods.
  • The faster it hits our reward circuitry, the stronger its impact.
  • That is why smoking crack cocaine is more powerful than ingesting cocaine through the nose, and smoking cigarettes produces greater feelings of reward than wearing a nicotine patch: smoking reduces the time it takes for drugs to hit the brain.
  • But no addictive drug can fire up the reward circuitry in our brains as rapidly as our favorite foods, Mr. Moss writes. “The smoke from cigarettes takes 10 seconds to stir the brain, but a touch of sugar on the tongue will do so in a little more than a half second, or six hundred milliseconds, to be precise.”
  • This puts the term “fast food” in a new light. “Measured in milliseconds, and the power to addict, nothing is faster than processed food in rousing the brain,” he added.
  • Mr. Moss explains that even people in the tobacco industry took note of the powerful lure of processed foods.
  • One crucial element that influences the addictive nature of a substance and whether or not we consume it compulsively is how quickly it excites the brain.
  • As litigation against tobacco companies gained ground in the 1990s, one of the industry’s defenses was that cigarettes were no more addictive than Twinkies.
  • It may have been on to something.
  • “Smoking was given an 8.5, nearly on par with heroin,” Mr. Moss writes. “But overeating, at 7.3, was not far behind, scoring higher than beer, tranquilizers and sleeping pills.”
  • But processed foods are not tobacco, and many people, including some experts, dismiss the notion that they are addictive. Mr. Moss suggests that this reluctance is in part a result of misconceptions about what addiction entails.
  • For one, a substance does not have to hook everyone for it to be addictive.
  • Studies show that most people who drink or use cocaine do not become dependent
  • Nor does everyone who smokes or uses painkillers become addicted.
  • Mr. Moss said that people who struggle with processed food can try simple strategies to conquer routine cravings, like going for a walk, calling a friend or snacking on healthy alternatives like a handful of nuts. But for some people, more extreme measures may be necessary.
  • “It depends where you are on the spectrum,” he said. “I know people who can’t touch a grain of sugar without losing control. They would drive to the supermarket and by the time they got home their car would be littered with empty wrappers. For them, complete abstention is the solution.”
    Really interesting!! How food affects your brain:
anonymous

Weight Gain and Stress Eating Are Downside of Pandemic Life - The New York Times - 0 views

  • Yes, Many of Us Are Stress-Eating and Gaining Weight in the Pandemic
  • A global study confirms that during the pandemic, many of us ate more junk food, exercised less, were more anxious and got less sleep.
  • Not long ago, Stephen Loy had a lot of healthy habits. He went to exercise classes three or four times a week, cooked nutritious dinners for his family, and snacked on healthy foods like hummus and bell peppers.
  • ...20 more annotations...
  • But that all changed when the pandemic struck. During the lockdowns, when he was stuck at home, his anxiety levels went up. He stopped exercising and started stress eating. Gone were the hummus and vegetables; instead, he snacked on cookies, sweets and Lay’s potato chips. He ate more fried foods and ordered takeout from local restaurants.
  • “We were feeding the soul more than feeding the stomach,”
  • “We were making sure to eat things that made us feel better — not just nutritional items.”
  • Now a global survey conducted earlier this year confirms what Mr. Loy and many others experienced firsthand: The coronavirus pandemic and resulting lockdowns led to dramatic changes in health behaviors, prompting people around the world to cut back on physical activity and eat more junk foods.
  • While they tended to experience improvements in some aspects of their diets, such as cooking at home more and eating out less, they were also the most likely to report struggling with their weight and mental health.
  • With months to go before a vaccine becomes widely available and we can safely resume our pre-pandemic routines, now might be a good time to assess the healthy habits we may have let slip and to find new ways to be proactive about our physical and mental health.
  • The researchers found that the decline in healthy behaviors during the pandemic and widespread lockdowns was fairly common regardless of geography.
  • “Individuals with obesity were impacted the most — and that’s what we were afraid of,”
  • “They not only started off with higher anxiety levels before the pandemic, but they also had the largest increase in anxiety levels throughout the pandemic.”
  • The pandemic disrupted everyday life, isolated people from friends and family, and spawned an economic crisis, with tens of millions of people losing jobs or finding their incomes sharply reduced.
  • Despite snacking on more junk foods, many people showed an increase in their “healthy eating scores,” a measure of their overall diet quality, which includes things like eating more fruits and fewer fried foods.
  • The researchers said that the overall improvements in diet appeared to be driven by the fact that the lockdowns prompted people to cook, bake and prepare more food at home.
  • Other recent surveys have also shown a sharp rise in home cooking and baking this year, with many people saying they are discovering new ingredients and looking for ways to make healthier foods.
  • But social isolation can take a toll on mental wellness, and that was evident in the findings.
  • About 20 percent said that their symptoms, such as experiencing dread and not being able to control or stop their worrying, were severe enough to interfere with their daily activities.
  • Dr. Flanagan said it was perhaps not surprising that people tended to engage in less healthful habits during the pandemic, as so many aspects of health are intertwined.
  • Stress can lead to poor sleep, which can cause people to exercise less, consume more junk foods, and then gain weight, and so on.
  • But she said she hoped that the findings might inspire people to take steps to be more proactive about their health, such as seeking out mental health specialists, prioritizing sleep and finding ways to exercise at home and cook more, in the event of future lockdowns.
  • “Being aware is really the No. 1 thing here.”
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on the evidence at hand.
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • Geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • It was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
pier-paolo

Where'd I Stash That Chocolate? It's Easy to Remember - The New York Times - 0 views

  • It is easier to remember where the chocolate is than where the cucumbers are, new research suggests.
  • They moved from table to table on which eight foods were placed: caramel cookies, apples, chocolate, tomatoes, melons, peanuts, potato chips and cucumbers.
  • They were instructed to either smell or taste the foods and to rate them on likability and familiarity.
  • ...5 more annotations...
  • the real purpose of the experiment: to determine how well they could remember where the foods were located in the room.
  • Of the 512 people in the experiment, half did the test by tasting, half by smelling the food. After leaving the room, they smelled or tasted the foods again in random order and were asked to locate them on a map of the room they had just traversed.
  • they were 27 percent more likely to correctly place the high-calorie foods than the low-calorie foods they tasted, and 28 percent more likely to correctly locate the high-calorie foods they smelled.
  • “Our results seem to suggest that human minds are adapted to finding energy rich food in an efficient way,”
  • “This may have implications for how we navigate our modern food environment.”
tongoscar

US-China tech war to be 'defining issue of this century', despite signing of phase one ... - 0 views

  • Forget the phase one deal, a bitter superpower tech war will overshadow any minor progression in US-China relations emanating from this week’s trade agreement. That is the message contained in a new report to be released on Monday, the author of which says that tariffs are “a subset in a much larger, overarching, systemic rivalry between two superpowers, which is the defining issue of this century”.
  • The US has already used export controls to ban Chinese firms from accessing vital US technology and as smart devices, 5G and the internet of things become more pervasive, the definition of “dual use goods”, commercial products that can be used for military purposes, will widen.
  • Even while pursuing a trade accord on one hand, the US has been actively trying to reduce technological integration with China on the other. As US President Donald Trump and China’s Vice-Premier Liu He were signing the phase one deal in the White House, the US was lobbying Britain to ban Huawei from its “critical national infrastructure” and considering plans to invest at least US$1.25 billion “in Western-based alternatives to Chinese equipment providers Huawei and ZTE”.
  • ...3 more annotations...
  • China is aiming to increase its reliance on domestic production for key components, including chips and controlling systems, to 75 per cent by 2025, the former minister said.
  • A ban on Huawei buying US tech combined with the overall market uncertainty from the tech war led to Broadcom revising down its 2019 revenue estimate by US$2 billion. However, given that the most advanced Chinese producers of semiconductor technology are two to three generations behind their US rivals, China would be the major short-term loser should decoupling continue, Capri said.
  • “While there are changes to some Chinese technology policies, phase two is unlikely to make much progress in addressing the two countries’ rivalry,” said Chris Rogers, Research Analyst at Panjiva, S&P Global Market Intelligence.
tongoscar

China, South Korea, Japan to hold trilateral talks over trade, regional disputes - Mark... - 0 views

  • BEIJING — The leaders of China, Japan and South Korea are holding a trilateral summit in China this week amid feuds over trade, military maneuverings and historical animosities.
  • Economic cooperation and the North Korean nuclear threat are the main issues binding the Northeast Asian troika.
  • Tensions rooted in South Korean resentment over Japan’s 20th century colonial occupation spiked this year to a level unseen in decades as they traded blows over wartime history, trade and military-to-military cooperation.
  • ...3 more annotations...
  • South Korea’s relations with China, its biggest trading partner, have been strained over Seoul’s decision to host a U.S. anti-missile system that Beijing perceives as a security threat.
  • Tokyo, in turn, agreed to resume discussions with Seoul on their dispute over Japan’s tightened controls on exports of key chemicals used by major South Korean companies to make computer chips and smartphone displays.
  • China’s relations with Japan had been more acrimonious than with any other foreign state, but have in recent years undergone a remarkable transformation, partly as a result of the U.S.-China tariff war.