
How Humans Ended Up With Freakishly Huge Brains | WIRED

  • paleontologists documented one of the most dramatic transitions in human evolution. We might call it the Brain Boom. Humans, chimps and bonobos split from their last common ancestor between 6 and 8 million years ago.
  • Starting around 3 million years ago, however, the hominin brain began a massive expansion. By the time our species, Homo sapiens, emerged about 200,000 years ago, the human brain had swelled from about 350 grams to more than 1,300 grams.
  • In that 3-million-year sprint, the human brain almost quadrupled the size its predecessors had attained over the previous 60 million years of primate evolution.
  • There are plenty of theories, of course, especially regarding why: increasingly complex social networks, a culture built around tool use and collaboration, the challenge of adapting to a mercurial and often harsh climate
  • Although these possibilities are fascinating, they are extremely difficult to test.
  • Although it makes up only 2 percent of body weight, the human brain consumes a whopping 20 percent of the body’s total energy at rest. In contrast, the chimpanzee brain needs only half that. (A quick ratio check of these figures follows these notes.)
  • contrary to long-standing assumptions, larger mammalian brains do not always have more neurons, and the ones they do have are not always distributed in the same way.
  • The human brain has 86 billion neurons in all: 69 billion in the cerebellum, a dense lump at the back of the brain that helps orchestrate basic bodily functions and movement; 16 billion in the cerebral cortex, the brain’s thick corona and the seat of our most sophisticated mental talents, such as self-awareness, language, problem solving and abstract thought; and 1 billion in the brain stem and its extensions into the core of the brain
  • In contrast, the elephant brain, which is three times the size of our own, has 251 billion neurons in its cerebellum, which helps manage a giant, versatile trunk, and only 5.6 billion in its cortex
  • primates evolved a way to pack far more neurons into the cerebral cortex than other mammals did
  • The great apes are tiny compared to elephants and whales, yet their cortices are far denser: Orangutans and gorillas have 9 billion cortical neurons, and chimps have 6 billion. Of all the great apes, we have the largest brains, so we come out on top with our 16 billion neurons in the cortex.
  • “What kinds of mutations occurred, and what did they do? We’re starting to get answers and a deeper appreciation for just how complicated this process was.”
  • there was a strong evolutionary pressure to modify the human regulatory regions in a way that sapped energy from muscle and channeled it to the brain.
  • Accounting for body size and weight, the chimps and macaques were twice as strong as the humans. It’s not entirely clear why, but it is possible that our primate cousins get more power out of their muscles than we get out of ours because they feed their muscles more energy. “Compared to other primates, we lost muscle power in favor of sparing energy for our brains,” Bozek said. “It doesn’t mean that our muscles are inherently weaker. We might just have a different metabolism.”
  • a pioneering experiment. Not only were they going to identify relevant genetic mutations from our brain’s evolutionary past, they were also going to weave those mutations into the genomes of lab mice and observe the consequences.
  • Silver and Wray introduced the chimpanzee copy of HARE5 into one group of mice and the human edition into a separate group. They then observed how the embryonic mice brains grew.
  • After nine days of development, mice embryos begin to form a cortex, the outer wrinkly layer of the brain associated with the most sophisticated mental talents. On day 10, the human version of HARE5 was much more active in the budding mice brains than the chimp copy, ultimately producing a brain that was 12 percent larger
  • “It wasn’t just a couple mutations and—bam!—you get a bigger brain. As we learn more about the changes between human and chimp brains, we realize there will be lots and lots of genes involved, each contributing a piece to that. The door is now open to get in there and really start understanding. The brain is modified in so many subtle and nonobvious ways.”
  • As recent research on whale and elephant brains makes clear, size is not everything, but it certainly counts for something. The reason we have so many more cortical neurons than our great-ape cousins is not that we have denser brains, but rather that we evolved ways to support brains that are large enough to accommodate all those extra cells.
  • There’s a danger, though, in becoming too enamored with our own big heads. Yes, a large brain packed with neurons is essential to what we consider high intelligence. But it’s not sufficient
  • No matter how large the human brain grew, or how much energy we lavished upon it, it would have been useless without the right body. Three particularly crucial adaptations worked in tandem with our burgeoning brain to dramatically increase our overall intelligence: bipedalism, which freed up our hands for tool making, fire building and hunting; manual dexterity surpassing that of any other animal; and a vocal tract that allowed us to speak and sing.
  • Human intelligence, then, cannot be traced to a single organ, no matter how large; it emerged from a serendipitous confluence of adaptations throughout the body. Despite our ongoing obsession with the size of our noggins, the fact is that our intelligence has always been so much bigger than our brain.
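
A quick sanity check of the energy figures quoted in the notes above (the 2 percent and 20 percent numbers come from the article; reading the chimpanzee’s “half that” as roughly half the energy share is an assumption):

```latex
\frac{\text{share of resting energy}}{\text{share of body mass}} \;\approx\; \frac{20\%}{2\%} \;=\; 10
```

Gram for gram, human brain tissue therefore runs at roughly ten times the body-average energy budget; on the same reading, the chimpanzee figure works out to roughly five.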

What Does Quantum Physics Actually Tell Us About the World? - The New York Times

  • The physics of atoms and their ever-smaller constituents and cousins is, as Adam Becker reminds us more than once in his new book, “What Is Real?,” “the most successful theory in all of science.” Its predictions are stunningly accurate, and its power to grasp the unseen ultramicroscopic world has brought us modern marvels.
  • But there is a problem: Quantum theory is, in a profound way, weird. It defies our common-sense intuition about what things are and what they can do.
  • Indeed, Heisenberg said that quantum particles “are not as real; they form a world of potentialities or possibilities rather than one of things or facts.”
  • Before he died, Richard Feynman, who understood quantum theory as well as anyone, said, “I still get nervous with it...I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” The problem is not with using the theory — making calculations, applying it to engineering tasks — but in understanding what it means. What does it tell us about the world?
  • From one point of view, quantum physics is just a set of formalisms, a useful tool kit. Want to make better lasers or transistors or television sets? The Schrödinger equation is your friend. The trouble starts only when you step back and ask whether the entities implied by the equation can really exist. Then you encounter problems that can be described in several familiar ways:
  • Wave-particle duality. Everything there is — all matter and energy, all known forces — behaves sometimes like waves, smooth and continuous, and sometimes like particles, rat-a-tat-tat. Electricity flows through wires, like a fluid, or flies through a vacuum as a volley of individual electrons. Can it be both things at once?
  • The uncertainty principle. Werner Heisenberg famously discovered that when you measure the position (let’s say) of an electron as precisely as you can, you find yourself more and more in the dark about its momentum. And vice versa. You can pin down one or the other but not both. (The standard relation is sketched after these notes.)
  • The measurement problem. Most of quantum mechanics deals with probabilities rather than certainties. A particle has a probability of appearing in a certain place. An unstable atom has a probability of decaying at a certain instant. But when a physicist goes into the laboratory and performs an experiment, there is a definite outcome. The act of measurement — observation, by someone or something — becomes an inextricable part of the theory
  • The strange implication is that the reality of the quantum world remains amorphous or indefinite until scientists start measuring
  • Other interpretations rely on “hidden variables” to account for quantities presumed to exist behind the curtain.
  • This is disturbing to philosophers as well as physicists. It led Einstein to say in 1952, “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.”
  • “Figuring out what quantum physics is saying about the world has been hard,” Becker says, and this understatement motivates his book, a thorough, illuminating exploration of the most consequential controversy raging in modern science.
  • In a way, the Copenhagen interpretation is an anti-interpretation. “It is wrong to think that the task of physics is to find out how nature is,” Bohr said. “Physics concerns what we can say about nature.”
  • Nothing is definite in Bohr’s quantum world until someone observes it. Physics can help us order experience but should not be expected to provide a complete picture of reality. The popular four-word summary of the Copenhagen interpretation is: “Shut up and calculate!”
  • Becker sides with the worriers. He leads us through an impressive account of the rise of competing interpretations, grounding them in the human stories
  • He makes a convincing case that it’s wrong to imagine the Copenhagen interpretation as a single official or even coherent statement. It is, he suggests, a “strange assemblage of claims.”
  • An American physicist, David Bohm, devised a radical alternative at midcentury, visualizing “pilot waves” that guide every particle, an attempt to eliminate the wave-particle duality.
  • Competing approaches to quantum foundations are called “interpretations,” and nowadays there are many. The first and still possibly foremost of these is the so-called Copenhagen interpretation.
  • Perhaps the most popular lately — certainly the most talked about — is the “many-worlds interpretation”: Every quantum event is a fork in the road, and one way to escape the difficulties is to imagine, mathematically speaking, that each fork creates a new universe
  • if you think the many-worlds idea is easily dismissed, plenty of physicists will beg to differ. They will tell you that it could explain, for example, why quantum computers (which admittedly don’t yet quite exist) could be so powerful: They would delegate the work to their alter egos in other universes.
  • When scientists search for meaning in quantum physics, they may be straying into a no-man’s-land between philosophy and religion. But they can’t help themselves. They’re only human.
  • “If you were to watch me by day, you would see me sitting at my desk solving Schrödinger’s equation...exactly like my colleagues,” says Sir Anthony Leggett, a Nobel Prize winner and pioneer in superfluidity. “But occasionally at night, when the full moon is bright, I do what in the physics community is the intellectual equivalent of turning into a werewolf: I question whether quantum mechanics is the complete and ultimate truth about the physical universe.”
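
For reference, the position–momentum trade-off in the uncertainty-principle note above is conventionally written as follows (standard textbook form; the review itself quotes no equations):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Squeezing the spread in position Δx toward zero forces the spread in momentum Δp to grow without bound, which is exactly the “pin down one or the other but not both” trade-off Heisenberg identified.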

Desperately Seeking Hope and Help for Your Nerves? Try Reading ‘Hope and Help for Your Nerves’ - The New York Times

  • Five years ago, at my therapist’s urging, I kept track of every panic attack that washed over me: my record for a single day was 132. Soon I was diagnosed with agoraphobia and panic disorder, which is essentially a preoccupation with recurring panic attacks
  • it was a grey, mass-market paperback called “Hope and Help for Your Nerves,” with a front-cover blurb from Ann Landers, that became my talisman
  • Face. Accept. Float. Let time pass. That’s the recipe that Dr. Claire Weekes, the Australian clinician and relatively underrecognized pioneer of modern anxiety treatment, established in a series of books
  • This advice, when you encounter it in the midst of a cycle of breath-shortening attacks, may sound cruel.
  • First, Weekes says, you must decide to truly experience the panic, to let it burst out into your fingers, your gut, your skull.
  • Then, sink into it like a warm pool.
  • Finally, rather than mentally kicking your legs to keep your nose out of the water, flip onto your back. “Stop holding tensely onto yourself,” she writes, “trying to control your fear, trying ‘to do something about it’ while subjecting yourself to constant self-analysis.” Just float through it, observing that it’s happening and recognizing that it will end.
  • Weekes promises that “every unwelcome sensation can be banished, and you can regain peace of mind and body.”
  • her advice, hard-earned through her own lifelong anxiety, which would wake her out of sleep to torment her, is so simple that “Hope and Help” essentially turns into a soothing repetition of two points.
  • First, that what we’re mostly afraid of is fear. And second, that “by your own anxiety you are producing the very feelings you dislike so much.”
  • you can best fight your panic by refusing to fight the panic.
  • And in short: It works.
  • a cultish devotion to her simple and direct advice means that today the book is prized by the readers, including me, whom it has guided out of emotional suffocation. A scroll through its Amazon reviews turns up one gushing convert after another.
  • Weekes’s work has the particular effect of pushing me to see that something lies beyond the moments of slip-sliding terror
  • this one has potent advice for the present moment, when many of us feel we must push back our disquiet more tenaciously than ever. If you’re afraid, then be afraid. You might float through to the other side.

What all the critics of "Unorthodox" are forgetting - The Forward

  • The series has garnered glowing reviews
  • It also has its detractors
  • both those who have celebrated the series and those who lambasted it are missing something. Something important.
  • intelligent assessors of artistic offerings never forget that truth and beauty are not necessarily one and the same. At times they can even diverge profoundly. There is a reason, after all, why the words “artifice” and “artificial” are based on the word “art.”
  • something obvious but all the same easily overlooked. Namely, that art and fact are entirely unrelated
  • Not only is outright fiction not fact, neither are depictions of actual lives and artistic documentaries, whether forged in words, celluloid or electrons
  • A brilliant artistic endeavor that has been a mainstay of college film studies courses is a good example. The 1935 film has been described as powerful, even overwhelming, and is cited as a pioneering archetype of the use of striking visuals and compelling narrative. It won a gold medal at the 1935 Venice Biennale and the Grand Prix at the 1937 World Exhibition in Paris. The New York Times’ J. Hoberman not long ago called it “supremely artful.”
  • And it was. As well as supremely evil, as Mr. Hoberman also explains. The film was Leni Riefenstahl’s “Triumph of the Will”

They're Watching You at Work - Don Peck - The Atlantic

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) (A toy sketch of such a color-coded screen follows these notes.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
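
The Xerox screening described above reduces many inputs to a single red/yellow/green rating. The article does not disclose the actual model, so the sketch below is only a toy illustration of that general shape: a weighted score over a few made-up features, mapped to color bands at arbitrary thresholds. Nothing here is the vendors’ real algorithm.

```python
# Toy illustration of a red/yellow/green hiring screen of the kind the
# Xerox/Evolv notes describe. Feature names, weights, and thresholds are
# invented for the example; they are NOT the vendors' actual model.

FEATURE_WEIGHTS = {
    "personality_fit": 0.4,    # e.g., scaled 0-1 from a personality inventory
    "cognitive_score": 0.4,    # e.g., scaled 0-1 from a cognitive-skills test
    "scenario_judgment": 0.2,  # e.g., scaled 0-1 from situational questions
}

def screen_candidate(features: dict) -> str:
    """Combine normalized feature scores into a single color-coded rating."""
    score = sum(weight * features.get(name, 0.0)
                for name, weight in FEATURE_WEIGHTS.items())
    if score >= 0.7:
        return "green"   # hire away
    if score >= 0.4:
        return "yellow"  # middling
    return "red"         # poor candidate

print(screen_candidate({"personality_fit": 0.8,
                        "cognitive_score": 0.75,
                        "scenario_judgment": 0.6}))   # -> green (score 0.74)
```

In practice, as the notes above indicate, the weights would be fitted to outcome data such as attrition and productivity rather than set by hand, and some signals are deliberately excluded for legal and ethical reasons.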

Scientists Discover Some of the Oldest Signs of Life on Earth - The Atlantic

  • The Earth was formed around 4.54 billion years ago. If you condense that huge swath of prehistory into a single calendar year, then the 3.95-billion-year-old graphite that the Tokyo team analyzed was created in the third week of February. By contrast, the earliest fossils ever found are 3.7 billion years old; they were created in the second week of March.
  • Those fossils, from the Isua Belt in southwest Greenland, are stromatolites—layered structures created by communities of bacteria. And as I reported last year, their presence suggests that life already existed in a sophisticated form at the 3.7-billion-year mark, and so must have arisen much earlier. And indeed, scientists have found traces of biologically produced graphite throughout the region, in other Isua Belt rocks that are 3.8 billion years old, and in hydrothermal vents off the coast of Quebec that are at least a similar age, and possibly even older.
  • “As far back as the rock record extends—that is, as far back as we can look for direct evidence of early life, we are finding it. Earth has been a biotic, life-sustaining planet since close to its beginning.”
  • living organisms concentrate carbon-12 in their cells—and when they die, that signature persists. When scientists find graphite that’s especially enriched in carbon-12, relative to carbon-13, they can deduce that living things were around when that graphite was first formed. And that’s exactly what the Tokyo team found in the Saglek Block—grains of graphite, enriched in carbon-12, encased within 3.95-billion-year-old rock. (The standard isotope measure is sketched after these notes.)
  • the team calculated the graphite was created at temperatures between 536 and 622 Celsius—a range that’s consistent with the temperatures at which the surrounding metamorphic rocks were transformed. This suggests that the graphite was already there when the rocks were heated and warped, and didn’t sneak in later. It was truly OG—original graphite.
  • Still, all of this evidence suggests Earth was home to life during its hellish infancy, and that such life abounded in a variety of habitats. Those pioneering organisms—bacteria, probably—haven’t left any fossils behind. But Sano and Komiya hope to find some clues about them by analyzing the Saglek Block rocks. The levels of nitrogen, iron, and sulfur in the rocks could reveal which energy sources those organisms exploited, and which environments they inhabited. They could tell us how life first lived.
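
The “enriched in carbon-12 relative to carbon-13” signal mentioned in the notes above is conventionally reported as δ¹³C, the deviation of a sample’s ¹³C/¹²C ratio from a reference standard (this is standard isotope-geochemistry notation; the article itself quotes no values):

```latex
\delta^{13}\mathrm{C} \;=\; \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000
```

The result is expressed in parts per thousand (‰). Because enzymes preferentially fix the lighter ¹²C, biologically produced carbon carries a more negative δ¹³C than the standard, and that is the signature the Tokyo team looked for in the Saglek Block graphite.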

The Coming Software Apocalypse - The Atlantic

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated at the top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (A minimal sketch of this idea appears after these notes.)
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months were spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
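The two threads above, model-based design's "all you have is the rules" and TLA+'s exhaustive checking of a specification, come down to the same move: describe behavior as explicit states and transition rules, then let a machine check every case. The Python sketch below is purely illustrative; it is not Bantegnie's tooling and it is not TLA+, and the state names, transition table, and `check` function are all invented here. It models the elevator as states and allowed transitions, exhaustively explores every reachable state, and asserts the safety property that the car never moves with the door open.

```python
# A minimal, hypothetical sketch of "rules first, then exhaustive checking".
# Nothing here comes from a real tool; the model and helpers are invented.

# The elevator "model": each state is (door, motion), and the rules list
# which states you may move to next.
TRANSITIONS = {
    ("open",   "stopped"): [("closed", "stopped")],                      # close the door
    ("closed", "stopped"): [("open", "stopped"), ("closed", "moving")],  # open door, or start moving
    ("closed", "moving"):  [("closed", "stopped")],                      # stop
}

def invariant(state):
    """Safety property: the elevator never moves with the door open."""
    door, motion = state
    return not (door == "open" and motion == "moving")

def check(initial):
    """Exhaustively explore every reachable state (depth-first) and
    check the invariant in each, in the spirit of an explicit-state
    model checker."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        assert invariant(state), f"invariant violated in {state}"
        frontier.extend(TRANSITIONS.get(state, []))
    return seen

if __name__ == "__main__":
    reachable = check(("closed", "stopped"))
    print(f"Checked {len(reachable)} reachable states; the invariant holds in all of them.")
```

Real model checkers such as TLA+'s TLC perform essentially this kind of exhaustive state exploration, only over far richer specifications; the point of the toy version is where the thinking happens, in the rules rather than in line-by-line code.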
7More

Anosmia, the loss of smell caused by COVID-19, doesn't always go away quickly - but sme... - 0 views

  • What’s unique about COVID-19 is that it actually is not nasal congestion or that nasal inflammatory response that is causing the smell loss. The virus actually crosses the blood-brain barrier and gets into the nervous system.
  • Some people recover their ability to smell within a few days or weeks, but for some people it’s been going on for much longer.
    • cvanderloo
       
      anosmia
  • Food doesn’t taste good anymore because how you perceive taste is really a combination of smell, taste and even the sense of touch. Some people are reporting weight loss due to loss of appetite, and they’re just not able to take pleasure in the things that they’ve previously found pleasurable.
  • ...3 more annotations...
  • There’s research that suggests that our sense of smell can influence our attraction to certain people unconsciously.
  • There are also people and organizations doing smell training. Smell training is essentially smelling the same odors over and over so that you can retrain your body’s ability to detect and identify that odor.
  • It wasn’t set up specifically for COVID-19 patients but has been a pioneer in smell training.
37More

The Disease Detective - The New York Times - 1 views

  • What’s startling is how many mystery infections still exist today.
  • More than a third of acute respiratory illnesses are idiopathic; the same is true for up to 40 percent of gastrointestinal disorders and more than half the cases of encephalitis (swelling of the brain).
  • Up to 20 percent of cancers and a substantial portion of autoimmune diseases, including multiple sclerosis and rheumatoid arthritis, are thought to have viral triggers, but a vast majority of those have yet to be identified.
  • ...34 more annotations...
  • Globally, the numbers can be even worse, and the stakes often higher. “Say a person comes into the hospital in Sierra Leone with a fever and flulike symptoms,” DeRisi says. “After a few days, or a week, they die. What caused that illness? Most of the time, we never find out. Because if the cause isn’t something that we can culture and test for” — like hepatitis, or strep throat — “it basically just stays a mystery.”
  • It would be better, DeRisi says, to watch for rare cases of mystery illnesses in people, which often exist well before a pathogen gains traction and is able to spread.
  • Based on a retrospective analysis of blood samples, scientists now know that H.I.V. emerged nearly a dozen times over a century, starting in the 1920s, before it went global.
  • Zika was a relatively harmless illness before a single mutation, in 2013, gave the virus the ability to enter and damage brain cells.
  • “The beauty of this approach” — running blood samples from people hospitalized all over the world through his system, known as IDseq — “is that it works even for things that we’ve never seen before, or things that we might think we’ve seen but which are actually something new.”
  • In this scenario, an undiscovered or completely new virus won’t trigger a match but will instead be flagged. (Even in those cases, the mystery pathogen will usually belong to a known virus family: coronaviruses, for instance, or filoviruses that cause hemorrhagic fevers like Ebola and Marburg.)
  • And because different types of bacteria require specific conditions in order to grow, you also need some idea of what you’re looking for in order to find it.
  • The same is true of genomic sequencing, which relies on “primers” designed to match different combinations of nucleotides (the building blocks of DNA and RNA).
  • Even looking at a slide under a microscope requires staining, which makes organisms easier to see — but the stains used to identify bacteria and parasites, for instance, aren’t the same.
  • The practice that DeRisi helped pioneer to skirt this problem is known as metagenomic sequencing
  • Unlike ordinary genomic sequencing, which tries to spell out the purified DNA of a single, known organism, metagenomic sequencing can be applied to a messy sample of just about anything — blood, mud, seawater, snot — which will often contain dozens or hundreds of different organisms, all unknown, and each with its own DNA. In order to read all the fragmented genetic material, metagenomic sequencing uses sophisticated software to stitch the pieces together by matching overlapping segments. (A toy sketch of this overlap-stitching step appears after this annotation list.)
  • The assembled genomes are then compared against a vast database of all known genomic sequences — maintained by the government-run National Center for Biotechnology Information — making it possible for researchers to identify everything in the mix
  • Traditionally, the way that scientists have identified organisms in a sample is to culture them: Isolate a particular bacterium (or virus or parasite or fungus); grow it in a petri dish; and then examine the result under a microscope, or use genomic sequencing, to understand just what it is. But because less than 2 percent of bacteria — and even fewer viruses — can be grown in a lab, the process often reveals only a tiny fraction of what’s actually there. It’s a bit like planting 100 different kinds of seeds that you found in an old jar. One or two of those will germinate and produce a plant, but there’s no way to know what the rest might have grown into.
  • Such studies have revealed just how vast the microbial world is, and how little we know about it
  • “The selling point for researchers is: ‘Look, this technology lets you investigate what’s happening in your clinic, whether it’s kids with meningitis or something else,’” DeRisi said. “We’re not telling you what to do with it. But it’s also true that if we have enough people using this, spread out all around the world, then it does become a global network for detecting emerging pandemics
  • One study found more than 1,000 different kinds of viruses in a tiny amount of human stool; another found a million in a couple of pounds of marine sediment. And most were organisms that nobody had seen before.
  • After the Biohub opened in 2016, one of DeRisi’s goals was to turn metagenomics from a rarefied technology used by a handful of elite universities into something that researchers around the world could benefit from
  • metagenomics requires enormous amounts of computing power, putting it out of reach of all but the most well-funded research labs. The tool DeRisi created, IDseq, made it possible for researchers anywhere in the world to process samples through the use of a small, off-the-shelf sequencer, much like the one DeRisi had shown me in his lab, and then upload the results to the cloud for analysis.
  • he’s the first to make the process so accessible, even in countries where lab supplies and training are scarce. DeRisi and his team tested the chemicals used to prepare DNA for sequencing and determined that using as little as half the recommended amount often worked fine. They also 3-D print some of the labs’ tools and replacement parts, and offer ongoing training and tech support
  • The metagenomic analysis itself — normally the most expensive part of the process — is provided free.
  • But DeRisi’s main innovation has been in streamlining and simplifying the extraordinarily complex computational side of metagenomics
  • IDseq is also fast, capable of doing analyses in hours that would take other systems weeks.
  • “What IDseq really did was to marry wet-lab work — accumulating samples, processing them, running them through a sequencer — with the bioinformatic analysis,”
  • “Without that, what happens in a lot of places is that the researcher will be like, ‘OK, I collected the samples!’ But because they can’t analyze them, the samples end up in the freezer. The information just gets stuck there.”
  • Meningitis itself isn’t a disease, just a description meaning that the tissues around the brain and spinal cord have become inflamed. In the United States, bacterial infections can cause meningitis, as can enteroviruses, mumps and herpes simplex. But a high proportion of cases have, as doctors say, no known etiology: No one knows why the patient’s brain and spinal tissues are swelling.
  • When Saha and her team ran the mystery meningitis samples through IDseq, though, the result was surprising. Rather than revealing a bacterial cause, as expected, a third of the samples showed signs of the chikungunya virus — specifically, a neuroinvasive strain that was thought to be extremely rare. “At first we thought, It cannot be true!” Saha recalls. “But the moment Joe and I realized it was chikungunya, I went back and looked at the other 200 samples that we had collected around the same time. And we found the virus in some of those samples as well.”
  • Until recently, chikungunya was a comparatively rare disease, present mostly in parts of Central and East Africa. “Then it just exploded through the Caribbean and Africa and across Southeast Asia into India and Bangladesh,” DeRisi told me. In 2011, there were zero cases of chikungunya reported in Latin America. By 2014, there were a million.
  • Chikungunya is a mosquito-borne virus, but when DeRisi and Saha looked at the results from IDseq, they also saw something else: a primate tetraparvovirus. Primate tetraparvoviruses are almost unknown in humans, and have been found only in certain regions. Even now, DeRisi is careful to note, it’s not clear what effect the virus has on people. “Maybe it’s dangerous, maybe it isn’t,” DeRisi says. “But I’ll tell you what: It’s now on my radar.
  • it reveals a landscape of potentially dangerous viruses that we would otherwise never find out about. “What we’ve been missing is that there’s an entire universe of pathogens out there that are causing disease in humans,” Imam notes, “ones that we often don’t even know exist.”
  • “The plan was, Let’s let researchers around the world propose studies, and we’ll choose 10 of them to start,” DeRisi recalls. “We thought we’d get, like, a couple dozen proposals, and instead we got 350.”
  • Metagenomic sequencing is especially good at what scientists call “environmental sampling”: identifying, say, every type of bacteria present in the gut microbiome, or in a teaspoon of seawater.
  • “When you draw blood from someone who has a fever in Ghana, you really don’t know very much about what would normally be in their blood without fever — let alone about other kinds of contaminants in the environment. So how do you interpret the relevance of all the things you’re seeing?”
  • Such criticisms have led some to say that metagenomics simply isn’t suited to the infrastructure of developing countries. Along with the problem of contamination, many labs struggle to get the chemical reagents needed for sequencing, either because of the cost or because of shipping and customs holdups
  • we’re less likely to be caught off-guard. “With Ebola, there’s always an issue: Where’s the virus hiding before it breaks out?” DeRisi explains. “But also, once we start sampling people who are hospitalized more widely — meaning not just people in Northern California or Boston, but in Uganda, and Sierra Leone, and Indonesia — the chance of disastrous surprises will go down. We’ll start seeing what’s hidden.”
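As flagged above, the "stitch the pieces together by matching overlapping segments" step can be illustrated with a toy example. The Python sketch below is a deliberately naive greedy overlap merger, not the algorithm used by IDseq or any real assembler (production tools use graph-based methods and handle sequencing errors); the read fragments and helper names are invented for illustration.

```python
# Toy illustration of assembling short fragments by merging overlaps.
# Real metagenomic pipelines are far more sophisticated; this only shows
# the core idea of joining reads whose ends match.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:          # no overlaps left; stop merging
            break
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads

if __name__ == "__main__":
    fragments = ["ATGGCGT", "GCGTACGT", "ACGTTTCA"]   # made-up short reads
    print(greedy_assemble(fragments))                  # -> ['ATGGCGTACGTTTCA']
```

Merging the three made-up fragments recovers a single longer sequence; real pipelines do this at the scale of millions of error-prone reads from many organisms at once, then compare the assembled pieces against the National Center for Biotechnology Information database described above.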
13More

Dengue Mosquitoes Can Be Tamed by a Common Microbe - The Atlantic - 0 views

  • Dengue fever is caused by a virus that infects an estimated 390 million people every year, and kills about 25,000; the World Health Organization has described it as one of the top 10 threats to global health.
  • It spreads through the bites of mosquitoes, particularly the species Aedes aegypti. Utarini and her colleagues have spent the past decade turning these insects from highways of dengue into cul-de-sacs. They’ve loaded the mosquitoes with a bacterium called Wolbachia, which prevents them from being infected by dengue viruses. Wolbachia spreads very quickly: If a small number of carrier mosquitoes are released into a neighborhood, almost all of the local insects should be dengue-free within a few months
  • Aedes aegypti was once a forest insect confined to sub-Saharan Africa, where it drank blood from a wide variety of animals. But at some point, one lineage evolved into an urban creature that prefers towns over forests, and humans over other animals.
  • ...10 more annotations...
  • The World Mosquito Program (WMP), a nonprofit that pioneered this technique, had run small pilot studies in Australia that suggested it could work. Utarini, who co-leads WMP Yogyakarta, has now shown conclusively that it does.
  • Carried around the world aboard slave ships, Aedes aegypti has thrived. It is now arguably the most effective human-hunter on the planet, its senses acutely attuned to the carbon dioxide in our breath, the warmth of our bodies, and the odors of our skin.
  • Wolbachia was first discovered in 1924, in a different species of mosquito. At first, it seemed so unremarkable that scientists ignored it for decades. But starting in the 1980s, they realized that it has an extraordinary knack for spreading. It passes down mainly from insect mothers to their children, and it uses many tricks to ensure that infected individuals are better at reproducing than uninfected ones. To date, it exists in at least 40 percent of all insect species, making it one of the most successful microbes on the planet.
  • The team divided a large portion of the city into 24 zones and released Wolbachia-infected mosquitoes in half of them. Almost 10,000 volunteers helped distribute egg-filled containers to local backyards. Within a year, about 95 percent of the Aedes mosquitoes in the 12 release zones harbored Wolbachia.
  • The team found that just 2.3 percent of feverish people who lived in the Wolbachia release zones had dengue, compared with 9.4 percent in the control areas. Wolbachia also seemed to work against all four dengue serotypes, and reduced the number of dengue hospitalizations by 86 percent. (A back-of-envelope reading of these two percentages appears after this annotation list.)
  • Even then, these already remarkable numbers are likely to be underestimates. The mosquitoes moved around, carrying Wolbachia into the 12 control zones where no mosquitoes were released. And people also move: They might live in a Wolbachia release zone but be bitten and infected with dengue elsewhere. Both of these factors would have worked against the trial, weakening its results
  • The Wolbachia method does have a few limitations. The bacterium takes months to establish itself, so it can’t be “deployed to contain an outbreak today,” Vazquez-Prokopec told me. As the Yogyakarta trial showed, it works only when Wolbachia reaches a prevalence of at least 80 percent, which requires a lot of work and strong community support
  • The method has other benefits too. It is self-amplifying and self-perpetuating: If enough Wolbachia-infected mosquitoes are released initially, the bacterium should naturally come to dominate the local population, and stay that way. Unlike insecticides, Wolbachia isn’t toxic, it doesn’t kill beneficial insects (or even mosquitoes), and it doesn’t need to be reapplied, which makes it very cost-effective.
  • An analysis by Brady’s team showed that it actually saves money by preventing infections
  • Wolbachia also seems to work against the other diseases that Aedes aegypti carries, including Zika and yellow fever. It could transform this mosquito from one of the most dangerous species to humans into just another biting nuisance.
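A quick back-of-envelope check of the two percentages quoted above (2.3 percent of feverish people with dengue in release zones versus 9.4 percent in control zones) gives a sense of the protective effect. The Python snippet below is only a naive, unadjusted calculation; the trial's published efficacy figure comes from a proper statistical analysis, so treat this as a rough sanity check rather than the study's result.

```python
# Naive, unadjusted estimate of protective efficacy from the two percentages
# quoted in the annotations above. The trial's own analysis is statistically
# adjusted, so this is only a rough sanity check, not the published figure.
dengue_rate_release = 0.023   # share of feverish people with dengue, Wolbachia zones
dengue_rate_control = 0.094   # share of feverish people with dengue, control zones

relative_risk = dengue_rate_release / dengue_rate_control
protective_efficacy = 1 - relative_risk

print(f"relative risk  ~ {relative_risk:.2f}")        # roughly 0.24
print(f"naive efficacy ~ {protective_efficacy:.0%}")  # roughly 76%
```

That rough 76 percent reduction in symptomatic dengue sits alongside the separately reported 86 percent drop in dengue hospitalizations mentioned above; the two figures measure different outcomes.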
9More

Why some people like wearing masks - BBC Worklife - 0 views

  • Some people welcome face coverings for reasons ranging from the convenient and expedient to the more complex and psychological. But is this a helpful coping mechanism?
  • “Since I've been wearing the mask, my awkward interactions with friends and family have significantly reduced,” he says. Now, he goes to the shops whenever he wants, without worrying about whom he might see. He hopes that, even after the pandemic ends, it will still be socially acceptable to wear a mask.
  • Some welcome the way face coverings reduce or change interactions that might otherwise spark social anxiety. But is this a helpful coping mechanism – and what happens when the pandemic comes to an end?
  • ...6 more annotations...
  • They have ditched their old makeup and shaving routines and are saving money, time and stress. Others have discovered that hiding their mouths affords them unexpected freedoms. Some restaurant servers and retail workers say they no longer feel obliged to fake-smile at customers, potentially lifting the burden of emotional labour.
  • “During a pandemic, we’re under severe stress, and whether you’re worrying about your appearance or you’re worried about someone harassing you or whistling at you, the masks can provide a respite from those things that can occupy our mind when we’re out in public. You have more freedom to be meditating or thinking about whatever you want.”
  • “Anonymity carries power,” adds Ramani Durvasula, a clinical psychologist and psychology professor at California State University, Los Angeles. “It can feel like trying on a different ‘role’ and the associated expectations of that role, perhaps freeing us of what can feel exhausting and insincere about smiling (especially when we aren't having a good day).”
  • “For introverts, it can feel great that you don’t have to talk to people you don’t know that well, but in the long run, when you get out of your comfort zone and challenge yourself… [you might form] a really fulfilling or positive relationship,”
  • Think back to the last time you failed or made an important mistake. Do you still blush with shame, and scold yourself for having been so stupid or selfish? Do you tend to feel alone in that failure, as if you were the only person to have erred? Or do you accept that error is a part of being human, and try to talk to yourself with care and tenderness?
  • “Most of us have a good friend in our lives, who is kind of unconditionally supportive,” says Kristin Neff, an associate professor of educational psychology at the University of Texas at Austin, who has pioneered this research. “Self-compassion is learning to be that same warm, supportive friend to yourself.”
7More

'Hijacked by anxiety': how climate dread is hindering climate action | Environmental ac... - 0 views

  • climate anxiety – a sense of dread, gloom and almost paralysing helplessness that is rising as we come to terms with the greatest existential challenge of our generation, or any generation.
    • huffem4
       
      Is this anxiety driving many people to disregard or ignore this crisis?
  • “When we look at this through the lens of individual and collective trauma, it changes everything about what we do and how we do it,” says Dr Renee Lertzman, a US-based pioneer of climate psychology. “It helps us make sense of the variety of ways that people are responding to what’s going on, and the mechanisms and practices we need to come through this as whole as possible.”
  • the human psyche is hardwired to disengage from information or experiences that are overwhelmingly difficult or disturbing.
  • ...2 more annotations...
  • “For many of us, we’d literally rather not know because otherwise it creates such an acutely distressing experience for us as humans.”
    • huffem4
       
      People choose to ignore this crisis because it causes panic and makes us feel out of control.
  • this inability to engage presents itself as a complete denial of the climate crisis and climate science. But even among those who accept the dire predictions for the natural world, there are “micro-denials” that can block the ability to take action.
6More

Tech Tent: The woes of the world wide web - BBC News - 0 views

  • Sir Tim Berners-Lee tells Tech Tent that he has become less optimistic about the beneficial effects of his creation - but the web's founder is up for a fight about what he regards as a vital principle, net neutrality.
  • Both he and other web pioneers were hugely optimistic about its potential to foster collaboration and an open exchange of views. "Humanity once connected by technology would do wonderful things," he says.
  • In the United States the Federal Communications Commission (FCC) has moved to scrap the net neutrality regulation brought in by the Obama administration.
  • ...3 more annotations...
  • He had told the FCC boss that advances in computer processing power had made it easier for internet service providers to discriminate against certain web users for commercial or political reasons, perhaps slowing down traffic to one political party's website or making it harder for a rival company to process payments.
  • For Johnny Hornby of the advertising firm The&Partnership, the worry is the lack of control that the internet giants appear to have over the content that appears on their platforms.
  • The internet once seemed to promise perfectly targeted advertising that pleased both consumers and the companies trying to reach them - another utopian vision that is now looking a bit frayed around the edges.
11More

Humans Are the World's Best Pattern-Recognition Machines, But for How Long? - Big Think - 0 views

  • Not only are machines rapidly catching up to — and exceeding — humans in terms of raw computing power, they are also starting to do things that we used to consider inherently human
  • Quite simply, humans are amazing pattern-recognition machines. They have the ability to recognize many different types of patterns - and then transform these "recursive probabilistic fractals" into concrete, actionable steps.
  • Intelligence, then, is really just a matter of being able to store more patterns than anyone else
  • ...8 more annotations...
  • Artificial intelligence pioneer Ray Kurzweil was among the first to recognize how the link between pattern recognition and human intelligence could be used to build the next generation of artificially intelligent machines.
  • where human "expertise" has always trumped machine "expertise."
  • It turns out patterns matter, and they matter a lot.
  • The more you think about it, the more you can see patterns all around you. Getting to work on time in the morning is the result of recognizing patterns in your daily commute
  • it's really just a matter of recognizing the right patterns faster than anyone else, and machines just have so much processing power these days it's easy to see them becoming the future doctors and lawyers of the world.
  • The future of intelligence is in making our patterns better, our heuristics stronger.
  • One thing is clear – being able to recognize patterns is what gave humans their evolutionary edge over animals.
  • How we refine, shape and improve our pattern recognition is the key to how much longer we'll have the evolutionary edge over machines.
1More

Einstein's Theory of Relativity, Explained in a Pioneering 1923 Silent Film - Brain Pic... - 0 views

  • “This is a participatory universe,” physicist John Archibald Wheeler, who popularized the term black hole, wrote in his influential theory known as It from Bit, asserting that “physics gives rise to observer-participancy; observer-participancy gives rise to information; and information gives rise to physics” — an assertion he could not have made without Einstein’s theory of relativity and its groundbreaking insight into how the laws of physics appear to different observers with different frames of reference.
3More

Pioneering Mathematician G.H. Hardy on the Noblest Existential Ambition and How We Find... - 0 views

  • “If a man has any genuine talent he should be ready to make almost any sacrifice in order to cultivate it to the full.”
  • the four desires motivating all human behavior:
  • “Man differs from other animals in one very important respect, and that is that he has some desires which are, so to speak, infinite, which can never be fully gratified, and which would keep him restless even in Paradise. The boa constrictor, when he has had an adequate meal, goes to sleep, and does not wake until he needs another meal. Human beings, for the most part, are not like this.”
3More

The Habits of Light: A Celebration of Pioneering Astronomer Henrietta Leavitt, Whose Ca... - 0 views

  • “Nothing is fixed. All is in flux,” physicist Alan Lightman wrote in his soaring meditation on how to live with our longing for absolutes in a relative universe, reminding us that all the physical evidence gleaned through millennia of scientific inquiry indicates the inherent inconstancy of the cosmos.
  • This awareness, so unnerving against the backdrop of our irrepressible yearning for constancy and permanence, was first unlatched when the ancients began suspecting that the Earth, rather than being the static center of the heavens it was long thought to be, is in motion, right beneath our feet. But it took millennia for the most disorienting evidence of inconstancy to dawn — the discovery that the universe itself is in flux, constantly expanding, growing thinner and thinner as stars grow farther and farther apart.
  • If the universe is constantly expanding, to trace it backward along the arrow of time is to imagine it smaller and smaller, all the way down to the seeming nothingness that banged into the somethingness within which everything exists.
7More

What this sunny, religious town in California teaches us about living longer - CNN - 0 views

  • Spanish for "beautiful hill," Loma Linda, California is nestled between mountain peaks in the middle of the San Bernardino Valley. The city is known as an epicenter of health and wellness, with more than 900 physicians on the campus of Loma Linda University and Medical Center.
  • Experts say that's because Loma Linda has one of the highest concentrations of Seventh-day Adventists in the world. The religion mandates a healthy lifestyle and a life of service to the church and community, which contributes to their longevity.
  • 'I never had stress': "As far as I am concerned, stress is a manufactured thing," Dr. Ellsworth Wareham told CNN's Chief Medical Correspondent Dr. Sanjay Gupta in 2015 as part of a Vital Signs special on blue zones. Wareham was 100 years old at the time and still mowed his front yard.
  • ...4 more annotations...
  • "I could do open heart surgery right now. My hands are steady, my eyes are good," Wareham said. "My blood pressure is 117. I have noticed no deterioration in my mental ability with my age. If you gave me something to memorize, I would memorize it now just as quickly as when I was 20."
  • Wareham passed away last year, at the age of 104. Like 10% of the Adventist community, Wareham was a vegan. Another 30% are lacto-ovo vegetarians who eat dairy and eggs, while another 8% eat fish but not other meat. Vegetarianism is so prevalent that no meat can be purchased at the cafeterias at the university and medical center.
  • Other key factors to longevity: Only 1% of the Seventh-day Adventist community in the study smokes. Little to no alcohol is consumed. Daily exercise out in the fresh air of nature is the norm. The church advocates a life of service, so dedication to volunteering, humanitarian and mission work is typical, which contributes to a sense of community.
  • "The bulk of evidence suggests that changing a few simple lifestyle factors can have a profound difference in the risk of major diseases and the likelihood of living a long life," Orlich said. "The body has an amazing ability to, um, you know, heal itself to some degree.
2More

"Dune," climate fiction pioneer: The ecological lessons of Frank Herbert's sci-fi maste... - 0 views

  • Gerry Canavan, assistant professor of English at Marquette University and co-author of "Green Planets: Science Fiction and Ecology," sums up the novel's legacy well when he writes in an email interview, "'Dune' is really a turning point for science fiction that takes ecology seriously as a concept."
  • Brian Herbert recounted many instances that demonstrated his father's interest in environmental issues, including his backyard experiments with solar and wind power.
4More

Opinion | Your Kid's Existential Dread Is Normal - The New York Times - 0 views

  • my daughter said: “When the pandemic started, I was only 7, and I wasn’t scared. Now I’m 9 and I really understand.”
  • I called Sally Beville Hunter, a clinical associate professor of child and family studies at the University of Tennessee, to see if this kind of philosophical musing was typical for a young tween. “There’s a huge cognitive transition happening” around this age, Hunter told me.
  • It’s the stage when children develop the capacity for abstract thought, she said. The pioneering developmental psychologist Jean Piaget called this transition the “formal operational stage,” and in his research he found it began around age 11, but Hunter said subsequent research has found that it may begin earlier. “It’s the first time children can consider multiple possibilities and test them against each other,” she said. Which helps explain why my daughter has begun thinking about whether Covid will linger into her college years, a decade from now.
  • ...1 more annotation...
  • Another aspect of development that may be happening for her is a stage that the psychologist Erik Erikson called “identity versus role diffusion” (also referred to as “role confusion”), which is shorthand for children figuring out their position in the world. “This is the first time when kids have questions about their own existence, questions about self-identity, the meaning of life and the changing role of authority,” Hunter said.