TOK Friends: Group items tagged "scenario"

Javier E

Lawyers With Lowest Pay Report More Happiness - NYTimes.com - 0 views

  • Researchers who surveyed 6,200 lawyers about their jobs and health found that the factors most frequently associated with success in the legal field, such as high income or a partner-track job at a prestigious firm, had almost zero correlation with happiness and well-being. However, lawyers in public-service jobs who made the least money, like public defenders or Legal Aid attorneys, were most likely to report being happy.
  • the two groups reported about equal overall satisfaction with their lives.
  • The problem with the more prestigious jobs, said Mr. Krieger, is that they do not provide feelings of competence, autonomy or connection to others
  • A landmark Johns Hopkins study in 1990 found that lawyers were 3.6 times as likely as non-lawyers to suffer from depression, putting them at greater risk than people in any other occupation. In December, Yale Law School released a study that said 70 percent of its students were affected by mental health issues.
  • From 1999 to 2007, lawyers were 54 percent more likely to commit suicide than people in other professions
  • the job requires an unhealthy degree of cynicism. “Research shows that an optimistic outlook is good for your mental health,” said Patricia Spataro, director of the New York State Lawyer Assistance Program, a resource for attorneys with mental health concerns. “But lawyers are trained to always look for the worst-case scenario. They benefit more from being pessimistic, and that takes a toll.”
  • the pressure to be hired by a big-name firm is so strongly ingrained in law school culture, one George Washington University student said, that even those who enroll with the intention of performing public service often find themselves redirected.
  • “It’s a very real pressure in law school,” Helen Clemens, the law student, said. “It comes from all kinds of avenues, but mostly I would say it just comes from the people surrounding you. If everyone is talking about leaders from our school who have gotten jobs at a really prestigious firm, the assumption is that we all should be trying to work at a similar place.”
carolinewren

Sarah Palin dives in poll ratings as Tina Fey impersonates her on Saturday Night Live -... - 0 views

  • Palin's poll ratings are telling a more devastating story.
  • engage with the process much earlier on – not least with their Sunday morning political talk shows
  • It currently commands 10 million viewers – a creditable figure for a primetime drama, let alone a late-night sketch show.
  • Other satirical shows, such as The Daily Show with Jon Stewart and The Colbert Report, are also enjoying record ratings, as well as influence far beyond their own viewers.
  • Even bigger than Saturday Night Live have been the presidential and vice-presidential debates. Sarah Palin's set-to with Joe Biden on October 2 attracted nearly 70 million viewers – a record for a vice-presidential debate and the highest-rated election debate since 1992
  • It is impossible to imagine a similar level of engagement with political television in this country. Gordon Brown and David Cameron would not only have to debate each other on TV – an unlikely scenario in itself – but pull in an audience bigger than the finals of Britain's Got Talent and Strictly Come Dancing put together
  • American networks do have some advantages over the BBC and ITV in planning and executing their political coverage
  • four-year timetable, avoiding the unholy scramble when a British general election is called at a month's notice.
  • In a Newsweek poll in September, voters were asked whether Palin was qualified or unqualified to be president. The result was a near dead-heat. In the same poll this month, those saying she was "unqualified" outnumbered those saying she was "qualified" by a massive 16 points
  • "I think we're learning what it means to have opinion journalism in this country on such a grand scale," says Stelter. "It's only in the last six to 12 months that those lines have hardened between Fox and MSNBC. I think the [ratings] numbers for cable have surprised people.
  • I think that shows that people are looking for different stripes of political news."
  • American political TV certainly is polarised. When Governor Palin attacked the media in her speech at the Republican convention last month, the crowd chanted "NBC"
  • Gwen Ifill, a respected anchor on the non-commercial channel PBS, who moderated the vice-presidential debate, saw her impartiality attacked because she is writing a book about African-American politics that mentions Obama in its title
  • America's networks comprehensively outstrip this country in both volume and quality of political coverage.
  • All three major US networks – ABC, CBS and NBC – offer a large amount of serious (and unbiased) political coverage, both in their evening network newscasts and in their morning equivalents of GMTV
  • Impartiality and the public service ethos hardly characterise Tina Fey's performances. Tonight's presidential debate forms part of a series driven largely by commercial networks, not publicly funded channels. Neither Fox News nor MSNBC was set up as a sop to a regulator
carolinewren

'It Is Climate Change': India's Heat Wave Now The 5th Deadliest In World History | Thin... - 0 views

  • searing and continuing heat wave in India has so far killed more than 2,300 people, making it the 5th deadliest in recorded world history.
  • As temperatures soared up to 113.7 degrees Fahrenheit and needed monsoon rains failed to materialize, the country’s minister of earth sciences did not mince words about what he says is causing the disaster.
  • “Let us not fool ourselves that there is no connection between the unusual number of deaths from the ongoing heat wave and the certainty of another failed monsoon,” Harsh Vardhan said, according to Reuters. “It’s not just an unusually hot summer, it is climate change.”
  • “Attribution of events to climate change is still emerging as a science, but recent and numerous studies continue to speak to heat waves having strong links to warming climate,”
  • India is getting hotter as humans continue to pump carbon dioxide into the atmosphere. With these increases in heat, the report — produced by 1,250 international experts and approved by every major government in the world — said with high confidence that the risk of heat-related mortality would rise due to climate change and population increases, along with greater risk of drought-related water and food shortages.
  • extreme heat events “have become as much as 10 times more likely due to the current cumulative effects of human-induced climate change.”
  • Mann said that as climate change worsens with more carbon emitted into the atmosphere, heat events once considered extreme would become relatively common. He noted that India’s nearly unprecedented deadly heat wave is occurring at current global warming levels of just 1.5 degrees Fahrenheit — so heat waves occurring under the “business as usual” global warming scenario that sees average temperatures rise 7 to 9 degrees by the end of the century would be much, much worse
  • The impacts of climate change are widely expected to be more harmful in poor countries than in their fully developed counterparts.
Javier E

Review: Vernor Vinge's 'Fast Times' | KurzweilAI - 0 views

  • Vernor Vinge’s Hugo-award-winning short science fiction story “Fast Times at Fairmont High” takes place in a near future in which everyone lives in a ubiquitous, wireless, networked world using wearable computers and contacts or glasses on which computer graphics are projected to create an augmented reality.
  • So what is life like in Vinge’s 2020? The biggest technological change involves ubiquitous computing, wearables, and augmented reality (although none of those terms are used). Everyone wears contacts or glasses which mediate their view of the world. This allows computer graphics to be superimposed on what they see. The computers themselves are actually built into the clothing (apparently because that is the cheapest way to do it) and everything communicates wirelessly.
  • If you want a computer display, it can appear in thin air, or be attached to a wall or any other surface. If people want to watch TV together they can agree on where the screen should appear and what show they watch. When doing your work, you can have screens on all your walls, menus attached here and there, however you want to organize things. But none of it is "really" there.
  • Does your house need a new coat of paint? Don’t bother, just enter it into your public database and you have a nice new mint green paint job that everyone will see. Want to redecorate? Do it with computer graphics. You can have a birdbath in the front yard inhabited by Disneyesque animals who frolic and play. Even indoors, don’t buy artwork, just download it from the net and have it appear where you want.
  • Got a zit? No need to cover up with Clearasil, just erase it from your public face and people will see the improved version. You can dress up your clothes and hairstyle as well.
  • Of course, anyone can turn off their enhancements and see the plain old reality, but most people don’t bother most of the time because things are ugly that way.
  • Some of the kids attending Fairmont Junior High do so remotely. They appear as "ghosts", indistinguishable from the other kids except that you can walk through them. They go to classes and raise their hands to ask questions just like everyone else. They see the school and everyone at the school sees them. Instead of visiting friends, the kids can all instantly appear at one another’s locations.
  • The computer synthesizing visual imagery is able to call on the localizer network for views beyond what the person is seeing. In this way you can have 360 degree vision, or even see through walls. This is a transparent society with a vengeance!
  • The cumulative effect of all this technology was absolutely amazing and completely believable
  • One thing that was believable is that it seemed that a lot of the kids cheated, and it was almost impossible for the adults to catch them. With universal network connectivity it would be hard to make sure kids are doing their work on their own. I got the impression the school sort of looked the other way, the idea being that as long as the kids solved their problems, even if they got help via the net, that was itself a useful skill that they would be relying on all their lives.
Javier E

The Rich Have Higher Level of Narcissism, Study Shows | TIME.com - 1 views

  • The rich really are different — and, apparently more self-absorbed, according to the latest research.
  • Recent studies show, for example, that wealthier people are more likely to cut people off in traffic and to behave unethically in simulated business and charity scenarios.
  • Earlier this year, statistics on charitable giving revealed that while the wealthy donate about 1.3% of their income to charity, the poorest actually give more than twice as much as a proportion of their earnings — 3.2%.
  • In five different experiments involving several hundred undergraduates and 100 adults recruited from online communities, the researchers found higher levels of both narcissism and entitlement among those of higher income and social class.
  • when asked to visually depict themselves as circles, with size indicating relative importance, richer people picked larger circles for themselves and smaller ones for others. Another experiment found that they also looked in the mirror more frequently.
  • The wealthier participants were also more likely to agree with statements like “I honestly feel I’m just more deserving than other people”
  • But which came first — did gaining wealth increase self-aggrandizement? Were self-infatuated people more likely to seek and then gain riches?
  • To explore that relationship further, the researchers also asked the college students in one experiment to report the educational attainment and annual income of their parents. Those with more highly educated and wealthier parents remained higher in their self-reported entitlement and narcissistic characteristics. “That would suggest that it’s not just [that] people who feel entitled are more likely to become wealthy,” says Piff. Wealth, in other words, may breed narcissistic tendencies — and wealthy people justify their excess by convincing themselves that they are more deserving of it
  • “The strength of the study is that it uses multiple methods for measuring narcissism and entitlement and social class and multiple populations, and that can really increase our confidence in the results,”
  • “This paper should not be read as saying that narcissists are more successful because we know from lots of other studies that that’s not true.
  • “entitlement is a facet of narcissism,” says Twenge. “And [it’s the] one most associated with high social class. It’s the idea that you deserve special treatment and that things will come to you without working hard.”
  • Manipulating the sense of entitlement, however, may provide a way to influence narcissism. In the final experiment in the paper, the researchers found that having participants list three benefits of seeing others as equals eliminated class differences in narcissism, while simply listing three daily activities did not.
  • In the meantime, the connection between wealth and entitlement could have troubling social implications. “You have this bifurcation of rich and poor,” says Levine. “The rich are increasingly entitled, and since they set the cultural tone for advertising and all those kinds of things, I think there’s a pervasive sense of entitlement.”
  • That could perpetuate a deepening lack of empathy that could fuel narcissistic tendencies. “You could imagine negative attitudes toward wealth redistribution as a result of entitlement,” says Piff. “The more severe inequality becomes, the more entitled people may feel and the less likely to share those resources they become.”
kushnerha

The Next Genocide - The New York Times - 1 views

  • But sadly, the anxieties of our own era could once again give rise to scapegoats and imagined enemies, while contemporary environmental stresses could encourage new variations on Hitler’s ideas, especially in countries anxious about feeding their growing populations or maintaining a rising standard of living.
  • The quest for German domination was premised on the denial of science. Hitler’s alternative to science was the idea of Lebensraum.
    • kushnerha: "Lebensraum linked a war of extermination to the improvement of lifestyle." Additionally, "The pursuit of peace and plenty through science, he claimed in "Mein Kampf," was a Jewish plot to distract Germans from the necessity of war."
  • Climate change has also brought uncertainties about food supply back to the center of great power politics.
  • China today, like Germany before the war, is an industrial power incapable of feeding its population from its own territory
    • kushnerha: And "could make China's population susceptible to a revival of ideas like Lebensraum."
  • The risk is that a developed country able to project military power could, like Hitler’s Germany, fall into ecological panic, and take drastic steps to protect its existing standard of living.
  • United States has done more than any other nation to bring about the next ecological panic, yet it is the only country where climate science is still resisted by certain political and business elites. These deniers tend to present the empirical findings of scientists as a conspiracy and question the validity of science — an intellectual stance that is uncomfortably close to Hitler’s.
  • The Kremlin, which is economically dependent on the export of hydrocarbons to Europe, is now seeking to make gas deals with individual European states one by one in order to weaken European unity and expand its own influence.
  • Putin waxes nostalgic for the 1930s, while Russian nationalists blame gays, cosmopolitans and Jews for antiwar sentiment. None of this bodes well for Europe’s future
  • The Nazi scenario of 1941 will not reappear in precisely the same form, but several of its causal elements have already begun to assemble.
  • not difficult to imagine ethnic mass murder in Africa
    • kushnerha: also no longer difficult to imagine the "triumph of a violent totalitarian strain of Islamism in the parched Middle East," a "Chinese play for resources in Africa or Russia or Eastern Europe that involves removing the people already living there," and a "growing global ecological panic if America abandons climate science or the European Union falls apart"
  • Denying science imperils the future by summoning the ghosts of the past.
    • kushnerha: Americans must make the "crucial choice between science and ideology"
paisleyd

Mass participation experiment reveals how to create the perfect dream -- ScienceDaily - 0 views

  • The experiment shows that it is now possible for people to create their perfect dream, and so wake up feeling especially happy and refreshed
  • an iPhone app that monitors a person during sleep and plays a carefully crafted 'soundscape' when they dream. Each soundscape was carefully designed to evoke a pleasant scenario, such as a walk in the woods, or lying on a beach, and the team hoped that these sounds would influence people's dreams [a sketch of such a trigger appears at the end of this list]
    • paisleyd: Emotions affect a great deal of our brain
  • researchers collected millions of dream reports. After studying the data, Professor Wiseman discovered that the soundscapes did indeed influence people's dreams
  • people's dreams were especially bizarre around the time of a full moon
  • the team also found that certain soundscapes produced far more pleasant dreams
  • Having positive dreams helps people wake up in a good mood, and boosts their productivity. We have now discovered a way of giving people sweet dreams, and this may also form the basis for a new type of therapy to help those suffering from certain psychological problems, such as depression
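
A note on the app described above: the article does not say how it decides that a user is dreaming, so the following sketch assumes, purely for illustration, a movement-based heuristic in which a long stretch of accelerometer stillness is treated as likely REM sleep. The threshold values are invented.

      # Hedged sketch of a dream-soundscape trigger (illustrative only).
      # Assumption, not the real app's method: sustained stillness ~ REM sleep.
      STILLNESS_THRESHOLD = 0.05   # movement level treated as "still" (invented)
      STILL_MINUTES_FOR_REM = 20   # minutes of stillness before triggering (invented)

      def should_play_soundscape(movement_samples: list) -> bool:
          """movement_samples: one accelerometer reading per minute."""
          recent = movement_samples[-STILL_MINUTES_FOR_REM:]
          return (len(recent) == STILL_MINUTES_FOR_REM
                  and all(m < STILLNESS_THRESHOLD for m in recent))

      # Ten restless minutes, then twenty near-still ones:
      # time to start the "walk in the woods" audio.
      samples = [0.3] * 10 + [0.01] * 20
      print(should_play_soundscape(samples))  # True
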
Javier E

Uber, Arizona, and the Limits of Self-Driving Cars - The Atlantic - 0 views

  • it’s a good time for a critical review of the technical literature of self-driving cars. This literature reveals that autonomous vehicles don’t work as well as their creators might like the public to believe.
  • The world is a 3-D grid with x, y, and z coordinates. The car moves through the grid from point A to point B, using highly precise GPS measurements gathered from nearby satellites. Several other systems operate at the same time. The car’s sensors bounce out laser radar waves and measure the response time to build a “picture” of what is outside. [A time-of-flight sketch appears at the end of this list.]
  • It is a masterfully designed, intricate computational system. However, there are dangers.
  • Self-driving cars navigate by GPS. What happens if a self-driving school bus is speeding down the highway and loses its navigation system at 75 mph because of a jammer in the next lane?
  • Because they are not calculating the trajectory for the stationary fire truck, only for objects in motion (like pedestrians or bicyclists), they can’t react quickly to register a previously stationary object as an object in motion.
  • If the car was programmed to save the car’s occupants at the expense of pedestrians, the autonomous-car industry is facing its first public moment of moral reckoning.
  • This kind of blind optimism about technology, the assumption that tech is always the right answer, is a kind of bias that I call technochauvinism.
  • an overwhelming number of tech people (and investors) seem to want self-driving cars so badly that they are willing to ignore evidence suggesting that self-driving cars could cause as much harm as good
  • By this point, many people know about the trolley problem as an example of an ethical decision that has to be programmed into a self-driving car.
  • With driving, the stakes are much higher. In a self-driving car, death is an unavoidable feature, not a bug.
  • imagine the opposite scenario: The car is programmed to sacrifice the driver and the occupants to preserve the lives of bystanders. Would you get into that car with your child? Would you let anyone in your family ride in it? Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver?
  • Plenty of people want self-driving cars to make their lives easier, but self-driving cars aren’t the only way to fix America’s traffic problems. One straightforward solution would be to invest more in public transportation.
  • Public-transportation funding is a complex issue that requires massive, collaborative effort over a period of years. It involves government bureaucracy. This is exactly the kind of project that tech people often avoid attacking, because it takes a really long time and the fixes are complicated.
  • Plenty of people, including technologists, are sounding warnings about self-driving cars and how they attempt to tackle very hard problems that haven’t yet been solved. People are warning of a likely future for self-driving cars that is neither safe nor ethical nor toward the greater good. Still,  the idea that self-driving cars are nifty and coming soon is often the accepted wisdom, and there’s a tendency to forget that technologists have been saying “coming soon” for decades now.
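
On the lidar ranging mentioned above ("bounce out laser radar waves and measure the response time"): the core arithmetic is time-of-flight. A minimal sketch, with illustrative numbers; real autonomous-vehicle stacks fuse lidar, GPS, and inertial data in far more elaborate ways.

      SPEED_OF_LIGHT = 299_792_458.0  # meters per second

      def distance_from_echo(round_trip_seconds: float) -> float:
          # A laser pulse travels out and back, so the target sits at
          # half the total distance the light covered.
          return SPEED_OF_LIGHT * round_trip_seconds / 2.0

      # An echo returning after 200 nanoseconds implies an object
      # roughly 30 meters away.
      print(distance_from_echo(200e-9))  # ~29.98
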
Javier E

The Reality of Quantum Weirdness - NYTimes.com - 1 views

  • Is there a true story, or is our belief in a definite, objective, observer-independent reality an illusion?
  • a paper published online in the journal Nature Physics presents experimental research that supports the latter scenario — that there is a “Rashomon effect” not just in our descriptions of nature, but in nature itself.
  • The electron appears to be a strange hybrid of a wave and a particle that’s neither here and there nor here or there. Like a well-trained actor, it plays the role it’s been called to perform
  • Is nature really this weird? Or is this apparent weirdness just a reflection of our imperfect knowledge of nature?
  • The answer depends on how you interpret the equations of quantum mechanics, the mathematical theory that has been developed to describe the interactions of elementary particles. The success of this theory is unparalleled: Its predictions, no matter how “spooky,” have been observed and verified with stunning precision. It has also been the basis of remarkable technological advances. So it is a powerful tool. But is it also a picture of reality?
  • Does the wave function directly correspond to an objective, observer-independent physical reality, or does it simply represent an observer’s partial knowledge of it? [A brief note on the wave function appears at the end of this list.]
  • If there is an objective reality at all, the paper demonstrates, then the wave function is in fact reality-based.
  • What this research implies is that we are not just hearing different “stories” about the electron, one of which may be true. Rather, there is one true story, but it has many facets, seemingly in contradiction, just like in “Rashomon.” There is really no escape from the mysterious — some might say, mystical — nature of the quantum world.
  • We should be careful to recognize that the weirdness of the quantum world does not directly imply the same kind of weirdness in the world of everyday experience.
  • This is why, in fact, we are able to describe the objects around us in the language of classical physics.
  • I suggest that we regard the paradoxes of quantum physics as a metaphor for the unknown infinite possibilities of our own existence.
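
A note for readers new to the term: the wave function, psi, is the object whose status the annotations above debate. What is uncontroversial is the Born rule connecting it to experiment, which in standard notation reads

      P(x) = |\psi(x)|^2

that is, the probability of finding the particle at position x is the squared magnitude of the wave function there. The dispute is over whether psi itself is physically real or only a summary of an observer's knowledge.
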
Javier E

When Will Climate Change Make the Earth Too Hot For Humans? - 0 views

  • Is it helpful, or journalistically ethical, to explore the worst-case scenarios of climate change, however unlikely they are? How much should a writer contextualize scary possibilities with information about how probable those outcomes are, however speculative those probabilities may be?
  • I also believe very firmly in the set of propositions that animated the project from the start:
  • that the public does not appreciate the scale of climate risk
  • that this is in part because we have not spent enough time contemplating the scarier half of the distribution curve of possibilities, especially its brutal long tail, or the risks beyond sea-level rise;
  • that there is journalistic and public-interest value in spreading the news from the scientific community, no matter how unnerving it may be;
  • and that, when it comes to the challenge of climate change, public complacency is a far, far bigger problem than widespread fatalism — that many, many more people are not scared enough than are already “too scared.”
  • The science says climate change threatens nearly every aspect of human life on this planet, and that inaction will hasten the problems. In that context, I don’t think it’s a slur to call an article, or its writer, alarmist. I’ll accept that characterization. We should be alarmed.
  • It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today.
Javier E

How Alignment Charts Went From Dungeons & Dragons to a Meme - The Atlantic - 0 views

  • Bartle recommends against using an alignment chart in a virtual space or online game because, on the internet, “much of what is good or evil, lawful or chaotic, is intangible.” The internet creates so many unpredictable conflicts and confusing scenarios for human interaction, judgment becomes impossible.
  • At the same time, judgment comes down constantly online. Social-media platforms frequently enforce binary responses: either award something a heart because you love it, or reply with something quick and crude when you hate it. The internet is a space of permutations and addled context, yet, as the Motherboard writer Roisin Kiberd argued in a 2019 essay collection about meme culture, “the internet is full of reductive moral judgment.”
sandrine_h

Darwin's Influence on Modern Thought - Scientific American - 0 views

  • Great minds shape the thinking of successive historical periods. Luther and Calvin inspired the Reformation; Locke, Leibniz, Voltaire and Rousseau, the Enlightenment. Modern thought is most dependent on the influence of Charles Darwin
  • one needs schooling in the physicist’s style of thought and mathematical techniques to appreciate Einstein’s contributions in their fullness. Indeed, this limitation is true for all the extraordinary theories of modern physics, which have had little impact on the way the average person apprehends the world.
  • The situation differs dramatically with regard to concepts in biology.
  • Many biological ideas proposed during the past 150 years stood in stark conflict with what everybody assumed to be true. The acceptance of these ideas required an ideological revolution. And no biologist has been responsible for more—and for more drastic—modifications of the average person’s worldview than Charles Darwin
  • Evolutionary biology, in contrast with physics and chemistry, is a historical science—the evolutionist attempts to explain events and processes that have already taken place. Laws and experiments are inappropriate techniques for the explication of such events and processes. Instead one constructs a historical narrative, consisting of a tentative reconstruction of the particular scenario that led to the events one is trying to explain.
  • The discovery of natural selection, by Darwin and Alfred Russel Wallace, must itself be counted as an extraordinary philosophical advance
  • The concept of natural selection had remarkable power for explaining directional and adaptive changes. Its nature is simplicity itself. It is not a force like the forces described in the laws of physics; its mechanism is simply the elimination of inferior individuals
  • A diverse population is a necessity for the proper working of natural selection
  • Because of the importance of variation, natural selection should be considered a two-step process: the production of abundant variation is followed by the elimination of inferior individuals [a toy simulation of this two-step process appears at the end of this list]
  • By adopting natural selection, Darwin settled the several-thousand-year-old argument among philosophers over chance or necessity. Change on the earth is the result of both, the first step being dominated by randomness, the second by necessity
  • Another aspect of the new philosophy of biology concerns the role of laws. Laws give way to concepts in Darwinism. In the physical sciences, as a rule, theories are based on laws; for example, the laws of motion led to the theory of gravitation. In evolutionary biology, however, theories are largely based on concepts such as competition, female choice, selection, succession and dominance. These biological concepts, and the theories based on them, cannot be reduced to the laws and theories of the physical sciences
  • Despite the initial resistance by physicists and philosophers, the role of contingency and chance in natural processes is now almost universally acknowledged. Many biologists and philosophers deny the existence of universal laws in biology and suggest that all regularities be stated in probabilistic terms, as nearly all so-called biological laws have exceptions. Philosopher of science Karl Popper’s famous test of falsification therefore cannot be applied in these cases.
  • To borrow Darwin’s phrase, there is grandeur in this view of life. New modes of thinking have been, and are being, evolved. Almost every component in modern man’s belief system is somehow affected by Darwinian principles
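
Mayr's two-step description above lends itself to a toy simulation. The sketch below is illustrative only: the population size, mutation spread, and fitness function are all invented, and real selection acts on reproduction in far messier ways.

      import random

      TARGET = 10.0  # arbitrary optimal trait value (invented)

      def fitness(trait: float) -> float:
          return -abs(trait - TARGET)  # closer to the target is fitter

      population = [random.uniform(0.0, 20.0) for _ in range(100)]

      for generation in range(50):
          # Step 1: abundant variation -- each individual leaves two
          # offspring whose trait is slightly perturbed at random.
          offspring = [t + random.gauss(0.0, 0.5)
                       for t in population for _ in range(2)]
          # Step 2: elimination of inferior individuals -- only the
          # fitter half of the offspring survives to reproduce.
          offspring.sort(key=fitness, reverse=True)
          population = offspring[:100]

      # The mean trait drifts toward TARGET across generations.
      print(sum(population) / len(population))
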
sandrine_h

To advance science we need to think about the impossible | New Scientist - 0 views

  • Science sets out what we think is true – but when it gets stuck, it’s time to explore what we think isn’t
  • science has always advanced in small steps, paving the way for occasional leaps. But sometimes fact-collecting yields nothing more than a collection of facts; no revelation follows. At such times, we need to step back from the facts we know and imagine alternatives: in other words, to ask “what if?”
  • That was how Albert Einstein broke the bind in which physics found itself in the early 20th century. His conception of a scenario that received wisdom deemed impossible – that light’s speed is always the same, regardless of how you look at it – led to special relativity and demolished what we thought we knew about space and time.
  • Despite its dependence on hard evidence, science is a creative discipline. That creativity needs nurturing, even in this age of performance targets and impact assessments. Scientists need to flex their imaginations, too.
  • “Let us dare to dream,” the chemist August Kekulé once suggested, “and then perhaps we may learn the truth.”
  • Physics isn’t the only field that might benefit from a judicious dose of what-iffery. Attempts to understand consciousness are also just inching forward
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) [An illustrative sketch of such a rating appears at the end of this list.]
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
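
On the color-coded Xerox rating described earlier in this list: the vendor's actual model is proprietary, so the sketch below is only a guess at its shape, a weighted score over a few of the factors the article names, bucketed into red, yellow, or green. Every feature, weight, and threshold here is invented.

      def candidate_rating(personality: float, cognitive: float,
                           social_networks: int) -> str:
          # Invented weights over two of the factors the article mentions.
          score = 0.5 * personality + 0.4 * cognitive
          # The article says the best scorers used between one and four
          # social networks, so values outside that band are penalized.
          if not 1 <= social_networks <= 4:
              score -= 0.2
          if score >= 0.7:
              return "green"   # hire away
          if score >= 0.4:
              return "yellow"  # middling
          return "red"         # poor candidate

      print(candidate_rating(personality=0.8, cognitive=0.9,
                             social_networks=2))  # green
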
sissij

There's a Major Problem with AI's Decision Making | Big Think - 0 views

  • For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science.
  • The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns.
  • When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism.
  • ...2 more annotations...
  • Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with.
  • What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario.
  •  
    In fiction, we often see scenes in which AI robots take over the world. We humans are afraid of AI robots having emotions. As we discussed in TOK, there is a phenomenon in which the more human-like robots become, the more people are repulsed by them. I think that's because if robots started to have emotions, they could easily slip out of our control. We still see AI robots as lifeless gears and machines; what if they are more than that? --Sissi (4/23/2017)
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. (A stylized sketch of this kind of failure appears after this list.)
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. (A toy bit-flip sketch appears after this list.)
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (A minimal state-machine sketch of this elevator example appears after this list.)
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms. (A toy exhaustive-exploration sketch appears after this list.)
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
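
What follows are a few illustrative sketches keyed to the annotations above. First, a stylized Python sketch of the Intrado-style 911 failure: the code does exactly what it was specified to do, and the specification was wrong. The counter, cap, and names here are invented for illustration and do not come from Intrado's actual system.

    MAX_CALL_ID = 1_000_000   # arbitrary cap a programmer picked years earlier

    call_counter = 0

    def route_call() -> int:
        """Assign the next call ID, exactly as specified."""
        global call_counter
        if call_counter >= MAX_CALL_ID:
            # Working as specified: nobody imagined the counter reaching the
            # cap, so no automated corrective action exists on this branch.
            raise RuntimeError("no call ID available; call dropped")
        call_counter += 1
        return call_counter

No part breaks in the mechanical sense; the failure lives entirely in the requirement.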
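
Next, a minimal Python sketch of the single-bit-flip failure mode from the Toyota annotations. The status word, bit layout, and function are invented for illustration; they are not taken from the Barr team's analysis.

    THROTTLE_REQUEST_BIT = 0b0001   # hypothetical: bit 0 means "accelerate"

    def throttle_command(status_word: int) -> str:
        # The controller trusts the in-memory flag; there is no redundant
        # copy or parity check to catch memory corruption.
        return "accelerate" if status_word & THROTTLE_REQUEST_BIT else "idle"

    status = 0b0000
    print(throttle_command(status))                # idle
    corrupted = status ^ THROTTLE_REQUEST_BIT      # one bit flips in memory
    print(throttle_command(corrupted))             # accelerate

A real engine controller is vastly more complex, but the point survives the simplification: when a safety decision hinges on one unguarded bit, one flip is enough.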
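
Third, the elevator diagram from the model-based-design annotation, written out by hand as an explicit state machine in Python. A tool like Bantegnie's would generate such code from the diagram itself; this hand-written version only shows how the diagram's rules become checkable transitions.

    # States and transitions mirror the article's elevator diagram.
    ALLOWED = {
        "door_open":   {"door_closed"},           # close the door first
        "door_closed": {"door_open", "moving"},   # only a closed door may move
        "moving":      {"door_closed"},           # stop before the door opens
    }

    def transition(state: str, target: str) -> str:
        if target not in ALLOWED[state]:
            raise ValueError(f"illegal transition: {state} -> {target}")
        return target

    state = "door_open"
    state = transition(state, "door_closed")
    state = transition(state, "moving")
    # transition(state, "door_open") would raise: opening mid-ride is forbidden.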
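
Finally, a toy illustration of exhaustive checking in the spirit of TLA+'s model checker, written in plain Python rather than TLA+ itself. It enumerates every interleaving of two "read, then write back plus one" processes and tests the invariant that the final counter equals 2; the protocol and names are invented for the example.

    from itertools import permutations

    def run(schedule) -> int:
        shared, local = 0, {}
        for proc, step in schedule:
            if step == "read":
                local[proc] = shared       # take a private snapshot
            else:
                shared = local[proc] + 1   # write back snapshot + 1
        return shared

    steps = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
    # Keep only schedules in which each process reads before it writes.
    valid = [s for s in permutations(steps)
             if all(s.index((p, "read")) < s.index((p, "write")) for p in "AB")]
    bad = [s for s in valid if run(s) != 2]
    print(f"{len(bad)} of {len(valid)} interleavings lose an update")   # 4 of 6

Human intuition says the bad orderings should be rare; enumeration shows they are the majority, which is exactly Newcombe's point about "rare" combinations at scale.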
Javier E

How 'The Good Place' Goes Beyond 'The Trolley Problem' - The Atlantic - 1 views

  • A sitcom may seem like an unlikely vehicle for serious discussions about moral philosophy, which viewers might expect to find in medical and legal dramas (albeit in less literal, didactic forms). But the subject and medium are surprisingly compatible. A comedy can broach otherwise tedious-sounding ideas with levity and self-awareness, and has more leeway to use contrived or exaggerated scenarios to bring concepts to life
  • bringing digestible ethics lessons to the masses can be seen as a moral act, ensuring that those who don’t spend hours poring over Kant and Judith Jarvis Thomson are also privy to what’s gained from understanding how people think.
  • The Good Place’s focus on ethics wouldn’t mean as much if it weren’t also remarkable in other ways—the performances, the top-notch writing, the wordplay and pun-laden jokes, the willingness to formally experiment with the sitcom genre.
  • ...1 more annotation...
  • “While we’re discussing the issues that I want to discuss, I also know that I have a responsibility to the audience to tell a story. The goal is not to change the world; the goal of this is to make a high-quality, entertaining show that has good-quality acting.” On that front, Season 2 has certainly succeeded
lucieperloff

Teenagers, Anxiety Can Be Your Friend - The New York Times - 0 views

  • A new report from the University of Michigan’s C.S. Mott Children’s Hospital National Poll on Children’s Health found that one in three teen girls and one in five teen boys have experienced new or worsening anxiety since March 2020.
  • You might be feeling tense about where things stand with your friends or perhaps you’re on edge about something else altogether: your family, your schoolwork, your future, the health of the planet.
  • But the discomfort of anxiety has a basic evolutionary function: to get us to tune into the fact that something’s not right.
  • ...10 more annotations...
  • That odd feeling in the pit of your stomach will help you to consider the situation carefully and be cautious about your next step.
  • It’s more often a friend than a foe, one that will help you notice when things are on the wrong track.
  • psychologists agree that anxiety becomes a problem if its alarm makes no sense — either going off for no reason or blaring when a chime would do.
  • Feeling a little tense before a big game is appropriate and may even improve your performance. Having a panic attack on the sidelines means your anxiety has gone too far.
  • At the physical level, the amygdala, a primitive structure in the brain, detects a threat and sends the heart and lungs into overdrive, getting your body ready to fight or flee that threat.
  • Though it can sound like a daffy approach to managing tension, breathing deeply and slowly activates a powerful part of the nervous system responsible for resetting the body to its pre-anxiety state.
  • Am I overestimating the severity of the problem I’m facing? Am I underestimating my power to manage it?
  • It’s human nature to want to repeat any behavior that leads to feelings of pleasure or comfort, but every boost of avoidance-related relief increases the likelihood that you’ll want to continue to avoid what you fear.
  • For example, the realities of in-person school are sure to be more manageable than the harrowing scenarios your imagination can create.
  • Knowing what’s true about anxiety — and not — will make it easier to navigate the uncertain times ahead.
anonymous

The Happiness Course: Here’s What Some Learned - The New York Times - 0 views

  • Over 3 Million People Took This Course on Happiness. Here’s What Some Learned.
  • It may seem simple, but it bears repeating: sleep, gratitude and helping other people.
  • The Yale happiness class, formally known as Psyc 157: Psychology and the Good Life, is one of the most popular classes to be offered in the university’s 320-year history
  • ...26 more annotations...
  • To date, over 3.3 million people have signed up, according to the website.
  • “Everyone knows what they need to do to protect their physical health: wash your hands, and social distance, and wear a mask,” she added. “People were struggling with what to do to protect their mental health.”
  • The Coursera curriculum, adapted from the one Dr. Santos taught at Yale, asks students to, among other things, track their sleep patterns, keep a gratitude journal, perform random acts of kindness, and take note of whether, over time, these behaviors correlate with a positive change in their general mood.
  • Ms. McIntire took the class. She called it “life-changing.”
  • A night owl, she had struggled with sleep and enforcing her own time boundaries.
  • “It’s hard to set those boundaries with yourself sometimes and say, ‘I know this book is really exciting, but it can wait till tomorrow, sleep is more important,’”
  • “That’s discipline, right? But I had never done it in that way, where it’s like, ‘It’s going to make you happier. It’s not just good for you; it’s going to actually legitimately make you happier.’”
  • has stuck with it even after finishing the class
  • Meditation also helped her to get off social media.
  • “I found myself looking inward. It helped me become more introspective,” she said. “Honestly, it was the best thing I ever did.”
  • “There’s no reason I shouldn’t be happy,” she said. “I have a wonderful marriage. I have two kids. I have a nice job and a nice house. And I just could never find happiness.”
  • Since taking the course, Ms. Morgan, 52, has made a commitment to do three things every day: practice yoga for one hour, take a walk outside in nature no matter how cold it may be in Alberta, and write three to five entries in her gratitude journal before bed
  • “When you start writing down those things at the end of the day, you only think about it at the end of the day, but once you make it a routine, you start to think about it all throughout the day,”
  • some studies show that finding reasons to be grateful can increase your general sense of well-being.
  • “Somewhere along the second or third year, you do feel a bit burned out, and you need strategies for dealing with it,”
  • “I’m still feeling that happiness months later,”
  • Matt Nadel, 21, a Yale senior, was among the 1,200 students taking the class on campus in 2018. He said the rigors of Yale were a big adjustment when he started at the university in the fall of 2017.
  • “Did the class impact my life in a long-term, tangible way? The answer is no.”
  • While the class wasn’t life-changing for him, Mr. Nadel said that he is more expressive now when he feels gratitude.
  • “I think I was struggling to reconcile, and to intellectually interrogate, my religion,” he said. “Also acknowledging that I just really like to hang out with this kind of community that I think made me who I am.”
  • Life-changing? No. But certainly life-affirming
  • “The class helped make me more secure and comfortable in my pre-existing religious beliefs,”
  • negative visualization. This entails thinking of a good thing in your life (like your gorgeous, reasonably affordable apartment) and then imagining the worst-case scenario (suddenly finding yourself homeless and without a safety net).
  • If gratitude is something that doesn’t come naturally, negative visualization can help you to get there.
  • “That’s something that I really keep in mind, especially when I feel like my mind is so trapped in thinking about future hurdles,
  • “I should be so grateful for everything that I have. Because you’re not built to notice these things.”
katedriscoll

Avoiding Psychological Bias in Decision Making - From MindTools.com - 0 views

  • In this scenario, your decision was affected by confirmation bias. With this, you interpret market information in a way that confirms your preconceptions – instead of seeing it objectively – and you make wrong decisions as a result. Confirmation bias is one of many psychological biases to which we're all susceptible when we make decisions. In this article, we'll look at common types of bias, and we'll outline what you can do to avoid them.
  • Psychologists Daniel Kahneman, Paul Slovic, and Amos Tversky introduced the concept of psychological bias in the early 1970s. They published their findings in their 1982 book, "Judgment Under Uncertainty." They explained that psychological bias – also known as cognitive bias – is the tendency to make decisions or take action in an illogical way. For example, you might subconsciously make selective use of data, or you might feel pressured to make a decision by powerful colleagues. Psychological bias is the opposite of common sense and clear, measured judgment. It can lead to missed opportunities and poor decision making.
  • ...1 more annotation...
  • Below, we outline five psychological biases that are common in business decision making. We also look at how you can overcome them, and thereby make better decisions.