TOK Friends: Group items tagged "alternative facts"

Silicon Valley Worries About Addiction to Devices - NYTimes.com

  • founders from Facebook, Twitter, eBay, Zynga and PayPal, and executives and managers from companies like Google, Microsoft, Cisco and others listened to or participated
  • they debated whether technology firms had a responsibility to consider their collective power to lure consumers to games or activities that waste time or distract them.
  • Eric Schiermeyer, a co-founder of Zynga, an online game company and maker of huge hits like FarmVille, has said he has helped addict millions of people to dopamine, a neurochemical that has been shown to be released by pleasurable activities, including video game playing, but also is understood to play a major role in the cycle of addiction. But what he said he believed was that people already craved dopamine and that Silicon Valley was no more responsible for creating irresistible technologies than, say, fast-food restaurants were responsible for making food with such wide appeal. “They’d say: ‘Do we have any responsibility for the fact people are getting fat?’ Most people would say ‘no,’ ” said Mr. Schiermeyer. He added: “Given that we’re human, we already want dopamine.”
  • “The responsibility we have is to put the most powerful capability into the world,” he said. “We do it with eyes wide open that some harm will be done. Someone might say, ‘Why not do so in a way that causes no harm?’ That’s naïve.” “The alternative is to put less powerful capability in people’s hands and that’s a bad trade-off,” he added.
  • the Facebook executive, said his primary concern was that people live balanced lives. At the same time, he acknowledges that the message can run counter to Facebook’s business model, which encourages people to spend more time online. “I see the paradox,” he said.
  • she believed that interactive gadgets could create a persistent sense of emergency by setting off stress systems in the brain — a view that she said was becoming more widely accepted. “It’s this basic cultural recognition that people have a pathological relationship with their devices,” she said. “People feel not just addicted, but trapped.”
  • Richard Fernandez, an executive coach at Google and one of the leaders of the mindfulness movement, said the risks of being overly engaged with devices were immense.

"Breaking Bad" By Niccolo Machiavelli « The Dish - 0 views

  • If a man is truly a man through force and fraud and nerve, then Walter becomes the man he always wanted to be. He trounces every foe; he gains a huge fortune; he dies a natural death. Compared with being a high school chemistry teacher? Niccolo would scoff at the comparison. “I did it for me.”
  • Walt is consumed all along by justified resentment of the success others stole from him, and by a rage that his superior mind was out-foxed by unscrupulous colleagues. He therefore lived and died his final years for human honor – for what Machiavelli calls virtu, a caustic, brutal inversion of Christian virtue
  • his skills were eventually proven beyond any measure in ways that would never have happened if he had never broken bad. And breaking bad cannot mean putting a limit on what you are capable of doing. What Machiavelli insisted upon was that a successful power-broker know how to be “altogether bad.”
  • the cost-benefit analysis of “breaking bad” when the alternative is imminently “dying alone” is rigged in favor of the very short term, i.e. zero-sum evil. If Walt had had to weigh a long, unpredictable lifetime of unending fear and constant danger for his family and himself, he would have stopped cooking meth.
  • was he happy? Yes, but in a way that never really reflects any inner peace. He is happy in a way that all millionaires and tyrants are happy.
  • Machiavelli differs from later realists like Hobbes—and more contemporary “neorealists” like the late Kenneth Waltz—in recognizing that human agency matters as much as the structural fact of international anarchy in determining both foreign policy behavior and ultimate outcomes in world politics.
  • It should be taught because it really does convey the egoist appeal of evil, of acting ruthlessly in the world
  • The benefits only work if your life is nasty, brutish and short. The costs are seen in the exhausted, broken eyes of Skyler, the betrayal of an only painfully faithful son, the murder of a brother-in-law, the grisly massacre of dozens, the endless nervous need to be on the alert, to run and hide and lie and lie and lie again, until life itself becomes merely a means to achieve temporary security.
  • Breaking Bad should be taught alongside Machiavelli – as a riveting companion piece.
  • a leader’s choices can have a pivotal impact on politics, both domestic and international.
  • Though fortune be capricious and history contingent, the able leader may shape his fate and that of his state through the exercise of virtu. This is not to be mistaken for “virtue”, as defined by Christian moral teaching (implying integrity, charity, humility, and the like). Rather, it denotes the human qualities prized in classical antiquity, including knowledge, courage, cunning, pride, and strength.

Teaching a Different Shakespeare From the One I Love - The New York Times

  • Even the highly gifted students in my Shakespeare classes at Harvard are less likely to be touched by the subtle magic of his words than I was so many years ago or than my students were in the 1980s in Berkeley, Calif. What has happened? It is not that my students now lack verbal facility. In fact, they write with ease, particularly if the format is casual and resembles the texting and blogging that they do so constantly. The problem is that their engagement with language, their own or Shakespeare’s, often seems surprisingly shallow or tepid.
  • There are many well-rehearsed reasons for the change: the rise of television followed by the triumph of digital technology, the sending of instant messages instead of letters, the ‘‘visual turn’’ in our culture, the pervasive use of social media. In their wake, the whole notion of a linguistic birthright could be called quaint, the artifact of particular circumstances that have now vanished
  • For my parents, born in Boston, the English language was a treasured sign of arrival and rootedness; for me, a mastery of Shakespeare, the supreme master of that language, was like a purchased coat of arms, a title of gentility tracing me back to Stratford-upon-Avon.
  • It is not that the English language has ceased to be a precious possession; on the contrary, it is far more important now than it ever was in my childhood. But its importance has little or nothing to do any longer with the dream of rootedness. English is the premier international language, the global medium of communication and exchange.
  • as I have discovered in my teaching, it is a different Shakespeare from the one with whom I first fell in love. Many of my students may have less verbal acuity than in years past, but they often possess highly developed visual, musical and performative skills. They intuitively grasp, in a way I came to understand only slowly, the pervasiveness of songs in Shakespeare’s plays, the strange ways that his scenes flow one into another or the cunning alternation of close-ups and long views
  • When I ask them to write a 10-page paper analyzing a particular web of metaphors, exploring a complex theme or amassing evidence to support an argument, the results are often wooden; when I ask them to analyze a film clip, perform a scene or make a video, I stand a better chance of receiving something extraordinary.
  • This does not mean that I should abandon the paper assignment; it is an important form of training for a range of very different challenges that lie in their future. But I see that their deep imaginative engagement with Shakespeare, their intoxication, lies elsewhere.
  • The M.I.T. Global Shakespeare Project features an electronic archive that includes images of every page of the First Folio of 1623. In the Norton Shakespeare, which I edit, the texts of his plays are now available not only in the massive printed book with which I was initiated but also on a digital platform. One click and you can hear each song as it might have sounded on the Elizabethan stage; another click and you listen to key scenes read by a troupe of professional actors. It is a kind of magic unimagined even a few years ago, or rather imaginable only as the book of a wizard like Prospero in ‘‘The Tempest.’’
  • But it is not the new technology alone that attracts students to Shakespeare; it is still more his presence throughout the world as the common currency of humanity. In Taiwan, Tokyo and Nanjing, in a verdant corner of the Villa Borghese gardens in Rome and in an ancient garden in Kabul, in Berlin and Bangkok and Bangalore, his plays continue to find new and unexpected ways to enchant.

Lies, Damned Lies, and Medical Science - Magazine - The Atlantic

  • How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
  • even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you
  • studies report average results that typically represent a vast range of individual outcomes.
  • studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller
  • “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis—dismissing in a breath a good chunk of the research into which we sink about $100 billion a year in the United States alone.
  • nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest.
  • “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”
  • Nature, the grande dame of science journals, stated in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.”
  • The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
  • even for medicine’s most influential studies, the evidence sometimes remains surprisingly narrow. Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested
  • even when a research error is outed, it typically persists for years or even decades.
  • much, perhaps even most, of what doctors do has never been formally put to the test in credible studies, given that the need to do so became obvious to the field only in the 1990s
  • Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).
  • His PLoS Medicine paper is the most downloaded in the journal’s history, and it’s not even Ioannidis’s most-cited work
  • while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”
  • “Usually what happens is that the doctor will ask for a suite of biochemical tests—liver fat, pancreas function, and so on,” she tells me. “The tests could turn up something, but they’re probably irrelevant. Just having a good talk with the patient and getting a close history is much more likely to tell me what’s wrong.” Of course, the doctors have all been trained to order these tests, she notes, and doing so is a lot quicker than a long bedside chat. They’re also trained to ply the patient with whatever drugs might help whack any errant test numbers back into line.
  • What they’re not trained to do is to go back and look at the research papers that helped make these drugs the standard of care. “When you look the papers up, you often find the drugs didn’t even work better than a placebo. And no one tested how they worked in combination with the other drugs,” she says. “Just taking the patient off everything can improve their health right away.” But not only is checking out the research another time-consuming task, patients often don’t even like it when they’re taken off their drugs, she explains; they find their prescriptions reassuring.
  • Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding.
  • We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough
  • “Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

The Evangelical Rejection of Reason - NYTimes.com

  • THE Republican presidential field has become a showcase of evangelical anti-intellectualism. Herman Cain, Rick Perry and Michele Bachmann deny that climate change is real and caused by humans. Mr. Perry and Mrs. Bachmann dismiss evolution as an unproven theory.
  • In response, many evangelicals created what amounts to a “parallel culture,” nurtured by church, Sunday school, summer camps and colleges, as well as publishing houses, broadcasting networks, music festivals and counseling groups.
  • Fundamentalism appeals to evangelicals who have become convinced that their country has been overrun by a vast secular conspiracy; denial is the simplest and most attractive response to change.
  • The rejection of science seems to be part of a politically monolithic red-state fundamentalism, textbook evidence of an unyielding ignorance on the part of the religious. As one fundamentalist slogan puts it, “The Bible says it, I believe it, that settles it.”
  • But in fact their rejection of knowledge amounts to what the evangelical historian Mark A. Noll, in his 1994 book, “The Scandal of the Evangelical Mind,” described as an “intellectual disaster.” He called on evangelicals to repent for their neglect of the mind, decrying the abandonment of the intellectual heritage of the Protestant Reformation. “The scandal of the evangelical mind,” he wrote, “is that there is not much of an evangelical mind.”
  • Scholars like Dr. Collins and Mr. Noll, and publications like Books & Culture, Sojourners and The Christian Century, offer an alternative to the self-anointed leaders. They recognize that the Bible does not condemn evolution and says next to nothing about gay marriage. They understand that Christian theology can incorporate Darwin’s insights and flourish in a pluralistic society.

Common Core and the End of History | Alan Singer

  • On Monday October 20, 2014, the Regents, as part of their effort to promote new national Common Core standards and mystically prepare students for non-existing 21st century technological careers, voted unanimously that students did not have to pass both United States and Global History exams in order to graduate from high school and maintained that they were actually raising academic standards.
  • The Global History exam will also be modified so that students will only be tested on events after 1750, essentially eliminating topics like the early development of civilizations, ancient empires, the rise of universal religions, the Columbian Exchange, and trans-Atlantic Slave Trade from the test.
  • As a result, social studies is no longer taught in the elementary school grades
  • Students will be able to substitute a tech sequence and local test for one of the history exams; however, the Regents did not present, design, or even describe what the tech alternative will look like. Although it will be implemented immediately, the Regents left all the details completely up to local initiative.
  • Under the proposal, students can substitute career-focused courses in subjects such as carpentry, advertising or hospitality management rather than one of two history Regents exams that are now required
  • In June 2010 the Regents eliminated 5th and 8th grade social studies, history, and geography assessments so teachers and schools could concentrate on preparing students for high-stakes Common Core standardized reading and math assessments.
  • Mace reports his middle school students have no idea which were the original thirteen colonies, where they were located, or who were the founders and settlers. The students in his honors class report that all they studied in elementary school was English and math. Morning was math; afternoon was ELA. He added, "Teachers were worried that this would happen, and it has."
  • Debate over the importance of teaching history and social studies is definitely not new. During World War I, many Americans worried that new immigrants did not understand and value the history and government of the United States, so new high school classes and tests, which developed into the current classes and tests, were put in place.
  • Mace describes his students as the "common core kids, inundated with common core, but they do not know the history of the United States." The cardinal rule of public education in the 21st Century seems to be that which gets tested is important and that which does not is dropped.
  • "By making state social studies exams optional, we have come to a point where our nation's own history has been marginalized in the classroom and, with it, the means to understand ourselves and the world around us. America's heritage is being eliminated as a requirement for graduation.
  • I am biased. I am a historian, a former social studies teacher, and I help to prepare the next generation of social studies teachers.
  • But these decisions by the Regents are politically motivated, lower graduation standards, and are outright dangerous.
  • The city is under a lot of pressure to support the revised and lower academic standards because in the next few weeks it is required to present plans to the state for turning around as many as 250 schools that are labeled as "failing."
  • Merryl Tisch, Chancellor of the State Board of Regents, described the change as an effort to "back-fill opportunities for students with different interests, with different opportunities, with different choice."
  • The need to educate immigrants and to understand global issues like ISIS and Ebola remain pressing, but I guess not for New York State high school students. Right now, it looks like social studies advocates have lost the battle and we are finally witnessing the end of history.

BBC - Future - The countries that don't exist

  • In the deep future, every territory we know could eventually become a country that doesn’t exist.
    • silveiragu: Contrary to the human expectation that situations remain constant.
  • There really is a secret world of hidden independent nations
  • Middleton, however, is here to talk about countries missing from the vast majority of books and maps for sale here. He calls them the “countries that don’t exist”
    • silveiragu: Reminds us of our strange relationship with nationalism: we forget how artificial countries' boundaries are.
  • The problem, he says, is that we don’t have a watertight definition of what a country is. “Which, as a geographer, is kind of shocking.”
  • The globe, it turns out, is full of small (and not so small) regions that have all the trappings of a real country
  • and are ignored on most world maps.
  • Middleton, a geographer at the University of Oxford, has now charted these hidden lands in his new book, An Atlas of Countries that Don’t Exist
  • Middleton’s quest began, appropriately enough, with Narnia
    • silveiragu: Interesting connection to imagination as a way of knowing.
  • a defined territory, a permanent population, a government, and “the capacity to enter into relations with other states”.
  • In Australia, meanwhile, the Republic of Murrawarri was founded in 2013, after the indigenous tribe wrote a letter to Queen Elizabeth II asking her to prove her legitimacy to govern their land.
  • Yet many countries that meet these criteria aren‘t members of the United Nations (commonly accepted as the final seal of a country’s statehood).
  • many of them are instead members of the “Unrepresented United Nations” – an alternative body to champion their rights.
  • A handful of the names will be familiar to anyone who has read a newspaper: territories such as Taiwan, Tibet, Greenland, and Northern Cyprus.
  • The others are less famous, but they are by no means less serious
    • silveiragu: By what criterion, "serious"?
  • One of the most troubling histories, he says, concerns the Republic of Lakotah (with a population of 100,000). Bang in the centre of the United States of America (just east of the Rocky Mountains), the republic is an attempt to reclaim the sacred Black Hills for the Lakota Sioux tribe.
  • Their plight began in the 18th Century, and by 1868 they had finally signed a deal with the US government that promised the right to live on the Black Hills. Unfortunately, they hadn’t accounted for a gold rush
  • Similar battles are being fought across every continent.
  • In fact, you have almost certainly, unknowingly, visited one.
  • Christiania, an enclave in the heart of Copenhagen.
  • On 26 September that year, they declared it independent, with its own “direct democracy”, in which each of the inhabitants (now numbering 850) could vote on any important matter.
    • silveiragu: Interesting reminder that the label "country" does not have to arise only from military or economic struggles, as is tempting to think in our study of history; also that the label of "country", by itself, means nothing.
  • a blind eye to the activities
    • silveiragu: That is really why any interest is demonstrated towards this topic. Not that some country named Christiania exists in the heart of Denmark, but that they can legitimately call themselves a nation. We have grown up, and our parents have grown up, with a rigid definition of nationalism, and the strange notion that the lines in an atlas were always there. One interpretation of the Danish government's response to Christiania is simply that they do not know what to think. Although probably not geopolitically significant, such enclave states represent a challenge to our perception of countries, one which fascinates Middleton's readers because it disconcerts them.
  • perhaps we need to rethink the concept of the nation-state altogether? He points to Antarctica, a continent shared peacefully among the international community
    • silveiragu: A sign of progress, perhaps, from the industrialism-spurred cycle of "divide land, industrialize, repeat", even if the chief reason is the region's climate.
  • The last pages of Middleton’s Atlas contain two radical examples that question everything we think we mean by the word ‘country’.
    • silveiragu: These "nonexistent countries", and our collective disregard for them, are reminiscent of the 17th and 18th centuries: then, the notion of identifying by national lines was almost as strange and artificial as these countries' borders seem to us today.
  • “They all raise the possibility that countries as we know them are not the only legitimate basis for ordering the planet.”

Review: In 'The End of Average,' Cheers for Individual Complexity - The New York Times

  • All of us want to be normal, yet none of us want to be average.
  • We march through life measuring ourselves on one scale after another
  • Must the tyranny of the group rule us from cradle to grave? Absolutely not, says Todd Rose in a subversive and readable introduction to what has been called the new science of the individual.
  • Dr. Rose lays the blame for our modern obsession directly at the feet of Adolphe Quetelet, a 19th-century Belgian mathematician. Quetelet was an early data cruncher, the first to apply statistical tools to large groups of people
  • Among his accomplishments was devising the body mass index, a ratio of weight to height squared that we still use to decide if people are too big or small. For him, the average was the optimal; normal was the best thing any human could ever possibly be.
  • Not so for one of his intellectual heirs, Francis Galton of Britain, who agreed that averages were excellent tools for understanding individuals. Ultimately, though, he came to the conclusion that the average defined not the optimal but simply the mediocre, a mark to be measured only so that it could be surpassed.
  • “Typing and ranking have come to seem so elementary, natural and right that we are no longer conscious of the fact that every such judgment always erases the individuality of the person being judged,”
  • But anyone who works with people can cite case after case in which the standard metrics disappoint.
  • For educators, it’s all those brilliant underachievers.
  • For doctors, it’s all the outliers who survive dire disease predictors
  • But if we are not averagerians, then who are we?
  • he sets forth a variety of alternate principles. Among them is the not-unfamiliar notion that all human characteristics are multidimensional, not only in specifics but also in time and context.
  • In other words, big data may have landed us in the Age of Average, but really enormous data, with many observations of a single person’s biology and behavior taken over time and in different contexts, may yield a far better understanding of that individual than do group norms.
  • In life and in health, different pathways may lead to the same end.
  • Dr. Rose spends much of his narrative in the worlds of education and business, offering up examples of schools and companies that have defied the rule of the average, to the benefit of all. His argument will resonate in many other contexts, though: Readers will be moved to examine their own averagerian prejudices, most so ingrained as to be almost invisible, all worthy of review.

Facebook will now ask users to rank news organizations they trust - The Washington Post

  • Zuckerberg wrote that Facebook is not “comfortable” deciding which news sources are the most trustworthy in a “world with so much division.”
  • "We decided that having the community determine which sources are broadly trusted would be most objective," he wrote.
  • The new trust rankings will emerge from surveys the company is conducting. "Broadly trusted" outlets that are affirmed by a significant cross-section of users may see a boost in readership, while less known organizations or start-ups receiving poor ratings could see their web traffic decline
  • The company's changes include an effort to boost the content of local news outlets, which have suffered sizable subscription and readership declines
  • The changes follow another major News Feed redesign, announced last week, in which Facebook said users would begin to see less content from news organizations and brands in favor of "meaningful" posts from friends and family.
  • Currently, 5 percent of Facebook posts are generated by news organizations; that number is expected to drop to 4 percent after the redesign, Zuckerberg said.
  • On Friday, Google announced it would cancel a two-month-old experiment, called Knowledge Panel, that informed its users that a news article had been disputed by independent fact-checking organizations. Conservatives had complained the feature unfairly targeted a right-leaning outlet.
  • More than two-thirds of Americans now get some of their news from social media, according to Pew Research Center.
  • That shift has empowered Facebook and Google, putting them in an uncomfortable position of deciding what news they should distribute to their global audiences. But it also has led to questions about whether these corporations should be considered media companies.
  • "Just by putting things out to a vote in terms of what the community would find trustworthy undermines the role for any serious institutionalized process to determine what’s quality and what’s not,” he said.
  • further criticism that the social network had become vulnerable to bad actors seeking to spread disinformation.
  • Jay Rosen, a journalism professor at New York University, said that Facebook learned the wrong lesson from Trending Topics, which was to try to avoid politics at all costs
  • “One of the things that can happen if you are determined to avoid politics at all costs is you are driven to illusory solutions,” he said. “I don’t think there is any alternative to using your judgement. But Facebook is convinced that there is. This idea that they can avoid judgement is part of their problem.”
  • Facebook revealed few details about how it is conducting its trust surveys,
  • "The hard question we've struggled with is how to decide what news sources are broadly trusted," Zuckerberg wrote. "We could try to make that decision ourselves, but that's not something we're comfortable with. We considered asking outside experts, which would take the decision out of our hands but would likely not solve the objectivity problem. Or we could ask you -- the community -- and have your feedback determine the ranking."
  • Some experts wondered whether Facebook's latest effort could be gamed.
  • "This seems like a positive step toward improving the news environment on Facebook," Diresta said. "That said, the potential downside is that the survey approach unfairly penalizes emerging publications."

How Tech Can Turn Doctors Into Clerical Workers - The New York Times

  • what I see in my colleague is disillusionment, and it has come too early, and I am seeing too much of it.
  • In America today, the patient in the hospital bed is just the icon, a place holder for the real patient who is not in the bed but in the computer. That virtual entity gets all our attention. Old-fashioned “bedside” rounds conducted by the attending physician too often take place nowhere near the bed but have become “card flip” rounds
  • My young colleague slumping in the chair in my office survived the student years, then three years of internship and residency and is now a full-time practitioner and teacher. The despair I hear comes from being the highest-paid clerical worker in the hospital: For every one hour we spend cumulatively with patients, studies have shown, we spend nearly two hours on our primitive Electronic Health Records, or “E.H.R.s,” and another hour or two during sacred personal time.
  • The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? What are they doing? Hello! It’s my body, you know
  • Our $3.4 trillion health care system is responsible for more than a quarter of a million deaths per year because of medical error, the rough equivalent of, say, a jumbo jet’s crashing every day. [A quick arithmetic check follows this list.]
  • I can get cash and account details all over America and beyond. Yet I can’t reliably get a patient record from across town, let alone from a hospital in the same state, even if both places use the same brand of E.H.R
  • the leading E.H.R.s were never built with any understanding of the rituals of care or the user experience of physicians or nurses. A clinician will make roughly 4,000 keyboard clicks during a busy 10-hour emergency-room shift
  • In the process, our daily progress notes have become bloated cut-and-paste monsters that are inaccurate and hard to wade through. A half-page, handwritten progress note of the paper era might in a few lines tell you what a physician really thought
  • so much of the E.H.R., but particularly the physical exam it encodes, is a marvel of fiction, because we humans don’t want to leave a check box empty or leave gaps in a template.
  • For a study, my colleagues and I at Stanford solicited anecdotes from physicians nationwide about patients for whom an oversight in the exam (a “miss”) had resulted in real consequences, like diagnostic delay, radiation exposure, therapeutic or surgical misadventure, even death. They were the sorts of things that would leave no trace in the E.H.R. because the recorded exam always seems complete — and yet the omission would be glaring and memorable to other physicians involved in the subsequent care. We got more than 200 such anecdotes.
  • The reason for these errors? Most of them resulted from exams that simply weren’t done as claimed. “Food poisoning” was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.
  • I worry that such mistakes come because we’ve gotten trapped in the bunker of machine medicine. It is a preventable kind of failure
  • How we salivated at the idea of searchable records, of being able to graph fever trends, or white blood counts, or share records at a keystroke with another institution — “interoperability”
  • The seriously ill patient has entered another kingdom, an alternate universe, a place and a process that is frightening, infantilizing; that patient’s greatest need is both scientific state-of-the-art knowledge and genuine caring from another human being. Caring is expressed in listening, in the time-honored ritual of the skilled bedside exam — reading the body — in touching and looking at where it hurts and ultimately in localizing the disease for patients not on a screen, not on an image, not on a biopsy report, but on their bodies.
  • What if the computer gave the nurse the big picture of who he was both medically and as a person?
  • a professor at M.I.T. whose current interest in biomedical engineering is “bedside informatics,” marvels at the fact that in an I.C.U., a blizzard of monitors from disparate manufacturers display EKG, heart rate, respiratory rate, oxygen saturation, blood pressure, temperature and more, and yet none of this is pulled together, summarized and synthesized anywhere for the clinical staff to use
  • What these monitors do exceedingly well is sound alarms, an average of one alarm every eight minutes, or more than 180 per patient per day. What is our most common response to an alarm? We look for the button to silence the nuisance because, unlike those in a Boeing cockpit, say, our alarms are rarely diagnosing genuine danger.
  • By some estimates, more than 50 percent of physicians in the United States have at least one symptom of burnout, defined as a syndrome of emotional exhaustion, cynicism and decreased efficacy at work
  • It is on the increase, up by 9 percent from 2011 to 2014 in one national study. This is clearly not an individual problem but a systemic one, a 4,000-key-clicks-a-day problem.
  • The E.H.R. is only part of the issue: Other factors include rapid patient turnover, decreased autonomy, merging hospital systems, an aging population, the increasing medical complexity of patients. Even if the E.H.R. is not the sole cause of what ails us, believe me, it has become the symbol of burnout.
  • burnout is one of the largest predictors of physician attrition from the work force. The total cost of recruiting a physician can be nearly $90,000, but the lost revenue per physician who leaves is between $500,000 and $1 million, even more in high-paying specialties.
  • I hold out hope that artificial intelligence and machine-learning algorithms will transform our experience, particularly if natural-language processing and video technology allow us to capture what is actually said and done in the exam room.
  • as with any lab test, what A.I. will provide is at best a recommendation that a physician using clinical judgment must decide how to apply.
  • True clinical judgment is more than addressing the avalanche of blood work, imaging and lab tests; it is about using human skills to understand where the patient is in the trajectory of a life and the disease, what the nature of the patient’s family and social circumstances is and how much they want done.
  • Much of that is a result of poorly coordinated care, poor communication, patients falling through the cracks, knowledge not being transferred and so on, but some part of it is surely from failing to listen to the story and diminishing skill in reading the body as a text.
  • As he was nearing death, Avedis Donabedian, a guru of health care metrics, was asked by an interviewer about the commercialization of health care. “The secret of quality,” he replied, “is love.”
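An editorial aside, not from the article: a quick arithmetic check of the jumbo-jet comparison above, assuming a typical wide-body capacity of roughly 400–500 passengers.

    \frac{250{,}000\ \text{deaths/year}}{365\ \text{days/year}} \approx 685\ \text{deaths/day}

At 400–500 passengers per aircraft, 685 deaths a day is slightly more than one full jumbo jet, so the article’s comparison is, if anything, conservative.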

The Navy's USS Gabrielle Giffords and the Future of Work - The Atlantic

  • Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone.
  • Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles.
  • If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.”
  • “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.”
  • The phenomenon is sped by automation, which usurps routine tasks, leaving employees to handle the nonroutine and unanticipated—and the continued advance of which throws the skills employers value into flux
  • Or, for that matter, on the relevance of the question “What do you want to be when you grow up?”
  • By 2020, a 2016 World Economic Forum report predicted, “more than one-third of the desired core skill sets of most occupations” will not have been seen as crucial to the job when the report was published
  • I asked John Sullivan, a prominent Silicon Valley talent adviser, why should anyone take the time to master anything at all? “You shouldn’t!” he replied.
  • Minimal manning—and the evolution of the economy more generally—requires a different kind of worker, with not only different acquired skills but different inherent abilities
  • It has implications for the nature and utility of a college education, for the path of careers, for inequality and employability—even for the generational divide.
  • Then, in 2001, Donald Rumsfeld arrived at the Pentagon. The new secretary of defense carried with him a briefcase full of ideas from the corporate world: downsizing, reengineering, “transformational” technologies. Almost immediately, what had been an experimental concept became an article of faith
  • But once cadets got into actual command environments, which tend to be fluid and full of surprises, a different picture emerged. “Psychological hardiness”—a construct that includes, among other things, a willingness to explore “multiple possible response alternatives,” a tendency to “see all experience as interesting and meaningful,” and a strong sense of self-confidence—was a better predictor of leadership ability in officers after three years in the field.
  • Because there really is no such thing as multitasking—just a rapid switching of attention—I began to feel overstrained, put upon, and finally irked by the impossible set of concurrent demands. Shouldn’t someone be giving me a hand here? This, Hambrick explained, meant I was hitting the limits of working memory—basically, raw processing power—which is an important aspect of “fluid intelligence” and peaks in your early 20s. This is distinct from “crystallized intelligence”—the accumulated facts and know-how on your hard drive—which peaks in your 50s.
  • Others noticed the change but continued to devote equal attention to all four tasks. Their scores fell. This group, Hambrick found, was high in “conscientiousness”—a trait that’s normally an overwhelming predictor of positive job performance. We like conscientious people because they can be trusted to show up early, double-check the math, fill the gap in the presentation, and return your car gassed up even though the tank was nowhere near empty to begin with. What struck Hambrick as counterintuitive and interesting was that conscientiousness here seemed to correlate with poor performance.
  • he discovered another correlation in his test: The people who did best tended to score high on “openness to new experience”—a personality trait that is normally not a major job-performance predictor and that, in certain contexts, roughly translates to “distractibility.”
  • To borrow the management expert Peter Drucker’s formulation, people with this trait are less focused on doing things right, and more likely to wonder whether they’re doing the right things.
  • High in fluid intelligence, low in experience, not terribly conscientious, open to potential distraction—this is not the classic profile of a winning job candidate. But what if it is the profile of the winning job candidate of the future?
  • One concerns “grit”—a mind-set, much vaunted these days in educational and professional circles, that allows people to commit tenaciously to doing one thing well
  • These ideas are inherently appealing; they suggest that dedication can be more important than raw talent, that the dogged and conscientious will be rewarded in the end.
  • he studied West Point students and graduates.
  • Traditional measures such as SAT scores and high-school class rank “predicted leader performance in the stable, highly regulated environment of West Point” itself.
  • It would be supremely ironic if the advance of the knowledge economy had the effect of devaluing knowledge. But that’s what I heard, recurrently.
  • “Fluid, learning-intensive environments are going to require different traits than classical business environments,” I was told by Frida Polli, a co-founder of an AI-powered hiring platform called Pymetrics. “And they’re going to be things like ability to learn quickly from mistakes, use of trial and error, and comfort with ambiguity.”
  • “We’re starting to see a big shift,” says Guy Halfteck, a people-analytics expert. “Employers are looking less at what you know and more and more at your hidden potential” to learn new things
  • advice to employers? Stop hiring people based on their work experience. Because in these environments, expertise can become an obstacle.
  • “The Curse of Expertise.” The more we invest in building and embellishing a system of knowledge, they found, the more averse we become to unbuilding it.
  • All too often experts, like the mechanic in LePine’s garage, fail to inspect their knowledge structure for signs of decay. “It just didn’t occur to him,” LePine said, “that he was repeating the same mistake over and over.”
  • The devaluation of expertise opens up ample room for different sorts of mistakes—and sometimes creates a kind of helplessness.
  • Aboard littoral combat ships, the crew lacks the expertise to carry out some important tasks, and instead has to rely on civilian help
  • Meanwhile, the modular “plug and fight” configuration was not panning out as hoped. Converting a ship from sub-hunter to minesweeper or minesweeper to surface combatant, it turned out, was a logistical nightmare
  • So in 2016 the concept of interchangeability was scuttled for a “one ship, one mission” approach, in which the extra 20-plus sailors became permanent crew members
  • “As equipment breaks, [sailors] are required to fix it without any training,” a Defense Department Test and Evaluation employee told Congress. “Those are not my words. Those are the words of the sailors who were doing the best they could to try to accomplish the missions we gave them in testing.”
  • These results were, perhaps, predictable given the Navy’s initial, full-throttle approach to minimal manning—and are an object lesson on the dangers of embracing any radical concept without thinking hard enough about the downsides
  • a world in which mental agility and raw cognitive speed eclipse hard-won expertise is a world of greater exclusion: of older workers, slower learners, and the less socially adept.
  • if you keep going down this road, you end up with one really expensive ship with just a few people on it who are geniuses … That’s not a future we want to see, because you need a large enough crew to conduct multiple tasks in combat.
  • What does all this mean for those of us in the workforce, and those of us planning to enter it? It would be wrong to say that the 10,000-hours-of-deliberate-practice idea doesn’t hold up at all. In some situations, it clearly does
  • A spinal surgery will not be performed by a brilliant dermatologist. A criminal-defense team will not be headed by a tax attorney. And in tech, the demand for specialized skills will continue to reward expertise handsomely.
  • But in many fields, the path to success isn’t so clear. The rules keep changing, which means that highly focused practice has a much lower return
  • In uncertain environments, Hambrick told me, “specialization is no longer the coin of the realm.”
  • It leaves us with lifelong learning,
  • I found myself the target of career suggestions. “You need to be a video guy, an audio guy!” the Silicon Valley talent adviser John Sullivan told me, alluding to the demise of print media
  • I found the prospect of starting over just plain exhausting. Building a professional identity takes a lot of resources—money, time, energy. After it’s built, we expect to reap gains from our investment, and—let’s be honest—even do a bit of coasting. Are we equipped to continually return to apprentice mode? Will this burn us out?
  • Everybody I met on the Giffords seemed to share that mentality. They regarded every minute on board—even during a routine transit back to port in San Diego Harbor—as a chance to learn something new.

What Does Quantum Physics Actually Tell Us About the World? - The New York Times

  • The physics of atoms and their ever-smaller constituents and cousins is, as Adam Becker reminds us more than once in his new book, “What Is Real?,” “the most successful theory in all of science.” Its predictions are stunningly accurate, and its power to grasp the unseen ultramicroscopic world has brought us modern marvels.
  • But there is a problem: Quantum theory is, in a profound way, weird. It defies our common-sense intuition about what things are and what they can do.
  • Indeed, Heisenberg said that quantum particles “are not as real; they form a world of potentialities or possibilities rather than one of things or facts.”
  • Before he died, Richard Feynman, who understood quantum theory as well as anyone, said, “I still get nervous with it...I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” The problem is not with using the theory — making calculations, applying it to engineering tasks — but in understanding what it means. What does it tell us about the world?
  • From one point of view, quantum physics is just a set of formalisms, a useful tool kit. Want to make better lasers or transistors or television sets? The Schrödinger equation is your friend. The trouble starts only when you step back and ask whether the entities implied by the equation can really exist. Then you encounter problems that can be described in several familiar ways:
  • Wave-particle duality. Everything there is — all matter and energy, all known forces — behaves sometimes like waves, smooth and continuous, and sometimes like particles, rat-a-tat-tat. Electricity flows through wires, like a fluid, or flies through a vacuum as a volley of individual electrons. Can it be both things at once?
  • The uncertainty principle. Werner Heisenberg famously discovered that when you measure the position (let’s say) of an electron as precisely as you can, you find yourself more and more in the dark about its momentum. And vice versa. You can pin down one or the other but not both. [The standard inequality is spelled out after this list.]
  • The measurement problem. Most of quantum mechanics deals with probabilities rather than certainties. A particle has a probability of appearing in a certain place. An unstable atom has a probability of decaying at a certain instant. But when a physicist goes into the laboratory and performs an experiment, there is a definite outcome. The act of measurement — observation, by someone or something — becomes an inextricable part of the theory
  • The strange implication is that the reality of the quantum world remains amorphous or indefinite until scientists start measuring
  • Other interpretations rely on “hidden variables” to account for quantities presumed to exist behind the curtain.
  • This is disturbing to philosophers as well as physicists. It led Einstein to say in 1952, “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.”
  • “Figuring out what quantum physics is saying about the world has been hard,” Becker says, and this understatement motivates his book, a thorough, illuminating exploration of the most consequential controversy raging in modern science.
  • In a way, the Copenhagen is an anti-interpretation. “It is wrong to think that the task of physics is to find out how nature is,” Bohr said. “Physics concerns what we can say about nature.”
  • Nothing is definite in Bohr’s quantum world until someone observes it. Physics can help us order experience but should not be expected to provide a complete picture of reality. The popular four-word summary of the Copenhagen interpretation is: “Shut up and calculate!”
  • Becker sides with the worriers. He leads us through an impressive account of the rise of competing interpretations, grounding them in the human stories
  • He makes a convincing case that it’s wrong to imagine the Copenhagen interpretation as a single official or even coherent statement. It is, he suggests, a “strange assemblage of claims.”
  • An American physicist, David Bohm, devised a radical alternative at midcentury, visualizing “pilot waves” that guide every particle, an attempt to eliminate the wave-particle duality.
  • Competing approaches to quantum foundations are called “interpretations,” and nowadays there are many. The first and still possibly foremost of these is the so-called Copenhagen interpretation.
  • Perhaps the most popular lately — certainly the most talked about — is the “many-worlds interpretation”: Every quantum event is a fork in the road, and one way to escape the difficulties is to imagine, mathematically speaking, that each fork creates a new universe
  • if you think the many-worlds idea is easily dismissed, plenty of physicists will beg to differ. They will tell you that it could explain, for example, why quantum computers (which admittedly don’t yet quite exist) could be so powerful: They would delegate the work to their alter egos in other universes.
  • When scientists search for meaning in quantum physics, they may be straying into a no-man’s-land between philosophy and religion. But they can’t help themselves. They’re only human.
  • “If you were to watch me by day, you would see me sitting at my desk solving Schrödinger’s equation...exactly like my colleagues,” says Sir Anthony Leggett, a Nobel Prize winner and pioneer in superfluidity. “But occasionally at night, when the full moon is bright, I do what in the physics community is the intellectual equivalent of turning into a werewolf: I question whether quantum mechanics is the complete and ultimate truth about the physical universe.”
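An editorial aside on the uncertainty-principle item above: the trade-off Heisenberg discovered is usually written as the inequality below, where \Delta x and \Delta p are the standard deviations of position and momentum and \hbar is the reduced Planck constant.

    \Delta x \, \Delta p \ge \frac{\hbar}{2}

Tightening one factor forces the other to grow: halving \Delta x at best doubles the smallest achievable \Delta p.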

They're Watching You at Work - Don Peck - The Atlantic

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator. (A telemetry-to-features sketch follows this list.)
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) A toy scoring sketch follows this list.
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk. (A curve-fitting sketch of this inverted-U follows this list.)
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies. (A correlation sketch follows this list.)
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
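The Knack excerpt above describes capturing per-action telemetry (hesitation, action sequences, error recovery) and deriving traits from it. Below is a minimal sketch of that general pattern; the event fields and the two derived features are assumptions for illustration, not Knack’s actual schema or model.

    # Hypothetical gameplay event log -> candidate features.
    # Field names and features are invented; Knack's real pipeline is not public.
    from statistics import mean

    events = [
        {"hesitation_s": 0.8, "error": False, "recovered": False},
        {"hesitation_s": 2.4, "error": True,  "recovered": True},
        {"hesitation_s": 1.1, "error": True,  "recovered": True},
        {"hesitation_s": 0.6, "error": False, "recovered": False},
    ]

    def extract_features(events):
        errors = [e for e in events if e["error"]]
        return {
            # long pauses before acting, as a rough proxy for deliberation
            "mean_hesitation_s": mean(e["hesitation_s"] for e in events),
            # bouncing back from mistakes, as a rough proxy for learning quickly
            "error_recovery_rate": (
                sum(e["recovered"] for e in errors) / len(errors) if errors else None
            ),
        }

    print(extract_features(events))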
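The Xerox excerpt describes an algorithm that digests test responses plus application facts and emits a red/yellow/green rating. Here is a toy three-band rater with invented features, weights, and cutoffs; the article does not disclose the actual model.

    # Toy red/yellow/green rating: weighted sum of normalized (0-1) features.
    # Weights and cutoffs are made up for illustration.
    def rate_candidate(features, weights, cutoffs=(0.4, 0.7)):
        score = sum(weights[name] * value for name, value in features.items())
        normalized = score / sum(weights[name] for name in features)
        if normalized < cutoffs[0]:
            return "red"      # poor candidate
        if normalized < cutoffs[1]:
            return "yellow"   # middling
        return "green"        # hire away

    weights = {"personality_fit": 2.0, "cognitive_skill": 1.5, "scenario_judgment": 1.0}
    applicant = {"personality_fit": 0.8, "cognitive_skill": 0.7, "scenario_judgment": 0.9}
    print(rate_candidate(applicant, weights))  # -> green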
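Pentland’s badge finding, that team performance tracks the number of face-to-face exchanges but that too many is as bad as too few, describes an inverted-U. A quadratic fit is one simple way to model that shape; the data points below are fabricated for illustration.

    # Fit performance = a*x^2 + b*x + c to (exchange count, performance) pairs
    # and locate the vertex, i.e., the estimated "just right" exchange rate.
    import numpy as np

    exchanges = np.array([5, 10, 20, 30, 40, 60, 80])               # per day
    performance = np.array([0.30, 0.50, 0.75, 0.80, 0.70, 0.50, 0.35])

    a, b, c = np.polyfit(exchanges, performance, deg=2)
    peak = -b / (2 * a)  # vertex of the downward-opening parabola
    print(f"estimated optimum: about {peak:.0f} exchanges per day")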
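The Gild excerpts say the company trusts a language signal only because it correlates with its scores of open-source code. The core of that validation step is a plain correlation; the numbers and the phrase-usage feature below are made up.

    # Correlate a hypothetical phrase-usage feature with code-quality scores.
    # A strong correlation would justify using the phrase as a proxy signal
    # for programmers who have written no public code.
    import numpy as np

    code_quality = np.array([0.9, 0.4, 0.7, 0.2, 0.8, 0.5])  # per-coder scores
    phrase_usage = np.array([12, 3, 9, 1, 10, 4])            # per-coder counts

    r = np.corrcoef(phrase_usage, code_quality)[0, 1]
    print(f"Pearson r: {r:.2f}")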

What Is the Function of Confirmation Bias? | SpringerLink - 1 views

  • Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’
  • Confirmation bias is typically viewed as an epistemically pernicious tendency. For instance, Mercier and Sperber (2017: 215) maintain that the bias impedes the formation of well-founded beliefs, reduces people’s ability to correct their mistaken views, and makes them, when they reason on their own, “become overconfident” (Mercier 2016: 110).

What Is A Paradigm? - 0 views

  • A scientific paradigm is a framework containing all the commonly accepted views about a subject, conventions about what direction research should take and how it should be performed.
  • Paradigms contain all the distinct, established patterns, theories, common methods and standards that allow us to recognize an experimental result as belonging to a field or not.
  • The vocabulary and concepts in Newton’s three laws or the central dogma in biology are examples of scientific “open resources” that scientists have adopted and which now form part of the scientific paradigm.
  • ...12 more annotations...
  • A paradigm dictates:
  • what is observed and measured
  • the questions we ask about those observations
  • how the questions are formulated
  • how the results are interpreted
  • how research is carried out
  • what equipment is appropriate
  • In fact, Kuhn strongly suggested that research in a deeply entrenched paradigm invariably ends up reinforcing that paradigm, since anything that contradicts it is ignored or else pressed through the preset methods until it conforms to already established dogma
  • The body of pre-existing evidence in a field conditions and shapes the collection and interpretation of all subsequent evidence. The certainty that the current paradigm is reality itself is precisely what makes it so difficult to accept alternatives.
  • It is very common for scientists to discard certain models or pick up emerging theories. But once in a while, enough anomalies accumulate within a field that the entire paradigm itself is required to change to accommodate them.
  • Many physicists in the 19th century were convinced that the Newtonian paradigm that had reigned for 200 years was the pinnacle of discovery and that scientific progress was more or less a question of refinement. When Einstein published his theories on General Relativity, it was not just another idea that could fit comfortably into the existing paradigm. Instead, Newtonian Physics itself was relegated to being a special subclass of the greater paradigm ushered in by General Relativity. Newton’s three laws are still faithfully taught in schools; however, we now operate within a paradigm that puts those laws into a much broader context
  • The concept of paradigm is closely related to the Platonic and Aristotelian views of knowledge. Aristotle believed that knowledge could only be based upon what is already known, the basis of the scientific method. Plato believed that knowledge should be judged by what something could become, the end result, or final purpose. Plato's philosophy is more like the intuitive leaps that cause scientific revolution; Aristotle's the patient gathering of data.

Children's Screen Time Has Soared in the Pandemic, Alarming Parents and Researchers - T... - 0 views

  • overlooked the vastly increasing time that his son was spending on video games and social media
    • adonahue011
       
      Very important and notable to all of our lives
  • calling his phone his “whole life.”
    • adonahue011
       
      This seems extreme and unreasonable, but technology is very important to our generation.
  • ...24 more annotations...
  • “I’m not losing my son to this.”
  • are watching their children slide down an increasingly slippery path into an all-consuming digital life.
    • adonahue011
       
      Very important note because some want this all-virtual to be a future but forget the toll it takes on us as humans.
  • “There will be a period of epic withdrawal,”
  • that children’s brains, well through adolescence, are considered “plastic,” meaning they can adapt and shift to changing circumstances.
  • telling parents not to feel guilty about allowing more screen time, given the stark challenges of lockdowns. Now, she said, she’d have given different advice if she had known how long children would end up stuck at home.
    • adonahue011
       
      I think that her advice was good for the beginning of quarantine because anything to allow our brains to be stimulated during that period helped.
  • “I probably would have encouraged families to turn off Wi-Fi except during school hours so kids don’t feel tempted every moment, night and day,”
  • nine months of 2020, an increase of 82 percent over the year before.
    • adonahue011
       
      I wonder how this effects younger kids mental progression
  • In the United States, for instance, children spent, on average, 97 minutes a day on YouTube in March and April, up from 57 minutes in February, and nearly double the use a year prior
  • “The Covid Effect.”
    • adonahue011
       
      Have never heard of "the covid effect" makes perfect sense though.
  • What concerns researchers, at a minimum, is that the use of devices is a poor substitute for activities known to be central to health, social and physical development, including physical play and other interactions that help children learn how to confront challenging social situations.
    • adonahue011
       
      Similar to TOK and how our brains learn to interact with others
  • Dr. Briasouli said. Some days, she said, she watches her son sit with three devices, alternating play among them.
  • “These are the tools of their lives,” he said. “Everything they will do, they will do through one of these electronic devices, socialization included.”
    • adonahue011
       
      Seems to be the clear counter argument
  • “he laughs and has some social interaction with his buddies,”
    • adonahue011
       
      The social aspect is also very important
  • said he believed that adults and children alike could, with disciplined time away from devices, learn to disconnect. But doing so has become complicated by the fact that the devices now are at once vessels for school, social life, gaming and other activities central to life.
  • Dr. Radesky said that the mingling of all of these functions not only gives children a chance to multitask, it also allows young people to “escape” from any uncomfortable moment they may face.
    • adonahue011
       
      This is so true and I think almost everyone I know does this all the time.
  • Instead, he hangs out online with his old friends.
  • “I’ve failed you as a father,”

Dominion Voting Systems Official Is In Hiding After Threats : NPR - 0 views

  • It's just the latest example of how people's lives are being upended and potentially ruined by the unprecedented flurry of disinformation this year.
  • As people experience their own individual Internet bubbles, it can be hard to recognize just how much misinformation exists and how the current information ecosystem compares with previous years.
  • NewsGuard, which vets news sources based on transparency and reliability standards, found recently that among the top 100 sources of news in the U.S., sources it deemed unreliable had four times as many interactions this year compared with 2019.
  • ...4 more annotations...
  • But election integrity advocates worry the disinformation won't truly begin to recede until political leaders such as Trump stop questioning the election's legitimacy.
  • Even in an election where almost all the voting was recorded on paper ballots and rigorous audits were done more than ever before, none of that helps if millions of people are working with an alternative set of facts,
  • Even if an election is run perfectly, it doesn't matter to a sizable portion of the public who believes it was unfair. No amount of transparency at the county and state level can really combat the sort of megaphone that Trump wields
  • "When we're in the realm of coupling disinformation from both foreign and domestic sources, and government and nongovernment sources, and none of it is really grounded in reality ... evidence doesn't help much,

Metacontrol and body ownership: divergent thinking increases the virtual hand illusion ... - 0 views

  • The virtual hand illusion (VHI) paradigm demonstrates that people tend to perceive agency and bodily ownership for a virtual hand that moves in synchrony with their own movements. Given that this kind of effect can be taken to reflect self–other integration (i.e., the integration of some external, novel event into the representation of oneself), and given that self–other integration has been previously shown to be affected by metacontrol states (biases of information processing towards persistence/selectivity or flexibility/integration), we tested whether the VHI varies in size depending on the metacontrol bias. Persistence and flexibility biases were induced by having participants carry out a convergent thinking (Remote Associates) task or divergent-thinking (Alternate Uses) task, respectively, while experiencing a virtual hand moving synchronously or asynchronously with their real hand. Synchrony-induced agency and ownership effects were more pronounced in the context of divergent thinking than in the context of convergent thinking, suggesting that a metacontrol bias towards flexibility promotes self–other integration.
  • As in previous studies, participants were more likely to experience subjective agency and ownership for a virtual hand if it moved in synchrony with their own, real hand. As predicted, the size of this effect was significantly moderated by the type of creativity task in the context of which the illusion was induced.
  • It is important to keep in mind the fact that our present findings were obtained in a paradigm that strongly interleaved what we considered the task prime (i.e., the particular creativity task) and the induction of the VHI—the process we aimed to prime. The practical reason to do so was to increase the probability that the metacontrol state that the creativity tasks were hypothesized to induce or establish would be sufficiently close in time to the synchrony manipulation to have an impact on the thereby induced changes in self-perception. However, this implies that we are unable to disentangle the effects of the task prime proper and the effects of possible interactions between this task prime and the synchrony manipulation. There are indeed reasons to assume that such interactions are not unlikely to have occurred
  • ...2 more annotations...
  • and that they would make perfect theoretical sense. The observation that the VHI was affected by the type of creativity task and performance in the creativity tasks was affected by the synchrony manipulation suggests some degree of overlap between the ways that engaging in particular creativity tasks and experiencing particular degrees of synchrony are able to bias perceived ownership and agency. In terms of our theoretical framework, this implies that engaging in divergent thinking biases metacontrol towards flexibility in similar ways as experiencing synchrony between one’s own movements and those of a virtual effector does, while engaging in convergent thinking biases metacontrol towards persistence as experiencing asynchrony does. What the present findings demonstrate is that both kinds of manipulation together bias the VHI in the predicted direction, but they do not allow us to statistically or numerically separate and estimate the contribution that each of the two confounded manipulations might have made. Accordingly, the present findings should not be taken to provide conclusive evidence that priming tasks alone are able to change self-perception without being supported (and perhaps even enabled) by the experience of synchrony between proprioceptive and visual action feedback.
  • This article relates to the ownership module. It talks about an experiment with VHI that is very interesting.

Why Do People Believe in Conspiracy Theories? | Psychology Today - 0 views

  • The researchers found that reasons for believing in conspiracy theories can be grouped into three categories:
  • the desire for understanding and certainty
  • the desire for control and security
  • the desire to maintain a positive self-image
  • Seeking explanations for events is a natural human desire
  • We all harbor false beliefs, that is, things we believe to be true but in fact are not.
  • ...7 more annotations...
  • After all, you were simply misinformed, and you’re not emotionally invested in it.
  • Conspiracy theories are also false beliefs, by definition. But people who believe in them have a vested interest in maintaining them
  • People need to feel they’re in control of their lives
  • conspiracy theories can give their believers a sense of control and security. This is especially true when the alternative account feels threatening.
  • Research shows that people who feel socially marginalized are more likely to believe in conspiracy theories. We all have a desire to maintain a positive self-image, which usually comes from the roles we play in life—our jobs and our relationships with family and friends
  • Most people who believe global warming is real or that vaccines are safe don’t do so because they understand the science. Rather, they trust the experts
  • we have a good understanding of what motivates people to believe in conspiracy theories. That is, they do so because of three basic needs we all have: to understand the world around us, to feel secure and in control, and to maintain a positive self-image.

Tech-averse Supreme Court could be forced into modern era - CNNPolitics - 0 views

  • The coronavirus pandemic is forcing all courts to alter their procedures, but the US Supreme Court, imbued with an archaic, insular air and a majority of justices over age 65, will face a distinct challenge to keep operating and provide public access to proceedings.
  • The virus is bound to force Supreme Court justices into new territory. They may open their operations in more modern ways. Or, if they move in the opposite direction and shun any high-tech alternative, they might postpone all previously scheduled March and April oral argument sessions, a total 20 disputes, until next summer or fall.
  • This very practical dilemma comes as the justices already have one of the most substantively difficult slates of cases in years, testing abortion rights, anti-bias protections for LGBTQ workers, and the Trump administration's plan for deportation of certain undocumented immigrants who came to the US as children. (Those cases have already been argued, and the justices are drafting opinions to be released later this spring.)
  • ...1 more annotation...
  • If they are weighing a more sophisticated audio or visual connection -- to each other, and to the public -- the justices have the support of an on-site technology team and young law clerks, four per chamber. At the other end of the spectrum, they might weigh canceling the remaining argument sessions and resolve the cases based only on the written briefs filed. Those lengthy filings are more comprehensive than lawyers' presentations in hour-long oral sessions.