TOK Friends / Group items tagged "possible"

Javier E

Taking back the economy: the market as a Res Publica | openDemocracy - 0 views

  • Freedom in the republican tradition requires enjoyment of the fundamental liberties with the security that only a rule of law can provide. You must be publicly protected and resourced in such a way that it is manifest to you and to all that under local (not unnecessarily restrictive) conventions: you can speak your mind, associate with your fellows, enjoy communal resources, locate where you will, move occupation and make use of what is yours, without reason for fearing anyone or deferring to anyone. You have the standing of a liber or free person; you enjoy equal status under the public order and you share equally in control over that order.
  • The rules of public order constitute the possibility of private life in the way in which the rules of a game like chess constitute the possibility of playing that game. They represent enabling (or enabling-cum-constraining) rules, not rules that merely regulate a pre-existing domain.
  • This republican image runs into sharp conflict with a more received picture, celebrated by right-wing libertarians, according to which the rules of public order regulate the private sphere rather than serving – now in the fashion of one culture, now in the fashion of another – to make it possible
  • ...6 more annotations...
  • The conflict between the images is important because it shows up in alternative visions of the economy and the relationship between the economy and the state.
  • On the republican picture, owning is a relationship that presupposes law, if only the inchoate law of informal custom.
  • You own something only insofar as it is a matter of accepted convention that given the way you came to hold it — given public recognition of the title you have to the property — you enjoy public protection against those who would take it from you
  • This view of property, prominent in Rousseau and presupposed in the broader republican tradition, is scarcely questionable in view of the salient diversity in systems of property
  • These observations, scarcely richer than platitudes, are important for giving us a perspective on the market and the economy, undermining the libertarian image. That picture represents the market as a res privata, a private thing, suggesting that the role of the state is merely to lay low the hills in the way of the market and smooth the paths for its operation. And so it depicts any other interventions of government in the market as dubious on philosophical, not just empirical, grounds.
  • this image accounts for the continuing attachment to austerity among those on the right. They are philosophically opposed to Keynesianism, not just opposed on empirical grounds, and their ideological stance makes empirically based arguments for Keynesianism invisible to them.
Javier E

The Geography Of The Good Life « The Dish - 0 views

  • If you already live in the heartland, the message is to stay. If you come from the heartland and have left, the message is to return. But what if you’re one of the tens of millions of people who can’t stay in or go home to the heartland because your home — your roots — are in the BosWash corridor of the Northeast or the urbanized areas of the West Coast? …
  • is this even possible in a place where paying my mortgage and other bills requires that my wife and I — like my equally striving neighbors — devote ourselves to high-stress work during nearly every waking hour of our days? If I were independently wealthy, perhaps the good life that Dreher describes would be a possibility in the Philadelphia suburbs. But alas…
  • [M]aybe the lesson is that the good life is not possible in the Philadelphia suburbs, or any place where in order to keep your head above water, your job has to own you and your wife, and it keeps you from building relationships.
  • ...2 more annotations...
  • There are trade-offs in all things, and no perfect solution, geographical or otherwise. Thing is, life is short, and choices have to be made. It’s not that people living in these workaholic suburbs are bad, not at all; it’s that the culture they (we) live in defines the Good in such a way that choosing to “do the right thing” ends up hollowing out your life, leaving you vulnerable in ways you may not see until tragedy strikes.
  • The life Ruthie lived is a compelling alternative, the witness of which changed my heart. And like the Good Book says, “Where your treasure is, there will your heart be also.”
Javier E

Yelp and the Wisdom of 'The Lonely Crowd' : The New Yorker - 1 views

  • David Riesman spent the first half of his career writing one of the most important books of the twentieth century. He spent the second half correcting its pervasive misprision. “The Lonely Crowd,” an analysis of the varieties of social character that examined the new American middle class
  • the “profound misinterpretation” of the book as a simplistic critique of epidemic American postwar conformity via its description of the contours of the “other-directed character,” whose identity and behavior is shaped by its relationships.
  • he never meant to suggest that Americans now were any more conformist than they ever had been, or that there’s even such a thing as social structure without conformist consensus.
  • ...17 more annotations...
  • In this past weekend’s Styles section of the New York Times, Siegel uses “The Lonely Crowd” to analyze the putative “Yelpification” of contemporary life: according to Siegel, Riesman’s view was that “people went from being ‘inner-directed’ to ‘outer-directed,’ from heeding their own instincts and judgment to depending on the judgments and opinions of tastemakers and trendsetters.” The “conformist power of the crowd” and its delighted ability to write online reviews led Siegel down a sad path to a lackluster expensive dinner.
  • What Riesman actually suggested was that we think of social organization in terms of a series of “ideal types” along a spectrum of increasingly loose authority
  • On one end of the spectrum is a “tradition-directed” community, where we all understand that what we’re supposed to do is what we’re supposed to do because it’s just the thing that one does; authority is unequivocal, and there’s neither the room nor the desire for autonomous action
  • In the middle of the spectrum, as one moves toward a freer distribution of, and response to, authority, is “inner-direction.” The inner-directed character is concerned not with “what one does” but with “what people like us do.” Which is to say that she looks to her own internalizations of past authorities to get a sense for how to conduct her affairs.
  • Contemporary society, Riesman thought, was best understood as chiefly “other-directed,” where the inculcated authority of the vertical (one’s lineage) gives way to the muddled authority of the horizontal (one’s peers).
  • The inner-directed person orients herself by an internal “gyroscope,” while the other-directed person orients herself by “radar.”
  • It’s not that the inner-directed person consults some deep, subjective, romantically sui generis oracle. It’s that the inner-directed person consults the internalized voices of a mostly dead lineage, while her other-directed counterpart heeds the external voices of her living contemporaries.
  • “the gyroscopic mechanism allows the inner-directed person to appear far more independent than he really is: he is no less a conformist to others than the other-directed person, but the voices to which he listens are more distant, of an older generation, their cues internalized in his childhood.” The inner-directed person is, simply, “somewhat less concerned than the other-directed person with continuously obtaining from contemporaries (or their stand-ins: the mass media) a flow of guidance, expectation, and approbation.
  • Riesman drew no moral from the transition from a community of primarily inner-directed people to a community of the other-directed. Instead, he saw that each ideal type had different advantages and faced different problems
  • As Riesman understood it, the primary disciplining emotion under tradition direction is shame, the threat of ostracism and exile that enforces traditional action. Inner-directed people experience not shame but guilt, or the fear that one’s behavior won’t be commensurate with the imago within. And, finally, other-directed folks experience not guilt but a “contagious, highly diffuse” anxiety—the possibility that, now that authority itself is diffuse and ambiguous, we might be doing the wrong thing all the time.
  • Siegel is right to make the inference, if wayward in his conclusions. It makes sense to associate the anxiety of how to relate to livingly diffuse authorities with the Internet, which presents the greatest signal-to-noise-ratio problem in human history.
  • The problem with Yelp is not the role it plays, for Siegel, in the proliferation of monoculture; most people of my generation have learned to ignore Yelp entirely. It’s the fact that, after about a year of usefulness, Yelp very quickly became a terrible source of information.
  • There are several reasons for this. The first is the nature of an algorithmic response to the world. As Jaron Lanier points out in “Who Owns the Future?,” the hubris behind each new algorithm is the idea that its predictive and evaluatory structure is game-proof; but the minute any given algorithm gains real currency, all the smart and devious people devote themselves to gaming it. On Yelp, the obvious case would be garnering positive reviews by any means necessary.
  • A second problem with Yelp’s algorithmic ranking is in the very idea of using online reviews; as anybody with a book on Amazon knows, they tend to draw more contributions from people who feel very strongly about something, positively or negatively. This undermines the statistical relevance of their recommendations.
  • the biggest problem with Yelp is not that it’s a popularity contest. It’s not even that it’s an exploitable popularity contest.
  • it’s the fact that Yelp makes money by selling ads and prime placements to the very businesses it lists under ostensibly neutral third-party review
  • But Yelp’s valuations are always possibly in bad faith, even if its authority is dressed up as the distilled algorithmic wisdom of a crowd. For Riesman, that’s the worst of all possible worlds: a manipulated consumer certainty that only shores up the authority of an unchosen, hidden source. In that world, cold monkfish is the least of our problems.
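
The two failure modes described in the annotations above — strategic gaming of the ranking and the self-selection of strongly opinionated reviewers — are easy to see in a toy simulation. The sketch below is not from the article; all numbers and function names are invented for illustration. It shows how a naive star average can drift away from the honest opinion of the full customer base:

    import random

    random.seed(0)

    def true_opinions(n, mean=3.5, spread=1.0):
        # Honest 1-5 star opinions of the full customer base.
        return [min(5.0, max(1.0, random.gauss(mean, spread))) for _ in range(n)]

    def self_selected(opinions):
        # Only strongly opinionated customers bother to review: the further an
        # opinion sits from neutral (3 stars), the likelier a review gets written.
        return [o for o in opinions if random.random() < abs(o - 3) / 2]

    def with_fake_reviews(reviews, n_fake):
        # A business games the ranking by buying n_fake five-star reviews.
        return reviews + [5.0] * n_fake

    population = true_opinions(2_000)
    reviews = self_selected(population)
    gamed = with_fake_reviews(reviews, n_fake=200)

    mean_of = lambda xs: sum(xs) / len(xs)
    print(f"true mean opinion    : {mean_of(population):.2f}")
    print(f"self-selected reviews: {mean_of(reviews):.2f}")
    print(f"after 200 fake 5s    : {mean_of(gamed):.2f}")

Both distortions move the displayed average without any change in the underlying business — which is the sense in which the displayed number stops being information.
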
kortanekev

Immanuel Kant - Wikipedia, the free encyclopedia - 0 views

  • the agreement between reality and the concepts we use to conceive it arises not because our mental concepts have come to passively mirror reality, but because reality must conform to the human mind's active concepts to be conceivable and at all possible for us to experience. Kant thus regarded the basic categories of the human mind as the transcendental "condition of possibility" for any experience.[6]
  •  
    Kant, the German philosopher, argues that reality must conform to the mind, rather than the mind to reality, for us to begin to understand it. An interesting way to think about our perception.   (Evie, 9/23/16) 
sissij

The Right Way to Fall - The New York Times - 1 views

  • According to paratroopers, stunt professionals, physical therapists and martial arts instructors, there is indeed a “right way” to fall — and it can save you a lot of grief if you know how to do it.
  • The Agency for Healthcare Research and Quality estimates that falls cause more than a third of injury-related emergency room visits, around 7.9 million a year.
  • Moreover, falling straight forward or backward raises the risk of damaging your spine and vital organs.
  • ...4 more annotations...
  • You similarly don’t want to come crashing down on your knee so you break your kneecap or do that maneuver where you kind of pedal with your feet to catch yourself, which can lead to broken bones in your foot and ankle.
  • Paratroopers’ goal is to fall sideways in the direction the wind is carrying them — in no way resisting the momentum of the fall. When the balls of their feet barely reach the ground, they immediately distribute the impact in rapid sequence up through the calf to the thigh and buttocks.
  • Accept that you’re falling and go with it, round your body, and don’t stiffen; distribute the energy so you take the fall in the widest area possible,
  • Young children are arguably the best fallers because they have yet to develop fear or embarrassment, so they just tumble and roll without tensing up and trying to catch themselves.
  •  
    There are techniques and science even in how you choose to fall. After reading this article, I sort of take the advice metaphorically. In the article, it said: "Accept that you're falling and go with it, round your body, and don't stiffen; distribute the energy so you take the fall in the widest area possible." I think it also applies to times when we meet obstacles and fall in our lives. We sometimes just have to accept the grief and go with it. Although there are many novels depicting heroes fighting against their fall, as individuals in reality, I think the better way to deal with our low points is to go with them and let them fade away. Always keeping your pain and grief at high intensity will only lead to a broken heart. --Sissi (1/26/2017)
Javier E

Atul Gawande: Failure and Rescue : The New Yorker - 0 views

  • the critical skills of the best surgeons I saw involved the ability to handle complexity and uncertainty. They had developed judgment, mastery of teamwork, and willingness to accept responsibility for the consequences of their choices. In this respect, I realized, surgery turns out to be no different than a life in teaching, public service, business, or almost anything you may decide to pursue. We all face complexity and uncertainty no matter where our path takes us. That means we all face the risk of failure. So along the way, we all are forced to develop these critical capacities—of judgment, teamwork, and acceptance of responsibility.
  • people admonish us: take risks; be willing to fail. But this has always puzzled me. Do you want a surgeon whose motto is “I like taking risks”? We do in fact want people to take risks, to strive for difficult goals even when the possibility of failure looms. Progress cannot happen otherwise. But how they do it is what seems to matter. The key to reducing death after surgery was the introduction of ways to reduce the risk of things going wrong—through specialization, better planning, and technology.
  • there continue to be huge differences between hospitals in the outcomes of their care. Some places still have far higher death rates than others. And an interesting line of research has opened up asking why.
  • ...8 more annotations...
  • I thought that the best places simply did a better job at controlling and minimizing risks—that they did a better job of preventing things from going wrong. But, to my surprise, they didn’t. Their complication rates after surgery were almost the same as others. Instead, what they proved to be really great at was rescuing people when they had a complication, preventing failures from becoming a catastrophe.
  • this is what distinguished the great from the mediocre. They didn’t fail less. They rescued more.
  • This may in fact be the real story of human and societal improvement. We talk a lot about “risk management”—a nice hygienic phrase. But in the end, risk is necessary. Things can and will go wrong. Yet some have a better capacity to prepare for the possibility, to limit the damage, and to sometimes even retrieve success from failure.
  • When things go wrong, there seem to be three main pitfalls to avoid, three ways to fail to rescue. You could choose a wrong plan, an inadequate plan, or no plan at all. Say you’re cooking and you inadvertently set a grease pan on fire. Throwing gasoline on the fire would be a completely wrong plan. Trying to blow the fire out would be inadequate. And ignoring it—“Fire? What fire?”—would be no plan at all.
  • All policies court failure—our war in Iraq, for instance, or the effort to stimulate our struggling economy. But when you refuse to even acknowledge that things aren’t going as expected, failure can become a humanitarian disaster. The sooner you’re able to see clearly that your best hopes and intentions have gone awry, the better. You have more room to pivot and adjust. You have more of a chance to rescue.
  • But recognizing that your expectations are proving wrong—accepting that you need a new plan—is commonly the hardest thing to do. We have this problem called confidence. To take a risk, you must have confidence in yourself
  • Yet you cannot blind yourself to failure, either. Indeed, you must prepare for it. For, strangely enough, only then is success possible.
  • So you will take risks, and you will have failures. But it’s what happens afterward that is defining. A failure often does not have to be a failure at all. However, you have to be ready for it—will you admit when things go wrong? Will you take steps to set them right?—because the difference between triumph and defeat, you’ll find, isn’t about willingness to take risks. It’s about mastery of rescue.
Javier E

Is the Universe a Simulation? - NYTimes.com - 0 views

  • Mathematical knowledge is unlike any other knowledge. Its truths are objective, necessary and timeless.
  • What kinds of things are mathematical entities and theorems, that they are knowable in this way? Do they exist somewhere, a set of immaterial objects in the enchanted gardens of the Platonic world, waiting to be discovered? Or are they mere creations of the human mind?
  • Many mathematicians, when pressed, admit to being Platonists. The great logician Kurt Gödel argued that mathematical concepts and ideas “form an objective reality of their own, which we cannot create or change, but only perceive and describe.” But if this is true, how do humans manage to access this hidden reality?
  • ...3 more annotations...
  • We don’t know. But one fanciful possibility is that we live in a computer simulation based on the laws of mathematics — not in what we commonly take to be the real world. According to this theory, some highly advanced computer programmer of the future has devised this simulation, and we are unknowingly part of it. Thus when we discover a mathematical truth, we are simply discovering aspects of the code that the programmer used.
  • the Oxford philosopher Nick Bostrom has argued that we are more likely to be in such a simulation than not. If such simulations are possible in theory, he reasons, then eventually humans will create them — presumably many of them. If this is so, in time there will be many more simulated worlds than nonsimulated ones. Statistically speaking, therefore, we are more likely to be living in a simulated world than the real one.
  • The jury is still out on the simulation hypothesis. But even if it proves too far-fetched, the possibility of the Platonic nature of mathematical ideas remains — and may hold the key to understanding our own reality.
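
Bostrom's statistical step, as summarized above, reduces to simple counting. Under an indifference assumption — you are equally likely to be any observer, simulated or not — your credence that you live in a simulation is just the fraction of observers who are simulated. A minimal sketch with invented numbers (not Bostrom's own figures):

    def p_simulated(n_sim_worlds, n_real_worlds=1, observers_per_world=1):
        # Fraction of all observers who live inside a simulation,
        # assuming every world hosts the same number of observers.
        sim = n_sim_worlds * observers_per_world
        real = n_real_worlds * observers_per_world
        return sim / (sim + real)

    # Illustrative only: one base reality running ever more simulations.
    for n in (0, 1, 10, 1_000_000):
        print(f"{n:>9} simulated worlds -> P(simulated) = {p_simulated(n):.6f}")

The force of the argument is that if simulations are possible at all, the number of simulated worlds plausibly dwarfs the number of unsimulated ones, driving the probability toward 1.
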
Javier E

Happiness Is a Warm iPhone - NYTimes.com - 0 views

  • We fall in love with our technology. That’s how we talk about our gadgets — with the language of emotional attachment, with irrational expectations about happily ever after.
  • I loved what was possible with it. Even though I wasn’t able to actually make it do anything, I knew that someone could. And that was enough, the mere idea of a machine, one that anyone could have in their home, that would take strings of symbols and turn them into music, into movement, into something else out there in the world.
  • We’re certainly into the magic zone — and yet, the magic is somehow fading for me. Technology has crossed the uncanny valley; it is simply too good at representing our real world.
  • ...3 more annotations...
  • since buying that first iPhone, I’ve grown too used to new worlds.
  • As everything gets faster and richer and denser with information, as a whole new dimension to our physical world evolves online, some possibilities open up, and others close down. The potential congeals into the actual, the possible calcifies into the practical. What is imaginable gets pared down into what was actually imagined
  • But things are by necessity amazing in a very specific way, and with a very specific visual grammar and conceptual environment — and that environment is one that is closed, controlled, packaged for us. We’re holding magic boxes, boxes that want to serve us and coddle us, instead of challenge us. And how can you love something that doesn’t challenge you?
Javier E

Computers Jump to the Head of the Class - NYTimes.com - 0 views

  • Tokyo University, known as Todai, is Japan’s best. Its exacting entry test requires years of cramming to pass and can defeat even the most erudite. Most current computers, trained in data crunching, fail to understand its natural language tasks altogether. Ms. Arai has set researchers at Japan’s National Institute of Informatics, where she works, the task of developing a machine that can jump the lofty Todai bar by 2021. If they succeed, she said, such a machine should be capable, with appropriate programming, of doing many — perhaps most — jobs now done by university graduates.
  • There is a significant danger, Ms. Arai says, that the widespread adoption of artificial intelligence, if not well managed, could lead to a radical restructuring of economic activity and the job market, outpacing the ability of social and education systems to adjust.
  • Intelligent machines could be used to replace expensive human resources, potentially undermining the economic value of much vocational education, Ms. Arai said.
  • ...8 more annotations...
  • “Educational investment will not be attractive to those without unique skills,” she said. Graduates, she noted, need to earn a return on their investment in training: “But instead they will lose jobs, replaced by information simulation. They will stay uneducated.” In such a scenario, high-salary jobs would remain for those equipped with problem-solving skills, she predicted. But many common tasks now done by college graduates might vanish.
  • Over the next 10 to 20 years, “10 percent to 20 percent pushed out of work by A.I. will be a catastrophe,” she says. “I can’t begin to think what 50 percent would mean — way beyond a catastrophe and such numbers can’t be ruled out if A.I. performs well in the future.”
  • A recent study published by the Program on the Impacts of Future Technology, at Oxford University’s Oxford Martin School, predicted that nearly half of all jobs in the United States could be replaced by computers over the next two decades.
  • Smart machines will give companies “the opportunity to automate many tasks, redesign jobs, and do things never before possible even with the best human work forces,” according to a report this year by the business consulting firm McKinsey.
  • Advances in speech recognition, translation and pattern recognition threaten employment in the service sectors — call centers, marketing and sales — precisely the sectors that provide most jobs in developed economies.
  • Gartner’s 2013 chief executive survey, published in April, found that 60 percent of executives surveyed dismissed as “futurist fantasy” the possibility that smart machines could displace many white-collar employees within 15 years.
  • Kenneth Brant, research director at Gartner, told a conference in October: “Job destruction will happen at a faster pace, with machine-driven job elimination overwhelming the market’s ability to create valuable new ones.”
  • Optimists say this could lead to the ultimate elimination of work — an “Athens without the slaves” — and a possible boom for less vocational-style education. Mr. Brant’s hope is that such disruption might lead to a system where individuals are paid a citizen stipend and are free to pursue education and self-realization. “This optimistic scenario I call Homo Ludens, or ‘Man, the Player,’ because maybe we will not be the smartest thing on the planet after all,” he said. “Maybe our destiny is to create the smartest thing on the planet and use it to follow a course of self-actualization.”
Javier E

Dick Cheney, Rand Paul, and the Possibility of Malign Leaders - Conor Friedersdorf - Th... - 0 views

  • Every American sees that leaders in foreign countries sometimes behave immorally. Yet we often seem averse to believing that our own leaders can be just as malign.
  • That's certainly my bias: Judging the character of U.S. officials, my gut impulse is to give them the benefit of the doubt.
  • But I know that my gut is sometimes wrong, that our institutions rather than anything intrinsic to our compatriots explains the comparative lack of corruption and tyranny in the United States, and that it's important to stay open to the possibility of malign or corrupt leaders—because otherwise, it's impossible to adequately guard against them. The Founders understood this. So did generations of traditional conservatives.
Ellie McGinnis

The 50 Greatest Breakthroughs Since the Wheel - James Fallows - The Atlantic - 0 views

  • Some questions you ask because you want the right answer. Others are valuable because no answer is right; the payoff comes from the range of attempts.
  • That is the diversity of views about the types of historical breakthroughs that matter, with a striking consensus on whether the long trail of innovation recorded here is now nearing its end.
  • The clearest example of consensus was the first item on the final compilation, the printing press
  • ...27 more annotations...
  • Leslie Berlin, a historian of business at Stanford, organized her nominations not as an overall list but grouped into functional categories.
  • Innovations that expand the human intellect and its creative, expressive, and even moral possibilities.
  • Innovations that are integral to the physical and operating infrastructure of the modern world
  • Innovations that enabled the Industrial Revolution and its successive waves of expanded material output
  • Innovations extending life, to use Leslie Berlin’s term
  • Innovations that allowed real-time communication beyond the range of a single human voice
  • Innovations in the physical movement of people and goods.
  • Organizational breakthroughs that provide the software for people working and living together in increasingly efficient and modern ways
  • Finally, and less prominently than we might have found in 1950 or 1920—and less prominently than I initially expected—we have innovations in killing,
  • Any collection of 50 breakthroughs must exclude 50,000 more.
  • We learn, finally, why technology breeds optimism, which may be the most significant part of this exercise.
  • Popular culture often lionizes the stars of discovery and invention
  • For our era, the major problems that technology has helped cause, and that faster innovation may or may not correct, are environmental, demographic, and socioeconomic.
  • people who have thought deeply about innovation’s sources and effects, like our panelists, were aware of the harm it has done along with the good.
  • “Does innovation raise the wealth of the planet? I believe it does,” John Doerr, who has helped launch Google, Amazon, and other giants of today’s technology, said. “But technology left to its own devices widens rather than narrows the gap between the rich and the poor.”
  • Are today’s statesmen an improvement over those of our grandparents’ era? Today’s level of public debate? Music, architecture, literature, the fine arts—these and other manifestations of world culture continually change, without necessarily improving. Tolstoy and Dostoyevsky, versus whoever is the best-selling author in Moscow right now?
  • The argument that a slowdown might happen, and that it would be harmful if it did, takes three main forms.
  • Some societies have closed themselves off and stopped inventing altogether:
  • By failing to move forward, they inevitably moved backward relative to their rivals and to the environmental and economic threats they faced. If the social and intellectual climate for innovation sours, what has happened before can happen again.
  • visible slowdown in the pace of solutions that technology offers to fundamental problems.
  • a slowdown in, say, crop yields or travel time is part of a general pattern of what economists call diminishing marginal returns. The easy improvements are, quite naturally, the first to be made; whatever comes later is slower and harder.
  • America’s history as a nation happens to coincide with a rare moment in technological history now nearing its end. “There was virtually no economic growth before 1750,” he writes in a recent paper.
  • “We can be concerned about the last 1 percent of an environment for innovation, but that is because we take everything else for granted,” Leslie Berlin told me.
  • This reduction in cost, he says, means that the next decade should be a time of “amazing advances in understanding the genetic basis of disease, with especially powerful implications for cancer.”
  • the very concept of an end to innovation defied everything they understood about human inquiry. “If you look just at the 20th century, the odds against there being any improvement in living standards are enormous,”
  • “Two catastrophic world wars, the Cold War, the Depression, the rise of totalitarianism—it’s been one disaster after another, a sequence that could have been enough to sink us back into barbarism. And yet this past half century has been the fastest-ever time of technological growth. I see no reason why that should be slowing down.”
  • “I am a technological evolutionist,” he said. “I view the universe as a phase-space of things that are possible, and we’re doing a random walk among them. Eventually we are going to fill the space of everything that is possible.”
Javier E

Feeling for the Fictional - The League of Ordinary Gentlemen - 0 views

  • We human beings read, watch, and listen to a lot of fiction. We know that it is fiction. But we have emotional responses and attachments to the characters. So, according to Colin Radford, who first put forward this puzzle, this shows that there’s something incoherent in our emotional responses: we feel for things we know don’t exist.
  • Fictional characters and situations don’t merely arouse an emotional response; they arouse an empathetic response.  This latter is not necessarily restricted to the character who causes the emotion:
  • Fiction doesn’t present the unreal; it presents the possibly real, something balancing precariously between the real and the non.  (This holds, it should be said, for fantasy, science fiction, and other “genres” as well as in realistic or literary fiction; they just go about it, as is the case in variation between individual works, in different ways.)
  • ...3 more annotations...
  • While the prodigal son and the longing father may be types, and while we all may have known or not known or share, no two are alike.  Their experiences, their reactions, their perceptions all differ.  The very particulars that preclude the true reality of the story provide for the possibility of its reality.*  The reaction it provokes is somewhere between This could be a man and There but for the grace of God…
  • We empathize with fictional beings not despite their unreality, but because of their possible reality. 
  • *Is this the space where the truth that can be found in fiction lies?
dpittenger

Elon Musk, Stephen Hawking warn of artificial intelligence dangers - 0 views

  • Call it preemptive extinction panic, smart people buying into Sci-Fi hype or simply a prudent stance on a possible future issue, but the fear around artificial intelligence is increasingly gaining traction among those with credentials to back up the distress.
  • However, history doesn't always neatly fit into our forecasts. If things continue as they have with brain-to-machine interfaces becoming ever more common, we're just as likely to have to confront the issue of enhanced humans (digitally, mechanically and/or chemically) long before AI comes close to sentience.
  • Still, whether or not you believe computers will one day be powerful enough to go off and find their own paths, which may conflict with humanity's, the very fact that so many intelligent people feel the issue is worth a public stance should be enough to grab your attention.
  •  
    Stephen Hawking and Elon Musk fear that artificial intelligence could become dangerous. We talked about this a bit in class before, but it is starting to become a new fear. Artificial intelligence could possibly become smarter than us, and that wouldn't be good.
kushnerha

The Data Against Kant - The New York Times - 0 views

  • THE history of moral philosophy is a history of disagreement, but on one point there has been virtual unanimity: It would be absurd to suggest that we should do what we couldn’t possibly do.
  • This principle — that “ought” implies “can,” that our moral obligations can’t exceed our abilities — played a central role in the work of Immanuel Kant and has been widely accepted since.
  • His thought experiments go something like this: Suppose that you and a friend are both up for the same job in another city. She interviewed last weekend, and your flight for the interview is this evening. Your car is in the shop, though, so your friend promises to drive you to the airport. But on the way, her car breaks down — the gas tank is leaking — so you miss your flight and don’t get the job. Would it make any sense to tell your friend, stranded at the side of the road, that she ought to drive you to the airport? The answer seems to be an obvious no (after all, she can’t drive you), and most philosophers treat this as all the confirmation they need for the principle. Suppose, however, that the situation is slightly different. What if your friend intentionally punctures her own gas tank to make sure that you miss the flight and she gets the job? In this case, it makes perfect sense to insist that your friend still has an obligation to drive you to the airport. In other words, we might indeed say that someone ought to do what she can’t — if we’re blaming her.
  • ...5 more annotations...
  • In our study, we presented hundreds of participants with stories like the one above and asked them questions about obligation, ability and blame. Did they think someone should keep a promise she made but couldn’t keep? Was she even capable of keeping her promise? And how much was she to blame for what happened?
  • We found a consistent pattern, but not what most philosophers would expect. “Ought” judgments depended largely on concerns about blame, not ability. With stories like the one above, in which a friend intentionally sabotages you, 60 percent of our participants said that the obligation still held — your friend still ought to drive you to the airport. But with stories in which the inability to help was accidental, the obligation all but disappeared. Now, only 31 percent of our participants said your friend still ought to drive you.
  • Professor Sinnott-Armstrong’s unorthodox intuition turns out to be shared by hundreds of nonphilosophers. So who is right? The vast majority of philosophers, or our participants? One possibility is that our participants were wrong, perhaps because their urge to blame impaired the accuracy of their moral judgments. To test this possibility, we stacked the deck in favor of philosophical orthodoxy: We had the participants look at cases in which the urge to assign blame would be lowest — that is, only the cases in which the car accidentally broke down. Even still, we found no relationship between “ought” and “can.” The only significant relationship was between “ought” and “blame.”
  • This finding has an important implication: Even when we say that someone has no obligation to keep a promise (as with your friend whose car accidentally breaks down), it seems we’re saying it not because she’s unable to do it, but because we don’t want to unfairly blame her for not keeping it. Again, concerns about blame, not about ability, dictate how we understand obligation.
  • While this one study alone doesn’t refute Kant, our research joins a recent salvo of experimental work targeting the principle that “ought” implies “can.” At the very least, philosophers can no longer treat this principle as obviously true.
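
The 60 percent versus 31 percent contrast reported above is the kind of difference a standard two-proportion z-test would assess. The sketch below is not the authors' actual analysis, and the per-condition counts are hypothetical — the excerpt gives only percentages and says "hundreds of participants":

    from math import sqrt, erf

    def two_proportion_z(k_a, n_a, k_b, n_b):
        # Pooled two-proportion z-test for a difference between condition rates.
        p_a, p_b = k_a / n_a, k_b / n_b
        pooled = (k_a + k_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_two_sided

    # Hypothetical counts: 200 participants per condition, with 60% vs. 31%
    # saying the stranded friend still "ought" to drive you to the airport.
    z, p = two_proportion_z(120, 200, 62, 200)
    print(f"z = {z:.2f}, two-sided p = {p:.2g}")  # a gap this large is very
                                                  # unlikely to arise by chance
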
Javier E

Opinion | How Genetics Is Changing Our Understanding of 'Race' - The New York Times - 0 views

  • In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis.
  • Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.”
  • In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.
  • ...29 more annotations...
  • It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view.
  • this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.
  • With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.
  • I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”
  • Groundbreaking advances in DNA sequencing technology have been made over the last two decades
  • The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations
  • You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work
  • I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science.
  • I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.
  • This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such difference
  • While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another
  • Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly.
  • Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases.
  • in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.
  • consider what kinds of voices are filling the void that our silence is creating
  • Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.
  • 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.
  • Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.
  • What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes
  • They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.
  • This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent.
  • If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong.
  • For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.
  • For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females
  • The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material
  • How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences
  • fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward.
  • Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.
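
Lewontin's 85/15 split, cited near the top of this item, is a variance decomposition: total variation equals the average variation within groups plus the variation between group means. A toy version — invented numbers, not Lewontin's blood-protein data — makes the arithmetic concrete:

    import random

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def within_between(groups):
        # Law of total variance: total = weighted within + weighted between.
        everyone = [x for g in groups for x in g]
        grand, n = sum(everyone) / len(everyone), len(everyone)
        within = sum(len(g) * variance(g) for g in groups) / n
        between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups) / n
        return within, between

    # Three "populations" whose means differ only slightly relative to the
    # spread inside each one -- the pattern Lewontin reported for humans.
    random.seed(1)
    groups = [[random.gauss(mu, 1.0) for _ in range(1000)] for mu in (0.0, 0.3, 0.5)]
    within, between = within_between(groups)
    total = within + between
    print(f"within-group share : {within / total:.0%}")
    print(f"between-group share: {between / total:.0%}")

With group means this close together, nearly all the variance falls within groups — which is the statistical observation behind the claim that most human genetic variation is between individuals, not populations.
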
Javier E

The Tech Industry's Psychological War on Kids - Member Feature Stories - Medium - 0 views

  • she cried, “They took my f***ing phone!” Attempting to engage Kelly in conversation, I asked her what she liked about her phone and social media. “They make me happy,” she replied.
  • Even though they were loving and involved parents, Kelly’s mom couldn’t help feeling that they’d failed their daughter and must have done something terribly wrong that led to her problems.
  • My practice as a child and adolescent psychologist is filled with families like Kelly’s. These parents say their kids’ extreme overuse of phones, video games, and social media is the most difficult parenting issue they face — and, in many cases, is tearing the family apart.
  • ...88 more annotations...
  • What none of these parents understand is that their children’s and teens’ destructive obsession with technology is the predictable consequence of a virtually unrecognized merger between the tech industry and psychology.
  • Dr. B.J. Fogg is a psychologist and the father of persuasive technology, a discipline in which digital machines and apps — including smartphones, social media, and video games — are configured to alter human thoughts and behaviors. As his Stanford lab’s website boldly proclaims: “Machines designed to change humans.”
  • These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit.
  • psychology — a discipline that we associate with healing — is now being used as a weapon against children.
  • This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
  • Likewise, social media companies use persuasive design to prey on the age-appropriate desire for preteen and teen kids, especially girls, to be socially successful. This drive is built into our DNA, since real-world relational skills have fostered human evolution.
  • Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”
  • Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfill their basic human drives — to be social or obtain goals — better than real-world alternatives.
  • Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.
  • While persuasion techniques work well on adults, they are particularly effective at influencing the still-maturing child and teen brain.
  • “Video games, better than anything else in our culture, deliver rewards to people, especially teenage boys,” says Fogg. “Teenage boys are wired to seek competency. To master our world and get better at stuff. Video games, in dishing out rewards, can convey to people that their competency is growing, you can get better at something second by second.”
  • it’s persuasive design that’s helped convince this generation of boys they are gaining “competency” by spending countless hours on game sites, when the sad reality is they are locked away in their rooms gaming, ignoring school, and not developing the real-world competencies that colleges and employers demand.
  • Persuasive technologies work because of their apparent triggering of the release of dopamine, a powerful neurotransmitter involved in reward, attention, and addiction.
  • As she says, “If you don’t get 100 ‘likes,’ you make other people share it so you get 100…. Or else you just get upset. Everyone wants to get the most ‘likes.’ It’s like a popularity contest.”
  • there are costs to Casey’s phone obsession, noting that the “girl’s phone, be it Facebook, Instagram or iMessage, is constantly pulling her away from her homework, sleep, or conversations with her family.
  • Casey says she wishes she could put her phone down. But she can’t. “I’ll wake up in the morning and go on Facebook just… because,” she says. “It’s not like I want to or I don’t. I just go on it. I’m, like, forced to. I don’t know why. I need to. Facebook takes up my whole life.”
  • B.J. Fogg may not be a household name, but Fortune Magazine calls him a “New Guru You Should Know,” and his research is driving a worldwide legion of user experience (UX) designers who utilize and expand upon his models of persuasive design.
  • “No one has perhaps been as influential on the current generation of user experience (UX) designers as Stanford researcher B.J. Fogg.”
  • the core of UX research is about using psychology to take advantage of our human vulnerabilities.
  • As Fogg is quoted in Kosner’s Forbes article, “Facebook, Twitter, Google, you name it, these companies have been using computers to influence our behavior.” However, the driving force behind behavior change isn’t computers. “The missing link isn’t the technology, it’s psychology,” says Fogg.
  • UX researchers not only follow Fogg’s design model, but also his apparent tendency to overlook the broader implications of persuasive design. They focus on the task at hand, building digital machines and apps that better demand users’ attention, compel users to return again and again, and grow businesses’ bottom line.
  • the “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers.
  • “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.”
  • Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.” Hence, social networks are designed for ease of use
  • Finally, Fogg says that potential users need to be triggered to use a site. This is accomplished by a myriad of digital tricks, including the sending of incessant notifications
  • moral questions about the impact of turning persuasive techniques on children and teens are not being asked. For example, should the fear of social rejection be used to compel kids to compulsively use social media? Is it okay to lure kids away from school tasks that demand a strong mental effort so they can spend their lives on social networks or playing video games that don’t make them think much at all?
  • Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.”
  • the startup Dopamine Labs boasts about its use of persuasive techniques to increase profits: “Connect your app to our Persuasive AI [Artificial Intelligence] and lift your engagement and revenue up to 30% by giving your users our perfect bursts of dopamine,” and “A burst of Dopamine doesn’t just feel good: it’s proven to re-wire user behavior and habits.”
  • Ramsay Brown, the founder of Dopamine Labs, says in a KQED Science article, “We have now developed a rigorous technology of the human mind, and that is both exciting and terrifying. We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behavior in ways that, unbeknownst to them, feel second-nature but are really by design.”
  • Programmers call this “brain hacking,” as it compels users to spend more time on sites even though they mistakenly believe it’s strictly due to their own conscious choices.
  • Banks of computers employ AI to “learn” which of a countless number of persuasive design elements will keep users hooked
  • A persuasion profile of a particular user’s unique vulnerabilities is developed in real time and exploited to keep users on the site and make them return again and again for longer periods of time. This drives up profits for consumer internet companies whose revenue is based on how much their products are used.
  • “The leaders of Internet companies face an interesting, if also morally questionable, imperative: either they hijack neuroscience to gain market share and make large profits, or they let competitors do that and run away with the market.”
  • Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival.
  • Children’s well-being is not part of the decision calculus.
  • one breakthrough occurred in 2017 when Facebook documents were leaked to The Australian. The internal report crafted by Facebook executives showed the social network boasting to advertisers that by monitoring posts, interactions, and photos in real time, the network is able to track when teens feel “insecure,” “worthless,” “stressed,” “useless” and a “failure.”
  • The report also bragged about Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.”
  • These design techniques provide tech corporations a window into kids’ hearts and minds to measure their particular vulnerabilities, which can then be used to control their behavior as consumers. This isn’t some strange future… this is now.
  • The official tech industry line is that persuasive technologies are used to make products more engaging and enjoyable. But statements from industry insiders can reveal darker motives.
  • Revealing the hard science behind persuasive technology, Hopson says, “This is not to say that players are the same as rats, but that there are general rules of learning which apply equally to both.”
  • After penning the paper, Hopson was hired by Microsoft, where he helped lead the development of the Xbox Live, Microsoft’s online gaming system
  • “If game designers are going to pull a person away from every other voluntary social activity or hobby or pastime, they’re going to have to engage that person at a very deep level in every possible way they can.”
  • This is the dominant effect of persuasive design today: building video games and social media products so compelling that they pull users away from the real world to spend their lives in for-profit domains.
  • Persuasive technologies are reshaping childhood, luring kids away from family and schoolwork to spend more and more of their lives sitting before screens and phones.
  • “Since we’ve figured to some extent how these pieces of the brain that handle addiction are working, people have figured out how to juice them further and how to bake that information into apps.”
  • Today, persuasive design is likely distracting adults from driving safely, productive work, and engaging with their own children — all matters which need urgent attention
  • Still, because the child and adolescent brain is more easily controlled than the adult mind, the use of persuasive design is having a much more hurtful impact on kids.
  • But to engage in a pursuit at the expense of important real-world activities is a core element of addiction.
  • younger U.S. children now spend 5 ½ hours each day with entertainment technologies, including video games, social media, and online videos.
  • Even more, the average teen now spends an incredible 8 hours each day playing with screens and phones
  • U.S. kids only spend 16 minutes each day using the computer at home for school.
  • Quietly, using screens and phones for entertainment has become the dominant activity of childhood.
  • Younger kids spend more time engaging with entertainment screens than they do in school
  • teens spend even more time playing with screens and phones than they do sleeping
  • kids are so taken with their phones and other devices that they have turned their backs to the world around them.
  • many children are missing out on real-life engagement with family and school — the two cornerstones of childhood that lead them to grow up happy and successful
  • persuasive technologies are pulling kids into often toxic digital environments
  • An all-too-frequent experience for many is being cyberbullied, which increases their risk of skipping school and of considering suicide.
  • And there is growing recognition of the negative impact of FOMO, or the fear of missing out, as kids spend their social media lives watching a parade of peers who look to be having a great time without them, feeding their feelings of loneliness and being less than.
  • The combined effects of the displacement of vital childhood activities and exposure to unhealthy online environments are wrecking a generation.
  • as the typical age when kids get their first smartphone has fallen to 10, it’s no surprise to see serious psychiatric problems — once the domain of teens — now enveloping young kids
  • Self-inflicted injuries, such as cutting, that are serious enough to require treatment in an emergency room, have increased dramatically in 10- to 14-year-old girls, up 19% per year since 2009.
  • While girls are pulled onto smartphones and social media, boys are more likely to be seduced into the world of video gaming, often at the expense of a focus on school
  • it’s no surprise to see this generation of boys struggling to make it to college: a full 57% of college admissions are granted to young women compared with only 43% to young men.
  • Economists working with the National Bureau of Economic Research recently demonstrated how many young U.S. men are choosing to play video games rather than join the workforce.
  • The destructive forces of psychology deployed by the tech industry are making a greater impact on kids than the positive uses of psychology by mental health providers and child advocates. Put plainly, the science of psychology is hurting kids more than helping them.
  • Hope for this wired generation seemed dim until recently, when a surprising group came forward to criticize the tech industry’s use of psychological manipulation: tech executives.
  • Tristan Harris, formerly a design ethicist at Google, has led the way by unmasking the industry’s use of persuasive design. Interviewed in The Economist’s 1843 magazine, he says, “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”
  • Marc Benioff, CEO of the cloud computing company Salesforce, is one of the voices calling for the regulation of social media companies because of their potential to addict children. He says that just as the cigarette industry has been regulated, so too should social media companies. “I think that, for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive, and we need to rein that back as much as possible,”
  • “If there’s an unfair advantage or things that are out there that are not understood by parents, then the government’s got to come forward and illuminate that.”
  • Since millions of parents, for example the parents of my patient Kelly, have absolutely no idea that devices are used to hijack their children’s minds and lives, regulation of such practices is the right thing to do.
  • Another improbable group to speak out on behalf of children is tech investors.
  • How has the consumer tech industry responded to these calls for change? By going even lower.
  • Facebook recently launched Messenger Kids, a social media app that will reach kids as young as five years old. Suggestive that harmful persuasive design is now homing in on very young children is the declaration of Messenger Kids Art Director, Shiu Pei Luu: “We want to help foster communication [on Facebook] and make that the most exciting thing you want to be doing.”
  • the American Psychological Association (APA) — which is tasked with protecting children and families from harmful psychological practices — has been essentially silent on the matter
  • APA Ethical Standards require the profession to make efforts to correct the “misuse” of the work of psychologists, which would include the application of B.J. Fogg’s persuasive technologies to influence children against their best interests
  • Manipulating children for profit without their own or parents’ consent, and driving kids to spend more time on devices that contribute to emotional and academic problems is the embodiment of unethical psychological practice.
  • “Never before in history have basically 50 mostly men, mostly 20–35, mostly white engineer designer types within 50 miles of where we are right now [Silicon Valley], had control of what a billion people think and do.”
  • Some may argue that it’s the parents’ responsibility to protect their children from tech industry deception. However, parents have no idea of the powerful forces aligned against them, nor do they know how technologies are developed with drug-like effects to capture kids’ minds
  • Others will claim that nothing should be done because the intention behind persuasive design is to build better products, not manipulate kids
  • similar circumstances exist in the cigarette industry: tobacco companies intend to profit from the sale of their product, not to hurt children. Nonetheless, because cigarettes and persuasive design both predictably harm children, actions should be taken to protect kids from their effects.
  • in a 1998 academic paper, Fogg describes what should happen if things go wrong, saying, if persuasive technologies are “deemed harmful or questionable in some regard, a researcher should then either take social action or advocate that others do so.”
  • I suggest turning to President John F. Kennedy’s prescient guidance: He said that technology “has no conscience of its own. Whether it will become a force for good or ill depends on man.”
  • The APA should begin by demanding that the tech industry’s behavioral manipulation techniques be brought out of the shadows and exposed to the light of public awareness
  • Changes should be made in the APA’s Ethics Code to specifically prevent psychologists from manipulating children using digital machines, especially if such influence is known to pose risks to their well-being.
  • Moreover, the APA should follow its Ethical Standards by making strong efforts to correct the misuse of psychological persuasion by the tech industry and by user experience designers outside the field of psychology.
  • It should join with tech executives who are demanding that persuasive design in kids’ tech products be regulated
  • The APA also should make its powerful voice heard amongst the growing chorus calling out tech companies that intentionally exploit children’s vulnerabilities.
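
Hopson’s “general rules of learning” are the rules of operant conditioning, and the schedule most associated with compulsive play is the variable-ratio reward: payouts arrive on average every so many actions, but never predictably. A minimal sketch of what such a schedule looks like inside a game loop; the function and numbers are illustrative assumptions, not any company’s actual code:

    import random

    def play_session(actions, mean_ratio=5, seed=0):
        """Variable-ratio schedule: each action is rewarded with
        probability 1/mean_ratio, so payouts average one per
        mean_ratio actions but land at unpredictable moments,
        the pattern behavioral research finds hardest to quit."""
        rng = random.Random(seed)
        return [rng.random() < 1.0 / mean_ratio for _ in range(actions)]

    outcome = play_session(actions=20)
    print("rewarded on actions:", [i for i, r in enumerate(outcome) if r])

The unpredictability is the point: a player who cannot tell which action will pay off keeps acting, which is exactly the engagement the designers quoted above are after.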
Javier E

Donald Trump's despicable words - The Washington Post - 0 views

  • “We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides. On many sides,” he said Saturday.
  • It is important when you consider the situation of a man whose face has been crushed by a boot to wonder if any damage might have been done to the boot.
  • At what point can we stop giving people the benefit of the doubt? “Gotta Hear Both Sides” is carved over the entrance to Hell. How long must we continue to hear from idiots who are wrong? I don’t want to hear debate unless there is something legitimately to be debated, and people’s rights to life, liberty and the pursuit of happiness are not among those things. They are self-evident, or used to seem so.
  • ...9 more annotations...
  • Of course they gathered with torches, because the only liberty they have lost is the liberty to gather with torches and decide whose house to visit with terror. That is the right that is denied them: the right to other people’s possessions, the right to be the only person in the room, the right to be the only person that the world is made for. (These are not rights. They are wrongs.)
  • You are sad because your toys have been taken, but they were never toys to begin with. They were people. It is the ending of the fairy tale; because you were a beast, you did not see that the things around you were people and not objects that existed purely for your pleasure. You should not weep that the curse is broken and you can see that your footstool was a human being.
  • so little good is unmixed. History contains heroes, but no one is a hero entirely, and no one is a hero for very long. You can be brilliant in some ways and despicable in others. You can be a clean, upright, moral individual in your private life who never swears, treats women with respect, and speaks highly of duty and honor — and go out every day and dedicate yourself to a cause that makes the world worse.
  • we have always been a country where things like this can happen. It is just harder not to notice now. And it is possible, sometimes, to be angrier at the person who makes you notice than at the thing you are seeing.
  • All right: You are not a murderer. You are a good person. But that does not mean that what you have was not ill-gotten. That does not mean that you deserve everything you have. You have to look at your history and see it, all of it.
  • We must cherish our history. (Somewhere, a dog whimpers.) Can we be a little more specific about what history? Can we be a little more specific about any of this? The specifics are where the principles are. What will we cherish, and what will we disavow? What are we putting on a pedestal, and what are we putting in a museum? Not all history is created equal.
  • A truth that murder mysteries get right about human nature is that even when you find a man stabbed before the soup course, someone always wants to finish the soup.
  • Who would stand over the body of someone who died protesting a hateful, violent, racist ideology and say that “we have to come together”? That we have to find common ground? I am sure there is common ground to be found with the people who say that some are not fit to be people. The man who thinks I ought not to exist — maybe we can compromise and agree that I will get to exist on alternate Thursdays. Let us only burn some of the villagers at the stake. We can eat just three of the children. All ideas deserve a fair hearing. Maybe we can agree that some people are only three-fifths of people, while we are at it. As long as we are giving a hearing to all views.
  • Only someone with no principles would think that such a compromise was possible. Only someone with no principles would think that such a compromise was desirable. At some point you have to judge more than just the act of fighting. You have to judge what the fighting is for. Some principles are worth fighting for, and others are not
Javier E

When Will Climate Change Make the Earth Too Hot For Humans? - 0 views

  • Is it helpful, or journalistically ethical, to explore the worst-case scenarios of climate change, however unlikely they are? How much should a writer contextualize scary possibilities with information about how probable those outcomes are, however speculative those probabilities may be?
  • I also believe very firmly in the set of propositions that animated the project from the start:
  • that the public does not appreciate the scale of climate risk
  • ...5 more annotations...
  • that this is in part because we have not spent enough time contemplating the scarier half of the distribution curve of possibilities, especially its brutal long tail, or the risks beyond sea-level rise;
  • that there is journalistic and public-interest value in spreading the news from the scientific community, no matter how unnerving it may be;
  • and that, when it comes to the challenge of climate change, public complacency is a far, far bigger problem than widespread fatalism — that many, many more people are not scared enough than are already “too scared.”
  • The science says climate change threatens nearly every aspect of human life on this planet, and that inaction will hasten the problems. In that context, I don’t think it’s a slur to call an article, or its writer, alarmist. I’ll accept that characterization. We should be alarmed.
  • It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.) A schematic sketch of this kind of color-coded scoring appears after this list.
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk. (A toy sketch of the inverted-U relationship between exchanges and performance appears after this list.)
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site. (A schematic sketch of this kind of signal weighting appears after this list.)
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
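
The Xerox evaluation described in the list above is, structurally, a weighted feature score bucketed into three bands. A schematic sketch, assuming hand-set weights and cutoffs (the real model’s features, weights, and thresholds are proprietary; every name and number here is hypothetical):

    def rate_candidate(features, weights, cutoffs=(0.4, 0.7)):
        """Combine test-derived features into one score, then bucket it
        into the red/yellow/green rating the article describes."""
        total = sum(weights.values())
        score = sum(w * features.get(k, 0.0) for k, w in weights.items()) / total
        band = "red" if score < cutoffs[0] else "yellow" if score < cutoffs[1] else "green"
        return score, band

    # Hypothetical applicant; values would come from the online evaluation.
    applicant = {"scenario_judgment": 0.8, "cognitive_skills": 0.7,
                 "creativity": 0.9, "social_networks": 0.6}
    weights = {"scenario_judgment": 3, "cognitive_skills": 2,
               "creativity": 2, "social_networks": 1}
    print(rate_candidate(applicant, weights))  # score near 0.775 -> "green"

In production, such weights would be fit to outcome data (retention, productivity) rather than set by hand, which is what lets the model surface counterintuitive results like the irrelevance of previous experience.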
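
Pentland’s badge result (roughly a third of team performance predicted from face-to-face exchange counts, with too many as bad as too few) describes an inverted-U curve. A toy predictor under an assumed functional form with assumed constants, not Pentland’s actual model:

    def predicted_performance(exchanges, optimum=25.0, width=15.0):
        """Toy inverted-U: predicted performance peaks at an optimal
        face-to-face exchange count and falls off on either side.
        optimum and width are invented constants."""
        return max(0.0, 1.0 - ((exchanges - optimum) / width) ** 2)

    for n in (5, 25, 45):
        print(n, round(predicted_performance(n), 2))  # too few, peak, too many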
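
Gild’s pipeline, as described above, amounts to scoring coders on public signals whose weights were learned by correlating those signals with known-strong programmers. A schematic sketch: the signal names and weights below are invented for illustration, and in the real system the weights are learned from data rather than set by hand.

    # Hypothetical public signals per coder, each normalized to [0, 1].
    SIGNAL_WEIGHTS = {
        "code_simplicity": 0.25,       # static analysis of open-source code
        "adoption_by_others": 0.30,    # how often that code is reused
        "qa_answer_popularity": 0.20,  # e.g., votes on forum answers
        "language_markers": 0.25,      # phrases correlated with strong coders
    }

    def coder_score(signals):
        """Weighted average over whichever signals a coder has left
        behind; coders with no open-source code are scored on their
        remaining online traces, as the article describes."""
        present = {k: w for k, w in SIGNAL_WEIGHTS.items() if k in signals}
        if not present:
            return None  # no digital footprint to score
        return sum(w * signals[k] for k, w in present.items()) / sum(present.values())

    print(coder_score({"qa_answer_popularity": 0.9, "language_markers": 0.7}))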
Javier E

Andrew Sullivan: Trump's Mindless Nihilism - 2 views

  • The trouble with reactionary politics is that it is fundamentally a feeling, an impulse, a reflex. It’s not a workable program. You can see that in the word itself: it’s a reaction, an emotional response to change. Sure, it can include valuable insights into past mistakes, but it can’t undo them, without massive disruption
  • I mention this as a way to see more clearly why the right in Britain and America is either unraveling quickly into chaos, or about to inflict probably irreparable damage on a massive scale to their respective countries. Brexit and Trump are the history of Thatcher and Reagan repeating as dangerous farce, a confident, intelligent conservatism reduced to nihilist, mindless reactionism.
  • But it’s the impossible reactionary agenda that is the core problem. And the reason we have a president increasingly isolated, ever more deranged, legislatively impotent, diplomatically catastrophic, and constitutionally dangerous, is not just because he is a fucking moron requiring an adult day-care center to avoid catastrophe daily.
  • ...14 more annotations...
  • It’s because he’s a reactionary fantasist, whose policies stir the emotions but are stalled in the headwinds of reality
  • These are not conservative reforms, thought-through, possible to implement, strategically planned. They are the unhinged fantasies of a 71-year-old Fox News viewer imagining he can reconstruct the late 1950s. They cannot actually be implemented, without huge damage.
  • In Britain, meanwhile, Brexit is in exactly the same place — a reactionary policy that is close to impossible to implement without economic and diplomatic catastrophe
  • Brexit too was built on Trump-like lies, and a Trump-like fantasy that 50 years of integration with the E.U. could be magically abolished overnight, and that the Britain of the early 1970s could be instantly re-conjured. No actual conservative can possibly believe that such radical, sudden change won’t end in tears.
  • “The researchers start by simulating what happens when extra links are introduced into a social network. Their network consists of men and women from different races who are randomly distributed. In this model, everyone wants to marry a person of the opposite sex but can only marry someone with whom a connection exists. This leads to a society with a relatively low level of interracial marriage. But if the researchers add random links between people from different ethnic groups, the level of interracial marriage changes dramatically.” (A toy version of this simulation appears after this list.)
  • the line to draw, it seems to me, is when a speech is actually shut down or rendered impossible by disruption. A fiery protest that initially prevents an event from starting is one thing; a disruption that prevents the speech taking place at all is another.
  • Maybe a college could set a time limit for protest — say, ten or fifteen minutes — after which the speaker must be heard, or penalties will be imposed. Heckling — that doesn’t prevent a speech — should also be tolerated to a reasonable extent. There’s a balance here that protects everyone’s free speech
  • dating apps are changing our society, by becoming the second-most common way straights meet partners, and by expanding the range of people we can meet.
  • here’s what’s intriguing: Correlated with that is a sustained, and hard-to-explain, rise in interracial marriage.
  • “It is intriguing that shortly after the introduction of the first dating websites in 1995, like Match.com, the percentage of new marriages created by interracial couples increased rapidly,” say the researchers. “The increase became steeper in the 2000s, when online dating became even more popular. Then, in 2014, the proportion of interracial marriages jumped again.” That was when Tinder took off.
  • Disruptions of events are, to my mind, integral to the exercise of free speech. Hecklers are part of the contentious and messy world of open debate. To suspend or, after three offenses, expel students for merely disrupting events is not so much to chill the possibility of dissent, but to freeze it altogether.
  • Even more encouraging, the marriages begun online seem to last longer than others.
  • I wonder if online dating doesn’t just expand your ability to meet more people of another race, by eliminating geography and the subtle grouping effect of race and class and education. Maybe it lowers some of the social inhibitions against interracial dating.
  • It’s always seemed to me that racism is deeply ingrained in human nature, and always will be, simply because our primate in-group aversion to members of an out-group expresses itself in racism, unless you actively fight it. You can try every law or custom to mitigate this, but it will only go so far.
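
The researchers’ model (agents who can only marry someone they are linked to, plus extra random cross-group links) can be sketched in a few lines. This toy version drops the opposite-sex constraint and keeps only two groups; the sizes, link counts, and greedy matching rule are my assumptions, and the published model is more elaborate:

    import random

    def intergroup_marriage_rate(n=200, same_links=6, cross_links=0, seed=1):
        """People may only pair with someone they share a link with;
        adding random cross-group links raises the share of
        inter-group pairings, as the researchers report."""
        rng = random.Random(seed)
        group = {i: i % 2 for i in range(n)}        # two equal groups
        links = {i: set() for i in range(n)}
        for i in range(n):                          # mostly same-group ties
            own = [j for j in range(n) if j != i and group[j] == group[i]]
            for j in rng.sample(own, same_links):
                links[i].add(j); links[j].add(i)
        for _ in range(cross_links):                # the extra random ties
            i = rng.randrange(n)
            j = rng.choice([k for k in range(n) if group[k] != group[i]])
            links[i].add(j); links[j].add(i)
        free, inter, total = set(range(n)), 0, 0
        for i in range(n):                          # greedy random matching
            if i not in free:
                continue
            candidates = [j for j in links[i] if j in free and j != i]
            if not candidates:
                continue
            j = rng.choice(candidates)
            free -= {i, j}
            total += 1
            inter += group[i] != group[j]
        return inter / total if total else 0.0

    print(intergroup_marriage_rate(cross_links=0))    # 0.0: no cross-group ties
    print(intergroup_marriage_rate(cross_links=200))  # clearly above zero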