TOK Friends: Group items tagged "graphic"

Javier E

What Machines Can't Do - NYTimes.com - 0 views

  • certain mental skills will become less valuable because computers will take over. Having a great memory will probably be less valuable. Being able to be a straight-A student will be less valuable — gathering masses of information and regurgitating it back on tests. So will being able to do any mental activity that involves following a set of rules.
  • what human skills will be more valuable?
  • In the news business, some of those skills are already evident.
  • ...13 more annotations...
  • Technology has rewarded sprinters (people who can recognize and alertly post a message on Twitter about some interesting immediate event) and marathoners (people who can write large conceptual stories), but it has hurt middle-distance runners (people who write 800-word summaries of yesterday’s news conference).
  • Technology has rewarded graphic artists who can visualize data, but it has punished those who can’t turn written reporting into video presentations.
  • More generally, the age of brilliant machines seems to reward a few traits.
  • First, it rewards enthusiasm. The amount of information in front of us is practically infinite; so is the amount of data that can be collected with new tools. The people who seem to do best possess a voracious explanatory drive, an almost obsessive need to follow their curiosity.
  • Second, the era seems to reward people with extended time horizons and strategic discipline.
  • a human can provide an overall sense of direction and a conceptual frame. In a world of online distractions, the person who can maintain a long obedience toward a single goal, and who can filter out what is irrelevant to that goal, will obviously have enormous worth.
  • Third, the age seems to reward procedural architects. The giant Internet celebrities didn’t so much come up with ideas, they came up with systems in which other people could express ideas: Facebook, Twitter, Wikipedia, etc.
  • One of the oddities of collaboration is that tightly knit teams are not the most creative. Loosely bonded teams are, teams without a few domineering presences, teams that allow people to think alone before they share results with the group. So a manager who can organize a decentralized network around a clear question, without letting it dissipate or clump, will have enormous value.
  • Fifth, essentialists will probably be rewarded.
  • creativity can be described as the ability to grasp the essence of one thing, and then the essence of some very different thing, and smash them together to create some entirely new thing.
  • In the 1950s, the bureaucracy was the computer. People were organized into technocratic systems in order to perform routinized information processing.
  • now the computer is the computer. The role of the human is not to be dispassionate, depersonalized or neutral. It is precisely the emotive traits that are rewarded: the voracious lust for understanding, the enthusiasm for work, the ability to grasp the gist, the empathetic sensitivity to what will attract attention and linger in the mind.
  • Unable to compete when it comes to calculation, the best workers will come with heart in hand.
Javier E

Richard Dawkins, an Original Thinker Who Bashes Orthodoxy - NYTimes.com - 0 views

  • “There are endless progressions in evolution,” he says. “When the ancestors of the cheetah first began pursuing the ancestors of the gazelle, neither of them could run as fast as they can today. What you are looking at is the progressive evolutionary product of an arms race.”
  • So it would be no great surprise if the interior lives of animals turned out to be rather complex. Do dogs, for example, experience consciousness? Are they aware of themselves as autonomous animals in their surroundings? “Consciousness has to be there, hasn’t it?” Professor Dawkins replies. “It’s an evolved, emergent quality of brains. It’s very likely that most mammals have consciousness, and probably birds, too.”
charlottedonoho

How Technology Can Help Language Learning | Suren Ramasubbu - 0 views

  • Intelligence, according to Gardner, is of eight types - verbal-linguistic, logical-mathematical, musical-rhythmic, visual-spatial, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic; existential and moral intelligence were added as afterthoughts in the definition of Intelligence. This is the first in a series of posts that explore and understand how each of the above forms of intelligence is affected by technology-mediated education.
  • Verbal-linguistic Intelligence involves sensitivity to spoken and written language, the ability to learn languages, and the capacity to use language to accomplish goals. Such intelligence is fostered by three specific activities: reading, writing and interpersonal communication - both written and oral.
  • Technology allows addition of multisensory elements that provide meaningful contexts to facilitate comprehension, thus expanding the learning ground of language and linguistics.
  • ...8 more annotations...
  • Research into the effect of technology on the development of language and literacy skills vis-à-vis reading activities of children has offered evidence for favorable effects of digital-form books.
  • E-books are also being increasingly used to teach reading among beginners and children with reading difficulties.
  • Technology can be used to improve reading ability in many ways. It can enhance and sustain the interest levels for digital natives by allowing immediate feedback on performance and providing added practice when necessary.
  • Technology can also help in improvement of writing skills. Word processing software promotes not only composition but also editing and revising in ways that streamline the task of writing.
  • However, the web cannot be discounted as being "bad for language", considering that it also offers very useful tools such as blogging and microblogging that can help the student improve her writing skills with dynamic feedback. The possibility of incorporating other media into a written document (e.g. figures, graphics, videos etc.) can enhance the joy of writing using technology.
  • Technology enhanced oral communication is indeed useful in that it allows students from remote locations, or from all over the world to communicate orally through video and audio conferencing tools.
  • As with anything to do with technology, there are also detractors who point to the potentially negative influence of features like animation, sound, music and other multimedia effects possible in digital media, which may distract young readers from the story content.
  • Such complaints notwithstanding, the symbiotic ties between linguistics and technology cannot be ignored.
Javier E

leap-of-faith - 1 views

shared by Javier E on 27 Jun 13 - No Cached
Javier E

In Defense of Anonymous Political Giving - NYTimes.com - 0 views

  • In partisan terms, the growth of secrecy in campaign finance has been driven by the political right, as shown in the graphic at Figure 2. Of the $310.8 million in total political spending by nondisclosing groups in 2011-12, $265.2 million, or 85.5 percent, was spent by conservative, pro-Republican organizations (red in the pie chart), and $10.9 million, or 11.2 percent, was spent by liberal, pro-Democratic organizations (blue in the chart).
  • “The rationale behind donor anonymity, which is a form of First Amendment speech, is to protect against the threat of retaliation when someone or some group takes a stand, espouses their point of view or articulates a position on issues that may (or may not) be popular with the general public or the political party in majority power. There are many precedents to this: the Federalist Papers were published under pseudonyms and financed anonymously, out of fear of retribution.”
  • do you have a principled answer to the argument that efforts to influence the political and policy-making process should be as transparent and open as possible because voters deserve to know who is trying to persuade them to take stands on issues of major public importance? More simply: Is transparency an essential ingredient of democracy? What overrides transparency?
  • ...1 more annotation...
  • Scalia declared that “a person who is required to put his name to a document is much less likely to lie than one who can lie anonymously.” Scalia concluded: “I can imagine no reason why an anonymous leaflet is any more honorable, as a general matter, than an anonymous phone call or an anonymous letter. It facilitates wrong by eliminating accountability, which is ordinarily the very purpose of the anonymity.”
Ellie McGinnis

Obsessive Thoughts: A Darker Side of OCD - Olivia Loving - The Atlantic - 0 views

  • Compulsive tics steal most of the limelight when it comes to Obsessive-Compulsive Disorder. Comparatively less attention, meanwhile, is given to the obsessive thoughts that characterize the other half of OCD.
  • For example: A woman, distraught by visions of murdering her child, wakes up several times in the night to check on her daughter.
  • It is easier for them to understand repetitive hand­-washing than, say, the fear of murdering your parents. Abstract pamphlet language—"recurrent and persistent thoughts, impulses or images"—doesn't necessarily register in a non­sufferer's mind as graphic or violent.
  • ...5 more annotations...
  • But the worry that "something bad will happen" is not an ephemeral, occasional threat for OCD sufferers.
  • "Most people don't understand OCD at all, to begin with. Secondly, most people tend to get too distracted by the content of the obsessions. The content is irrelevant,
  • Obsessions fuel compulsions.
  • "If you listen carefully, patients will constantly agonize over their obsessions, asking, 'Why am I having these thoughts; how do I know that I wouldn't do this; why would I be thinking it if I didn’t want to do it?'"
  • "Most violent and dangerous people don’t sit there having these inner dialogues," he added.
Javier E

Face It, Your Brain Is a Computer - The New York Times - 0 views

  • all the standard arguments about why the brain might not be a computer are pretty weak.
  • Take the argument that “brains are parallel, but computers are serial.” Critics are right to note that virtually every time a human does anything, many different parts of the brain are engaged; that’s parallel, not serial.
  • the trend over time in the hardware business has been to make computers more and more parallel, using new approaches like multicore processors and graphics processing units.
  • ...6 more annotations...
  • The real payoff in subscribing to the idea of a brain as a computer would come from using that idea to profitably guide research. In an article last fall in the journal Science, two of my colleagues (Adam Marblestone of M.I.T. and Thomas Dean of Google) and I endeavored to do just that, suggesting that a particular kind of computer, known as the field programmable gate array, might offer a preliminary starting point for thinking about how the brain works.
  • FIELD programmable gate arrays consist of a large number of “logic block” programs that can be configured, and reconfigured, individually, to do a wide range of tasks. One logic block might do arithmetic, another signal processing, and yet another look things up in a table. The computation of the whole is a function of how the individual parts are configured. Much of the logic can be executed in parallel, much like what happens in a brain.
  • our suggestion is that the brain might similarly consist of highly orchestrated sets of fundamental building blocks, such as “computational primitives” for constructing sequences, retrieving information from memory, and routing information between different locations in the brain. Identifying those building blocks, we believe, could be the Rosetta stone that unlocks the brain.
  • it is unlikely that we will ever be able to directly connect the language of neurons and synapses to the diversity of human behavior, as many neuroscientists seem to hope. The chasm between brains and behavior is just too vast.
  • Our best shot may come instead from dividing and conquering. Fundamentally, that may involve two steps: finding some way to connect the scientific language of neurons and the scientific language of computational primitives (which would be comparable in computer science to connecting the physics of electrons and the workings of microprocessors); and finding some way to connect the scientific language of computational primitives and that of human behavior (which would be comparable to understanding how computer programs are built out of more basic microprocessor instructions).
  • If neurons are akin to computer hardware, and behaviors are akin to the actions that a computer performs, computation is likely to be the glue that binds the two.
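The logic-block idea in the passage above can be made concrete with a toy sketch. This is not the authors' model; the block names and functions are invented for illustration, and a real field programmable gate array configures hardware gates, not Python callables:

```python
# Toy illustration of the FPGA analogy: small configurable "logic blocks,"
# each set up for one primitive task, where the computation of the whole
# depends on how the parts are wired together. All names are invented;
# a real FPGA configures hardware gates, not Python functions.

class LogicBlock:
    """One configurable block that performs a single primitive task."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)

# Configure individual blocks for different primitive tasks.
arithmetic = LogicBlock(lambda x: x * 2)       # arithmetic: doubling
subtract   = LogicBlock(lambda x: x - 5)       # arithmetic: offset
threshold  = LogicBlock(lambda x: max(0, x))   # crude signal processing
lookup     = LogicBlock({"a": 1, "b": 2}.get)  # table lookup

def pipeline(blocks, x):
    """Run blocks in sequence: rewiring the same parts changes the result."""
    for block in blocks:
        x = block(x)
    return x

# Reconfiguring the same parts yields a different computation:
# pipeline([arithmetic, subtract], 3) -> 1
# pipeline([subtract, arithmetic], 3) -> -4
```

The point of the sketch is only that identical parts, rewired, compute different things, which is the property the article's "computational primitives" proposal borrows from the FPGA.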
Javier E

Specs that see right through you - tech - 05 July 2011 - New Scientist - 0 views

  • a number of "social X-ray specs" that are set to transform how we interact with each other. By sensing emotions that we would otherwise miss, these technologies can thwart disastrous social gaffes and help us understand each other better.
  • In conversation, we pantomime certain emotions that act as social lubricants. We unconsciously nod to signal that we are following the other person's train of thought, for example, or squint a bit to indicate that we are losing track. Many of these signals can be misinterpreted - sometimes because different cultures have their own specific signals.
  • In 2005, she enlisted Simon Baron-Cohen, also at Cambridge, to help her identify a set of more relevant emotional facial states. They settled on six: thinking, agreeing, concentrating, interested - and, of course, the confused and disagreeing expressions.
  • ...16 more annotations...
  • More often, we fail to spot them altogether.
  • it's hard to fool the machine for long
  • The camera tracks 24 "feature points" on your conversation partner's face, and software developed by Picard analyses their myriad micro-expressions, how often they appear and for how long. It then compares that data with its bank of known expressions (see diagram).
  • Eventually, she thinks the system could be incorporated into a pair of augmented-reality glasses, which would overlay computer graphics onto the scene in front of the wearer.
  • the average person only managed to interpret, correctly, 54 per cent of Baron-Cohen's expressions on real, non-acted faces. This suggested to them that most people - not just those with autism - could use some help sensing the mood of people they are talking to.
  • set up a company called Affectiva, based in Waltham, Massachusetts, which is selling their expression recognition software. Their customers include companies that, for example, want to measure how people feel about their adverts or movies.
  • To create this lexicon, they hired actors to mime the expressions, then asked volunteers to describe their meaning, taking the majority response as the accurate one.
  • In addition to facial expressions, we radiate a panoply of involuntary "honest signals", a term identified by MIT Media Lab researcher Alex Pentland in the early 2000s to describe the social signals that we use to augment our language. They include body language such as gesture mirroring, and cues such as variations in the tone and pitch of the voice. We do respond to these cues, but often not consciously. If we were more aware of them in others and ourselves, then we would have a fuller picture of the social reality around us, and be able to react more deliberately.
  • develop a small electronic badge that hangs around the neck. Its audio sensors record how aggressive the wearer is being, the pitch, volume and clip of their voice, and other factors. They called it the "jerk-o-meter".
  • it helped people realise when they were being either obnoxious or unduly self-effacing.
  • By the end of the experiment, all the dots had gravitated towards more or less the same size and colour. Simply being able to see their role in a group made people behave differently, and caused the group dynamics to become more even. The entire group's emotional intelligence had increased.
  • Some of our body's responses during a conversation are not designed for broadcast to another person - but it's possible to monitor those too. Your temperature and skin conductance can also reveal secrets about your emotional state, and Picard can tap them with a glove-like device called the Q Sensor. In response to stresses, good or bad, our skin becomes clammy, increasing its conductance, and the Q Sensor picks this up.
  • Physiological responses can now even be tracked remotely, in principle without your consent. Last year, Picard and one of her graduate students showed that it was possible to measure heart rate without any surface contact with the body. They used software linked to an ordinary webcam to read information about heart rate, blood pressure and skin temperature based on, among other things, colour changes in the subject's face.
  • In Rio de Janeiro and São Paulo, police officers can decide whether someone is a criminal just by looking at them. Their glasses scan the features of a face, and match them against a database of criminal mugshots. A red light blinks if there's a match.
  • Thad Starner at Georgia Institute of Technology in Atlanta wears a small device he has built that looks like a monocle. It can retrieve video, audio or text snippets of past conversations with people he has spoken with, and even provide real-time links between past chats and topics he is currently discussing.
  • The US military has built a radar-imaging device that can see through walls to capture 3D images of people and objects beyond.
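The expression-matching step described earlier (tracking feature points and comparing the data against a bank of known expressions) can be caricatured as a nearest-neighbor lookup. Everything below is invented for illustration, with vectors shortened to 3 dimensions for readability rather than the article's 24 tracked feature points, and none of the real software's timing or micro-expression analysis:

```python
# Caricature of matching observed facial "feature point" data against a
# bank of known expressions via nearest neighbor. Vectors and labels are
# invented; the real Affectiva software is far more sophisticated.
import math

def classify(features, bank):
    """Return the label whose stored vector is closest to `features`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(bank, key=lambda label: dist(features, bank[label]))

bank = {
    "agreeing":      [0.9, 0.1, 0.0],
    "confused":      [0.1, 0.8, 0.3],
    "concentrating": [0.2, 0.2, 0.9],
}

print(classify([0.85, 0.15, 0.05], bank))  # closest to "agreeing"
```

Even this crude version shows why the system degrades gracefully: an ambiguous face still maps to whichever stored expression it most resembles.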
catbclark

6 Girl Scout cookies you thought you were getting but aren't - Los Angeles Times - 0 views

    • catbclark
       
      $4 dollars here 
  • 6 Girl Scout cookies you thought you were getting but aren't
Javier E

Out of Print, Maybe, but Not Out of Mind - NYTimes.com - 1 views

  • efforts to reimagine the core experience of the book have stumbled. Dozens of publishing start-ups tried harnessing social reading apps or multimedia, but few caught on.
  • Social Books, which let users leave public comments on particular passages and comment on passages selected by others, became Rethink Books and then faltered. Push Pop Press, whose avowed aim was to reimagine the book by mixing text, images, audio, video and interactive graphics, was acquired by Facebook in 2011 and heard from no more. Copia, another highly publicized social reading platform, changed its business model to become a classroom learning tool. The latest to stumble is Small Demons, which explores the interrelationship among books. Users who were struck by the Ziegfeld Follies in “The Great Gatsby,” for instance, could follow a link to the dancers’ appearance in 67 other books. Small Demons said it would close this month without a new investor.
  • “A lot of these solutions were born out of a programmer’s ability to do something rather than the reader’s enthusiasm for things they need,” said Peter Meyers, author of “Breaking the Page,” a forthcoming look at the digital transformation of books. “We pursued distractions and called them enhancements.”
  • ...6 more annotations...
  • The notion that books require too much time to read dates back, at least, to midcentury entrepreneurial operations like Reader’s Digest and CliffsNotes, which offered up predigested texts. So some start-ups chose a basic approach: Take a text and break it up. Safari Flow, a service from Safari Books, offers chapters of technical manuals for a $29 monthly subscription fee. Inkling does the same with more consumer-oriented titles like cookbooks. If you want only the chapter on pasta, you can buy it for $4.99 instead of having to buy the whole book. Citia is a New York start-up with a much more ambitious approach. Working in collaboration with an author, Citia editors take a nonfiction book and reorganize its ideas onto digital cards that can be read on different devices and sent through social networks
  • One of the first books given the Citia treatment was Kevin Kelly’s “What Technology Wants.” Material directly from the book is in quotation marks and the author is referred to in the third person, which lends a somewhat academic distance to the summaries. Sections of the book are summarized on one card, then the reader can drill down into subsections on cards hidden underneath.
  • What to label these stories is another question. The Internet by its nature breaks down borders and unfreezes text. Put a book online and set it free to grow and shrink with new arguments, be broken up and reassembled as readers demand, and it might be only nostalgia that calls it by its old name.
  • “We will continue to recognize books as books as they migrate to the Internet, but our understanding of storytelling will inevitably expand,” Mr. Brantley said. Among the presentations at Books in Browsers this fall: “A Book Isn’t a Book Isn’t a Book” and “The Death of the Reader.”
  • Much of the design innovation at the moment, Mr. Brantley believes, is not coming from publishers, who must still wrestle with delivering both digital and physical books. Instead it is being developed by a tech community that “doesn’t think about stories as the end product. Instead, they think about storytelling platforms that will enable new forms of both authoring and reading.”
  • He cited the enormous success of Wattpad, a Canadian start-up that advertises itself as the world’s largest storytelling community. There are 10 million stories on the site.
Javier E

From Sports Illustrated, the Latest Body Part for Women to Fix - NYTimes.com - 0 views

  • At 44, I am old enough to remember when reconstruction was something you read about in history class, when a muffin top was something delicious you ate at the bakery, a six-pack was how you bought your beer, camel toe was something one might glimpse at the zoo, a Brazilian was someone from the largest country in South America and terms like thigh gap and bikini bridge would be met with blank looks.
  • Now, each year brings a new term for an unruly bit of body that women are expected to subdue through diet and exercise.
  • Girls’ and women’s lives matter. Their safety and health and their rights matter. Whether every inch of them looks like a magazine cover? That, my sisters, does not matter at all.
  • ...5 more annotations...
  • there’s no profit in leaving things as they are. Show me a body part, I’ll show you someone who’s making money by telling women that theirs looks wrong and they need to fix it. Tone it, work it out, tan it, bleach it, tattoo it, lipo it, remove all the hair, lose every bit of jiggle.
  • As a graphic designer and Photoshop teacher, I also have to note that Photoshop is used HEAVILY in these kinds of publications. Even on women with incredibly beautiful (by pop culture standards) bodies. It's quite sad because the imagery we're expected to live up to (or approximate) by cultural standards, is illustration. It's not even real. My boyfriend and I had a big laugh over a Playboy cover a few months ago where the Photoshopping was so extreme (thigh gap and butt cheek) it was anatomically impossible and looked ridiculous. I work in the industry.. I know what the Liquify filter and the Spot Healing Brush can do!
  • We may harp on gender inequality while pursuing stupid fetishes. Well into our middle age, we still try to forcefully wriggle into a size 2 pair of jeans. We foolishly spend tonnes of money on fake (these guys should be sued for false advertising) age-defying, anti-wrinkle creams. Why do we have to have our fuzz and bush disappear while the men have forests on their chests, abdomens, butts, arms and legs? For that we have only ourselves to blame. We just cannot get out of this mindset of being objectified. And we pass on this foolishness to our daughters and grand-daughters. They get trapped, never satisfied with what they see in the mirror. Don't expect the men to change anytime soon. They will always maintain the status quo. It is for us, women, to get out of this rut. We have to 'snatch' gender-equality. It will never be handed to us. PERIOD
  • I spent years dieting and exercising to look good--or really to not look bad. I knew the calories (and probably still do) in thousands of foods. How I regret the time I spent on that and the boyfriends who cared about that. And how much more I had to give to the world. With unprecedented economic injustice, ecosystems collapsing, war breaking out everywhere, nations going under water, people starving in refugee camps, the keys to life, behavior, and disease being unlocked in the biological sciences . . . this is what we think women should spend their time worrying about? Talk about a poverty of ambition. No more. Won't even look at these demeaning magazines when I get my hair cut. If that's what a woman cares about, I try to tell her to stop wasting her time. If that's what a man cares about, he is a waste of my time. What a depressing way to distract women from achieving more in this world. Really wish I'd know this at 12.
  • we believe we're all competing against one another to procreate and participate in evolution. So women (and men) compete ferociously, and body image is a subset of all that. Then there's Lamarckian evolutionary theory and epigenetics... http://en.wikipedia.org/wiki/Lamarckism http://en.wikipedia.org/wiki/Epigenetics Bottom line is that we can't stop this train any more easily than we can stop the anthropocene's Climate Change. Human beings are tempted. Sometimes we win the battle, other times we give in to vanity, hedonism, and ego. This is all a subset of much larger forces at play. Men and women make choices and act within that environment. Deal with it.
kirkpatrickry

Face It, Your Brain Is a Computer - The New York Times - 0 views

  • This approach is misguided. Too many scientists have given up on the computer analogy, and far too little has been offered in its place. In my view, the analogy is due for a rethink.To begin with, all the standard arguments about why the brain might not be a computer are pretty weak. Take the argument that “brains are parallel, but computers are serial.” Critics are right to note that virtually every time a human does anything, many different parts of the brain are engaged; that’s parallel, not serial.
  • But the idea that computers are strictly serial is woefully out of date. Ever since desktop computers became popular, there has always been some degree of parallelism in computers, with several different computations being performed simultaneously, by different components, such as the hard-drive controller and the central processor. And the trend over time in the hardware business has been to make computers more and more parallel, using new approaches like multicore processors and graphics processing units.Skeptics of the computer metaphor also like to argue that “brains are analog, while computers are digital.” The idea here is that things that are digital operate only with discrete divisions, as with a digital watch; things that are analog, like an old-fashioned watch, work on a smooth continuum.
grayton downing

Retracing Steps | The Scientist Magazine® - 1 views

  • growing body of research has highlighted scientists’ inability to reproduce one another’s results, including a 2012 study that found only 11 percent of “landmark” cancer studies investigated could be independently confirmed.
  • “Some communities have standards requiring raw data to be deposited at or before publication, but the computer code is generally not made available, typically due to the time it takes to prepare it for release,”
  • Sage’s solution? An open-source computational platform, called Synapse, which enables seamless collaboration among geographically dispersed scientific teams—providing them with the tools to share data, source code, and analysis methods on specific research projects or on any of the 10,000 datasets in the organization’s massive data corpus. Key to these collaborations are tools embedded in Synapse that allow for everything from data “freezing” and versioning controls to graphical provenance records—delineating who did what to which dataset, for example.
  • ...2 more annotations...
  • “It was indeed the connecting data framework that held the entire project together,” said Josh Stuart, professor of biomolecular engineering at the University of California, Santa Cruz, who is part of the TCGA-led project.
  • “It provides a framework for the science to be extended upon, instead of publication as a finite endpoint for research,”
Javier E

How YouTube Drives People to the Internet's Darkest Corners - WSJ - 0 views

  • YouTube is the new television, with more than 1.5 billion users, and videos the site recommends have the power to influence viewpoints around the world.
  • Those recommendations often present divisive, misleading or false content despite changes the site has recently made to highlight more-neutral fare, a Wall Street Journal investigation found.
  • Behind that growth is an algorithm that creates personalized playlists. YouTube says these recommendations drive more than 70% of its viewing time, making the algorithm among the single biggest deciders of what people watch.
  • ...25 more annotations...
  • People cumulatively watch more than a billion YouTube hours daily world-wide, a 10-fold increase from 2012
  • After the Journal this week provided examples of how the site still promotes deceptive and divisive videos, YouTube executives said the recommendations were a problem.
  • When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints.
  • Such recommendations play into concerns about how social-media sites can amplify extremist voices, sow misinformation and isolate users in “filter bubbles”
  • Unlike Facebook Inc. and Twitter Inc. sites, where users see content from accounts they choose to follow, YouTube takes an active role in pushing information to users they likely wouldn’t have otherwise seen.
  • “The editorial policy of these new platforms is to essentially not have one,”
  • “That sounded great when it was all about free speech and ‘in the marketplace of ideas, only the best ones win.’ But we’re seeing again and again that that’s not what happens. What’s happening instead is the systems are being gamed and people are being gamed.”
  • YouTube has been tweaking its algorithm since last autumn to surface what its executives call “more authoritative” news sources.
  • YouTube last week said it is considering a design change to promote relevant information from credible news sources alongside videos that push conspiracy theories.
  • The Journal investigation found YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content.
  • YouTube engineered its algorithm several years ago to make the site “sticky”—to recommend videos that keep users staying to watch still more, said current and former YouTube engineers who helped build it. The site earns money selling ads that run before and during videos.
  • YouTube’s algorithm tweaks don’t appear to have changed how YouTube recommends videos on its home page. On the home page, the algorithm provides a personalized feed for each logged-in user largely based on what the user has watched.
  • There is another way to calculate recommendations, demonstrated by YouTube’s parent, Alphabet Inc.’s Google. It has designed its search-engine algorithms to recommend sources that are authoritative, not just popular.
  • Google spokeswoman Crystal Dahlen said that Google improved its algorithm last year “to surface more authoritative content, to help prevent the spread of blatantly misleading, low-quality, offensive or downright false information,” adding that it is “working with the YouTube team to help share learnings.”
  • In recent weeks, it has expanded that change to other news-related queries. Since then, the Journal’s tests show, news searches in YouTube return fewer videos from highly partisan channels.
  • YouTube’s recommendations became even more effective at keeping people on the site in 2016, when the company began employing an artificial-intelligence technique called a deep neural network that makes connections between videos that humans wouldn’t. The algorithm uses hundreds of signals, YouTube says, but the most important remains what a given user has watched.
  • Using a deep neural network makes the recommendations more of a black box to engineers than previous techniques,
  • “We don’t have to think as much,” he said. “We’ll just give it some raw data and let it figure it out.”
  • To better understand the algorithm, the Journal enlisted former YouTube engineer Guillaume Chaslot, who worked on its recommendation engine, to analyze thousands of YouTube’s recommendations on the most popular news-related queries
  • Mr. Chaslot created a computer program that simulates the “rabbit hole” users often descend into when surfing the site. In the Journal study, the program collected the top five results to a given search. Next, it gathered the top three recommendations that YouTube promoted once the program clicked on each of those results. Then it gathered the top three recommendations for each of those promoted videos, continuing four clicks from the original search.
  • The first analysis, of November’s top search terms, showed YouTube frequently led users to divisive and misleading videos. On the 21 news-related searches left after eliminating queries about entertainment, sports and gaming—such as “Trump,” “North Korea” and “bitcoin”—YouTube most frequently recommended these videos:
  • The algorithm doesn’t seek out extreme videos, they said, but looks for clips that data show are already drawing high traffic and keeping people on the site. Those videos often tend to be sensationalist and on the extreme fringe, the engineers said.
  • Repeated tests by the Journal as recently as this week showed the home page often fed far-right or far-left videos to users who watched relatively mainstream news sources, such as Fox News and MSNBC.
  • Searching some topics and then returning to the home page without doing a new search can produce recommendations that push users toward conspiracy theories even if they seek out just mainstream sources.
  • After searching for “9/11” last month, then clicking on a single CNN clip about the attacks, and then returning to the home page, the fifth and sixth recommended videos were about claims the U.S. government carried out the attacks. One, titled “Footage Shows Military Plane hitting WTC Tower on 9/11—13 Witnesses React,” had 5.3 million views.
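The “rabbit hole” crawl the Journal describes (top five search results, then the top three promoted recommendations per video, followed four clicks from the original search) can be sketched roughly as below. This is an illustrative reconstruction only: `search` and `recommendations` are hypothetical stand-in functions supplied by the caller, whereas Mr. Chaslot's actual program scraped live YouTube pages and is not public.

```python
# Sketch of the breadth-first "rabbit hole" crawl described above.
# Assumes hypothetical callables: search(query) -> list of video ids,
# recommendations(video) -> list of recommended video ids.

def crawl(query, search, recommendations, depth=4, n_results=5, n_recs=3):
    """Collect every video reached within `depth` recommendation clicks
    of the top `n_results` search results for `query`."""
    frontier = search(query)[:n_results]   # top search results
    seen = list(frontier)
    for _ in range(depth):
        next_frontier = []
        for video in frontier:
            recs = recommendations(video)[:n_recs]  # top promoted videos
            next_frontier.extend(recs)
            seen.extend(recs)
        frontier = next_frontier
    return seen
```

With these parameters the crawl balloons quickly (5 starting videos yield 605 collected videos after four levels of expansion), which is why sensationalist clips surfaced by the recommender dominate the resulting sample.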
Javier E

Why Trump's 'animals' remark should make everyone angry - The Washington Post - 0 views

  • why did so many people get so angry when President Trump said “These are animals” in response to a remark about a gang called MS-13?
  • Obviously, because he wasn’t just stating a simple fact; he was using those words to demote those people from the human race.
  • And by the transitive property, to demote immigrants from the empathy and consideration that decent people extend to other human beings.
  • there is a plausible reading of Trump’s words that refers to the gang, or similar criminals, not to immigrants in general. But in light of Trump’s history, that surface reading isn’t enough.
  • the “animals” controversy illustrates a broader truth: It’s a common human failing to characterize outgroups by the worst examples we can find while dismissing our own bad apples as isolated minorities who have nothing to do with the rest of us.
  • in the succeeding days, he has seemed obsessed with repeating the word “animals” every time the social media storm threatened to die down.
  • He’s less careful when immigrants are involved. Immigrants actually have a lower crime rate than native-born Americans, yet Trump sure seems to spend an awful lot of time talking about the small fraction who are criminals.
  • It’s instructive to compare Trump’s harsh language about immigrant “animals” with his response to a direct question about a different group of people behaving badly. After white nationalists staged marches in Charlottesville, culminating in a death, Trump was at pains to distinguish the Nazis from the “people in that group that were there to innocently protest.”
  • Consider how conservatives feel, for example, when the left focuses disproportionate energy on the tiny portion of the population that belongs to the alt-right or to white-nationalist groups.
  • Or consider the lingering indignation over Barack Obama’s suggestion that in some small towns in the Midwest and Pennsylvania, where the economy has been devastated by de-industrialization, some people “get bitter, they cling to guns or religion or antipathy to people who aren’t like them.”
  • Obama wasn’t making an abstract observation about human psychology; he was implying that while Democrats come to their views through thoughtful reflection, the Republican rubes simply react to environmental stimulus, like amoebas.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
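Gild's actual model is proprietary, but the kind of scoring pipeline the article describes, which blends code-quality measures, forum reputation, and language cues into a single rating, can be sketched as a weighted combination of normalized signals. Every feature name and weight below is invented for illustration, not taken from Gild.

```python
# Illustrative sketch of a "people analytics" score: a weighted average of
# normalized signals (each assumed to lie in [0, 1]). Feature names and
# weights are hypothetical stand-ins for the signals the article describes.

def programmer_score(features, weights=None):
    """Combine per-signal scores into one rating in [0, 1]."""
    if weights is None:
        weights = {
            "code_simplicity": 0.25,     # e.g. static analysis of open-source code
            "documentation": 0.15,       # how well the code is documented
            "adoption_by_others": 0.30,  # how often other programmers reuse it
            "forum_reputation": 0.20,    # popularity of Q&A answers
            "language_signals": 0.10,    # phrases correlated with strong coders
        }
    total = sum(weights.values())
    # Missing signals default to 0.0, mirroring candidates with no public trace.
    return sum(weights[k] * features.get(k, 0.0) for k in weights) / total
```

The interesting part of the real system is upstream of this arithmetic: establishing, from programmers whose open-source work can be evaluated directly, which indirect signals (forum behavior, word choice) correlate with quality, so that coders with no public code can still be scored.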
Javier E

I Sent All My Text Messages in Calligraphy for a Week - Cristina Vanko - The Atlantic - 2 views

  • I decided to blend a newfound interest in calligraphy with my lifelong passion for written correspondence to create a new kind of text messaging. The idea: I wanted to message friends using calligraphic texts for one week. The average 18-to-24-year-old sends and gets something like 4,000 messages a month, which includes sending more than 500 texts a week, according to Experian. The week of my experiment, I sent only 100.
  • We are a youth culture that relies heavily on emojis. I didn’t realize how much I depend on emojis and emoticons to express myself until I didn’t have them. Hand-drawn emoticons, though original, just aren’t the same; I couldn’t render them with the cleanliness of a typeface, and sketching emojis is too time-consuming. To bridge the gap between speed and the need for graphic imagery, I sent out selfies on special occasions when my facial expression spoke louder than words.
  • That week, the sense of urgency I normally felt about my phone virtually vanished. It was like back when texts were rationed, and when I lacked anxiety about viewing "read" receipts. I didn’t feel naked without having my phone on me every moment. 
  • So while the experiment began as an exercise to learn calligraphy, it doubled as a useful sort of digital detox that revealed my relationship with technology. Here's what I learned:
  • Receiving handwritten messages made people feel special. The awesome feeling of receiving personalized mail really can be replicated with a handwritten text.
  • Handwriting allows for more self-expression. I found I could give words a certain flourish to mimic the intonation of spoken language. Expressing myself via handwriting could also give the illusion of real-time presence. One friend told me, “it’s like you’re here with us!”
  • Before I started, I established rules for myself: I could create only handwritten text messages for seven days, absolutely no using my phone’s keyboard. I had to write out my messages on paper, photograph them, then hit “send.” I didn’t reveal my plan to my friends unless asked
  • Sometimes you don't need to respond. Most conversations aren’t life or death situations, so it was refreshing to feel 100 percent present in all interactions. I didn’t interrupt conversations by checking social media or shooting text messages to friends. I was more in tune with my surroundings. On transit, I took part in people watching—which, yes, meant mostly watching people staring at their phones. I smiled more at passersby while walking since I didn’t feel the need to avoid human interaction by staring at my phone.
  • A phone isn't only a texting device. As I texted less, I used my phone less frequently—mostly because I didn’t feel the need to look at it to keep me busy, nor did I want to feel guilty for using the keyboard in other applications. I still took photos, streamed music, and logged workouts, since I felt okay pressing buttons for selection purposes
  • People don’t expect to receive phone calls anymore. Texting makes for a less intimidating, more convenient experience. But it wasn't that long ago that real-time voice calls were the norm. It's clear to me that, these days, people prefer to be warned before a phone call comes in.
  • Having a pen and paper is handy at all times. Writing out responses is a great reminder to slow down and use your hands. While all keys on a keyboard feel the same, it’s difficult to replicate the tactile activity of tracing a letter’s shape
  • My sent messages were more thoughtful.
  • I was more careful with grammar and spelling. People often ignore the rules of grammar and spelling just to maintain the pace of texting conversation. But because a typical calligraphic text took minutes to craft, I had time to make sure I got things right. The usual texting acronyms and misspellings look absurd when texted with type, but they'd be especially ridiculous written by hand.
kushnerha

BBC - Future - Will emoji become a new language? - 2 views

  • Emoji now appear in around half of the sentences on sites like Instagram, and Facebook looks set to introduce them alongside the famous “like” button as a way of expressing your reaction to a post.
  • If you were to believe the headlines, this is just the tipping point: some outlets have claimed that emoji are an emerging language that could soon compete with English in global usage. To many, this would be an exciting evolution of the way we communicate; to others, it is linguistic Armageddon.
  • Do emoji show the same characteristics as other communicative systems and actual languages? And what do they help us express that words alone can’t? When emoji appear with text, they often supplement or enhance the writing. This is similar to gestures that appear along with speech. Over the past three decades, research has shown that our hands provide important information that often transcends and clarifies the message in speech. Emoji serve this function too – for instance, adding a kissy or winking face can disambiguate whether a statement is flirtatiously teasing or just plain mean.
  • ...17 more annotations...
  • This is a key point about language use: rarely is natural language ever limited to speech alone. When we are speaking, we constantly use gestures to illustrate what we mean. For this reason, linguists say that language is “multi-modal”. Writing takes away that extra non-verbal information, but emoji may allow us to re-incorporate it into our text.
  • Emoji are not always used as embellishments, however – sometimes, strings of the characters can themselves convey meaning in a longer sequence on their own. But to constitute their own language, they would need a key component: grammar.
  • A grammatical system is a set of constraints that governs how the meaning of an utterance is packaged in a coherent way. Natural language grammars have certain traits that distinguish them. For one, they have individual units that play different roles in the sequence – like nouns and verbs in a sentence. Also, grammar is different from meaning
  • When emoji are isolated, they are primarily governed by simple rules related to meaning alone, without these more complex rules. For instance, according to research by Tyler Schnoebelen, people often create strings of emoji that share a common meaning
  • This sequence has little internal structure; even when it is rearranged, it still conveys the same message. These images are connected solely by their broader meaning. We might consider them to be a visual list: “here are all things related to celebrations and birthdays.” Lists are certainly a conventionalised way of communicating, but they don’t have grammar the way that sentences do.
  • What if the order did matter though? What if they conveyed a temporal sequence of events? Consider this example, which means something like “a woman had a party where they drank, and then opened presents and then had cake”:
  • In all cases, the doer of the action (the agent) precedes the action. In fact, this pattern is commonly found in both full languages and simple communication systems. For example, the majority of the world’s languages place the subject before the verb of a sentence.
  • These rules may seem like the seeds of grammar, but psycholinguist Susan Goldin-Meadow and colleagues have found this order appears in many other systems that would not be considered a language. For example, this order appears when people arrange pictures to describe events from an animated cartoon, or when speaking adults communicate using only gestures. It also appears in the gesture systems created by deaf children who cannot hear spoken languages and are not exposed to sign languages.
  • describes the children as lacking exposure to a language and thus inventing their own manual systems to communicate, called “homesigns”. These systems are limited in the size of their vocabularies and the types of sequences they can create. For this reason, the agent-act order seems to arise not from a grammar but from basic heuristics – practical workarounds – based on meaning alone. Emoji seem to tap into this same system.
  • Nevertheless, some may argue that despite emoji’s current simplicity, this may be the groundwork for emerging complexity – that although emoji do not constitute a language at the present time, they could develop into one over time.
  • Could an emerging “emoji visual language” be developing in a similar way, with actual grammatical structure? To answer that question, you need to consider the intrinsic constraints on the technology itself. Emoji are created by typing into a computer like text. But, unlike text, most emoji are provided as whole units, except for the limited set of emoticons which convert to emoji, like :) or ;). When writing text, we build units (words) out of building blocks (letters), rather than searching through a list of every whole word in the language.
  • emoji force us to convey information in a linear unit-unit string, which limits how complex expressions can be made. These constraints may mean that they will never be able to achieve even the most basic complexity that we can create with normal and natural drawings.
  • What’s more, these limits also prevent users from creating novel signs – a requisite for all languages, especially emerging ones. Users have no control over the development of the vocabulary. As the “vocab list” for emoji grows, it will become increasingly unwieldy: using them will require a conscious search process through an external list, not an easy generation from our own mental vocabulary, like the way we naturally speak or draw. This is a key point – it means that emoji lack the flexibility needed to create a new language.
  • we already have very robust visual languages, as can be seen in comics and graphic novels. As I argue in my book, The Visual Language of Comics, the drawings found in comics use a systematic visual vocabulary (such as stink lines to represent smell, or stars to represent dizziness). Importantly, the available vocabulary is not constrained by technology and has developed naturally over time, like spoken and written languages.
  • grammar of sequential images is more of a narrative structure – not of nouns and verbs. Yet, these sequences use principles of combination like any other grammar, including roles played by images, groupings of images, and hierarchic embedding.
  • measured participants’ brainwaves while they viewed sequences one image at a time where a disruption appeared either within the groupings of panels or at the natural break between groupings. The particular brainwave responses that we observed were similar to those that experimenters find when violating the syntax of sentences. That is, the brain responds the same way to violations of “grammar”, whether in sentences or sequential narrative images.
  • I would hypothesise that emoji can use a basic narrative structure to organise short stories (likely made up of agent-action sequences), but I highly doubt that they would be able to create embedded clauses like these. I would also doubt that you would see the same kinds of brain responses that we saw with the comic strip sequences.