
TOK Friends: Group items tagged "self perception"


kushnerha

If Philosophy Won't Diversify, Let's Call It What It Really Is - The New York Times - 0 views

  • The vast majority of philosophy departments in the United States offer courses only on philosophy derived from Europe and the English-speaking world. For example, of the 118 doctoral programs in philosophy in the United States and Canada, only 10 percent have a specialist in Chinese philosophy as part of their regular faculty. Most philosophy departments also offer no courses on Africana, Indian, Islamic, Jewish, Latin American, Native American or other non-European traditions. Indeed, of the top 50 philosophy doctoral programs in the English-speaking world, only 15 percent have any regular faculty members who teach any non-Western philosophy.
  • Given the importance of non-European traditions in both the history of world philosophy and in the contemporary world, and given the increasing numbers of students in our colleges and universities from non-European backgrounds, this is astonishing. No other humanities discipline demonstrates this systematic neglect of most of the civilizations in its domain. The present situation is hard to justify morally, politically, epistemically or as good educational and research training practice.
  • While a few philosophy departments have made their curriculums more diverse, and while the American Philosophical Association has slowly broadened the representation of the world’s philosophical traditions on its programs, progress has been minimal.
  • Many philosophers and many departments simply ignore arguments for greater diversity; others respond with arguments for Eurocentrism that we and many others have refuted elsewhere. The profession as a whole remains resolutely Eurocentric.
  • Instead, we ask those who sincerely believe that it does make sense to organize our discipline entirely around European and American figures and texts to pursue this agenda with honesty and openness. We therefore suggest that any department that regularly offers courses only on Western philosophy should rename itself “Department of European and American Philosophy.”
  • We see no justification for resisting this minor rebranding (though we welcome opposing views in the comments section to this article), particularly for those who endorse, implicitly or explicitly, this Eurocentric orientation.
  • Some of our colleagues defend this orientation on the grounds that non-European philosophy belongs only in “area studies” departments, like Asian Studies, African Studies or Latin American Studies. We ask that those who hold this view be consistent, and locate their own departments in “area studies” as well, in this case, Anglo-European Philosophical Studies.
  • Others might argue against renaming on the grounds that it is unfair to single out philosophy: We do not have departments of Euro-American Mathematics or Physics. This is nothing but shabby sophistry. Non-European philosophical traditions offer distinctive solutions to problems discussed within European and American philosophy, raise or frame problems not addressed in the American and European tradition, or emphasize and discuss more deeply philosophical problems that are marginalized in Anglo-European philosophy. There are no comparable differences in how mathematics or physics are practiced in other contemporary cultures.
  • Of course, we believe that renaming departments would not be nearly as valuable as actually broadening the philosophical curriculum and retaining the name “philosophy.” Philosophy as a discipline has a serious diversity problem, with women and minorities underrepresented at all levels among students and faculty, even while the percentage of these groups increases among college students. Part of the problem is the perception that philosophy departments are nothing but temples to the achievement of males of European descent. Our recommendation is straightforward: Those who are comfortable with that perception should confirm it in good faith and defend it honestly; if they cannot do so, we urge them to diversify their faculty and their curriculum.
  • This is not to disparage the value of the works in the contemporary philosophical canon: Clearly, there is nothing intrinsically wrong with philosophy written by males of European descent; but philosophy has always become richer as it becomes increasingly diverse and pluralistic.
  • We hope that American philosophy departments will someday teach Confucius as routinely as they now teach Kant, that philosophy students will eventually have as many opportunities to study the “Bhagavad Gita” as they do the “Republic,” that the Flying Man thought experiment of the Persian philosopher Avicenna (980-1037) will be as well-known as the Brain-in-a-Vat thought experiment of the American philosopher Hilary Putnam (1926-2016), that the ancient Indian scholar Candrakirti’s critical examination of the concept of the self will be as well-studied as David Hume’s, that Frantz Fanon (1925-1961), Kwasi Wiredu (1931- ), Lame Deer (1903-1976) and Maria Lugones will be as familiar to our students as their equally profound colleagues in the contemporary philosophical canon. But, until then, let’s be honest, face reality and call departments of European-American Philosophy what they really are.
  • For demographic, political and historical reasons, the change to a more multicultural conception of philosophy in the United States seems inevitable. Heed the Stoic adage: “The Fates lead those who come willingly, and drag those who do not.”
Javier E

Can truth survive this president? An honest investigation. - The Washington Post - 0 views

  • in the summer of 2002, long before “fake news” or “post-truth” infected the vernacular, one of President George W. Bush’s top advisers mocked a journalist for being part of the “reality-based community.” Seeking answers in reality was for suckers, the unnamed adviser explained. “We’re an empire now, and when we act, we create our own reality.”
  • This was the hubris and idealism of a post-Cold War, pre-Iraq War superpower: If you exert enough pressure, events will bend to your will.
  • the deceit emanating from the White House today is lazier, more cynical. It is not born of grand strategy or ideology; it is impulsive and self-serving. It is not arrogant, but shameless.
  • Bush wanted to remake the world. President Trump, by contrast, just wants to make it up as he goes along
  • Through all their debates over who is to blame for imperiling truth (whether Trump, postmodernism, social media or Fox News), as well as the consequences (invariably dire) and the solutions (usually vague), a few conclusions materialize, should you choose to believe them.
  • There is a pattern and logic behind the dishonesty of Trump and his surrogates; however, it’s less multidimensional chess than the simple subordination of reality to political and personal ambition
  • Trump’s untruth sells best precisely when feelings and instincts overpower facts, when America becomes a safe space for fabrication.
  • Rand Corp. scholars Jennifer Kavanagh and Michael D. Rich point to the Gilded Age, the Roaring Twenties and the rise of television in the mid-20th century as earlier periods of what they call “Truth Decay” — marked by growing disagreement over facts and interpretation of data; a blurring of lines between opinion, fact and personal experience; and diminishing trust in once-respected sources of information.
  • In eras of truth decay, “competing narratives emerge, tribalism within the U.S. electorate increases, and political paralysis and dysfunction grow,”
  • Once you add the silos of social media as well as deeply polarized politics and deteriorating civic education, it becomes “nearly impossible to have the types of meaningful policy debates that form the foundation of democracy.”
  • To interpret our era’s debasement of language, Kakutani reflects perceptively on the World War II-era works of Victor Klemperer, who showed how the Nazis used “words as ‘tiny doses of arsenic’ to poison and subvert the German culture,” and of Stefan Zweig, whose memoir “The World of Yesterday” highlights how ordinary Germans failed to grasp the sudden erosion of their freedoms.
  • Kakutani calls out lefty academics who for decades preached postmodernism and social constructivism, which argued that truth is not universal but a reflection of relative power, structural forces and personal vantage points.
  • postmodernists rejected Enlightenment ideals as “vestiges of old patriarchal and imperialist thinking,” Kakutani writes, paving the way for today’s violence against fact in politics and science.
  • “dumbed-down corollaries” of postmodernist thought have been hijacked by Trump’s defenders, who use them to explain away his lies, inconsistencies and broken promises.
  • intelligent-design proponents and later climate deniers drew from postmodernism to undermine public perceptions of evolution and climate change. “Even if right-wing politicians and other science deniers were not reading Derrida and Foucault, the germ of the idea made its way to them: science does not have a monopoly on the truth.”
  • McIntyre quotes at length from mea culpas by postmodernist and social constructivist writers agonizing over what their theories have wrought, shocked that conservatives would use them for nefarious purposes
  • pro-Trump troll and conspiracy theorist Mike Cernovich, who helped popularize the “Pizzagate” lie, has forthrightly cited his unlikely influences. “Look, I read postmodernist theory in college,” Cernovich told the New Yorker in 2016. “If everything is a narrative, then we need alternatives to the dominant narrative. I don’t seem like a guy who reads [Jacques] Lacan, do I?”
  • When truth becomes malleable and contestable regardless of evidence, a mere tussle of manufactured narratives, it becomes less about conveying facts than about picking sides, particularly in politics.
  • In “On Truth,” Cambridge University philosopher Simon Blackburn writes that truth is attainable, if at all, “only at the vanishing end points of enquiry,” adding that, “instead of ‘facts first’ we may do better if we think of ‘enquiry first,’ with the notion of fact modestly waiting to be invited to the feast afterward.”
  • He is concerned, but not overwhelmingly so, about the survival of truth under Trump. “Outside the fevered world of politics, truth has a secure enough foothold,” Blackburn writes. “Perjury is still a serious crime, and we still hope that our pilots and surgeons know their way about.”
  • Kavanagh and Rich offer similar consolation: “Facts and data have become more important in most other fields, with political and civil discourse being striking exceptions. Thus, it is hard to argue that the world is truly ‘post-fact.’ ”
  • McIntyre argues persuasively that our methods of ascertaining truth — not just the facts themselves — are under attack, too, and that this assault is especially dangerous.
  • Ideologues don’t just disregard facts they disagree with, he explains, but willingly embrace any information, however dubious, that fits their agenda. “This is not the abandonment of facts, but a corruption of the process by which facts are credibly gathered and reliably used to shape one’s beliefs about reality. Indeed, the rejection of this undermines the idea that some things are true irrespective of how we feel about them.”
  • “It is hardly a depressing new phenomenon that people’s beliefs are capable of being moved by their hopes, grievances and fears,” Blackburn writes. “In order to move people, objective facts must become personal beliefs.” But it can’t work — or shouldn’t work — in reverse.
  • More than fearing a post-truth world, Blackburn is concerned by a “post-shame environment,” in which politicians easily brush off their open disregard for truth.
  • it is human nature to rationalize away the dissonance. “Why get upset by his lies, when all politicians lie?” Kakutani asks, distilling the mind-set. “Why get upset by his venality, when the law of the jungle rules?”
  • So any opposition is deemed a witch hunt, or fake news, rigged or just so unfair. Trump is not killing the truth. But he is vandalizing it, constantly and indiscriminately, diminishing its prestige and appeal, coaxing us to look away from it.
  • the collateral damage includes the American experiment.
  • “One of the most important ways to fight back against post-truth is to fight it within ourselves,” he writes, whatever our particular politics may be. “It is easy to identify a truth that someone else does not want to see. But how many of us are prepared to do this with our own beliefs? To doubt something that we want to believe, even though a little piece of us whispers that we do not have all the facts?”
pier-paolo

How ADHD Affects Working Memory | HealthyPlace - 0 views

  • Attention-deficit/hyperactivity disorder (ADHD) affects working memory as well as short- and long-term memory
  • People with ADHD struggle with "working memory," a term that used to be interchangeable with "short-term memory."
  • Long-term memory is essentially a storage bank of information that exists thanks to short-term and working memory.
  • Impaired working memory makes it hard to follow directions and instructions because they require holding multiple steps in mind. It can also lead to losing important items and missing deadlines
  • We have a hard time prioritizing, and having a “good” memory involves paying attention to what is important. One study required children to remember specific words from a list. These were the “important” or priority words. Those with ADHD remembered the same number of words as those without the condition, but they were less likely to recall the important words.
  • working memory is the ability to manipulate the information held in short-term memory
  • People with ADHD sometimes interrupt others because they are afraid of forgetting what they want to say
  • Attention and memory are connected.
  • ADHD makes it hard to control one’s attention. Experts note that having problems with “source discrimination” and “selective attention” leads to being “overwhelmed by unimportant stimuli.”
  • In other words, we experience an information overload and do not know what to remember. It goes back to the test with the ADHD children who recalled the same number of words as their peers but not the “right” words.
  • On average, processing speed was low, but when looked at individually, each child with ADHD alternately processed problems quickly and slowly, though often as accurately as the other children.
margogramiak

What happens when your brain can't tell which way is up or down? Study shows that how w... - 0 views

  • What feels like up may actually be some other direction depending on how our brains process our orientation, according to psychology researchers at York University's Faculty of Health.
    • margogramiak
       
      Excited to get an explanation for this statement
  • an individual's interpretation of the direction of gravity can be altered by how their brain responds to visual information.
    • margogramiak
       
      So, that means that everyone's brain responds differently to visual information. What factor plays into this?
  • found, using virtual reality, that people differ in how much they are influenced by their visual environment
    • margogramiak
       
      oh, interesting.
  • "These findings may also help us to better understand and predict why astronauts may misestimate how far they have moved in a given situation, especially in the microgravity of space," says Harris.
    • margogramiak
       
      I didn't know this was an issue in the first place....
  • Not only did the VRI-vulnerable group rely more on vision to tell them how they were oriented, but they also found visual motion to be more powerful in evoking the sensation of moving through the scene,
    • margogramiak
       
      wow, that's interesting.
  • Deciding which way is up is helped by the fact that we normally move at right angles to gravity.
    • margogramiak
       
      One of Newton's laws!!!!
  • But if a person's perception of gravity is altered by the visual environment or by removing gravity, this distinction becomes much harder."
    • margogramiak
       
      That makes sense.
  • The findings could also be helpful for virtual reality game designers, as certain virtual environments may lead to differences in how players interpret and move through the game.
    • margogramiak
       
      It's hard to imagine virtual reality getting more realistic than it is now.
Emily Freilich

How Do Experiences Become Memories? : NPR - 0 views

  • So if memory is reconstructed as Scott Fraser says, then do we really know what memory is?
  • And it was absolutely glorious music. And at the very end of the recording, there was a dreadful screeching sound.
  • He had had the experience, he had had 20 minutes of glorious music. They counted for nothing. Because he was left with a memory, the memory was ruined, and the memory was all that he had gotten to keep.
  • And then there is a remembering self, and the remembering self is the one that keeps score and maintains the story of our life, and it's the one that the doctor approaches in asking the question, how have you been feeling lately?
  • one thing that is very difficult is to have a very clear memory and to realize that it is absolutely untrue. Normally, some memories really have the appeal of a visual perception. You know, when I see things, I normally don't doubt them. Doubting what you see is a very odd experience. And doubting what you remember is a little less odd than doubting what you see. But it's also a pretty odd experience, because some memories come with a very compelling sense of truth about them and that happens to be the case even for memories that are not true.
Javier E

How the Internet Gets Inside Us : The New Yorker - 0 views

  • It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • Robert K. Logan’s “The Sixth Language” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness.
  • In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix.
  • The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965.
  • If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.
  • Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began.
  • Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
  • What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
  • A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them.
  • And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
  • Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.
  • the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.
Javier E

Buddhism and the Brain § SEEDMAGAZINE.COM - 0 views

  • Anatta is the doctrine that there is no unified, unchanging self. The mind is more like a concert: constantly changing emotions, perceptions, and thoughts. Our minds are fragmented and impermanent. When the band changes, we expect the music to change. Both Buddhism and neuroscience converge on a similar point of view: the way it feels isn’t how it is. There is no permanent, constant soul in the background. Even our language about ourselves is to be distrusted (hence the tortured negation of anatta). In the broadest strokes, then, neuroscience and Buddhism agree.
  • How did Buddhism get so much right? I speak here as an outsider, but it seems to me that Buddhism started with a bit of empiricism. Perhaps the founders of Buddhism were pre-scientific, but they did use empirical data. They noted the natural world: the sun sets, the wind blows into a field, one insect eats another. There is constant change, shifting parts, and impermanence. They called this impermanence anicca, and it forms a central dogma of Buddhism.
sissij

These Wearables Are All About Neuroscience | Big Think - 0 views

  • Artist, writer, and experimental philosopher Jonathon Keats, fresh from his recent Reciprocal Biomimicry project, is back, and this time it’s wearable.
  • It’s clothing designed to alter one’s self-perception.
  • Wearing clothes that make you feel good isn’t new, of course, but Keats’ press release claims to be “applying cutting-edge neuroscience to millennia of costume history.”
  • The bracelets can encourage the wearer to assume a “power pose,” boosting self-assurance through the release of testosterone.
  • Superego shades have irises that open and close in sync with the wearer’s breathing, raising his or her consciousness of his or her respiration.
  • Superego shoes offer heels whose height can be adjusted to ensure the wearer is always taller than anyone with whom he or she is speaking.
  •  
    I think it is very interesting that even wearable designs can be related to neuroscience; the two subjects seem very far apart to me. These designs are fascinating because they combine ideas from science with artistic design. As we learned in English during our speech project, a power pose is a standing position that can strengthen our confidence and persuasiveness. Clothing designed this way can guide us into such poses. I think this is a fantastic idea. I really like the height-adjustable high heel. As a short person, I know how people feel when they have to raise their heads to talk to someone. --Sissi (3/12/2017)
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University at Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’”
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he was noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
margogramiak

Being alone and socializing with others each contributes differently to personal growth... - 0 views

  • How do people experience time alone and time with others? Findings from a new Bar-Ilan University study reveal the intricacies of people's experiences in these basic social conditions.
    • margogramiak
       
      It makes sense that balance is important here, as it is in almost every other aspect of life.
  • This approach shed light on people's perceptions when free to express themselves without being bound to specific questions.
    • margogramiak
       
      It makes sense that alone time is just as important as social time.
  • when people think about themselves with others, they are more focused on the present, and less focused on the past or the future than when they think about themselves alone.
    • margogramiak
       
      That's so true. Being with other people makes you so much more aware of the moment.
  • Time alone is reflected in people's thoughts as an opportunity to think about past experiences and future plans, to relax from the stress of social interactions, and to engage in self-selected leisure activities.
    • margogramiak
       
      I feel like this is the best-case scenario, though. Being alone also means time to stress about the past and future.
  • “Being alone and being with others are represented in people’s minds as qualitatively different experiences, each contributing to the formation of an integrated self.”
    • margogramiak
       
      I wonder to what extent people are affected by lots of alone time during COVID.
  • For those facing current lockdowns alone, Uziel says the present findings, which highlight potential constructive effects of time alone, indicate that this could also be an opportunity for personal growth.
    • margogramiak
       
      Oh! Here's an answer to my question.
kaylynfreeman

Why Lack of Human Touch Can Be Difficult Amid Coronavirus | Time - 1 views

  • With people around the world practicing social distancing and self-isolation to curb the further spread of coronavirus, some are starting to feel the effects of a lack of human touch.
  • “Touch is the fundamental language of connection,” says Keltner. “When you think about a parent-child bond or two friends or romantic partners, a lot of the ways in which we connect and trust and collaborate are founded in touch.”
  • It’s not just about how we feel emotionally. Keltner adds that “touch deprivation” can impact people on a psychological and even physical level
  • “Big parts of our brains are devoted to making sense of touch and our skin has billions of cells that process information about it,”
  • “The right type of friendly touch—like hugging your partner or linking arms with a dear friend—calms your stress response down. [Positive] touch activates a big bundle of nerves in your body that improves your immune system, regulates digestion and helps you sleep well. It also activates parts of your brain that help you empathize.”
  • Psychologist Sheldon Cohen and other researchers at Carnegie Mellon University cited hugging specifically as a form of touch that can strengthen the immune system in a 2014 study investigating whether receiving hugs—and more broadly, social support that gives the perception that one is cared for—could make people less susceptible to one of the viruses that causes the common cold.
  • Broadly speaking, the participants who had reported having more social support were less likely to get sick—and those who got more hugs were far more likely to report feeling socially supported.
  • “Everybody should be open to people being a little more socially distant and not touching as much. Some of it will return and some of it won’t.”
  • Although there’s no exact substitute for human touch, if you’re struggling with this aspect of self-isolating in particular, there are a few alternatives that can offer similar health benefits for people who are social distancing
  • “In-person interactions have a big effect on the brain releasing oxytocin, but interacting via video is actually not that [different],” he explains. “It’s maybe 80% as effective. Video conferencing is a great way to see and be seen.”
  • Keltner adds that dancing, singing or doing yoga with others via an online platform can also be highly effective substitutes for physical contact
  • “Not only would it be good to prevent coronavirus disease; it probably would decrease instances of influenza dramatically in this country,”
  • Zak says U.S. customs like shaking hands and hugging may be changed forever and suggests that non-tactile greetings like a nod, bow or wave may come to replace them. However, he says it will still be important to find ways to reintroduce the humanity of positive touch into in-person interactions without putting anyone’s physical or mental health in jeopardy.
  • “When we’re touched [in a positive way], a cascade of events happens in the brain and one of the important ones is the release of a neurochemical called oxytocin,”
  • But for those who are quarantining alone or with people with whom they don’t have physical contact, loneliness and social isolation are growing health concerns.
  • this process reduces stress and improves immunity. “That’s super valuable in a time of pandemic.”
  • If you’re using a video chat service for work or school, Zak recommends that you take five minutes at the beginning of the call to focus on interpersonal connection.
  • to illustrate how dance parties like Daybreaker can be beneficial for people’s physical and mental health. “When you create a dance experience driven by music, community and participation, that’s how you’re able to release all four happy brain chemicals,” Agrawal says.
    Touch has a great impact on our brains and our reactions amid coronavirus separation. There are a few substitutes for human touch, like yoga and FaceTime, that we can all try.
runlai_jiang

Impossible Colors and How to See Them - 0 views

  • How Impossible Colors Work: Basically, the human eye has three types of cone cells that register color, and they work in an antagonistic fashion (blue versus yellow, red versus green, light versus dark). There is overlap between the wavelengths of light covered by the cone cells, so you see more than just blue, yellow, red, and green. White, for example, is not a wavelength of light, yet the human eye perceives it as a mixture of different spectral colors. Because of the opponent process, you can't see both blue and yellow at the same time, nor red and green. These combinations are the so-called impossible colors.
  • Chimerical Colors: Hyperbolic colors may be seen by staring at a color and then viewing the afterimage on the complementary color opposite it on the color wheel.
  • While you can't ordinarily see both red and green or both blue and yellow, visual scientist Hewitt Crane and his colleague Thomas Piantanida published a paper in Science claiming such perception was possible. In their 1983 paper "On Seeing Reddish Green and Yellowish Blue" they claimed volunteers viewing adjacent red and green stripes could see reddish green, while viewers of adjacent yellow and blue stripes could see yellowish blue. The researchers used an eye tracker to hold the boundary between the stripes steady on each viewer's retina.
  • The impossible colors reddish green and yellowish blue are imaginary colors that do not occur in the light spectrum. Another type of imaginary color is a chimerical color. A chimerical color is seen by looking at a color until the cone cells are fatigued and then looking at a different color. This produces an afterimage perceived by the brain, not the eyes. Examples of chimerical colors include the following. Self-luminous colors appear to glow even though no light is emitted.
  • Stygian colors are dark and supersaturated. For example, "stygian blue" may be seen by staring at bright yellow and then looking at black. The normal afterimage is dark blue; when viewed against black, the resulting blue is as dark as black, yet colored. Stygian colors appear on black because certain neurons only fire signals in the dark. Hyperbolic colors are supersaturated colors, seen by staring at a color and then viewing its afterimage on the complementary color opposite it on the color wheel.
  • Impossible colors like reddish green or yellowish blue are trickier to see. To try to see these colors, put a yellow object and a blue object right next to each other and cross your eyes so that the two objects overlap. The same procedure works for green and red. The overlapping region may appear to be a mix of the two colors (i.e., green for blue and yellow, brown for red and green), a field of dots of the component colors, or an unfamiliar color that is red and green (or yellow and blue) at once!
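  The opponent-process account above lends itself to a small worked sketch. The Python snippet below is only an illustration of the antagonism idea, not the article's model; the channel formulas and weights are simplifying assumptions chosen for demonstration, not measured physiology.

    # Minimal sketch of opponent color channels (illustrative weights, not physiology).
    def opponent_channels(L, M, S):
        """Map long/medium/short cone responses (each 0..1) to opponent signals."""
        red_green = L - M              # > 0 reads as reddish, < 0 as greenish
        blue_yellow = S - (L + M) / 2  # > 0 reads as bluish, < 0 as yellowish
        light_dark = (L + M + S) / 3   # overall lightness
        return red_green, blue_yellow, light_dark

    # Equal L and M activation yields a red-green signal of exactly 0.0: the channel
    # has no state that means "reddish green," which is the sense in which the color
    # is impossible under this model.
    print(opponent_channels(0.8, 0.8, 0.1))

  Because each channel collapses two cone signals into one signed number, "red" and "green" are opposite signs of the same value and cannot be signaled at once; the cross-eyed viewing trick above is an attempt to get around that limit.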
manhefnawi

Study Finds Experts Overestimate Their Knowledge | Mental Floss - 0 views

  • You’re in the middle of a conversation about politics or music or art, and someone asks, “Have you heard of…?” And despite your complete lack of knowledge of that band or law or artist, you say, “Sure!”
  • As it turns out, the more you know about a subject, the more likely you are to fib when your knowledge falls short, claiming that some factoid rings a bell when, in fact, there are no bells to be rung
  • Self-professed experts were more likely to claim they were very knowledgeable about concepts and places that didn’t exist
  • “Our results suggest that people do not simply consult a ‘mental index’ that catalogues their knowledge but instead draw on preexisting self-perceptions of knowledge to make inferences about what they should or probably do know.”
Javier E

I'm a therapist to the super-rich: they are as miserable as Succession makes out | Clay... - 0 views

  • I work as a psychotherapist and my specialism is ultra-high net worth individuals.
  • I got into working with billionaires by accident. I had one wealthy client, who passed my name around to their acquaintances. They are called the 1% for a reason: there are not that many of them and so the circle is tight.
  • Over the years, I have developed a great deal of empathy for those who have far too much. The television programme Succession, now in its third season, does such a good job of exploring the kinds of toxic excess my clients struggle with that when my wife is watching it I have to leave the room; it just feels like work.
  • What could possibly be challenging about being a billionaire, you might ask. Well, what would it be like if you couldn’t trust those close to you? Or if you looked at any new person in your life with deep suspicion? I hear this from my clients all the time: “What do they want from me?”; or “How are they going to manipulate me?”; or “They are probably only friends with me because of my money.”
  • Too many of my clients want to indulge their children so “they never have to suffer what I had to suffer” while growing up.
  • If all your necessities and much more were covered for the rest of your life – you might struggle with a lack of meaning and ambition too. My clients are often bored with life and too many times this leads to them chasing the next high – chemically or otherwise – to fill that void.
  • Then there are the struggles with purpose – the depression that sets in when you feel like you have no reason to get out of bed. Why bother going to work when the business you have built or inherited runs itself without you now?
  • There is a perception that money can immunise you against mental-health problems when actually, I believe that wealth can make you – and the people closest to you – much more susceptible to them.
  • But the result is that they prevent their children from experiencing the very things that made them successful: sacrifice, hard work, overcoming failure and developing resilience. An over-indulged child develops into an entitled adult who has low self-confidence, low self-esteem, and a complete lack of grit.
  • These very wealthy children start out by going to elite boarding schools and move on to elite universities – developing a language and culture among their own kind. Rarely do they create friendships with non-wealthy people; this can lead to feelings of isolation and being trapped inside a very small bubble.
  • There are few people in the world to whom they can actually relate, which of course leads to a lack of empathy
  • Notice the awkwardness and lack of human connection and how dreadfully they treat each other. It’s fascinating and frightening. When one leads a life without consequences (for being rude to a waiter or cruel to a sibling, for example) there really is no reason to not do these things. After a while, it becomes normalised and accepted. Living a life without rules isn’t good for anyone.
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • “Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
Javier E

A Million First Dates - Dan Slater - The Atlantic - 0 views

  • In his 2004 book, The Paradox of Choice, the psychologist Barry Schwartz indicts a society that “sanctifies freedom of choice so profoundly that the benefits of infinite options seem self-evident.” On the contrary, he argues, “a large array of options may diminish the attractiveness of what people actually choose, the reason being that thinking about the attractions of some of the unchosen options detracts from the pleasure derived from the chosen one.”
  • Psychologists who study relationships say that three ingredients generally determine the strength of commitment: overall satisfaction with the relationship; the investment one has put into it (time and effort, shared experiences and emotions, etc.); and the quality of perceived alternatives. Two of the three—satisfaction and quality of alternatives—could be directly affected by the larger mating pool that the Internet offers.
  • as the range of options grows larger, mate-seekers are liable to become “cognitively overwhelmed,” and deal with the overload by adopting lazy comparison strategies and examining fewer cues. As a result, they are more likely to make careless decisions than they would be if they had fewer options,
  • research elsewhere has found that people are less satisfied when choosing from a larger group: in one study, for example, subjects who selected a chocolate from an array of six options believed it tasted better than those who selected the same chocolate from an array of 30.
  • evidence shows that the perception that one has appealing alternatives to a current romantic partner is a strong predictor of low commitment to that partner.
  • But the pace of technology is upending these rules and assumptions. Relationships that begin online, Jacob finds, move quickly. He chalks this up to a few things. First, familiarity is established during the messaging process, which also often involves a phone call. By the time two people meet face-to-face, they already have a level of intimacy. Second, if the woman is on a dating site, there’s a good chance she’s eager to connect. But for Jacob, the most crucial difference between online dating and meeting people in the “real” world is the sense of urgency. Occasionally, he has an acquaintance in common with a woman he meets online, but by and large she comes from a different social pool. “It’s not like we’re just going to run into each other again,” he says. “So you can’t afford to be too casual. It’s either ‘Let’s explore this’ or ‘See you later.’ ”
  • The phenomenon extends beyond dating sites to the Internet more generally. “I’ve seen a dramatic increase in cases where something on the computer triggered the breakup,” he says. “People are more likely to leave relationships, because they’re emboldened by the knowledge that it’s no longer as hard as it was to meet new people. But whether it’s dating sites, social media, e‑mail—it’s all related to the fact that the Internet has made it possible for people to communicate and connect, anywhere in the world, in ways that have never before been seen.”
  • People seeking commitment—particularly women—have developed strategies to detect deception and guard against it. A woman might withhold sex so she can assess a man’s intentions. Theoretically, her withholding sends a message: I’m not just going to sleep with any guy that comes along. Theoretically, his willingness to wait sends a message back: I’m interested in more than sex.
  • people who are in marriages that are either bad or average might be at increased risk of divorce, because of increased access to new partners. Third, it’s unknown whether that’s good or bad for society. On one hand, it’s good if fewer people feel like they’re stuck in relationships. On the other, evidence is pretty solid that having a stable romantic partner means all kinds of health and wellness benefits.” And that’s even before one takes into account the ancillary effects of such a decrease in commitment—on children, for example, or even society more broadly.
  • As online dating becomes increasingly pervasive, the old costs of a short-term mating strategy will give way to new ones. Jacob, for instance, notices he’s seeing his friends less often. Their wives get tired of befriending his latest girlfriend only to see her go when he moves on to someone else. Also, Jacob has noticed that, over time, he feels less excitement before each new date. “Is that about getting older,” he muses, “or about dating online?” How much of the enchantment associated with romantic love has to do with scarcity (this person is exclusively for me), and how will that enchantment hold up in a marketplace of abundance (this person could be exclusively for me, but so could the other two people I’m meeting this week)?
Javier E

About Face: Emotions and Facial Expressions May Not Be Directly Related | Boston Magazine - 0 views

  • Ekman had traveled the globe with photographs that showed faces experiencing six basic emotions—happiness, sadness, fear, disgust, anger, and surprise. Everywhere he went, from Japan to Brazil to the remotest village of Papua New Guinea, he asked subjects to look at those faces and then to identify the emotions they saw on them. To do so, they had to pick from a set list of options presented to them by Ekman. The results were impressive. Everybody, it turned out, even preliterate Fore tribesmen in New Guinea who’d never seen a foreigner before in their lives, matched the same emotions to the same faces. Darwin, it seemed, had been right.
  • Ekman’s findings energized the previously marginal field of emotion science. Suddenly, researchers had an objective way to measure and compare human emotions—by reading the universal language of feeling written on the face. In the years that followed, Ekman would develop this idea, arguing that each emotion is like a reflex, with its own circuit in the brain and its own unique pattern of effects on the face and the body. He and his peers came to refer to it as the Basic Emotion model—and it had significant practical applications
  • What if he’s wrong?
  • Barrett is a professor of psychology at Northeastern
  • her research has led her to conclude that each of us constructs them in our own individual ways, from a diversity of sources: our internal sensations, our reactions to the environments we live in, our ever-evolving bodies of experience and learning, our cultures.
  • if Barrett is correct, we’ll need to rethink how we interpret mental illness, how we understand the mind and self, and even what psychology as a whole should become in the 21st century.
  • The problem was the options that Ekman had given his subjects when asking them to identify the emotions shown on the faces they were presented with. Those options, Barrett discovered, had limited the ways in which people allowed themselves to think. Barrett explained the problem to me this way: “I can break that experiment really easily, just by removing the words. I can just show you a face and ask how this person feels. Or I can show you two faces, two scowling faces, and I can say, ‘Do these people feel the same thing?’ And agreement drops into the toilet.”
  • Just as that first picture of the bee actually wasn’t a picture of a bee for me until I taught myself that it was, my emotions aren’t actually emotions until I’ve taught myself to think of them that way. Without that, I have only a meaningless mishmash of information about what I’m feeling.
  • emotion isn’t a simple reflex or a bodily state that’s hard-wired into our DNA, and it’s certainly not universally expressed. It’s a contingent act of perception that makes sense of the information coming in from the world around you, how your body is feeling in the moment, and everything you’ve ever been taught to understand as emotion. Culture to culture, person to person even, it’s never quite the same. What’s felt as sadness in one person might as easily be felt as weariness in another, or frustration in someone else.
  • The brain, it turns out, doesn’t consciously process every single piece of information that comes its way. Think of how impossibly distracting the regular act of blinking would be if it did. Instead, it pays attention to what you need to pay attention to, then raids your memory stores to fill in the blanks.
  • In many quarters, Barrett was angrily attacked for her ideas, and she’s been the subject of criticism ever since. “I think Lisa does a disservice to the actual empirical progress that we’re making,” says Dacher Keltner, a Berkeley psychologist
  • Keltner told me that he himself has coded thousands of facial expressions using Ekman’s system, and the results are strikingly consistent: Certain face-emotion combinations recur regularly, and others never occur. “That tells me, ‘Wow, this approach to distinct emotions has real power,’” he says.
  • Ekman reached the peak of his fame in the years following 2001. That’s the year the American Psychological Association named him one of the most influential psychologists of the 20th century. The next year, Malcolm Gladwell wrote an article about him in the New Yorker, and in 2003 he began working pro bono for the TSA. A year later, riding the updraft of success, he left his university post and started the Paul Ekman Group,
  • Barrett sent a small research team to visit the isolated Himba tribe in Namibia, in southern Africa. The plan was this: The team, led by Maria Gendron, would do a study similar to Ekman’s original cross-cultural one, but without providing any of the special words or context-heavy stories that Ekman had used to guide his subjects’ answers. Barrett’s researchers would simply hand a jumbled pile of different expressions (happy, sad, fearful, angry, disgusted, and neutral) to their subjects, and would ask them to sort them into six piles. If emotional expressions are indeed universal, they reasoned, then the Himba would put all low-browed, tight-lipped expressions into an anger pile, all wrinkled-nose faces into a disgust pile, and so on.
  • It didn’t happen that way. The Himba sorted some of the faces in ways that aligned with Ekman’s theory: smiling faces went into one pile, wide-eyed fearful faces went into another, and affectless faces went mostly into a third. But in the other three piles, the Himba mixed up angry scowls, disgusted grimaces, and sad frowns. Without any suggestive context, of the kind that Ekman had originally provided, they simply didn’t recognize the differences that leap out so naturally to Westerners.
  • “What we’re trying to do,” she told me, “is to just get people to pay attention to the fact that there’s a mountain of evidence that does not support the idea that facial expressions are universally recognized as emotional expressions.” That’s the crucial point, of course, because if we acknowledge that, then the entire edifice that Paul Ekman and others have been constructing for the past half-century comes tumbling down. And all sorts of things that we take for granted today—how we understand ourselves and our relationships with others, how we practice psychology
  • Barrett’s theory is still only in its infancy. But other researchers are beginning to take up her ideas, sometimes in part, sometimes in full, and where the science will take us as it expands is impossible to predict. It’s even possible that Barrett will turn out to be wrong, as she herself acknowledges. “Every scientist has to face that,” she says. Still, if she is right, then perhaps the most important change we’ll need to make is in our own heads. If our emotions are not universal physiological responses but concepts we’ve constructed from various biological signals and stashed memories, then perhaps we can exercise more control over our emotional lives than we’ve assumed.
  • “Every experience you have now is seeding your experience for the future,” Barrett told me. “Knowing that, would you choose to do what you’re doing now?” She paused a beat and looked me in the eye. “Well? Would you? You are the architect of your own experience.”
Javier E

Daniel Kahneman | Profile on TED.com - 1 views

  • rather than stating the optimal, rational answer, as an economist of the time might have, they quantified how most real people, consistently, make a less-rational choice. Their work treated economics not as a perfect or self-correcting machine, but as a system prey to quirks of human perception. The field of behavioral economics was born.
  • Tversky and calls for a new form of academic cooperation, marked not by turf battles but by "adversarial collaboration," a good-faith effort by unlike minds to conduct joint research, critiquing each other in the service of an ideal of truth to which both can contribute.
Javier E

Guess Who Doesn't Fit In at Work - NYTimes.com - 0 views

  • ACROSS cultures and industries, managers strongly prize “cultural fit” — the idea that the best employees are like-minded.
  • One recent survey found that more than 80 percent of employers worldwide named cultural fit as a top hiring priority.
  • When done carefully, selecting new workers this way can make organizations more productive and profitable.
  • In the process, fit has become a catchall used to justify hiring people who are similar to decision makers and rejecting people who are not.
  • The concept of fit first gained traction in the 1980s. The original idea was that if companies hired individuals whose personalities and values — and not just their skills — meshed with an organization’s strategy, workers would feel more attached to their jobs, work harder and stay longer.
  • in many organizations, fit has gone rogue. I saw this firsthand while researching the hiring practices of the country’s top investment banks, management consultancies and law firms. I interviewed 120 decision makers and spent nine months observing
  • While résumés (and connections) influenced which applicants made it into the interview room, interviewers’ perceptions of fit strongly shaped who walked out with job offers.
  • Crucially, though, for these gatekeepers, fit was not about a match with organizational values. It was about personal fit. In these time- and team-intensive jobs, professionals at all levels of seniority reported wanting to hire people with whom they enjoyed hanging out and could foresee developing close relationships with
  • To judge fit, interviewers commonly relied on chemistry.
  • Many used the “airport test.” As a managing director at an investment bank put it, “Would I want to be stuck in an airport in Minneapolis in a snowstorm with them?”
  • interviewers were primarily interested in new hires whose hobbies, hometowns and biographies matched their own. Bonding over rowing college crew, getting certified in scuba, sipping single-malt Scotches in the Highlands or dining at Michelin-starred restaurants was evidence of fit; sharing a love of teamwork or a passion for pleasing clients was not
  • it has become a common feature of American corporate culture. Employers routinely ask job applicants about their hobbies and what they like to do for fun, while a complementary self-help industry informs white-collar job seekers that chemistry, not qualifications, will win them an offer.
  • Selection based on personal fit can keep demographic and cultural diversity low
  • In the elite firms I studied, the types of shared experiences associated with fit typically required large investments of time and money.
  • Class-biased definitions of fit are one reason investment banks, management consulting firms and law firms are dominated by people from the highest socioeconomic backgrounds
  • Also, whether the industry is finance, high-tech or fashion, a good fit in most American corporations still tends to be stereotypically masculine.
  • Perhaps most important, it is easy to mistake rapport for skill. Just as they erroneously believe that they can accurately tell when someone is lying, people tend to be overly confident in their ability to spot talent. Unstructured interviews, which are the most popular hiring tools for American managers and the primary way they judge fit, are notoriously poor predictors of job performance.
  • Organizations that use cultural fit for competitive advantage tend to favor concrete tools like surveys and structured interviews that systematically test behaviors associated with increased performance and employee retention.
  • For managers who want to use cultural fit in a more productive way, I have several suggestions.
  • First, communicate a clear and consistent idea of what the organization’s culture is (and is not) to potential employees. Second, make sure the definition of cultural fit is closely aligned with business goals. Ideally, fit should be based on data-driven analysis of what types of values, traits and behaviors actually predict on-the-job success. Third, create formal procedures like checklists for measuring fit, so that assessment is not left up to the eyes (and extracurriculars) of the beholder.
  • But cultural fit has become a new form of discrimination that keeps demographic and cultural diversity down