
Home/ TOK Friends/ Group items tagged R&D


Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
knudsenlu

The Theory That Explains the Structure of the Internet - The Atlantic - 1 views

  • A paper posted online last month has reignited a debate about one of the oldest, most startling claims in the modern era of network science: the proposition that most complex networks in the real world—from the World Wide Web to interacting proteins in a cell—are “scale-free.” Roughly speaking, that means that a few of their nodes should have many more connections than others, following a mathematical formula called a power law, so that there’s no one scale that characterizes the network.
  • Purely random networks do not obey power laws, so when the early proponents of the scale-free paradigm started seeing power laws in real-world networks in the late 1990s, they viewed them as evidence of a universal organizing principle underlying the formation of these diverse networks. The architecture of scale-freeness, researchers argued, could provide insight into fundamental questions such as how likely a virus is to cause an epidemic, or how easily hackers can disable a network.
  • “Amazingly simple and far-reaching natural laws govern the structure and evolution of all the complex networks that surround us,” wrote Barabási (who is now at Northeastern University in Boston) in Linked. He later added: “Uncovering and explaining these laws has been a fascinating roller-coaster ride during which we have learned more about our complex, interconnected world than was known in the last hundred years.”
  • “These results undermine the universality of scale-free networks and reveal that real-world networks exhibit a rich structural diversity that will likely require new ideas and mechanisms to explain,” wrote the study’s authors, Anna Broido and Aaron Clauset of the University of Colorado at Boulder.
  • Network scientists agree, by and large, that the paper’s analysis is statistically sound. But when it comes to interpreting its findings, the paper seems to be functioning like a Rorschach test, in which both proponents and critics of the scale-free paradigm see what they already believed to be true. Much of the discussion has played out in vigorous Twitter debates.
  • The scale-free paradigm in networks emerged at a historical moment when power laws had taken on an outsize role in statistical physics. In the 1960s and 1970s, they had played a key part in universal laws that underlie phase transitions in a wide range of physical systems, a finding that earned Kenneth Wilson the 1982 Nobel Prize in physics. Soon after, power laws formed the core of two other paradigms that swept across the statistical-physics world: fractals, and a theory about organization in nature called self-organized criticality.
  • From the beginning, though, the scale-free paradigm also attracted pushback. Critics pointed out that preferential attachment is far from the only mechanism that can give rise to power laws, and that networks with the same power law can have very different topologies. Some network scientists and domain experts cast doubt on the scale-freeness of specific networks such as power grids, metabolic networks, and the physical internet.
  • If you were to observe 1,000 falling objects instead of just a rock and a feather, Clauset says, a clear picture would emerge of how both gravity and air resistance work. But his and Broido’s analysis of nearly 1,000 networks has yielded no similar clarity. “It is reasonable to believe a fundamental phenomenon would require less customized detective work” than Barabási is calling for, Clauset wrote on Twitter.
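The preferential-attachment mechanism at the heart of the scale-free paradigm can be illustrated with a short simulation. This is a sketch of the textbook Barabási–Albert growth model, not the statistical machinery used in the Broido–Clauset analysis: each new node links to existing nodes with probability proportional to their degree, which is what produces a few massively connected hubs.

```python
import random
import statistics

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node links to m
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # `targets` holds one entry per edge endpoint, so a uniform draw from it
    # is a draw proportional to degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

deg = degrees(barabasi_albert(5000, 2))
# Heavy tail: the best-connected hub vastly exceeds the typical node.
print(max(deg.values()), statistics.median(deg.values()))
```

Whether a degree sequence like this one is genuinely power-law distributed, rather than log-normal or stretched-exponential, is exactly the kind of statistical question the Broido–Clauset paper raises.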
Javier E

Uber, Arizona, and the Limits of Self-Driving Cars - The Atlantic - 0 views

  • it’s a good time for a critical review of the technical literature of self-driving cars. This literature reveals that autonomous vehicles don’t work as well as their creators might like the public to believe.
  • The world is a 3-D grid with x, y, and z coordinates. The car moves through the grid from point A to point B, using highly precise GPS measurements gathered from nearby satellites. Several other systems operate at the same time. The car’s sensors bounce out laser radar waves and measure the response time to build a “picture” of what is outside.
  • It is a masterfully designed, intricate computational system. However, there are dangers.
  • Self-driving cars navigate by GPS. What happens if a self-driving school bus is speeding down the highway and loses its navigation system at 75 mph because of a jammer in the next lane?
  • Because they are not calculating the trajectory for the stationary fire truck, only for objects in motion (like pedestrians or bicyclists), they can’t react quickly to register a previously stationary object as an object in motion.
  • If the car was programmed to save the car’s occupants at the expense of pedestrians, the autonomous-car industry is facing its first public moment of moral reckoning.
  • This kind of blind optimism about technology, the assumption that tech is always the right answer, is a kind of bias that I call technochauvinism.
  • an overwhelming number of tech people (and investors) seem to want self-driving cars so badly that they are willing to ignore evidence suggesting that self-driving cars could cause as much harm as good
  • By this point, many people know about the trolley problem as an example of an ethical decision that has to be programmed into a self-driving car.
  • With driving, the stakes are much higher. In a self-driving car, death is an unavoidable feature, not a bug.
  • But imagine the opposite scenario: The car is programmed to sacrifice the driver and the occupants to preserve the lives of bystanders. Would you get into that car with your child? Would you let anyone in your family ride in it? Do you want to be on the road, or on the sidewalk, or on a bicycle, next to cars that have no drivers and have unreliable software that is designed to kill you or the driver?
  • Plenty of people want self-driving cars to make their lives easier, but self-driving cars aren’t the only way to fix America’s traffic problems. One straightforward solution would be to invest more in public transportation.
  • Public-transportation funding is a complex issue that requires massive, collaborative effort over a period of years. It involves government bureaucracy. This is exactly the kind of project that tech people often avoid attacking, because it takes a really long time and the fixes are complicated.
  • Plenty of people, including technologists, are sounding warnings about self-driving cars and how they attempt to tackle very hard problems that haven’t yet been solved. People are warning of a likely future for self-driving cars that is neither safe nor ethical nor toward the greater good. Still, the idea that self-driving cars are nifty and coming soon is often the accepted wisdom, and there’s a tendency to forget that technologists have been saying “coming soon” for decades now.
knudsenlu

Huge MIT Study of 'Fake News': Falsehoods Win on Twitter - The Atlantic - 0 views

  • “Falsehood flies, and the Truth comes limping after it,” Jonathan Swift once wrote. It was hyperbole three centuries ago. But it is a factual description of social media, according to an ambitious and first-of-its-kind study published Thursday in Science.
  • By every common metric, falsehood consistently dominates the truth on Twitter, the study finds: Fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.
  • “It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”
  • A false story is much more likely to go viral than a real story, the authors find. A false story reaches 1,500 people six times quicker, on average, than a true story does.
  • “In short, I don’t think there’s any reason to doubt the study’s results,” said Rebekah Tromble, a professor of political science at Leiden University in the Netherlands, in an email.
  • It’s a question that can have life-or-death consequences. “[Fake news] has become a white-hot political and, really, cultural topic, but the trigger for us was personal events that hit Boston five years ago,” said Deb Roy, a media scientist at MIT and one of the authors of the new study.
  • Ultimately, they found about 126,000 tweets, which, together, had been retweeted more than 4.5 million times. Some linked to “fake” stories hosted on other websites. Some started rumors themselves, either in the text of a tweet or in an attached image. (The team used a special program that could search for words contained within static tweet images.) And some contained true information or linked to it elsewhere.
  • Tweet A and Tweet B both have the same size audience, but Tweet B has more “depth,” to use Vosoughi’s term. It chained together retweets, going viral in a way that Tweet A never did. “It could reach 1,000 retweets, but it has a very different shape,” he said. Here’s the thing: Fake news dominates according to both metrics. It consistently reaches a larger audience, and it tunnels much deeper into social networks than real news does. The authors found that accurate news wasn’t able to chain together more than 10 retweets. Fake news could put together a retweet chain 19 links long—and do it 10 times as fast as accurate news put together its measly 10 retweets.
  • What does this look like in real life? Take two examples from the last presidential election. In August 2015, a rumor circulated on social media that Donald Trump had let a sick child use his plane to get urgent medical care. Snopes confirmed almost all of the tale as true. But according to the team’s estimates, only about 1,300 people shared or retweeted the story.
  • Why does falsehood do so well? The MIT team settled on two hypotheses. First, fake news seems to be more “novel” than real news. Falsehoods are often notably different from all the tweets that have appeared in a user’s timeline 60 days prior to their retweeting them, the team found. Second, fake news evokes much more emotion than the average tweet. The researchers created a database of the words that Twitter users used to reply to the 126,000 contested tweets, then analyzed it with a state-of-the-art sentiment-analysis tool. Fake tweets tended to elicit words associated with surprise and disgust, while accurate tweets summoned words associated with sadness and trust, they found.
  • It suggests—to me, at least, a Twitter user since 2007, and someone who got his start in journalism because of the social network—that social-media platforms do not encourage the kind of behavior that anchors a democratic government. On platforms where every user is at once a reader, a writer, and a publisher, falsehoods are too seductive not to succeed: The thrill of novelty is too alluring, the titillation of disgust too difficult to transcend. After a long and aggravating day, even the most staid user might find themselves lunging for the politically advantageous rumor. Amid an anxious election season, even the most public-minded user might subvert their higher interest to win an argument.
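The audience-versus-depth distinction Vosoughi draws can be sketched as a tree measurement: depth is the longest retweet-of-a-retweet chain leading back to the original tweet. The cascades below are hypothetical, purely to show how two tweets with the same audience can have very different shapes.

```python
from collections import defaultdict

def cascade_depth(retweets, root):
    """Depth of a retweet cascade: the longest chain of retweet-of-retweet
    links from the original tweet. `retweets` maps each retweeter to the
    tweet (user) they retweeted."""
    children = defaultdict(list)
    for child, parent in retweets.items():
        children[parent].append(child)
    def depth(node):
        kids = children.get(node, [])
        return 0 if not kids else 1 + max(depth(k) for k in kids)
    return depth(root)

# Hypothetical cascades, both reaching five users:
# A: everyone retweets the original directly (broad, shallow).
broadcast = {f"u{i}": "root" for i in range(1, 6)}
# B: each user retweets the previous retweet (a chain, deep).
chain = {f"u{i}": (f"u{i-1}" if i > 1 else "root") for i in range(1, 6)}
print(cascade_depth(broadcast, "root"))  # broad cascade: depth 1
print(cascade_depth(chain, "root"))      # chain cascade: depth 5
```

By this measure, the study's striking finding is that false stories produced chains up to 19 links long while accurate ones rarely exceeded 10.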
runlai_jiang

A New Antidote for Noisy Airports: Slower Planes - WSJ - 0 views

  • Urban airports like Boston’s Logan thought they had silenced noise issues with quieter planes. Now complaints pour in from suburbs 10 to 15 miles away because new navigation routes have created relentless noise for some homeowners. By Scott McCartney, The Wall Street Journal, March 7, 2018.
  • It turns out engines aren’t the major culprit anymore. New airplanes are much quieter. It’s the “whoosh” that big airplanes make racing through the air.
  • Computer models suggest slowing departures by 30 knots—about 35 miles an hour—would reduce noise on the ground significantly.
  • The FAA says it’s impressed and is moving forward with recommendations Boston has made.
  • A working group is forming to evaluate the main recommendation to slow departing jets to a speed limit of 220 knots during the climb to 10,000 feet, down from 250 knots.
  • New routes put planes over quiet communities. Complaints soared. Phoenix neighborhoods sued the FAA; Chicago neighborhoods are pushing for rotating runway use. Neighborhoods from California to Washington, D.C., are fighting the new procedures that airlines and the FAA insist are vital to future travel.
  • “It’s a concentration problem. It’s a frequency problem. It’s not really a noise problem.”
  • “The flights wake you up. We get a lot of complaints from young families with children,” says Mr. Wright, a data analyst who works from home for a major health-care company.
  • In Boston, an analysis suggested only 54% of the complaints Massport received resulted from noise louder than 45 decibels—about the level of background noise. When it’s relentless, you notice it more.
  • With a 30-knot reduction, noise directly under the flight track would decrease by between 1.5 and 5 decibels and the footprint on the ground would get a lot skinnier, sharply reducing the number of people affected, Mr. Hansman says.
  • The industry trade association Airlines for America has offered cautious support of the Boston recommendations. In a statement, the group said the changes must be safe, work with a variety of aircraft and not reduce the airport’s capacity for takeoffs and landings.
  • Air-traffic controllers will need to delay a departure a bit to put more room between a slower plane and a faster one, or modify its course slightly.
Javier E

Psychology's Replication Crisis Is Real, Many Labs 2 Says - The Atlantic - 1 views

  • In recent years, it has become painfully clear that psychology is facing a “reproducibility crisis,” in which even famous, long-established phenomena—the stuff of textbooks and TED Talks—might not be real
  • Ironically enough, it seems that one of the most reliable findings in psychology is that only half of psychological studies can be successfully repeated
  • That failure rate is especially galling, says Simine Vazire from the University of California at Davis, because the Many Labs 2 teams tried to replicate studies that had made a big splash and been highly cited
  • With 15,305 participants in total, the new experiments had, on average, 60 times as many volunteers as the studies they were attempting to replicate. The researchers involved worked with the scientists behind the original studies to vet and check every detail of the experiments beforehand. And they repeated those experiments many times over, with volunteers from 36 different countries, to see if the studies would replicate in some cultures and contexts but not others.
  • Despite the large sample sizes and the blessings of the original teams, the team failed to replicate half of the studies it focused on. It couldn’t, for example, show that people subconsciously exposed to the concept of heat were more likely to believe in global warming, or that moral transgressions create a need for physical cleanliness in the style of Lady Macbeth, or that people who grow up with more siblings are more altruistic.
  • Many Labs 2 “was explicitly designed to examine how much effects varied from place to place, from culture to culture,” says Katie Corker, the chair of the Society for the Improvement of Psychological Science. “And here’s the surprising result: The results do not show much variability at all.” If one of the participating teams successfully replicated a study, others did, too. If a study failed to replicate, it tended to fail everywhere.
  • it’s a serious blow to one of the most frequently cited criticisms of the “reproducibility crisis” rhetoric. Surely, skeptics argue, it’s a fantasy to expect studies to replicate everywhere. “There’s a massive deference to the sample,” Nosek says. “Your replication attempt failed? It must be because you did it in Ohio and I did it in Virginia, and people are different. But these results suggest that we can’t just wave those failures away very easily.”
  • the lack of variation in Many Labs 2 is actually a positive thing. Sure, it suggests that the large number of failed replications really might be due to sloppy science. But it also hints that the fundamental business of psychology—creating careful lab experiments to study the tricky, slippery, complicated world of the human mind—works pretty well. “Outside the lab, real-world phenomena can and probably do vary by context,” he says. “But within our carefully designed studies and experiments, the results are not chaotic or unpredictable. That means we can do valid social-science research.”
Javier E

The Navy's USS Gabrielle Giffords and the Future of Work - The Atlantic - 0 views

  • Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone.
  • Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles.
  • If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.
  • “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.”
  • The phenomenon is sped by automation, which usurps routine tasks, leaving employees to handle the nonroutine and unanticipated—and the continued advance of which throws the skills employers value into flux
  • Or, for that matter, on the relevance of the question What do you want to be when you grow up?
  • By 2020, a 2016 World Economic Forum report predicted, “more than one-third of the desired core skill sets of most occupations” will not have been seen as crucial to the job when the report was published
  • I asked John Sullivan, a prominent Silicon Valley talent adviser, why should anyone take the time to master anything at all? “You shouldn’t!” he replied.
  • Minimal manning—and the evolution of the economy more generally—requires a different kind of worker, with not only different acquired skills but different inherent abilities
  • It has implications for the nature and utility of a college education, for the path of careers, for inequality and employability—even for the generational divide.
  • Then, in 2001, Donald Rumsfeld arrived at the Pentagon. The new secretary of defense carried with him a briefcase full of ideas from the corporate world: downsizing, reengineering, “transformational” technologies. Almost immediately, what had been an experimental concept became an article of faith
  • But once cadets got into actual command environments, which tend to be fluid and full of surprises, a different picture emerged. “Psychological hardiness”—a construct that includes, among other things, a willingness to explore “multiple possible response alternatives,” a tendency to “see all experience as interesting and meaningful,” and a strong sense of self-confidence—was a better predictor of leadership ability in officers after three years in the field.
  • Because there really is no such thing as multitasking—just a rapid switching of attention—I began to feel overstrained, put upon, and finally irked by the impossible set of concurrent demands. Shouldn’t someone be giving me a hand here? This, Hambrick explained, meant I was hitting the limits of working memory—basically, raw processing power—which is an important aspect of “fluid intelligence” and peaks in your early 20s. This is distinct from “crystallized intelligence”—the accumulated facts and know-how on your hard drive—which peaks in your 50s.
  • Others noticed the change but continued to devote equal attention to all four tasks. Their scores fell. This group, Hambrick found, was high in “conscientiousness”—a trait that’s normally an overwhelming predictor of positive job performance. We like conscientious people because they can be trusted to show up early, double-check the math, fill the gap in the presentation, and return your car gassed up even though the tank was nowhere near empty to begin with. What struck Hambrick as counterintuitive and interesting was that conscientiousness here seemed to correlate with poor performance.
  • he discovered another correlation in his test: The people who did best tended to score high on “openness to new experience”—a personality trait that is normally not a major job-performance predictor and that, in certain contexts, roughly translates to “distractibility.”
  • To borrow the management expert Peter Drucker’s formulation, people with this trait are less focused on doing things right, and more likely to wonder whether they’re doing the right things.
  • High in fluid intelligence, low in experience, not terribly conscientious, open to potential distraction—this is not the classic profile of a winning job candidate. But what if it is the profile of the winning job candidate of the future?
  • One concerns “grit”—a mind-set, much vaunted these days in educational and professional circles, that allows people to commit tenaciously to doing one thing well
  • These ideas are inherently appealing; they suggest that dedication can be more important than raw talent, that the dogged and conscientious will be rewarded in the end.
  • he studied West Point students and graduates.
  • Traditional measures such as SAT scores and high-school class rank “predicted leader performance in the stable, highly regulated environment of West Point” itself.
  • It would be supremely ironic if the advance of the knowledge economy had the effect of devaluing knowledge. But that’s what I heard, recurrently.
  • “Fluid, learning-intensive environments are going to require different traits than classical business environments,” I was told by Frida Polli, a co-founder of an AI-powered hiring platform called Pymetrics. “And they’re going to be things like ability to learn quickly from mistakes, use of trial and error, and comfort with ambiguity.”
  • “We’re starting to see a big shift,” says Guy Halfteck, a people-analytics expert. “Employers are looking less at what you know and more and more at your hidden potential” to learn new things
  • advice to employers? Stop hiring people based on their work experience. Because in these environments, expertise can become an obstacle.
  • “The Curse of Expertise.” The more we invest in building and embellishing a system of knowledge, they found, the more averse we become to unbuilding it.
  • All too often experts, like the mechanic in LePine’s garage, fail to inspect their knowledge structure for signs of decay. “It just didn’t occur to him,” LePine said, “that he was repeating the same mistake over and over.”
  • The devaluation of expertise opens up ample room for different sorts of mistakes—and sometimes creates a kind of helplessness.
  • Aboard littoral combat ships, the crew lacks the expertise to carry out some important tasks, and instead has to rely on civilian help
  • Meanwhile, the modular “plug and fight” configuration was not panning out as hoped. Converting a ship from sub-hunter to minesweeper or minesweeper to surface combatant, it turned out, was a logistical nightmare
  • So in 2016 the concept of interchangeability was scuttled for a “one ship, one mission” approach, in which the extra 20-plus sailors became permanent crew members
  • “As equipment breaks, [sailors] are required to fix it without any training,” a Defense Department Test and Evaluation employee told Congress. “Those are not my words. Those are the words of the sailors who were doing the best they could to try to accomplish the missions we gave them in testing.”
  • These results were, perhaps, predictable given the Navy’s initial, full-throttle approach to minimal manning—and are an object lesson on the dangers of embracing any radical concept without thinking hard enough about the downsides
  • a world in which mental agility and raw cognitive speed eclipse hard-won expertise is a world of greater exclusion: of older workers, slower learners, and the less socially adept.
  • if you keep going down this road, you end up with one really expensive ship with just a few people on it who are geniuses … That’s not a future we want to see, because you need a large enough crew to conduct multiple tasks in combat.
  • What does all this mean for those of us in the workforce, and those of us planning to enter it? It would be wrong to say that the 10,000-hours-of-deliberate-practice idea doesn’t hold up at all. In some situations, it clearly does
  • A spinal surgery will not be performed by a brilliant dermatologist. A criminal-defense team will not be headed by a tax attorney. And in tech, the demand for specialized skills will continue to reward expertise handsomely.
  • But in many fields, the path to success isn’t so clear. The rules keep changing, which means that highly focused practice has a much lower return
  • In uncertain environments, Hambrick told me, “specialization is no longer the coin of the realm.”
  • It leaves us with lifelong learning,
  • I found myself the target of career suggestions. “You need to be a video guy, an audio guy!” the Silicon Valley talent adviser John Sullivan told me, alluding to the demise of print media
  • I found the prospect of starting over just plain exhausting. Building a professional identity takes a lot of resources—money, time, energy. After it’s built, we expect to reap gains from our investment, and—let’s be honest—even do a bit of coasting. Are we equipped to continually return to apprentice mode? Will this burn us out?
  • Everybody I met on the Giffords seemed to share that mentality. They regarded every minute on board—even during a routine transit back to port in San Diego Harbor—as a chance to learn something new.
knudsenlu

Who Is Weev, and Why Did He Derail a Journalist's Career? - The Atlantic - 0 views

  • In the span of about six hours yesterday, The New York Times announced the hiring of Quinn Norton as a tech columnist and then apparently fired her. The Times claims that their decision to “go their separate ways” was guided by “new information,” revealed through a social-media maelstrom, about slurs Norton had used on Twitter and about her friendship with someone called weev. In October, Norton called weev “a terrible person, and an old friend of mine.” The rest of the world calls him a Nazi.
  • According to O’Brien, Auernheimer was an active user of a private chat server for Stormers, where, among other things, he forbade any members from talking to the police, coordinated a plan to send Nazis to Heather Heyer’s funeral, and wrote:
  • Auernheimer got involved with The Daily Stormer in 2014, after he was released from federal prison on identity-theft and hacking charges and living in Europe. Andrew Anglin, who was the focus of O’Brien’s story and who founded The Daily Stormer, said of Auernheimer in 2016, “I don’t know what I would be doing if it wasn’t for him ... He’s the one basically holding the whole thing together.”
  • Norton has written for several other news organizations, including this magazine. Her October tweet explained, “Some of my friend (sic) are terrible people, and also my friends.
knudsenlu

Quinn Norton: The New York Times Fired My Doppelgänger - The Atlantic - 0 views

  • Quinn Norton
  • The day before Valentine’s Day, social media created a bizarro-world version of me. I have seen strange ideas about me online before, but this doppelgänger was so far from resembling me that I told friends and loved ones I didn’t want to even try to rebut it. It was a leading question turned into a human form. The net created a person with my name and face, but with so little relationship to me, she could have been an invader from an alternate universe.
  • It started when The New York Times hired me for its editorial board. In January, the Times sought me out because, editorial leaders told me, the Times as an institution is struggling with understanding how technology is shifting society and politics. We talked for a while. I discussed my work, my beliefs, and my background.
  • I was hesitant with the Times. They were far out of my comfort zone, but I felt that the people I was talking to had a sincerity greater than their confusion. Nothing that has happened since then has dissuaded me from that impression.
  • If you’re reading this, especially on the internet, you are the teacher for those institutions at a local, national, and global level. I understand that you didn’t ask for this position. Neither did I. History doesn’t ask you if you want to be born in a time of upheaval, it just tells you when you are. When the backlash began, I got the call from the person who had sought me out and recruited me. The fear I heard in that shaky voice coming through my mobile phone was unmistakable. It was the fear of a mob, of the unknown, and of the idea that maybe they had gotten it wrong and done something terrible. I have felt all of those things. Many of us have. It’s not a place of strength, even when it seems to be coming from someone standing in a place of power. The Times didn’t know what the internet was doing—tearing down a new hire, exposing a fraud, threatening them—everything seemed to be in the mix.
  • I had even written about context collapse myself, but that hadn’t saved me from falling into it, and then hurting other people I didn’t mean to hurt. This particular collapse didn’t create much of a doppelgänger, but it did find me spending a morning as a defensive jerk. I’m very sorry for that dumb mistake. It helped me learn a lesson: Be damn sure when you make angry statements. Check them out long enough that, even if the statements themselves are still angry, you are not angry by the time you make them. Again and again, I have learned this: Don’t internet angry. If you’re angry, internet later.
  • I think if I’d gotten to write for the Times as part of their editorial board, this might have been different. I might have been in a position to show how our media doppelgängers get invented, and how we can unwind them. It takes time and patience. It doesn’t come from denying the doppelgänger—there’s nothing there to deny. I was accused of homophobia because of the in-group language I used with anons when I worked with them. (“Anons” refers to people who identify as part of the activist collective Anonymous.) I was accused of racism for use of taboo language, mainly in a nine-year-old retweet in support of Obama. Intentions aside, it wasn’t a great tweet, and I was probably overemotional when I retweeted it.
  • In late 2015 I woke up a little before 6 a.m., jet-lagged in New York, and started looking at Twitter. There was a hashtag, I don’t remember if it was trending or just in my timeline, called #whitegirlsaremagic. I clicked on it, and found it was racist and sexist dross. It was being promulgated in opposition to another hashtag, #blackgirlsaremagic. I clicked on that, and found a few model shots and borderline soft-core porn of black women. Armed with this impression, I set off to tweet in righteous anger about how much I disliked women being reduced to sex objects regardless of race. I was not just wrong in this moment, I was incoherently wrong. I had made my little mental model of what #blackgirlsaremagic was, and I had no clue that I had no clue what I was talking about. My 60-second impression of #whitegirlsaremagic was dead-on, but #blackgirlsaremagic didn’t fit in the last few tweets my browser had loaded.
  • I had been a victim of something the sociologists Alice Marwick and danah boyd call context collapse, where people create online culture meant for one in-group, but exposed to any number of out-groups without its original context by social-media platforms, where it can be recontextualized easily and accidentally.
  • Not everyone believes loving engagement is the best way to fight evil beliefs, but it has a good track record. Not everyone is in a position to engage safely with racists, sexists, anti-Semites, and homophobes, but for those who are, it’s a powerful tool. Engagement is not the one true answer to the societal problems destabilizing America today, but there is no one true answer. The way forward is as multifarious and diverse as America is, and a method of nonviolent confrontation and accountability, arising from my pacifism, is what I can bring to helping my society.
  • Here is your task, person on the internet, reader of journalism, speaker to the world on social media: You make the world now, in a way that you never did before. Your beliefs have a power they’ve never had in human history. You must learn to investigate with a scientific and loving mind not only what is true, but what is effective in the world. Right now we are a world of geniuses who constantly love to call each other idiots. But humanity is the most complicated thing we’ve found in the universe, and so far as we know, we’re the only thing even looking. We are miracles by the billions with powers and luxuries beyond the dreams of kings of old.
  • We are powerful creatures, but power must come with gentleness and responsibility. No one prepared us for this, no one trained us, no one came before us with an understanding of our world. There were hints, and wise people, and I lean on and cherish them. But their philosophies and imaginations can only take us so far. We have to build our own philosophies and imagine great futures for our world in order to have any futures at all. Let mercy guide us forward in these troubled times. Let yourself imagine, because imagination is the wellspring of hope. Here, in the beginning of the 21st century, hope is our duty to the future.
knudsenlu

Hawaii: Where Evolution Can Be Surprisingly Predictable - The Atlantic - 0 views

  • Situated around 2,400 miles from the nearest continent, the Hawaiian Islands are about as remote as it’s possible for islands to be. In the last 5 million years, they’ve been repeatedly colonized by far-traveling animals, which then diversified into dozens of new species. Honeycreeper birds, fruit flies, carnivorous caterpillars ... all of these creatures reached Hawaii, and evolved into wondrous arrays of unique forms.
  • The most spectacular of these spider dynasties, Gillespie says, are the stick spiders. They’re so-named because some of them have long, distended abdomens that make them look like twigs. “You only see them at night, walking around the understory very slowly,” Gillespie says. “They’re kind of like sloths.” Murderous sloths, though: Their sluggish movements allow them to sneak up on other spiders and kill them.
  • Gillespie has shown that the gold spiders on Oahu belong to a different species from those on Kauai or Molokai. In fact, they’re more closely related to their brown and white neighbors on Oahu. Time and again, these spiders have arrived on new islands and evolved into new species—but always in one of three basic ways. A gold spider arrives on Oahu and diversifies into gold, brown, and white species. Another gold spider hops across to Maui and again diversifies into gold, brown, and white species. “They repeatedly evolve the same forms,” says Gillespie.
  • Gillespie has seen this same pattern before, among Hawaii’s long-jawed goblin spiders. Each island has its own representatives of the four basic types: green, maroon, small brown, and large brown. At first, Gillespie assumed that all the green species were related to each other. But the spiders’ DNA revealed that the ones that live on the same islands are most closely related, regardless of their colors. They too have hopped from one island to another, radiating into the same four varieties wherever they land.
  • One of the most common misunderstandings about evolution is that it is a random process. Mutations are random, yes, but those mutations then rise and fall in ways that are anything but random. That’s why stick spiders, when they invade a new island, don’t diversify into red species, or zebra-striped ones. The environment of Hawaii sculpts their bodies in a limited number of ways.
  • Gillespie adds that there’s an urgency to this work. For millions of years, islands like Hawaii have acted as crucibles of evolution, allowing living things to replay evolution’s tape in the way that Gould envisaged. But in a much shorter time span, humans have threatened the results of those natural experiments. “The Hawaiian islands are in dire trouble from invasive species, and environmental modifications,” says Gillespie. “And you have all these unknown groups of spiders—entire lineages of really beautiful, charismatic animals, most of which are undescribed.”
martinelligi

What the Pandemic Is Doing to Our Brains - The Atlantic - 0 views

  • This is the fog of late pandemic, and it is brutal. In the spring, we joked about the Before Times, but they were still within reach, easily accessible in our shorter-term memories. In the summer and fall, with restrictions loosening and temperatures rising, we were able to replicate some of what life used to be like, at least in an adulterated form: outdoor drinks, a day at the beach. But now, in the cold, dark, featureless middle of our pandemic winter, we can neither remember what life was like before nor imagine what it’ll be like after.
  • The sunniest optimist would point out that all this forgetting is evidence of the resilience of our species. Humans forget a great deal of what happens to us, and we tend to do it pretty quickly—after the first 24 hours or so. “Our brains are very good at learning different things and forgetting the things that are not a priority,” Tina Franklin, a neuroscientist at Georgia Tech, told me. As the pandemic has taught us new habits and made old ones obsolete, our brains have essentially put actions like taking the bus and going to restaurants in deep storage, and placed social distancing and coughing into our elbows near the front of the closet. When our habits change back, presumably so will our recall.
  • The share of Americans reporting symptoms of anxiety disorder, depressive disorder, or both roughly quadrupled from June 2019 to December 2020, according to a Census Bureau study released late last year. What’s more, we simply don’t know the long-term effects of collective, sustained grief. Longitudinal studies of survivors of Chernobyl, 9/11, and Hurricane Katrina show elevated rates of mental-health problems, in some cases lasting for more than a decade
Javier E

'ContraPoints' Is Political Philosophy Made for YouTube - The Atlantic - 1 views

  • While Wynn positions herself on the left, she is no dogmatic ideologue, readily admitting to points on the right and criticizing leftist arguments when warranted
  • She has described her work as “edutainment” and “propaganda,” and it’s both
  • But what makes her videos unique is the way Wynn combines those two elements: high standards of rational argument and not-quite-rational persuasion. ContraPoints offers compelling speech aimed at truth, rendered in the raucous, meme-laden idiom of the internet.
  • In 2014, Wynn noticed a trend on YouTube that disturbed her: Videos with hyperbolic titles like “why feminism ruins everything,” “SJW cringe compilation,” and “Ben Shapiro DESTROYS Every College Snowflake” were attracting millions of views and spawning long, jeering comment threads. Wynn felt she was watching the growth of a community of outrage that believes feminists, Marxists, and multiculturalists are conspiring to destroy freedom of speech, liquidate gender norms, and demolish Western civilization
  • Wynn created ContraPoints to offer entertaining, coherent rebuttals to these kinds of ideas. Her videos also explain left-wing talking points—like rape culture and cultural appropriation—and use philosophy to explore topics that are important to Wynn, such as the meaning of gender for trans people.
  • Wynn thinks it’s a mistake to assume that viewers of angry, right-wing videos are beyond redemption. “It’s quite difficult to get through to the people who are really committed to these anti-progressive beliefs,” Wynn told me recently. However, she said, she believes that many viewers find such ideas “psychologically resonant” without being hardened reactionaries. This broad, not fully committed center—comprising people whose minds can still be changed—is Wynn’s target audience.
  • Usually, the videos to which Wynn is responding take the stance of dogged reason cutting through the emotional excesses of so-called “political correctness.” For example, the American conservative commentator Ben Shapiro, who is a target of a recent ContraPoints video, has made “facts don’t care about your feelings” his motto. Wynn’s first step in trying to win over those who find anti-progressive views appealing is to show that these ideas often rest on a flimsy foundation. To do so, she fully adopts the rational standards of argument that her rivals pride themselves on following, and demonstrates how they fail to achieve them
  • Wynn dissects her opponents’ positions, holding up fallacies, evasions, and other rhetorical tricks for examination, all the while providing a running commentary on good argumentative method.
  • The host defends her own positions according to the same principles. Wynn takes on the strongest version of her opponent’s argument, acknowledges when she thinks her opponents are right and when she has been wrong, clarifies when misunderstood, and provides plenty of evidence for her claims
  • Wynn is a former Ph.D. student in philosophy, and though her videos are too rich with dick jokes for official settings, her argumentative practice would pass muster in any grad seminar.
  • she critiques many of her leftist allies for being bad at persuasion.
  • Socrates persuaded by both the logic of argument and the dynamic of fandom. Wynn is beginning to grow a dedicated following of her own: Members of online discussion groups refer to her as “mother” and “the queen,” produce fan art, and post photos of themselves dressed as characters from her videos.
  • she shares Socrates’s view that philosophy is more an erotic art than a martial one
  • As she puts it, she’s not trying to destroy the people she addresses, but seduce them
  • for Wynn, the true key to persuasion is to engage her audience on an emotional level.
  • One thing she has come across repeatedly is a disdain for the left’s perceived moral superiority. Anti-progressives of all stripes, Wynn told me, show an “intense defensiveness against being told what to do” and a “repulsion in response to moralizing.”
  • Matching her speech to the audience’s tastes presents a prickly rhetorical challenge. In an early video, Contra complains: “The problem is this medium. These goddamn savages demand a circus, and I intend to give them one, but behind the curtain, I really just want to have a conversation.”
  • Philosophical conversation requires empathy and good-faith engagement. But the native tongue of political YouTube is ironic antagonism. It’s Wynn’s inimitable way of combining these two ingredients that gives ContraPoints its distinctive mouthfeel.
  • Wynn spends weeks in the online communities of her opponents—whether they’re climate skeptics or trans-exclusionary feminists—trying to understand what they believe and why they believe it. In Socrates’s words, she’s studying the souls of her audience.
knudsenlu

Study: Does Adult Neurogenesis Exist in Humans? - The Atlantic - 0 views

  • In 1928, Santiago Ramón y Cajal, the father of modern neuroscience, proclaimed that the brains of adult humans never make new neurons. “Once development was ended,” he wrote, “the founts of growth and regeneration ... dried up irrevocably. In the adult centers the nerve paths are something fixed, ended and immutable. Everything must die, nothing may be regenerated.”
  • For decades, scientists believed that neurogenesis—the creation of new neurons—whirs along nicely in the brains of embryos and infants, but grinds to a halt by adulthood. But from the 1980s onward, this dogma started to falter. Researchers showed that neurogenesis does occur in the brains of various adult animals, and eventually found signs of newly formed neurons in the adult human brain.
  • Finally, Gage and others say that several other lines of evidence suggest that adult neurogenesis in humans is real. For example, in 1998, he and his colleagues studied the brains of five cancer patients who had been injected with BrdU—a chemical that gets incorporated into newly created DNA. They found traces of this substance in the hippocampus, which they took as a sign that the cells there are dividing and creating new neurons.
  • Greg Sutherland from the University of Sydney agrees. In 2016, he came to similar conclusions as Alvarez-Buylla’s team, using similar methods. “Depending on your inherent biases, two scientists can look at sparse events in the adult brain and come to different conclusions,” he says. “But when faced with the stark difference between infant and adult human brains, we can only conclude that [neurogenesis] is a vestigial process in the latter.”
  • Alvarez-Buylla agrees that there’s still plenty of work to do. Even if neurogenesis is a fiction in adult humans, it’s real in infants, and in other animals. If we really don’t make any new neurons as adults, how do we learn new things? And is there any way of restoring that lost ability to create new neurons in cases of stroke, Alzheimer’s, or other degenerative diseases? “Neurogenesis is precisely what we want to induce in cases of brain damage,” Alvarez-Buylla says. “If it isn’t there to begin with, how might you induce it?”
knudsenlu

The Cleaner Wrasse: A Fish That Makes Other Fish Smarter - The Atlantic - 0 views

  • At particular sites, an itchy individual can attract the attention of the bluestreak cleaner wrasse—a slender fish, with blue and yellow markings and a prominent black stripe. On seeing these colors, the itchy “client” strikes a specific pose, allowing the wrasse to snake across its body, mouth, and gills, picking off parasites and dead skin along the way. The wrasse gets a meal. The client gets exfoliated. A single wrasse works for around four hours a day, and in that time, it can inspect more than 2,000 clients.
  • The wrasse are remarkably savvy about how they perform their services. Redouan Bshary, from the University of Neuchâtel, has shown that they sometimes cheat their clients by taking illicit bites of the protective mucus covering their skin. If the clients are watching, the wrasse restrain themselves from such shenanigans, in an effort to maintain their reputation. If disgruntled clients chase them, they try to make amends by offering a complimentary fin massage. If high-status clients pop by—large, visiting predators like sharks or groupers—the cleaners prioritize them over smaller fish that live in the area. They’re surprisingly intelligent for fish.
  • And it seems that, by removing parasites, they also make other fish more intelligent.
  • she captured damselfish from various reefs and put them through a series of challenges. First, she put square plates on either side of their tank. One of these hid a chunk of food that the fish could smell but not reach, while the other hid a more accessible morsel. The damselfish had to learn which plate to swim up to—a simple spatial-memory test, and one that every individual passed. Next, Binning swapped the location of the correct plate; again, all the fish learned to change their behavior.
  • Things changed when she gave them a more difficult task. This time, they had to approach the correct plate based not on its location, but on its appearance. This skill—visual discrimination—is vitally important to a damselfish. “They have to learn very quickly, on the basis of color and pattern, which fish are safe to be around, and what competitors or friends look like,” says Binning. “They’re very good at that.”
  • Without the cleaners, the damselfish might also not have enough energy to fully fuel their demanding brains. They’re targeted by parasitic, bloodsucking crustaceans, which makes them “anemic, sluggish, and weak,” Binning says. When cleaners remove these parasites, the distressed damsels can divert their energies toward other matters—like thinking.
knudsenlu

Will the Quantum Nature of Gravity Finally Be Measured? - The Atlantic - 0 views

  • In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.
  • His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics.
  • The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.
  • Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity—where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed—always hide behind the horizons of black holes.
  • Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.
  • The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.
  • If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.
knudsenlu

How Psychologists Predict We'll React to Alien News - The Atlantic - 0 views

  • On the night before Halloween in 1938, a strange story crackled over radios across the United States. An announcer interrupted the evening’s regular programming for a “special bulletin,” which went on to describe an alien invasion in a field in New Jersey, complete with panicked eyewitness accounts and sounds of gunfire. The story was, of course, fake, a dramatization of The War of The Worlds, the science-fiction novel published by H. G. Wells in 1898. But not all listeners knew that. The intro to the segment was quite vague, and those who tuned in a few minutes into the show found no suggestion that what they were hearing wasn’t true.
  • The exact nature of the reaction of these unlucky listeners has been debated in the decades since the broadcast. Some say thousands of people dashed out of their homes and into the streets in terror, convinced the country was under attack by Martians. Others say there was no such mass panic. Regardless of the actual scale of the reaction, the event helped cement an understanding that would later be perpetuated in science-fiction television shows and films: Humans, if and when they encounter aliens, probably aren’t going to react well.
  • But what if the extraterrestrial life we confronted wasn’t nightmarish and intelligent, as it’s commonly depicted, but rather microscopic and clueless?
  • Microscopic organisms don’t make for good alien villains, but our chances of discovering extraterrestrial microbial life seem better than encountering advanced alien civilizations, Varnum says. In recent years, more and more scientists have begun to suspect that microbes may exist on moons in our solar system, in the subsurface oceans of Europa and Enceladus and the methane lakes of Titan.
  • In every case, the text-analysis software showed that people, journalists and non-journalists alike, seemed to exhibit more positive than negative emotions in response to news of extraterrestrial microbes.
  • In general, media mentions and their predictive abilities are imperfect measures. Text-analysis software itself has some gaps; the program can’t, for example, detect sarcasm.
  • “A lot of worldviews, both religious and secular, have shown themselves to be pretty flexible,” Varnum says. “The Catholic Church eventually made peace with a heliocentric solar system, right?”
Javier E

'Affective Presence': How You Make Other People Feel - The Atlantic - 0 views

  • Some people can walk into a room and instantly put everyone at ease. Others seem to make teeth clench and eyes roll no matter what they do
  • A small body of psychology research supports the idea that the way a person tends to make others feel is a consistent and measurable part of his personality. Researchers call it “affective presence.”
  • 10 years ago in a study by Noah Eisenkraft and Hillary Anger Elfenbein. They put business-school students into groups, had them enroll in all the same classes for a semester, and do every group project together
  • Unsurprisingly, people who consistently make others feel good are more central to their social networks—in Elfenbein’s study, more of their classmates considered them to be friends. They also got more romantic interest from others in a separate speed-dating study.
  • It seems that “our own way of being has an emotional signature,”
  • affective presence is an effect one has regardless of one’s own feelings—those with positive affective presence make other people feel good, even if they personally are anxious or sad, and the opposite is true for those with negative affective presence.
  • “To use common, everyday words, some people are just annoying. It doesn’t mean they’re annoyed all the time,”
  • “They may be content because they’re always getting their way.
  • Some people bring out great things in others while they’re themselves quite depressed.”
  • The researchers found that a significant portion of group members’ emotions could be accounted for by the affective presence of their peers
  • leaders who make other people feel good by their very presence have teams that are better at sharing information, which leads to more innovation. Subordinates are more likely to voice their ideas, too, to a leader with positive affective presence.
  • Exactly what people are doing that sets others at ease or puts them off hasn’t yet been studied. It may have to do with body language, or tone of voice, or being a good listener.
  • a big part of affective presence may be how people regulate emotions—those of others and their own.
  • Throughout the day, one experiences emotional “blips” as Elfenbein puts it—blips of annoyance or excitement or sadness. The question is, “Can you regulate yourself so those blips don’t infect other people?” she asks. “Can you smooth over the noise in your life so other people aren’t affected by it?”
  • This “smoothing over”—or emotional regulation—could take the form of finding the positive in a bad situation, which can be healthy.
  • it could also take the form of suppressing one’s own emotions just to keep other people comfortable, which is less so.
  • Elfenbein notes that positive affective presence isn’t inherently good, either for the person themselves, or for their relationships with others. Psychopaths are notoriously charming
  • Neither is negative affective presence necessarily always a bad thing in a leader—think of a football coach yelling at the team at halftime, motivating them to make a comeback.
anniina03

The Human Brain Evolved When Carbon Dioxide Was Lower - The Atlantic - 0 views

  • Kris Karnauskas, a professor of ocean sciences at the University of Colorado, has started walking around campus with a pocket-size carbon-dioxide detector. He’s not doing it to measure the amount of carbon pollution in the atmosphere. He’s interested in the amount of CO₂ in each room.
  • The indoor concentration of carbon dioxide concerns him—and not only for the usual reason. Karnauskas is worried that indoor CO₂ levels are getting so high that they are starting to impair human cognition.
  • Carbon dioxide, the same odorless and invisible gas that causes global warming, may be making us dumber.
  • “This is a hidden impact of climate change … that could actually impact our ability to solve the problem itself,” he said.
  • The science is, at first glance, surprisingly fundamental. Researchers have long believed that carbon dioxide harms the brain at very high concentrations. Anyone who’s seen the film Apollo 13 (or knows the real-life story behind it) may remember a moment when the mission’s three astronauts watch a gauge monitoring their cabin start to report dangerous levels of a gas. That gauge was measuring carbon dioxide. As one of the film’s NASA engineers remarks, if CO₂ levels rise too high, “you get impaired judgement, blackouts, the beginning of brain asphyxia.”
  • The same general principle, he argues, could soon affect people here on Earth. Two centuries of rampant fossil-fuel use have already spiked the amount of CO₂ in the atmosphere from about 280 parts per million before the Industrial Revolution to about 410 parts per million today. For Earth as a whole, that pollution traps heat in the atmosphere and causes climate change. But more locally, it also sets a baseline for indoor levels of carbon dioxide: You cannot ventilate a room’s carbon-dioxide levels below the global average.
  • In fact, many rooms have a much higher CO₂ level than the atmosphere, since ventilation systems don’t work perfectly.
  • On top of that, some rooms—in places such as offices, hospitals, and schools—are filled with many breathing people, that is, many people who are themselves exhaling carbon dioxide.
  • As the amount of atmospheric CO₂ keeps rising, indoor CO₂ will climb as well.
  • In one 2016 study, Danish scientists cranked up indoor carbon-dioxide levels to 3,000 parts per million—more than seven times outdoor levels today—and found that their 25 subjects suffered no cognitive impairment or health issues. Only when scientists infused that same air with other trace chemicals and organic compounds emitted by the human body did the subjects begin to struggle, reporting “headache, fatigue, sleepiness, and difficulty in thinking clearly.” The subjects also took longer to solve basic math problems. The same lab, in another study, found that indoor concentrations of pure CO₂ could get to 5,000 parts per million and still cause little difficulty, at least for college students.
  • But other research is not as optimistic. When scientists at NASA’s Johnson Space Center tested the effects of CO₂ on about two dozen “astronaut-like subjects,” they found that their advanced decision-making skills declined with CO₂ at 1,200 parts per million. But cognitive skills did not seem to worsen as CO₂ climbed past that mark, and the intensity of the effect seemed to vary from person to person.
  • There’s evidence that carbon-dioxide levels may impair only the most complex and challenging human cognitive tasks. And we still don’t know why.
  • No one has looked at the effects of indoor CO₂ on children, the elderly, or people with health problems. Likewise, studies have so far exposed people to very high carbon levels for only a few hours, leaving open the question of what days-long exposure could do.
  • Modern humans, as a species, are only about 300,000 years old, and the ambient CO₂ that we encountered for most of our evolutionary life—from the first breath of infants to the last rattle of a dying elder—was much lower than the ambient CO₂ today. I asked Gall: Has anyone looked to see if human cognition improves under lower carbon-dioxide levels? If you tested someone in a room that had only 250 parts per million of carbon dioxide—a level much closer to that of Earth’s atmosphere three centuries or three millennia ago—would their performance on tests improve? In other words, is it possible that human cognitive ability has already declined?
katherineharron

What woman presidential candidates are facing (Opinion) - CNN - 0 views

  • The 2020 presidential election marks the first time more than two women have competed in the Democratic or Republican primaries, according to the Center for American Women and Politics at Rutgers University. Democratic congresswomen Tulsi Gabbard of Hawaii, Kirsten Gillibrand of New York, Kamala Harris of California, Amy Klobuchar of Minnesota, and Elizabeth Warren of Massachusetts have all thrown their hats in the ring. And Marianne Williamson, a bestselling author and a spiritual counselor to Oprah, is also running.
  • Research indicates that voters may unknowingly discriminate against female candidates for president because a woman has never held the position, and therefore a woman won't appear to be a "fit" for the role. Scholars call this the gender-incongruency hypothesis. For example, studies have shown that female candidates don't do worse than men when they run for local and state-wide office, but they don't fare as well when they run for president.
  • In a 2007 study published in the journal Basic and Applied Social Psychology, when students were given identical resumes of candidates who they were told were running for president -- a position which, of course, has never been held by a woman -- they judged the candidate to have more presidential potential and to have had a better career when the candidate was named Brian than when the person was named Karen. But when students were shown resumes of candidates running for Congress -- where women already hold seats -- they didn't judge Brian more positively than Karen.
  • ...3 more annotations...
  • Manne also reported in the book that women are less likely to be perceived as competent. When they are considered competent, they're often disliked and considered polarizing. She said female candidates are also often judged to be untrustworthy "on no ostensible basis" and women's claims are viewed as less credible than claims by men. Then, when women defend themselves from unfair attacks, they're accused of "playing the victim."
  • Yet Bernie Sanders -- the frontrunner among declared Democratic candidates -- has also been accused of mistreating his aides, but those allegations don't seem to have gotten the same media attention. One former staffer told the Vermont newspaper Seven Days in 2015 that Sanders was "unbelievably abusive" and claimed "to have endured frequent verbal assaults." The paper reported that others who worked for Sanders also said that "the senator is prone to fits of anger." (A spokesperson for Sanders responded to Seven Days and said that Sanders "had very positive relations with people who have worked with him.") And Sanders, in response to the article, told the Des Moines Register, "Yes, I do work hard. Yes, I do demand a lot of the people who work with me. Yes, some people have left who were not happy. But I would say that by and large in my Senate office, in my House office, on my campaigns, the vast majority of people who have worked with me considered that to be a very, very good experience..."
  • Ultimately, the solution to women not appearing to fit the role of president because a woman has never been president seems obvious: Voters need to elect a woman president. But, in order for that to happen, even those of us who are eager to empower women may need to rethink how we judge female candidates.
Javier E

Geology's Timekeepers Are Feuding - The Atlantic - 0 views

  • In 2000, the Nobel Prize-winning chemist Paul Crutzen won permanent fame in stratigraphy. He proposed that humans had so thoroughly altered the fundamental processes of the planet—through agriculture, climate change, nuclear testing, and other phenomena—that a new geological epoch had commenced: the Anthropocene, the age of humans.
  • Zalasiewicz should know. He is the chair of the Anthropocene working group, which the ICS established in 2009 to investigate whether the new epoch deserved a place in stratigraphic time.
  • In 2015, the group announced that the Anthropocene was a plausible new layer and that it should likely follow the Holocene. But the team has yet to propose a “golden spike” for the epoch: a boundary in the sedimentary rock record where the Anthropocene clearly begins.
  • ...12 more annotations...
  • Officially, the Holocene is still running today. You have lived your entire life in the Holocene, and the Holocene has constituted the geological “present” for as long as there have been geologists. But if we now live in a new epoch, the Anthropocene, then the ICS will have to chop the Holocene somewhere. It will have to choose when the Holocene ended, and it will move some amount of time out of the purview of the Holocene working group and into that of the Anthropocene working group.
  • This is politically difficult. And right now, the Anthropocene working group seems intent on not carving too deep into the Holocene. In a paper published earlier this year in Earth-Science Reviews, the Anthropocene working group’s members strongly imply that they will propose starting the new epoch in the mid-20th century.
  • Some geologists argue that the Anthropocene started even earlier: perhaps 4,000 or 6,000 years ago, as farmers began to remake the land surface. “Most of the world’s forests that were going to be converted to cropland and agriculture were already cleared well before 1950,” says Bill Ruddiman, a geology professor at the University of Virginia and an advocate of this extremely early Anthropocene.
  • “Most of the world’s prairies and steppes that were going to be cleared for crops were already gone by then. How can you argue the Anthropocene started in 1950 when all of the major things that affect Earth’s surface were already over?” Van der Pluijm agreed that the Anthropocene working group was picking 1950 for “not very good reasons.” “Agriculture was the revolution that allowed society to develop,” he said. “That was really when people started to force the land to work for them. That massive land movement—it’s like a landslide, except it’s a humanslide. And it is not, of course, as dramatic as today’s motion of land, but it starts the clock.”
  • This muddle had to stop. The Holocene comes up constantly in discussions of modern global warming. Geologists and climate scientists did not make their jobs any easier by slicing it in different ways and telling contradictory stories about it.
  • This process started almost 10 years ago. For this reason, Zalasiewicz, the chair of the Anthropocene working group, said he wasn’t blindsided by the new subdivisions at all. In fact, he voted to adopt them as a member of the Quaternary working group. “Whether the Anthropocene works with a unified Holocene or one that’s in three parts makes for very little difference,” he told me. In fact, it had made the Anthropocene group’s work easier. “It has been useful to compare the scale of the two climate events that mark the new boundaries [within the Holocene] with the kind of changes that we’re assessing in the Anthropocene. It has been quite useful to have the compare and contrast,” he said. “Our view is that some of the changes in the Anthropocene are rather bigger.”
  • Zalasiewicz said that he and his colleagues were going as fast as they could. When the working group began its work in 2009, it was “really starting from scratch,” he told me. While other working groups have a large body of stratigraphic research to consider, the Anthropocene working group had nothing. “We had to spend a fair bit of time deciding whether the Anthropocene was geology at all,” he said. Then they had to decide where its signal could show up. Now, they’re looking for evidence that shows it.
  • This cycle of “glacials” and “interglacials” has played out about 50 times over the last several million years. When the Holocene began, it was only another interglacial—albeit the one we live in. Until recently, glaciers were still on schedule to descend in another 30,000 years or so. Yet geologists still call the Holocene an epoch, even though they do not bestow this term on any of the previous 49 interglacials. It gets special treatment because we live in it.
  • Much of this science is now moot. Humanity’s vast emissions of greenhouse gas have now so warmed the climate that they have offset the next glaciation. They may even knock us out of the ongoing cycle of Ice Ages, sending the Earth hurtling back toward a “greenhouse” climate after the more amenable “icehouse” climate during which humans evolved. For this reason, van der Pluijm wants the Anthropocene to supplant the Holocene entirely. Humans made their first great change to the environment at the close of the last glaciation, when they seem to have hunted the world’s largest mammals—the woolly mammoth, the saber-toothed tiger—to extinction. Why not start the Anthropocene then? He would even rename the pre-1800 period “the Holocene Age” as a consolation prize.
  • Zalasiewicz said he would not start the Anthropocene too early in time, as it would be too work-intensive for the field to rename such a vast swath of time. “The early-Anthropocene idea would crosscut against the Holocene as it’s seen by Holocene workers,” he said. If other academics didn’t like this, they could create their own timescales and start the Anthropocene Epoch where they choose. “We have no jurisdiction over the word Anthropocene,” he said.
  • Ruddiman, the University of Virginia professor who first argued for a very early Anthropocene, now makes an even broader case. He’s not sure it makes sense to formally define the Anthropocene at all. In a paper published this week, he objects to designating the Anthropocene as starting in the 1950s—and then he objects to delineating the Anthropocene, or indeed any new geological epoch, by name. “Keep the use of the term informal,” he told me. “Don’t make it rigid. Keep it informal so people can say the early-agricultural Anthropocene, or the industrial-era Anthropocene.”
  • “This is the age of geochemical dating,” he said. Geologists have stopped looking to the ICS to place each rock sample into the rock sequence. Instead, field geologists use laboratory techniques to get a precise year or century of origin for each rock sample. “The community just doesn’t care about these definitions,” he said.