TOK Friends: Group items matching "times" in title, tags, annotations or url

Javier E

The Psychopath Makeover - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • The eminent criminal psychologist and creator of the widely used Psychopathy Checklist paused before answering. "I think, in general, yes, society is becoming more psychopathic," he said. "I mean, there's stuff going on nowadays that we wouldn't have seen 20, even 10 years ago. Kids are becoming anesthetized to normal sexual behavior by early exposure to pornography on the Internet. Rent-a-friend sites are getting more popular on the Web, because folks are either too busy or too techy to make real ones. ... The recent hike in female criminality is particularly revealing. And don't even get me started on Wall Street."
  • in a survey that has so far tested 14,000 volunteers, Sara Konrath and her team at the University of Michigan's Institute for Social Research have found that college students' self-reported empathy levels (as measured by the Interpersonal Reactivity Index, a standardized questionnaire containing such items as "I often have tender, concerned feelings for people less fortunate than me" and "I try to look at everybody's side of a disagreement before I make a decision") have been in steady decline over the past three decades—since the inauguration of the scale, in fact, back in 1979. A particularly pronounced slump has been observed over the past 10 years. "College kids today are about 40 percent lower in empathy than their counterparts of 20 or 30 years ago," Konrath reports.
  • Imagining, it would seem, really does make it so. Whenever we read a story, our level of engagement is such that we "mentally simulate each new situation encountered in a narrative," according to one of the researchers, Nicole Speer. Our brains then interweave these newly encountered situations with knowledge and experience gleaned from our own lives to create an organic mosaic of dynamic mental syntheses.
  • ...16 more annotations...
  • during this same period, students' self-reported narcissism levels have shot through the roof. "Many people see the current group of college students, sometimes called 'Generation Me,' " Konrath continues, "as one of the most self-centered, narcissistic, competitive, confident, and individualistic in recent history."
  • Reading a book carves brand-new neural pathways into the ancient cortical bedrock of our brains. It transforms the way we see the world—makes us, as Nicholas Carr puts it in his recent essay, "The Dreams of Readers," "more alert to the inner lives of others." We become vampires without being bitten—in other words, more empathic. Books make us see in a way that casual immersion in the Internet, and the quicksilver virtual world it offers, doesn't.
  • if society really is becoming more psychopathic, it's not all doom and gloom. In the right context, certain psychopathic characteristics can actually be very constructive. A neurosurgeon I spoke with (who rated high on the psychopathic spectrum) described the mind-set he enters before taking on a difficult operation as "an intoxication that sharpens rather than dulls the senses." In fact, in any kind of crisis, the most effective individuals are often those who stay calm—who are able to respond to the exigencies of the moment while at the same time maintaining the requisite degree of detachment.
  • mental toughness isn't the only characteristic that Special Forces soldiers have in common with psychopaths. There's also fearlessness.
  • I ask Andy whether he ever felt any regret over anything he'd done. Over the lives he'd taken on his numerous secret missions around the world. "No," he replies matter-of-factly, his arctic-blue eyes showing not the slightest trace of emotion. "You seriously don't think twice about it. When you're in a hostile situation, the primary objective is to pull the trigger before the other guy pulls the trigger. And when you pull it, you move on. Simple as that. Why stand there, dwelling on what you've done? Go down that route and chances are the last thing that goes through your head will be a bullet from an M16. "The regiment's motto is 'Who Dares Wins.' But sometimes it can be shortened to 'F--- It.' "
  • one of the things that we know about psychopaths is that the light switches of their brains aren't wired up in quite the same way as the rest of ours are—and that one area particularly affected is the amygdala, a peanut-size structure located right at the center of the circuit board. The amygdala is the brain's emotion-control tower. It polices our emotional airspace and is responsible for the way we feel about things. But in psychopaths, a section of this airspace, the part that corresponds to fear, is empty.
  • Turn down the signals to the amygdala, of course, and you're well on the way to giving someone a psychopath makeover. Indeed, Liane Young and her team in Boston have since kicked things up a notch and demonstrated that applying TMS to the right temporoparietal junction—a neural ZIP code within that neighborhood—has significant effects not just on lying ability but also on moral-reasoning ability: in particular, ascribing intentionality to others' actions.
  • at an undisclosed moment sometime within the next 60 seconds, the image you see at the present time will change, and images of a different nature will appear on the screen. These images will be violent. And nauseating. And of a graphic and disturbing nature. "As you view these images, changes in your heart rate, skin conductance, and EEG activity will be monitored and compared with the resting levels that are currently being recorded
  • "OK," says Nick. "Let's get the show on the road." He disappears behind us, leaving Andy and me merrily soaking up the incontinence ad. Results reveal later that, at this point, as we wait for something to happen, our physiological output readings are actually pretty similar. Our pulse rates are significantly higher than our normal resting levels, in anticipation of what's to come. But with the change of scene, an override switch flips somewhere in Andy's brain. And the ice-cold Special Forces soldier suddenly swings into action. As vivid, florid images of dismemberment, mutilation, torture, and execution flash up on the screen in front of us (so vivid, in fact, that Andy later confesses to actually being able to "smell" the blood: a "kind of sickly-sweet smell that you never, ever forget"), accompanied not by the ambient spa music of before but by blaring sirens and hissing white noise, his physiological readings start slipping into reverse. His pulse rate begins to slow. His GSR begins to drop, his EEG to quickly and dramatically attenuate. In fact, by the time the show is over, all three of Andy's physiological output measures are pooling below his baseline.
  • Nick has seen nothing like it. "It's almost as if he was gearing himself up for the challenge," he says. "And then, when the challenge eventually presented itself, his brain suddenly responded by injecting liquid nitrogen into his veins. Suddenly implemented a blanket neural cull of all surplus feral emotion. Suddenly locked down into a hypnotically deep code red of extreme and ruthless focus." He shakes his head, nonplused. "If I hadn't recorded those readings myself, I'm not sure I would have believed them," he continues. "OK, I've never tested Special Forces before. And maybe you'd expect a slight attenuation in response. But this guy was in total and utter control of the situation. So tuned in, it looked like he'd completely tuned out."
  • My physiological output readings, in contrast, went through the roof. Exactly like Andy's, they were well above baseline as I'd waited for the carnage to commence. But that's where the similarity ended. Rather than go down in the heat of battle, in the midst of the blood and guts, mine had appreciated exponentially. "At least it shows that the equipment is working properly," comments Nick. "And that you're a normal human being."
  • TMS can't penetrate far enough into the brain to reach the emotion and moral-reasoning precincts directly. But by damping down or turning up the regions of the cerebral cortex that have links with such areas, it can simulate the effects of deeper, more incursive influence.
  • Before the experiment, I'd been curious about the time scale: how long it would take me to begin to feel the rush. Now I had the answer: about 10 to 15 minutes. The same amount of time, I guess, that it would take most people to get a buzz out of a beer or a glass of wine.
  • The effects aren't entirely dissimilar. An easy, airy confidence. A transcendental loosening of inhibition. The inchoate stirrings of a subjective moral swagger: the encroaching, and somehow strangely spiritual, realization that hell, who gives a s---, anyway? There is, however, one notable exception. One glaring, unmistakable difference between this and the effects of alcohol. That's the lack of attendant sluggishness. The enhancement of attentional acuity and sharpness. An insuperable feeling of heightened, polished awareness. Sure, my conscience certainly feels like it's on ice, and my anxieties drowned with a half-dozen shots of transcranial magnetic Jack Daniel's. But, at the same time, my whole way of being feels as if it's been sumptuously spring-cleaned with light. My soul, or whatever you want to call it, immersed in a spiritual dishwasher.
  • So this, I think to myself, is how it feels to be a psychopath. To cruise through life knowing that no matter what you say or do, guilt, remorse, shame, pity, fear—all those familiar, everyday warning signals that might normally light up on your psychological dashboard—no longer trouble you.
  • I suddenly get a flash of insight. We talk about gender. We talk about class. We talk about color. And intelligence. And creed. But the most fundamental difference between one individual and another must surely be that of the presence, or absence, of conscience. Conscience is what hurts when everything else feels good. But what if it's as tough as old boots? What if one's conscience has an infinite, unlimited pain threshold and doesn't bat an eye when others are screaming in agony?
Emily Horwitz

Long Prison Terms Eyed as Contributing to Poverty - NYTimes.com - 0 views

  •  
    A very eye-opening (albeit very long) article about the devastating effects that a stint in prison can have on a family. The article noted that men who are sent to jail are often given sentences so long that, by the time they are released, they are well past the average age for committing crimes and have missed out on valuable life experiences that could have helped them get a job. As a result, many of these men struggle to find work, falling into a cycle of poverty and, perhaps, back into crime. Additionally, for those who have spouses or children, time in prison can drastically harm a family's economic status; without the extra income, these families may become homeless, as did Ms. Hamilton in the article. Especially intriguing to me were the article's racial implications: the author pointed out that African-American men without a high school diploma are more likely to be incarcerated than employed. In terms of TOK, this article showed that all the facets of the human sciences are related - from social problems to economic problems, time in prison can have a devastating effect not only on the jailed man, but on his family as well.
sissij

Watch How Casually False Claims are Published: New York Times and Nicholas Lemann Edition - 1 views

  •  wrote my favorite sentence about this whole affair, one which I often quoted in my speeches to great audience laughter: “there are only three possible explanations for the Snowden heist: 1) It was a Russian espionage operation; 2) It was a Chinese espionage operation; or 3) It was a joint Sino-Russian operation.”
  • demanding that they only publish those which expose information necessary to inform the public debate: precisely because he did not want to destroy NSA programs he believes are justifiable.
  • As is true of most leaks – from the routine to the spectacular – those publishing decisions rested solely in the hands of the media outlets and their teams of reporters, editors and lawyers.
  • ...6 more annotations...
  • There have of course been some stories where my calculation of what is not public interest differs from that of reporters, but it is for this precise reason that publication decisions were entrusted to journalists and their editors.
  • journalist-driven process that determined which documents got published
  • Ironically, the most controversial Snowden stories – the type his critics cite as the ones that should not have been published because they exposed sensitive national security secrets – were often the ones the NYT itself decided to publish, such as its very controversial exposé on how NSA spied on China’s Huawei.
  • Snowden didn’t decide what stayed secret. The press did.
  • it is so often the case that the most influential media outlets publish factually false statements using the most authoritative tones.
  • But Snowden never said anything like that.
  •  
    The reporting on Snowden shows how bad our news system can be. Arguably, this article could be false as well, because no one except Snowden himself can know the exact truth. Our flaws in logic and perception make us very vulnerable to bad news. There was a period in the press known as "Yellow News," when news publications weren't taking their responsibility as a guide to the general population. We surely need better news, and we must prevent an era in which news is published based on its "hotness" instead of its accuracy. --Sissi (1/12/2017)
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
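
Several of the highlights above describe mechanisms (regression to the mean, publication bias, the funnel graph, "significance chasing") that are easy to demonstrate numerically. Below is a minimal Python simulation sketch; the parameters, the 1.96 significance cutoff, and everything else in it are illustrative assumptions, not anything taken from the article. It shows how publishing only the significant results of small studies inflates effect sizes, which a later, larger replication then appears to "decline" from.

```python
import random
import statistics

# Simulate many small studies of a weak true effect, "publish" only the
# statistically significant ones, then run one large replication.
# All parameters are illustrative assumptions.

random.seed(42)
TRUE_EFFECT = 0.1   # real group difference, in standard-deviation units
N_SMALL = 20        # per-group sample size of the early studies
N_LARGE = 500       # per-group sample size of the later replication
N_STUDIES = 1000

def run_study(n):
    """Return the estimated effect and a crude z statistic."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    effect = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return effect, effect / se

# Publication bias: only significant results make it into "the literature".
published = [e for e, z in (run_study(N_SMALL) for _ in range(N_STUDIES)) if z > 1.96]

replication, _ = run_study(N_LARGE)
print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean published effect:     {statistics.mean(published):.2f}")   # inflated
print(f"large replication effect:  {replication:.2f}")                  # near truth
```

The published mean comes out several times larger than the true effect, while the large replication lands near the truth: a decline effect with no mystery required.
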
sissij

Keeping the Boardroom Out of the Bedroom - The New York Times - 0 views

  • I never allowed myself to be vulnerable with my husband because I didn’t realize that it was a requirement for an intimate, connected relationship. Back then, I didn’t even know what an intimate, connected relationship was. I only thought in terms of tasks and achievements.
  • Surely, I had thought, my firefighter boyfriend would wait an hour or two for me without complaint. But if he had, would we still be together today?
  • It has been said that we teach people how to treat us.
  •  
    I found this article very interesting. The author is a strong woman, but she wasn't able to switch between her role as a successful businesswoman and her role as a wife. People always tend to be egocentric, even though everybody knows that we have to stand in other people's perspectives to make better decisions. It reminded me of how my mom complained to me many times that I didn't keep to the time she was going to pick me up. I was only thinking from my own perspective and didn't even think about how tired my mom was after a day of work. People should contribute equally to maintain a good relationship. --Sissi (3/24/2017)
Aisling Horan

Can Physicists Find Time Travelers on Facebook? - Robinson Meyer - The Atlantic - 0 views

  • the two scoured Twitter, Facebook, Google+ and a few other websites to find “prescient information”—that is, tweets and statuses about current events posted before the events became current. The only way someone could write such a post, they reasoned, is if they were visiting… from the future.
  • (Histories of bright comets have been “generally well kept by societies and journals around the world,” they write.)
  • Attention, Facebook and Google+: Your social network’s crappy search is preventing humanity from finding time travelers from the future.
  • ...3 more annotations...
  • Pope Francis.” Once they consulted the blog post it advertised, though, they deemed the tweet “overtly speculative and not prescient.”
  • But that doesn’t quite mean anything. The authors admit that the study might have failed for many reasons: Time travelers might not have the ability to physically adjust the past; they might not have posted about the events the authors were looking for; they might have posted about the events but not turned up in a search. Time travelers might have also read the study or this news story about it, and been sure to avoid making any careless mistakes.
  • [G]iven the current prevalence of the Internet, its numerous portals around the globe, and its numerous uses in communication, this search might be considered the most sensitive and comprehensive search yet for time travel from the future.
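
The search method described above reduces to scanning timestamped posts for terms that refer to events which had not yet happened. Here is a toy Python sketch of that logic; the posts are invented, and the cutoff dates are approximations of Comet ISON's discovery announcement and Pope Francis's election, the two search terms the study used.

```python
from datetime import datetime

# Flag posts that mention a term *before* the event the term refers to.
# The posts are invented; the cutoff dates approximate Comet ISON's
# discovery announcement and Pope Francis's election.

CUTOFFS = {
    "comet ison": datetime(2012, 9, 21),
    "pope francis": datetime(2013, 3, 13),
}

posts = [
    (datetime(2012, 1, 5), "excited to watch comet ison next year!"),
    (datetime(2013, 6, 1), "pope francis spoke in rome today"),
    (datetime(2013, 2, 2), "my overtly speculative guess about the next pope"),
]

def prescient(posts, cutoffs):
    """Yield (timestamp, term, text) for posts that predate their term's event."""
    for stamp, text in posts:
        for term, cutoff in cutoffs.items():
            if term in text.lower() and stamp < cutoff:
                yield stamp, term, text

for stamp, term, text in prescient(posts, CUTOFFS):
    print(f"{stamp:%Y-%m-%d}: possible prescience about {term!r}: {text}")
```

Anything the scan flags would still need the kind of manual vetting described above, which is how the blog-advertising tweet ended up rated speculative rather than prescient.
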
Javier E

95,000 Words, Many of Them Ominous, From Donald Trump's Tongue - The New York Times - 2 views

  • The New York Times analyzed every public utterance by Mr. Trump over the past week from rallies, speeches, interviews and news conferences to explore the leading candidate’s hold on the Republican electorate for the past five months.
  • The transcriptions yielded 95,000 words and several powerful patterns
  • The most striking hallmark was Mr. Trump’s constant repetition of divisive phrases, harsh words and violent imagery that American presidents rarely use
  • ...19 more annotations...
  • He has a particular habit of saying “you” and “we” as he inveighs against a dangerous “them” or unnamed other — usually outsiders like illegal immigrants (“they’re pouring in”), Syrian migrants (“young, strong men”) and Mexicans, but also leaders of both political parties.
  • Mr. Trump appears unrivaled in his ability to forge bonds with a sizable segment of Americans over anxieties about a changing nation, economic insecurities, ferocious enemies and emboldened minorities (like the first black president, whose heritage and intelligence he has all but encouraged supporters to malign).
  • “ ‘We vs. them’ creates a threatening dynamic, where ‘they’ are evil or crazy or ignorant and ‘we’ need a candidate who sees the threat and can alleviate it,”
  • “He appeals to the masses and makes them feel powerful again: ‘We’ need to build a wall on the Mexican border — not ‘I,’ but ‘we.’ ”
  • And as much as he likes the word “attack,” the Times analysis shows, he often uses it to portray himself as the victim of cable news channels and newspapers that, he says, do not show the size of his crowds.
  • The specter of violence looms over much of his speech, which is infused with words like kill, destroy and fight.
  • “Such statements and accusations make him seem like a guy who can and will cut through all the b.s. and do what in your heart you know is right — and necessary,
  • And Mr. Trump uses rhetoric to erode people’s trust in facts, numbers, nuance, government and the news media, according to specialists in political rhetoric.
  • “Nobody knows,” he likes to declare, where illegal immigrants are coming from or the rate of increase of health care premiums under the Affordable Care Act, even though government agencies collect and publish this information.
  • He insists that Mr. Obama wants to accept 250,000 Syrian migrants, even though no such plan exists, and repeats discredited rumors that thousands of Muslims were cheering in New Jersey during the Sept. 11, 2001, attacks.
  • In another pattern, Mr. Trump tends to attack a person rather than an idea or a situation, like calling political opponents “stupid” (at least 30 times), “horrible” (14 times), “weak” (13 times) and other names, and criticizing foreign leaders, journalists and so-called anchor babies
  • This pattern of elevating emotional appeals over rational ones is a rhetorical style that historians, psychologists and political scientists placed in the tradition of political figures like Goldwater, George Wallace, Joseph McCarthy, Huey Long and Pat Buchanan,
  • “His entire campaign is run like a demagogue’s — his language of division, his cult of personality, his manner of categorizing and maligning people with a broad brush,”
  • “If you’re an illegal immigrant, you’re a loser. If you’re captured in war, like John McCain, you’re a loser. If you have a disability, you’re a loser. It’s rhetoric like Wallace’s — it’s not a kind or generous rhetoric.”
  • “And then there are the winners, most especially himself, with his repeated references to his wealth and success and intelligence,”
  • Historically, demagogues have flourished when they tapped into the grievances of citizens and then identified and maligned outside foes, as McCarthy did with attacking Communists, Wallace with pro-integration northerners and Mr. Buchanan with cultural liberals
  • Mr. Trump, by contrast, is an energetic and charismatic speaker who can be entertaining and ingratiating with his audiences. There is a looseness to his language that sounds almost like water-cooler talk or neighborly banter, regardless of what it is about.
  • he presents himself as someone who is always right in his opinions — even prophetic, a visionary
  • It is the sort of trust-me-and-only-me rhetoric that, according to historians, demagogues have used to insist that they have unique qualities that can lead the country through turmoil
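
Mechanically, the analysis described above is a word-frequency tally over a corpus of transcripts. A minimal sketch of that kind of count follows; the tracked-word list and the sample lines are illustrative stand-ins, not the Times's code or data.

```python
import re
from collections import Counter

# Count how often selected words appear across a set of transcripts.
# The tracked words and sample lines are illustrative stand-ins.

TRACKED = {"stupid", "horrible", "weak", "kill", "destroy", "fight", "attack"}

transcripts = [
    "They're pouring in, and our leaders are stupid, just stupid.",
    "We will fight, and we will win, because they are weak.",
]

counts = Counter()
for speech in transcripts:
    for word in re.findall(r"[a-z']+", speech.lower()):
        if word in TRACKED:
            counts[word] += 1

for word, n in counts.most_common():
    print(f"{word}: {n}")
```
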
Javier E

Geology's Timekeepers Are Feuding - The Atlantic - 0 views

  • In 2000, the Nobel Prize-winning chemist Paul Crutzen won permanent fame for stratigraphy. He proposed that humans had so thoroughly altered the fundamental processes of the planet—through agriculture, climate change, nuclear testing, and other phenomena—that a new geological epoch had commenced: the Anthropocene, the age of humans.
  • Zalasiewicz should know. He is the chair of the Anthropocene working group, which the ICS established in 2009 to investigate whether the new epoch deserved a place in stratigraphic time.
  • In 2015, the group announced that the Anthropocene was a plausible new layer and that it should likely follow the Holocene. But the team has yet to propose a “golden spike” for the epoch: a boundary in the sedimentary rock record where the Anthropocene clearly begins.
  • ...12 more annotations...
  • Officially, the Holocene is still running today. You have lived your entire life in the Holocene, and the Holocene has constituted the geological “present” for as long as there have been geologists. But if we now live in a new epoch, the Anthropocene, then the ICS will have to chop the Holocene somewhere. It will have to choose when the Holocene ended, and it will move some amount of time out of the purview of the Holocene working group and into that of the Anthropocene working group.
  • This is politically difficult. And right now, the Anthropocene working group seems intent on not carving too deep into the Holocene. In a paper published earlier this year in Earth-Science Reviews, the Anthropocene working group’s members strongly imply that they will propose starting the new epoch in the mid-20th century.
  • Some geologists argue that the Anthropocene started even earlier: perhaps 4,000 or 6,000 years ago, as farmers began to remake the land surface. “Most of the world’s forests that were going to be converted to cropland and agriculture were already cleared well before 1950,” says Bill Ruddiman, a geology professor at the University of Virginia and an advocate of this extremely early Anthropocene.
  • “Most of the world’s prairies and steppes that were going to be cleared for crops were already gone, by then. How can you argue the Anthropocene started in 1950 when all of the major things that affect Earth’s surface were already over?” Van der Pluijm agreed that the Anthropocene working group was picking 1950 for “not very good reasons.” “Agriculture was the revolution that allowed society to develop,” he said. “That was really when people started to force the land to work for them. That massive land movement—it’s like a landslide, except it’s a humanslide. And it is not, of course, as dramatic as today’s motion of land, but it starts the clock.”
  • This muddle had to stop. The Holocene comes up constantly in discussions of modern global warming. Geologists and climate scientists did not make their jobs any easier by slicing it in different ways and telling contradictory stories about it.
  • This process started almost 10 years ago. For this reason, Zalasiewicz, the chair of the Anthropocene working group, said he wasn’t blindsided by the new subdivisions at all. In fact, he voted to adopt them as a member of the Quaternary working group. “Whether the Anthropocene works with a unified Holocene or one that’s in three parts makes for very little difference,” he told me. In fact, it had made the Anthropocene group’s work easier. “It has been useful to compare the scale of the two climate events that mark the new boundaries [within the Holocene] with the kind of changes that we’re assessing in the Anthropocene. It has been quite useful to have the compare and contrast,” he said. “Our view is that some of the changes in the Anthropocene are rather bigger.”
  • Zalasiewicz said that he and his colleagues were going as fast as they could. When the working group began its work in 2009, it was “really starting from scratch,” he told me. While other working groups have a large body of stratigraphic research to consider, the Anthropocene working group had nothing. “We had to spend a fair bit of time deciding whether the Anthropocene was geology at all,” he said. Then they had to decide where its signal could show up. Now, they’re looking for evidence that shows it.
  • This cycle of “glacials” and “interglacials” has played out about 50 times over the last several million years. When the Holocene began, it was only another interglacial—albeit the one we live in. Until recently, glaciers were still on schedule to descend in another 30,000 years or so. Yet geologists still call the Holocene an epoch, even though they do not bestow this term on any of the previous 49 interglacials. It gets special treatment because we live in it.
  • Much of this science is now moot. Humanity’s vast emissions of greenhouse gas have now so warmed the climate that they have offset the next glaciation. They may even knock us out of the ongoing cycle of Ice Ages, sending the Earth hurtling back toward a “greenhouse” climate after the more amenable “icehouse” climate during which humans evolved. For this reason, van der Pluijm wants the Anthropocene to supplant the Holocene entirely. Humans made their first great change to the environment at the close of the last glaciation, when they seem to have hunted the world’s largest mammals—the wooly mammoth, the saber-toothed tiger—to extinction. Why not start the Anthropocene then? He would even rename the pre-1800 period “the Holocene Age” as a consolation prize:
  • Zalasiewicz said he would not start the Anthropocene too early in time, as it would be too work-intensive for the field to rename such a vast swath of time. “The early-Anthropocene idea would crosscut against the Holocene as it’s seen by Holocene workers,” he said. If other academics didn’t like this, they could create their own timescales and start the Anthropocene Epoch where they choose. “We have no jurisdiction over the word Anthropocene,” he said.
  • Ruddiman, the University of Virginia professor who first argued for a very early Anthropocene, now makes an even broader case. He’s not sure it makes sense to formally define the Anthropocene at all. In a paper published this week, he objects to designating the Anthropocene as starting in the 1950s—and then he objects to delineating the Anthropocene, or indeed any new geological epoch, by name. “Keep the use of the term informal,” he told me. “Don’t make it rigid. Keep it informal so people can say the early-agricultural Anthropocene, or the industrial-era Anthropocene.”
  • “This is the age of geochemical dating,” he said. Geologists have stopped looking to the ICS to place each rock sample into the rock sequence. Instead, field geologists use laboratory techniques to get a precise year or century of origin for each rock sample. “The community just doesn’t care about these definitions,” he said.
Javier E

Philosophy isn't dead yet | Raymond Tallis | Comment is free | The Guardian - 1 views

  • Fundamental physics is in a metaphysical mess and needs help. The attempt to reconcile its two big theories, general relativity and quantum mechanics, has stalled for nearly 40 years. Endeavours to unite them, such as string theory, are mathematically ingenious but incomprehensible even to many who work with them. This is well known.
  • A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent, which can be remedied only by vastly multiplying worlds.
  • there is the failure of physics to accommodate conscious beings. The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).
  • ...3 more annotations...
  • then there is the mishandling of time. The physicist Lee Smolin's recent book, Time Reborn, links the crisis in physics with its failure to acknowledge the fundamental reality of time. Physics is predisposed to lose time because its mathematical gaze freezes change. Tensed time, the difference between a remembered or regretted past and an anticipated or feared future, is particularly elusive. This worried Einstein: in a famous conversation, he mourned the fact that the present tense, "now", lay "just outside of the realm of science".
  • Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication. They demonstrate the urgent need for a radical re-examination of the invisible frameworks within which scientific investigations are conducted.
  • we should reflect on how a scientific image of the world that relies on up to 10 dimensions of space and rests on ideas, such as fundamental particles, that have neither identity nor location, connects with our everyday experience. This should open up larger questions, such as the extent to which mathematical portraits capture the reality of our world – and what we mean by "reality".
Javier E

Liu Cixin's War of the Worlds | The New Yorker - 0 views

  • he briskly dismissed the idea that fiction could serve as commentary on history or on current affairs. “The whole point is to escape the real world!” he said.
  • Chinese tech entrepreneurs discuss the Hobbesian vision of the trilogy as a metaphor for cutthroat competition in the corporate world; other fans include Barack Obama, who met Liu in Beijing two years ago, and Mark Zuckerberg. Liu’s international career has become a source of national pride. In 2015, China’s then Vice-President, Li Yuanchao, invited Liu to Zhongnanhai—an off-limits complex of government accommodation sometimes compared to the Kremlin—to discuss the books and showed Liu his own copies, which were dense with highlights and annotations.
  • In China, one of his stories has been a set text in the gao kao—the notoriously competitive college-entrance exams that determine the fate of ten million pupils annually; another has appeared in the national seventh-grade-curriculum textbook. When a reporter recently challenged Liu to answer the middle-school questions about the “meaning” and the “central themes” of his story, he didn’t get a single one right. “I’m a writer,” he told me, with a shrug.
  • ...20 more annotations...
  • Liu’s tomes—they tend to be tomes—have been translated into more than twenty languages, and the trilogy has sold some eight million copies worldwide. He has won China’s highest honor for science-fiction writing, the Galaxy Award, nine times, and in 2015 he became the first Asian writer to win the Hugo Award, the most prestigious international science-fiction prize
  • Liu believes that this trend signals a deeper shift in the Chinese mind-set—that technological advances have spurred a new excitement about the possibilities of cosmic exploration.
  • Concepts that seemed abstract to others took on, for him, concrete forms; they were like things he could touch, inducing a “druglike euphoria.” Compared with ordinary literature, he came to feel, “the stories of science are far more magnificent, grand, involved, profound, thrilling, strange, terrifying, mysterious, and even emotional
  • Pragmatic choices like this one, or like the decision his grandparents made when their sons were conscripted, recur in his fiction—situations that present equally unconscionable choices on either side of a moral fulcrum
  • The great flourishing of science fiction in the West at the end of the nineteenth century occurred alongside unprecedented technological progress and the proliferation of the popular press—transformations that were fundamental to the development of the genre
  • Joel Martinsen, the translator of the second volume of Liu’s trilogy, sees the series as a continuation of this tradition. “It’s not hard to read parallels between the Trisolarans and imperialist designs on China, driven by hunger for resources and fear of being wiped out,” he told me. Even Liu, unwilling as he is to endorse comparisons between the plot and China’s current face-off with the U.S., did at one point let slip that “the relationship between politics and science fiction cannot be underestimated.”
  • Speculative fiction is the art of imagining alternative worlds, and the same political establishment that permits it to be used as propaganda for the existing regime is also likely to recognize its capacity to interrogate the legitimacy of the status quo.
  • Liu has been criticized for peopling his books with characters who seem like cardboard cutouts installed in magnificent dioramas. Liu readily admits to the charge. “I did not begin writing for love of literature,” he told me. “I did so for love of science.”
  • “The Three-Body Problem” takes its title from an analytical problem in orbital mechanics which has to do with the unpredictable motion of three bodies under mutual gravitational pull. Reading an article about the problem, Liu thought, What if the three bodies were three suns? How would intelligent life on a planet in such a solar system develop? From there, a structure gradually took shape that almost resembles a planetary system, with characters orbiting the central conceit like moons. For better or worse, the characters exist to support the framework of the story rather than to live as individuals on the page.
  • Liu’s imagination is dauntingly capacious, his narratives conceived on a scale that feels, at times, almost hallucinogenic. The time line of the trilogy spans 18,906,450 years, encompassing ancient Egypt, the Qin dynasty, the Byzantine Empire, the Cultural Revolution, the present, and a time eighteen million years in the future
  • The first book is set on Earth, though some of its scenes take place in virtual reality; by the end of the third book, the scope of the action is interstellar and annihilation unfolds across several dimensions. The London Review of Books has called the trilogy “one of the most ambitious works of science fiction ever written.”
  • Although physics furnishes the novels’ premises, it is politics that drives the plots. At every turn, the characters are forced to make brutal calculations in which moral absolutism is pitted against the greater good
  • In Liu’s fictional universe, idealism is fatal and kindness an exorbitant luxury. As one general says in the trilogy, “In a time of war, we can’t afford to be too scrupulous.” Indeed, it is usually when people do not play by the rules of Realpolitik that the most lives are lost.
  • “I know what you are thinking,” he told me with weary clarity. “What about individual liberty and freedom of governance?” He sighed, as if exhausted by a debate going on in his head. “But that’s not what Chinese people care about. For ordinary folks, it’s the cost of health care, real-estate prices, their children’s education. Not democracy.”
  • Liu closed his eyes for a long moment and then said quietly, “This is why I don’t like to talk about subjects like this. The truth is you don’t really—I mean, can’t truly—understand.”
  • Liu explained to me, the existing regime made the most sense for today’s China, because to change it would be to invite chaos. “If China were to transform into a democracy, it would be hell on earth,”
  • It was an opinion entirely consistent with his systems-level view of human societies, just as mine reflected a belief in democracy and individualism as principles to be upheld regardless of outcomes
  • “I cannot escape and leave behind reality, just like I cannot leave behind my shadow. Reality brands each of us with its indelible mark. Every era puts invisible shackles on those who have lived through it, and I can only dance in my chains.
  • Chinese people of his generation were lucky, he said. The changes they had seen were so huge that they now inhabited a world entirely different from that of their childhood. “China is a futuristic country,” he said. “I realized that the world around me became more and more like science fiction, and this process is speeding up.”
  • “We have statues of a few martyrs, but we never—We don’t memorialize those, the individuals.” He took off his glasses and blinked, peering into the wide expanse of green and concrete. “This is how we Chinese have always been,” he said. “When something happens, it passes, and time buries the stories.”
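
The orbital-mechanics problem that gives the trilogy its title has no general closed-form solution, so the motion has to be simulated step by step. Below is a minimal numerical sketch, with arbitrary masses and starting conditions and a crude Euler integrator rather than anything production-grade.

```python
# Three equal masses under mutual Newtonian gravity, advanced with a
# crude Euler step. All values are in arbitrary simulation units and
# the initial conditions are invented for illustration.

G = 1.0
dt = 0.001
mass = [1.0, 1.0, 1.0]
pos = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]]   # x, y per body
vel = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]  # vx, vy per body

def accelerations(pos):
    """Gravitational acceleration on each body from the other two."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            # tiny softening term avoids division by zero at close passes
            r3 = (dx * dx + dy * dy + 1e-9) ** 1.5
            acc[i][0] += G * mass[j] * dx / r3
            acc[i][1] += G * mass[j] * dy / r3
    return acc

for _ in range(10_000):
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

print("positions after 10,000 steps:", [[round(c, 3) for c in p] for p in pos])
```

Nudging a starting position in the last decimal place sends the trajectories somewhere entirely different, which is the unpredictability that Liu turns into the premise of the Trisolaran system.
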
Javier E

How to avoid covid-19 hoax stories? - The Washington Post - 1 views

  • How good are people at sifting out fake news?
  • we’ve been investigating whether ordinary individuals who encounter news when it first appears online — before fact-checkers like Snopes and PolitiFact have an opportunity to issue reports about an article’s veracity — are able to identify whether articles contain true or false information.
  • Unfortunately, it seems quite difficult for people to identify false or misleading news, and the limited number of coronavirus news stories in our collection are no exception
  • ...14 more annotations...
  • Over a 13-week period, our study allowed us to capture people’s assessments of fresh news articles in real time. Each day of the study, we relied on a fixed, pre-registered process to select five popular articles published within the previous 24 hours
  • The five articles were balanced between conservative, liberal and non-partisan sources, as well as from mainstream news websites and from websites known to produce fake news. In total, we sent 150 articles to 90 survey respondents each
  • We also sent these articles separately to six independent fact checkers, and treated their most common response — true, false/misleading, or cannot determine — for each article as the “correct” answer for that article.
  • When shown an article that was rated “true” by the professional fact checkers, respondents correctly identified the article as true 62 percent of the time. When the source of the true news story was a mainstream news source, respondents correctly identified the article as true 73 percent of the time.
  • However, for each article the professional fact checkers rated “false/misleading,” the study participants were as likely to say it was true as they were to say it was false or misleading. And roughly one-third of the time they told us they were unable to determine the veracity of the article. In other words, people on the whole were unable to correctly classify false or misleading news.
  • four of the articles in our study that fact checkers rated as false or misleading were related to the coronavirus.
  • All four articles promoted the unfounded rumor that the virus was intentionally developed in a laboratory. Although accidental releases of pathogens from labs have previously caused significant morbidity and mortality, in the current pandemic multiple pieces of evidence suggest this virus is of natural origin. There’s little evidence that the virus was manufactured or altered.
  • Only 30 percent of participants correctly classified them as false or misleading.
  • respondents seemed to have more trouble deciding what to think about false covid-19 stories, leading to a higher proportion of “could not determine” responses than we saw for the stories on other topics our professional fact checkers rated as “false/misleading.” This finding suggests that it may be particularly difficult to identify misinformation in newly emerging topics
  • Study participants with higher levels of education did better on identifying both fake news overall and coronavirus-related fake news — but were far from being able to correctly weed out misinformation all of the time
  • In fact, no group, regardless of education level, was able to correctly identify the stories that the professional fact checkers had labeled as false or misleading more than 40 percent of the time.
  • Taken together, our findings suggest that there is widespread potential for vulnerability to misinformation when it first appears online. This is especially worrying during the current pandemic
  • In the current environment, misinformation has the potential to undermine social distancing efforts, to lead people to hoard supplies, or to promote the adoption of potentially dangerous fake cures.
  • our findings suggest that non-trivial numbers of people will believe false information to be true when they first encounter it. And it suggests that efforts to remove coronavirus-related misinformation will need to be swift — and implemented early in an article’s life-cycle — to stop the spread of something else that’s dangerous: misinformation.
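
The scoring rule the study describes (treat six fact checkers' most common rating as the "correct" answer, then measure respondents against it) is straightforward to express in code. Here is a minimal sketch with invented ratings, not the researchers' actual pipeline or data.

```python
from collections import Counter

# The modal rating from six fact checkers is treated as ground truth,
# and respondents are scored against it. All data here are invented.

fact_checker_ratings = {
    "article_1": ["true"] * 5 + ["cannot determine"],
    "article_2": ["false/misleading"] * 4 + ["cannot determine"] * 2,
}

respondent_answers = {
    "article_1": ["true", "true", "false/misleading"],
    "article_2": ["true", "cannot determine", "false/misleading"],
}

for article, ratings in fact_checker_ratings.items():
    truth = Counter(ratings).most_common(1)[0][0]
    answers = respondent_answers[article]
    accuracy = sum(answer == truth for answer in answers) / len(answers)
    print(f"{article}: ground truth {truth!r}, respondent accuracy {accuracy:.0%}")
```
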
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated at the top of his class in electrical engineering at the California Institute of Technology.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what came before
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
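The "single bit flip" is easy to picture: flipping one bit in memory turns one stored value into a very different one. A rough illustration, with variable names invented for the example rather than taken from Toyota's code:

```python
# How a single flipped bit changes a stored value. Names are hypothetical.
task_alive = 1                     # flag a fail-safe routine might check
print(task_alive ^ (1 << 0))       # 0 -- the "alive" task now looks dead

speed_target = 48                  # binary 0b110000
print(speed_target ^ (1 << 6))     # 112 -- one flipped bit more than doubles it
```

If the fail-safe logic itself depends on values that a flipped bit can corrupt, as Barr's team argued, it offers no protection against exactly the fault it was meant to catch.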
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
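The "liveness" these tools share boils down to one loop: watch the source, re-run it on every change, show the result immediately. A bare-bones sketch of the idea follows; it is not how Playgrounds or Light Table are actually implemented, and the file path is illustrative.

```python
# Minimal "live coding" loop: re-run a script whenever it changes,
# so the output updates as you edit. Not any real tool's architecture.
import os
import time
import runpy

PATH = "sketch.py"  # hypothetical file the programmer is editing
last_mtime = 0.0
while True:
    try:
        mtime = os.path.getmtime(PATH)
        if mtime != last_mtime:
            last_mtime = mtime
            runpy.run_path(PATH)       # show the program's effect at once
    except Exception as err:
        print("error:", err)           # keep watching even after a bad edit
    time.sleep(0.2)
```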
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
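That flowchart translates almost mechanically into a transition table, and the table is the model. A minimal sketch using the elevator states from the example (the event names are invented):

```python
# Elevator model as a transition table: (state, event) -> next state.
# Moves that aren't in the table simply don't exist in the model.
TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "start"): "moving",
    ("moving", "stop"): "door_closed",
}

def step(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event!r} while {state!r}")
    return TRANSITIONS[(state, event)]

state = "door_open"
for event in ("close_door", "start", "stop", "open_door"):
    state = step(state, event)
    print(state)
# step("door_open", "start") raises: the only way to get the elevator
# moving is to close the door first, just as the diagram makes obvious.
```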
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
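Newcombe's warning about "extremely rare" combinations is really a statement of arithmetic, which a back-of-the-envelope calculation makes vivid. The numbers below are illustrative, not taken from his paper:

```python
# At web scale, "one in a billion" is an everyday event.
requests_per_second = 1_000_000
p_rare = 1e-9                  # per-request probability of the rare combination
seconds_per_day = 86_400

expected_per_day = requests_per_second * p_rare * seconds_per_day
print(expected_per_day)        # 86.4 -- dozens of "impossible" events per day
```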
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
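TLA+ has its own mathematical notation, but the underlying move, enumerating every reachable state of a model and checking an invariant in each one, can be sketched in ordinary code. The toy checker below illustrates the idea only; it is not TLA+'s actual syntax or algorithm, and the model is invented:

```python
# Toy exhaustive model checker: explore every reachable state of a
# small model and assert an invariant in each. Illustrative only.
from collections import deque

def next_states(state):
    # Model: two processes that each increment a shared counter once.
    # state = (counter, pc0, pc1), where pc is 0 before and 1 after.
    counter, pc0, pc1 = state
    successors = []
    if pc0 == 0:
        successors.append((counter + 1, 1, pc1))
    if pc1 == 0:
        successors.append((counter + 1, pc0, 1))
    return successors

def check(initial, invariant):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        assert invariant(state), f"invariant violated in {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

# Invariant: the counter never exceeds the number of processes.
print(check((0, 0, 0), lambda s: s[0] <= 2), "states verified")
```

Unlike a test suite, which samples a few execution paths, this walks all of them; that exhaustiveness is what lets a specification be verified rather than merely tested.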
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

I Sent All My Text Messages in Calligraphy for a Week - Cristina Vanko - The Atlantic - 2 views

  • I decided to blend a newfound interest in calligraphy with my lifelong passion for written correspondence to create a new kind of text messaging. The idea: I wanted to message friends using calligraphic texts for one week. The average 18-to-24-year-old sends and gets something like 4,000 messages a month, which includes sending more than 500 texts a week, according to Experian. The week of my experiment, I only sent 100
  • We are a youth culture that heavily relies on emojis. I didn’t realize how much I depend on emojis and emoticons to express myself until I didn’t have them. Hand-drawn emoticons, though original, just aren’t the same: I couldn’t draw them as cleanly as a typeface renders them, and sketching emojis is too time-consuming. To bridge the gap between time and the need for graphic imagery, I sent out selfies on special occasions when my facial expression spoke louder than words.
  • That week, the sense of urgency I normally felt about my phone virtually vanished. It was like back when texts were rationed, and when I lacked anxiety about viewing "read" receipts. I didn’t feel naked without having my phone on me every moment. 
  • ...10 more annotations...
  • So while the experiment began as an exercise to learn calligraphy, it doubled as a useful sort of digital detox that revealed my relationship with technology. Here's what I learned:
  • Receiving handwritten messages made people feel special. The awesome feeling of receiving personalized mail really can be replicated with a handwritten text.
  • Handwriting allows for more self-expression. I found I could give words a certain flourish to mimic the intonation of spoken language. Expressing myself via handwriting could also give the illusion of real-time presence. One friend told me, “it’s like you’re here with us!”
  • Before I started, I established rules for myself: I could create only handwritten text messages for seven days, absolutely no using my phone’s keyboard. I had to write out my messages on paper, photograph them, then hit “send.” I didn’t reveal my plan to my friends unless asked
  • Sometimes you don't need to respond. Most conversations aren’t life or death situations, so it was refreshing to feel 100 percent present in all interactions. I didn’t interrupt conversations by checking social media or shooting text messages to friends. I was more in tune with my surroundings. On transit, I took part in people watching—which, yes, meant mostly watching people staring at their phones. I smiled more at passersby while walking since I didn’t feel the need to avoid human interaction by staring at my phone.
  • A phone isn't only a texting device. As I texted less, I used my phone less frequently—mostly because I didn’t feel the need to look at it to keep me busy, nor did I want to feel guilty for utilizing the keyboard through other applications. I still took photos, streamed music, and logged workouts since I felt okay with pressing buttons for selection purposes
  • People don’t expect to receive phone calls anymore. Texting brings about a less intimidating, more convenient experience. But it wasn’t that long ago when real-time voice calls were the norm. It’s clear to me that, these days, people prefer to be warned about an upcoming phone call before it comes in.
  • Having a pen and paper is handy at all times. Writing out responses is a great reminder to slow down and use your hands. While all keys on a keyboard feel the same, it’s difficult to replicate the tactile activity of tracing a letter’s shape
  • My sent messages were more thoughtful.
  • I was more careful with grammar and spelling. People often ignore the rules of grammar and spelling just to maintain the pace of texting conversation. But because a typical calligraphic text took minutes to craft, I had time to make sure I got things right. The usual texting acronyms and misspellings look absurd when texted with type, but they'd be especially ridiculous written by hand.
clairemann

Keep the Filibuster, There Are Better Ways to Reform | Time - 0 views

  • After passing an immense $1.9 trillion COVID aid package that was one of the most expensive and significant pieces of social legislation in a generation, the Biden administration realizes that much of the rest of its agenda—election reform, gun control, and civil rights—is dead on arrival in the Senate, a Senate that Democrats only narrowly control.
  • The reason, of course, is the filibuster, the procedural maneuver that allows 41 senators to block multiple forms of substantive legislation.
  • This would be a serious mistake that would enhance partisan polarization and increase political instability. There are better ways to achieve policy reform. There are better ways to lower the temperature of American politics.
  • ...12 more annotations...
  • I discovered that thoughtful progressives and thoughtful conservatives each suffered from different, deep fears about our political future. Progressives feared minoritarian rule. Conservatives feared majoritarian domination. Ending the filibuster, perversely enough, makes both fears more real.
  • The Republican Party has won exactly one popular vote for president since 1988, George W. Bush’s narrow 2.4 percent edge over John Kerry in 2004. Yet it won three presidential elections in that span of time
  • Republicans not only have a present electoral college advantage over Democrats, they also have inherent advantages in both the House and the Senate.
  • Do away with the filibuster, and it’s entirely possible that the next Republican government could enjoy immense legislative power without a majority of the popular vote. In fact, they could lose voters by margins numbering in the millions, yet still exercise decisive control over the government.
  • The Democratic Party is seeking to pass laws that would introduce dramatic changes in American elections, transform free speech doctrine, and potentially limit religious liberty.
  • The GOP, for example, is currently in the grips of a Trumpist base that prioritizes angry opposition over compromise. The party largely lacks a positive agenda, so (with some notable exceptions) its priority is clear: No compromise, even when compromise might be prudent. Stop the Democrats. Some Republicans have gone further, descending into a fantasy world of dark conspiracies.
  • Yes, through decentralization, de-escalation, and strategic moderation.
  • That means most Americans live in jurisdictions where, for example, election rules, civil rights laws, gun laws, and a wide variety of economic and social policies are within their partisan control.
  • Gridlock in Washington does not have to mean gridlock in government,
  • Research demonstrates that a majority of Americans are exhausted by partisan politics. Motivated minorities drive most American polarization.
  • A combination of redistricting reform and voting reforms like ranked-choice voting can limit the powers of partisan extremists. Ranked-choice voting—which allows voters to list candidates in order of preference—most notably can reduce the chances of highly-partisan pluralities dominating political primaries.
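The mechanics of ranked-choice (instant-runoff) voting reduce to a short elimination loop: count each ballot for its highest-ranked surviving candidate, and if no one holds a majority, drop the last-place candidate and recount. A minimal sketch with invented ballots:

```python
# Minimal instant-runoff tally; the ballots are made up for illustration.
from collections import Counter

def instant_runoff(ballots):
    alive = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its top-ranked surviving candidate.
        tally = Counter(
            next(c for c in ballot if c in alive)
            for ballot in ballots
            if any(c in alive for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader                          # majority winner
        alive.remove(min(tally, key=tally.get))    # eliminate last place

ballots = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("C", "B")]
print(instant_runoff(ballots))  # "C": B's elimination transfers a ballot
```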
  • The answer to polarization and gridlock is not partisan escalation. Ending the filibuster would only ramp up partisan acrimony and increase the level of fear and anxiety around American elections. There are better paths through American division. We should try those before we enable drastic measures like majoritarian dominance or minoritarian control.
clairemann

Why Some People Lie in Therapy | TIME - 1 views

  • Lying is, for better or worse, a behavior humans take part in at some point in their lives. On average, Americans tell one to two lies a day, multiple studies have suggested. But it’s where some people are fibbing that might come as a surprise.
  • Laura is far from alone. In a comprehensive 2015 study published in the American Psychological Association book Secrets and Lies in Psychotherapy, 93% of respondents admitted they had lied during therapy at least once.
  • The 2015 study found 61% of participants cited embarrassment as the main reason for dishonesty with their therapist.
  • ...4 more annotations...
  • Morin acknowledges many clients are scared of “getting in trouble” for what they confess in therapy. “They may worry that the therapist will terminate their sessions because they aren’t making progress or they may be concerned the therapist will somehow punish them,” she says.
  • “Sometimes people don’t really mean to lie, but they minimize their problems because they can’t quite accept them yet,” Morin says. “Someone with a substance abuse problem might insist she didn’t drink much this week even though she drank heavily every day. Individuals often need help coming to terms with their problems before they can be honest with themselves.”
  • “I don’t want to talk about trauma because discussing it is going to overwhelm me,” Farber says of this mindset. “It’s going to bring me back to an experience or experiences that have been so difficult [and] so overwhelming, and I’m fearful that if I talk about it, it will re-traumatize me.”
  • Altering the truth in an attempt at kindness is still problematic, though, because it limits how effective treatment can be. “If you’re censoring your experience, then the therapist can’t be helpful to you,” Kolod says. Therapists are aware clients sometimes omit the truth or downplay the significance of certain life experiences, and there has been research on how mental health professionals can better spot dishonesty and adapt their treatment accordingly.
Javier E

Opinion | Why Covid's Airborne Transmission Was Acknowledged So Late - The New York Times - 0 views

  • A week ago, more than a year after the World Health Organization declared that we face a pandemic, a page on its website titled “Coronavirus Disease (Covid-19): How Is It Transmitted?” got a seemingly small update.
  • The revised response still emphasizes transmission in close contact but now says it may be via aerosols — smaller respiratory particles that can float — as well as droplets. It also adds a reason the virus can also be transmitted “in poorly ventilated and/or crowded indoor settings,” saying this is because “aerosols remain suspended in the air or travel farther than 1 meter.”
  • on Friday, the Centers for Disease Control and Prevention also updated its guidance on Covid-19, clearly saying that inhalation of these smaller particles is a key way the virus is transmitted, even at close range, and put it on top of its list of how the disease spreads.
  • ...38 more annotations...
  • But these latest shifts challenge key infection control assumptions that go back a century, putting a lot of what went wrong last year in context
  • They may also signal one of the most important advancements in public health during this pandemic.
  • If the importance of aerosol transmission had been accepted early, we would have been told from the beginning that it was much safer outdoors, where these small particles disperse more easily, as long as you avoid close, prolonged contact with others.
  • We would have tried to make sure indoor spaces were well ventilated, with air filtered as necessary.
  • Instead of blanket rules on gatherings, we would have targeted conditions that can produce superspreading events: people in poorly ventilated indoor spaces, especially if engaged over time in activities that increase aerosol production, like shouting and singing
  • We would have started using masks more quickly, and we would have paid more attention to their fit, too. And we would have been less obsessed with cleaning surfaces.
  • The implications of this were illustrated when I visited New York City in late April — my first trip there in more than a year.
  • A giant digital billboard greeted me at Times Square, with the message “Protecting yourself and others from Covid-19. Guidance from the World Health Organization.”
  • That billboard neglected the clearest epidemiological pattern of this pandemic: The vast majority of transmission has been indoors, sometimes beyond a range of three or even six feet. The superspreading events that play a major role in driving the pandemic occur overwhelmingly, if not exclusively, indoors.
  • The billboard had not a word about ventilation, nothing about opening windows or moving activities outdoors, where transmission has been rare and usually only during prolonged and close contact. (Ireland recently reported 0.1 percent of Covid-19 cases were traced to outdoor transmission.)
  • Mary-Louise McLaws, an epidemiologist at the University of New South Wales in Sydney, Australia, and a member of the W.H.O. committees that craft infection prevention and control guidance, wanted all this examined but knew the stakes made it harder to overcome the resistance. She told The Times last year, “If we started revisiting airflow, we would have to be prepared to change a lot of what we do.” She said it was a very good idea, but she added, “It will cause an enormous shudder through the infection control society.”
  • In contrast, if the aerosols had been considered a major form of transmission, in addition to distancing and masks, advice would have centered on ventilation and airflow, as well as time spent indoors. Small particles can accumulate in enclosed spaces, since they can remain suspended in the air and travel along air currents. This means that indoors, three or even six feet, while helpful, is not completely protective, especially over time.
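The accumulation claim follows from a standard well-mixed-room model, in which indoor concentration obeys dC/dt = E/V − aC for emission rate E, room volume V, and air-exchange rate a. A rough numerical sketch; every parameter value here is invented for illustration, not taken from the article:

```python
# Well-mixed room: C(t) = (E / (V * a)) * (1 - exp(-a * t)).
# All values are illustrative, not epidemiological estimates.
import math

E = 50.0   # aerosol units emitted per hour by one occupant
V = 75.0   # room volume in cubic meters

for ach in (0.5, 6.0):                    # poorly vs. well ventilated
    for hours in (0.5, 1.0, 2.0):
        c = (E / (V * ach)) * (1 - math.exp(-ach * hours))
        print(f"{ach} air changes/hour, {hours} h: {c:.3f}")
# At 0.5 air changes per hour the level keeps climbing for hours;
# at 6 it plateaus quickly, at one-twelfth of that steady state.
```

The point is qualitative: distance alone does not cap exposure indoors, because the whole room fills over time unless air is exchanged or filtered.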
  • Meanwhile, many countries allowed their indoor workplaces to open but with inadequate aerosol protections. There was no attention to ventilation, to installing air filters as necessary, or even to opening windows when possible; attention went instead to having people distance three or six feet, sometimes not requiring masks beyond that distance, or to spending money on hard plastic barriers, which may be useless at best
  • To see this misunderstanding in action, look at what’s still happening throughout the world. In India, where hospitals have run out of supplemental oxygen and people are dying in the streets, money is being spent on fleets of drones to spray anti-coronavirus disinfectant in outdoor spaces. Parks, beaches and outdoor areas keep getting closed around the world. This year and last, organizers canceled outdoor events for the National Cherry Blossom Festival in Washington, D.C. Cambodian customs officials advised spraying disinfectant outside vehicles imported from India. The examples are many.
  • clear evidence doesn’t easily overturn tradition or overcome entrenched feelings and egos. John Snow, often credited as the first scientific epidemiologist, showed that a contaminated well was responsible for a 1854 London cholera epidemic by removing the suspected pump’s handle and documenting how the cases plummeted afterward. Many other scientists and officials wouldn’t believe him for 12 years, when the link to a water source showed up again and became harder to deny.
  • Along the way to modern public health shaped largely by the fight over germs, a theory of transmission promoted by the influential public health figure Charles Chapin took hold
  • Dr. Chapin asserted in the early 1900s that respiratory diseases were most likely spread at close range by people touching bodily fluids or ejecting respiratory droplets, and did not allow for the possibility that such close-range infection could occur by inhaling small floating particles others emitted
  • In a contemporary example of this attitude, the initial public health report on the Mount Vernon choir case said that it may have been caused by people “sitting close to one another, sharing snacks and stacking chairs at the end of the practice,” even though almost 90 percent of the people there developed symptoms of Covid-19
  • It was in this context in early 2020 that the W.H.O. and the C.D.C. asserted that SARS-CoV-2 was transmitted primarily via these heavier, short-range droplets, and provided guidance accordingly
  • Amid the growing evidence, in July, hundreds of scientists signed an open letter urging the public health agencies, especially the W.H.O., to address airborne transmission of the coronavirus.
  • Last October, the C.D.C. published updated guidance acknowledging airborne transmission, but as a secondary route under some circumstances, until it acknowledged airborne transmission as crucial on Friday. And the W.H.O. kept inching forward in its public statements, most recently a week ago.
  • Linsey Marr, a professor of engineering at Virginia Tech who made important contributions to our understanding of airborne virus transmission before the pandemic, pointed to two key scientific errors — rooted in a lot of history — that explain the resistance, and also opened a fascinating sociological window into how science can get it wrong and why.
  • Dr. Marr said that if you inhale a particle from the air, it’s an aerosol.
  • biomechanically, she said, nasal transmission faces obstacles, since nostrils point downward and the physics of particles that large makes it difficult for them to move up the nose. And in lab measurements, people emit far more of the easier-to-inhale aerosols than the droplets, she said, and even the smallest particles can be virus laden, sometimes more so than the larger ones, seemingly because of how and where they are produced in the respiratory tract.
  • Second, she said, proximity is conducive to transmission of aerosols as well because aerosols are more concentrated near the person emitting them. In a twist of history, modern scientists have been acting like those who equated stinky air with disease, by equating close contact, a measure of distance, only with the larger droplets, a mechanism of transmission, without examination.
  • Since aerosols also infect at close range, measures to prevent droplet transmission — masks and distancing — can help dampen transmission for airborne diseases as well. However, this oversight led medical people to circularly assume that if such measures worked at all, droplets must have played a big role in their transmission.
  • Another dynamic we’ve seen is something that is not unheard-of in the history of science: setting a higher standard of proof for theories that challenge conventional wisdom than for those that support it.
  • Another key problem is that, understandably, we find it harder to walk things back. It is easier to keep adding exceptions and justifications to a belief than to admit that a challenger has a better explanation.
  • The ancients believed that all celestial objects revolved around the earth in circular orbits. When it became clear that the observed behavior of the celestial objects did not fit this assumption, those astronomers produced ever-more-complex charts by adding epicycles — intersecting arcs and circles — to fit the heavens to their beliefs.
  • He was also concerned that belief in airborne transmission, which he associated with miasma theories, would make people feel helpless and drop their guard against contact transmission. This was a mistake that would haunt infection control for the next century and more.
  • So much of what we have done throughout the pandemic — the excessive hygiene theater and the failure to integrate ventilation and filters into our basic advice — has greatly hampered our response.
  • Some of it, like the way we underused or even shut down outdoor space, isn’t that different from the 19th-century Londoners who flushed the source of their foul air into the Thames and made the cholera epidemic worse.
  • Righting this ship cannot be a quiet process — updating a web page here, saying the right thing there. The proclamations that we now know are wrong were so persistent and so loud for so long.
  • the progress we’ve made might lead to an overhaul in our understanding of many other transmissible respiratory diseases that take a terrible toll around the world each year and could easily cause other pandemics.
  • So big proclamations require probably even bigger proclamations to correct, or the information void, unnecessary fears and misinformation will persist, damaging the W.H.O. now and in the future.
  • I’ve seen our paper used in India to try to reason through aerosol transmission and the necessary mitigations. I’ve heard of people in India closing their windows after hearing that the virus is airborne, likely because they were not being told how to respond
  • The W.H.O. needs to address these fears and concerns, treating it as a matter of profound change, so other public health agencies and governments, as well as ordinary people, can better adjust.
  • It needs to begin a campaign proportional to the importance of all this, announcing, “We’ve learned more, and here’s what’s changed, and here’s how we can make sure everyone understands how important this is.” That’s what credible leadership looks like. Otherwise, if a web page is updated in the forest without the requisite fanfare, how will it matter?
margogramiak

How the brain remembers right place, right time: Studies could lead to new ways to enhance memory for those with traumatic brain injury or Alzheimer's disease -- ScienceDaily - 0 views

  • how the brain encodes time and place into memories.
    • margogramiak
       
      This is something we talked about in class... the brain isn't reliable when it comes to this.
  • What the team found was exciting: Not only did they identify a robust population of time cells, but the firing of these cells predicted how well individuals were able to link words together in time (a phenomenon called temporal clustering). Finally, these cells appear to exhibit phase precession in humans, as predicted.
    • margogramiak
       
      That's super interesting. I can see why this would be helpful research
  • ...8 more annotations...
  • hippocampus,
    • margogramiak
       
      Familiar, I think we talked about in class/read about
  • Electrodes implanted in these patients' brains help their surgeons precisely identify the seizure foci and also provide valuable information on the brain's inner workings, Lega says.
    • margogramiak
       
      Wow, that's pretty cool.
  • new treatments to combat memory loss from conditions such as traumatic brain injury or Alzheimer's disease.
    • margogramiak
       
      That would be amazing
  • In addition, while rats are actively exploring an environment, place cells are further organized into "mini-sequences" that represent a virtual sweep of locations ahead of the rat. These radar-like sweeps happen roughly 8-10 times per second and are thought to be a brain mechanism for predicting immediately upcoming events or outcomes.
    • margogramiak
       
      Wow. I like how this article puts super complex ideas into words that are easier to understand.
  • it was unclear how the hippocampus was able to produce such sequences.
    • margogramiak
       
      I'm sure there will be specific research on this eventually
  • chocolate milk
    • margogramiak
       
      why chocolate milk I wonder...
  • However, taking a closer look at the data, the researchers found something new: As the rats moved through these spaces, their neurons not only exhibited forward, predictive mini-sequences, but also backward, retrospective mini-sequences. The forward and backward sequences alternated with each other, each taking only a few dozen milliseconds to complete.
    • margogramiak
       
      How can information like this be applied to treating dementia though?
  • "In the past few decades, there's been an explosion in new findings about memory,"
    • margogramiak
       
      That's great
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on the evidence at hand.
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
knudsenlu

Will the Quantum Nature of Gravity Finally Be Measured? - The Atlantic - 0 views

  • In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.
  • His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics.
  • The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.
  • ...4 more annotations...
  • Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity—where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed—always hide behind the horizons of black holes.
  • Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.
  • The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.
  • If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.
tongoscar

Ivy Tech moves toward 8-week courses across state | Local News | greensburgdailynews.com - 0 views

  • “It is more focused and faster to complete,” said Ivy Tech President Sue Ellspermann. “For working adults, that means less time for life to get in the way. Part-time students focus on just one class at a time. Full-time students focus on just two to three classes at a time.”
  • “It works well for me,” he said, noting that he has a 4.0 GPA. He’s taking two classes this eight-week session, and he will take another three for the eight-week session that starts in March.
  • Ivy Tech statewide now offers about 60% of all courses in the eight-week format, and students are passing at significantly higher rates and dropping fewer classes, Ellspermann said.
  • ...2 more annotations...
  • Last year and this fall, the college statewide saw about a 6% improvement in pass rates, a more than 2% reduction in withdrawal rates and a more than 2% reduction in students who just stop showing up to school, she said.
  • Two-thirds of Ivy Tech students are part-time, and the changes enable those students to progress more quickly, if they choose. “I think with our students and their work schedules and commitments in life, it gives them flexibility,” Ellspermann said.