Home/ TOK Friends/ Group items tagged problem-solving

jlessner

Should Schools Teach Personality? - NYTimes.com - 0 views

  • Self-control, curiosity, “grit” — these qualities may seem more personal than academic, but at some schools, they’re now part of the regular curriculum.
  • Some researchers say personality could be even more important than intelligence when it comes to students’ success in school. But critics worry that the increasing focus on qualities like grit will distract policy makers from problems with schools.
  • A number of researchers have been successful in improving students’ conscientiousness, Dr. Poropat said in an interview. One team, he said, found that when elementary-school students get training in “effortful control,” a trait similar to conscientiousness, “it not only improves the students’ performance at that point in their education, but also has follow-on effects a number of years afterward.” Another study found that a 16-week problem-solving training program could increase retirees’ levels of openness.
  • In a 2014 paper, the Australian psychology professor Arthur E. Poropat cites research showing that both conscientiousness (which he defines as a tendency to be “diligent, dutiful and hardworking”) and openness (characterized by qualities like creativity and curiosity) are more highly correlated with student performance than intelligence is.
  • Some already have. “Grit” — which the psychology professor Angela Duckworth of the University of Pennsylvania and her co-authors define in a 2007 paper as “perseverance and passion for long-term goals,” and which they see as overlapping in some ways with conscientiousness — has become part of the curriculum at a number of schools.
  • “We know that these noncognitive traits can be taught. We also know that it is necessary for success. You look at anybody who has had long-term sustainable success, and every one of them exhibited at some point this grit, this tenacity to keep going.”
  • One result of the class, which includes lessons on people, like Malala Yousafzai, who have overcome significant challenges: Students “are now willing to do the hard thing instead of always running to what was easy.” Ms. Benedix also coordinates a districtwide grit initiative — since it began, she says, the number of high schoolers taking advanced-placement classes has increased significantly.
Javier E

The Trouble With Brain Science - NYTimes.com - 0 views

  • What would a good theory of the brain actually look like?
  • Different kinds of sciences call for different kinds of theories. Physicists, for example, are searching for a “grand unified theory” that integrates gravity, electromagnetism and the strong and weak nuclear forces into a neat package of equations.
  • The living world is bursting with variety and unpredictable complexity, because biology is the product of historical accidents, with species solving problems based on happenstance that leads them down one evolutionary road rather than another.
  • But biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking.
  • What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.
  • An example is the discovery of DNA, which allowed us to understand how genetic information could be represented and replicated in a physical structure. In one stroke, this bridge transformed biology from a mystery — in which the physical basis of life was almost entirely unknown — into a tractable if challenging set of problems.
  • We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.
Javier E

History News Network | An Open Letter to the Harvard Business School Dean Who Gave Hist... - 0 views

  • I would like to make some gratuitous curricular and pedagogical suggestions for business schools.
  • Foremost, business schools, at least those that purport to mold leaders, should stop training and start educating. Their graduates should be able to think and problem-solve for themselves, not just copy the latest fad.
  • Business schools generally do not cultivate or even select for general intelligence and breadth of understanding but instead breed shrewdness and encourage narrow technical knowledge.
  • To try to cover up the obvious shortcomings of their profoundly antisocial pedagogical model, many business schools tack on courses in ethics, corporate social responsibility, and the like, then shrug their collective shoulders when their graduates behave in ways that would make Vikings and pirates blush.
  • The only truly socially responsible management curriculum would be one built from the ground up out of the liberal arts – economics, of course, but also history, philosophy, political science, psychology, and sociology – because those are the core disciplines of social scientific and humanistic inquiry.
  • Properly understood, they are not “subjects” but ways of thinking about human beings, their behaviors, their institutions (of which businesses are just a small subset), and the ways they interact with the natural world. Only intelligent people with broad and deep backgrounds in the liberal arts can consistently make ethical decisions that are good for stakeholders, consumers, and the world they share.
  • Precisely because they are not deeply rooted in the liberal arts, many business schools try to inculcate messages into the brains of their students that are unscientific, mere fashions that cycle into and then out of popularity.
  • No one can seriously purport to understand corporate X (finance, formation, governance, social responsibility, etc.) today who does not understand X’s origins and subsequent development. Often, then, the historian of corporate X is the real expert, not the business school professor who did a little consulting, a few interviews, and a survey.
  • Lurking somewhere in the background of most great business leaders, ones who helped their companies, their customers, and the world, is a liberal arts education.
  • Instead of forcing students to choose between a broad liberal arts degree or a business career, business schools and liberal arts departments ought to work together to integrate all methods of knowing into a seamless whole focused on key questions and problems important to us all
  • There is not a single question of importance in the business world that does not have economic, historical, philosophical, political, psychological, and sociological components that are absolutely crucial to making good (right and moral) decisions. So why continue to divide understanding of the real world into hoary compartments
Javier E

Facebook will now ask users to rank news organizations they trust - The Washington Post - 0 views

  • Zuckerberg wrote Facebook is not “comfortable” deciding which news sources are the most trustworthy in a “world with so much division."
  • "We decided that having the community determine which sources are broadly trusted would be most objective," he wrote.
  • The new trust rankings will emerge from surveys the company is conducting. "Broadly trusted" outlets that are affirmed by a significant cross-section of users may see a boost in readership, while less known organizations or start-ups receiving poor ratings could see their web traffic decline.
  • The company's changes include an effort to boost the content of local news outlets, which have suffered sizable subscription and readership declines.
  • The changes follow another major News Feed redesign, announced last week, in which Facebook said users would begin to see less content from news organizations and brands in favor of "meaningful" posts from friends and family.
  • Currently, 5 percent of Facebook posts are generated by news organizations; that number is expected to drop to 4 percent after the redesign, Zuckerberg said.
  • On Friday, Google announced it would cancel a two-month-old experiment, called Knowledge Panel, that informed its users that a news article had been disputed by independent fact-checking organizations. Conservatives had complained the feature unfairly targeted a right-leaning outlet.
  • More than two-thirds of Americans now get some of their news from social media, according to Pew Research Center.
  • That shift has empowered Facebook and Google, putting them in an uncomfortable position of deciding what news they should distribute to their global audiences. But it also has led to questions about whether these corporations should be considered media companies.
  • "Just by putting things out to a vote in terms of what the community would find trustworthy undermines the role for any serious institutionalized process to determine what’s quality and what’s not,” he said.
  • The change follows further criticism that the social network had become vulnerable to bad actors seeking to spread disinformation.
  • Jay Rosen, a journalism professor at New York University, said that Facebook learned the wrong lesson from Trending Topics, which was to try to avoid politics at all costs
  • “One of the things that can happen if you are determined to avoid politics at all costs is you are driven to illusory solutions,” he said. “I don’t think there is any alternative to using your judgement. But Facebook is convinced that there is. This idea that they can avoid judgement is part of their problem.”
  • Facebook revealed few details about how it is conducting its trust surveys.
  • "The hard question we've struggled with is how to decide what news sources are broadly trusted," Zuckerberg wrote. "We could try to make that decision ourselves, but that's not something we're comfortable with. We considered asking outside experts, which would take the decision out of our hands but would likely not solve the objectivity problem. Or we could ask you -- the community -- and have your feedback determine the ranking."
  • Some experts wondered whether Facebook's latest effort could be gamed.
  • "This seems like a positive step toward improving the news environment on Facebook," Diresta said. "That said, the potential downside is that the survey approach unfairly penalizes emerging publications."
anonymous

Talking to Children About Anti-Asian Bias - The New York Times - 1 views

  • I’m Helping My Korean-American Daughter Embrace Her Identity to Counter Racism
  • “I’m not sure Asian-American families can avoid ‘the talk’ any longer,” one expert said.
  • My daughter was the only kid who didn’t have a separate Korean name when we signed her up for Korean classes three years ago. The blank space on the registration form looked at me, as if to say we’d forgotten something as parents.
  • my spouse and I, who are both Asian-American, never thought to give her a name like Seohyun or Haeun. Though Korean was the language I spoke growing up in New York with my immigrant parents, I’ve forgotten many of the words I used to know. Yet hearing it spoken still conjures the sense of home.
  • I had no ambition to teach my daughter Korean, but when she turned 5, she insisted she wanted to learn so she could talk to her halmoni — her grandmother. So I conceded.
  • On Seollal, the Korean New Year, she and the other girls in her class sported traditional silk outfits. The floor-length skirts flapped to show their patterned leggings underneath, in a church basement that smelled of steamed rice and sesame oil.
  • Still, I kept asking my daughter when she would try soccer, which seemed to me the “American thing” to do on a Saturday morning. It was held at the same time as Korean School. I kept thinking about the parents on the sidelines and wondered what we were missing
  • A classmate had written that coronavirus was a problem and that keeping Chinese people out of the country was the solution.
  • In the summer of 2020, the Stop A.A.P.I. Hate Youth Campaign interviewed 990 Asian-American young adults across the United States about their experiences during the pandemic, and found that one in four had reported experiencing racism in some way
  • Kids said that they had been bullied, physically harassed and had racial slurs shouted at them
  • a child who hears a racist remark hears this: “You don’t belong. You’re other. You’re different.”
  • We are one of only a handful of Asian-American families in our school, which prides itself on teaching about inclusion. Earlier in the year, our daughter came home talking about Malala Yousafzai and Ruby Bridges, asking where we would have been sitting on the bus in times of segregation.
  • But when a girl in our neighborhood pointed to my daughter and said they could not play together because of the “China virus,” I wept.
  • During lockdown, we devoured books with Asian-American heroines by authors like Grace Lin and Min Jin Lee
  • I struggled to find the words to explain to my daughter why Chinese-Americans were forced to live in these barracks; why they were separated from their families.
  • She doesn’t yet know about the 84-year-old man who died two days after being shoved to the sidewalk in Chinatown in San Francisco last month, or about the six Asian-American women killed by a shooter in Atlanta this week.
  • While attacks on Asian people aren’t always charged as hate crimes, many Asians feel an increasing sense of vulnerability.
  • Kids begin to develop a sense of racial identity by age 3 or 4, Dr. Yip said.
  • Once they enter grade school, they hear about race and racism from peers and the media they consume.
  • “By not talking about race” and what they’re hearing, Dr. Yip said, “you run the risk of intensifying stereotypes,” which can then lead to racism.
  • “We think we’re protecting our kids, by not talking about racist incidents,” Dr. Chen added. “But actually not talking about it is not helping.” Building their racial identity is what helps them feel safe.
  • When a racist incident happens to your child, Dr. Chen said, don’t jump into solving the problem. First ask how they feel and listen. Tell them you don’t know all the answers, but you can find solutions together.
  • Make sure that the children who were targeted know it wasn’t their fault, Dr. Chen added. Role play what you will do if it happens again and tell them, Mom or Dad or your caregivers will keep you safe.
  • “I’m not sure Asian-American families can avoid ‘the talk’ any longer.” It’s a talk that must include listening to, and coming to understand, all groups who face racial bias.
  • In hindsight, I now see that Korean School has done more for my family than soccer ever could. It’s a place where my daughter sees she isn’t alone. There are families who look like ours and wrestle with the same questions, about what we will forget, and what we will keep from our immigrant families’ pasts.
  • My daughter has gone from sewing masks for her bears, to carrying Black Lives Matter posters and voting with me in a presidential election.
anonymous

Beverly Cleary, Beloved Children's Book Author, Dies at 104 - The New York Times - 0 views

  • Beverly Cleary, Beloved Children’s Book Author, Dies at 104
  • Her funny stories about Henry Huggins and his dog Ribsy, the sisters Ramona and Beezus Quimby, and a motorcycling mouse named Ralph never talked down to readers.
  • Beverly Cleary, who enthralled tens of millions of young readers with the adventures and mishaps of Henry Huggins and his dog Ribsy, the bratty Ramona Quimby and her older sister Beezus, and other residents of Klickitat Street, died on Thursday in Carmel, Calif.
  • She was 104.
  • Always sympathetic, never condescending, she presented her readers with characters they knew and understood, the 20th-century equivalents of Huck Finn or Louisa May Alcott’s little women, and every bit as popular: Her books sold more than 85 million copies
  • “Cleary is funny in a very sophisticated way,
  • At her library job in Yakima, Ms. Cleary had become dissatisfied with the books being offered to her young patrons
  • The protagonists tended to be aristocratic English children who had nannies and pony carts, or poor children whose problems disappeared when a long-lost rich relative turned up in the last chapter.
  • “I wanted to read funny stories about the sort of children I knew,” she wrote, “and I decided that someday when I grew up I would write them.”
  • After marrying Clarence Cleary, a graduate student she had met at Berkeley, she moved to San Francisco and, while her husband served in the military, sold children’s books at the Sather Gate Book Shop in Berkeley and worked as a librarian at Camp John T. Knight in Oakland.
  • “She gets very close to satire, which I think is why adults like her, but she’s still deeply respectful of her characters — nobody gets a laugh at the expense of another. I think kids appreciate that they’re on a level playing field with adults.”
  • She had been particularly touched by the plight of a group of boys who asked her, “Where are the books about us?”
  • “Why didn’t authors write books about everyday problems that children could solve by themselves?”
  • “Why weren’t there more stories about children playing? Why couldn’t I find more books that would make me laugh? These were the books I wanted to read, and the books I was eventually to write.”
  • “When I began ‘Henry Huggins’ I did not know how to write a book, so I mentally told the stories that I remembered and wrote them down as I told them,”
  • Ramona Quimby, introduced in a small role as the annoying younger sister of Henry’s friend Beatrice, better known as Beezus, emerged as a superstar.
  • “I thought like Ramona, but I was a very well-behaved little girl.”
  • By the time “Beezus and Ramona” was published, Ms. Cleary had twins, Malcolm and Marianne, to provide her with fresh material. They survive her, along with three grandchildren and one great-grandchild. Her husband died in 2004.
  • Ramona mounts a campaign to have her father quit smoking, a habit he abuses after losing his job.
  • That book won the Newbery Medal in 1984. A sequel, “Strider,” followed in 1991.
  • “That little girl, who has remained with me, prevents me from writing down to children, from poking fun at my characters, and from writing an adult reminiscence about childhood instead of a book to be enjoyed by children.”
caelengrubb

Why Is Memory So Good and So Bad? - Scientific American - 0 views

  • Memories of visual images (e.g., dinner plates) are stored in what is called visual memory.
  • Our minds use visual memory to perform even the simplest of computations; from remembering the face of someone we’ve just met, to remembering what time it was last we checked. Without visual memory, we wouldn’t be able to store—and later retrieve—anything we see.
  • Just as a computer’s memory capacity constrains its abilities, visual memory capacity has been correlated with a number of higher cognitive abilities, including academic success, fluid intelligence (the ability to solve novel problems), and general comprehension.
  • For many reasons, then, it would be very useful to understand how visual memory facilitates these mental operations, as well as constrains our ability to perform them
  • Visual working memory is where visual images are temporarily stored while your mind works away at other tasks—like a whiteboard on which things are briefly written and then wiped away. We rely on visual working memory when remembering things over brief intervals, such as when copying lecture notes to a notebook.
  • UC Davis psychologists Weiwei Zhang and Steven Luck have shed some light on this problem. In their experiment, participants briefly saw three colored squares flashed on a computer screen, and were asked to remember the colors of each square. Then, after 1, 4 or 10 seconds the squares re-appeared, except this time their colors were missing, so that all that was visible were black squares outlined in white.
  • The participants had a simple task: to recall the color of one particular square, not knowing in advance which square they would be asked to recall. The psychologists assumed that measuring how visual working memory behaves over increasing demands (i.e., the increasing durations of 1, 4 or 10 seconds) would reveal something about how the system works.
  • If short-term visual memories fade away — if they are gradually wiped away from the whiteboard — then after longer intervals participants’ accuracy in remembering the colors should still be high, deviating only slightly from the square’s original color. But if these memories are wiped out all at once — if the whiteboard is left untouched until, all at once, scrubbed clean — then participants should make very precise responses (corresponding to instances when the memories are still untouched) and then, after the interval grows too long, very random guesses.
  • Which is exactly what happened: Zhang & Luck found that participants were either very precise, or they completely guessed; that is, they either remembered the square’s color with great accuracy, or forgot it completely.
  • But this, it turns out, is not true of all memories
  • In a recent paper, researchers at MIT and Harvard found that, if a memory can survive long enough to make it into what is called “visual long-term memory,” then it doesn’t have to be wiped out at all.
  • Talia Konkle and colleagues showed participants a stream of three thousand images of different scenes, such as ocean waves, golf courses or amusement parks. Then, participants were shown two hundred pairs of images—an old one they had seen in the first task, and a completely new one—and asked to indicate which was the old one.
  • Participants were remarkably accurate at spotting differences between the new and old images — 96 percent.
  • In a recent review, researchers at Harvard and MIT argue that the critical factor is how meaningful the remembered images are—whether the content of the images you see connects to pre-existing knowledge about them
  • This prior knowledge changes how these images are processed, allowing thousands of them to be transferred from the whiteboard of short-term memory into the bank vault of long-term memory, where they are stored with remarkable detail.
  • Together, these experiments suggest why memories are not eliminated equally— indeed, some don’t seem to be eliminated at all. This might also explain why we’re so hopeless at remembering some things, and yet so awesome at remembering others.
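The all-or-none pattern Zhang & Luck report can be made concrete with a toy simulation (the parameter values below are invented for illustration, not taken from their paper): model each response on a color wheel as either a precise recall with small Gaussian error, or a uniform random guess, and the error distribution splits into a sharp peak plus a flat floor rather than a gradually widening blur.

```python
import random

def simulate_responses(n=10_000, p_remember=0.6, precision_sd=10.0, seed=1):
    """Simulate color-wheel recall errors in degrees (-180..180).

    With probability p_remember the memory is intact and the reported
    color deviates only slightly from the true one; otherwise the
    memory is gone and the response is a uniform random guess --
    the 'scrubbed clean all at once' account, not gradual fading.
    """
    rng = random.Random(seed)
    errors = []
    for _ in range(n):
        if rng.random() < p_remember:
            e = rng.gauss(0.0, precision_sd)   # precise recall
        else:
            e = rng.uniform(-180.0, 180.0)     # pure guess
        errors.append(max(-180.0, min(180.0, e)))
    return errors

def share_within(errors, tol=30.0):
    """Fraction of responses landing within tol degrees of the true color."""
    return sum(abs(e) <= tol for e in errors) / len(errors)
```

Lowering p_remember (as one might for a longer retention interval) shifts mass from the sharp peak to the uniform floor without widening the peak itself, which is the signature that distinguishes all-or-none loss from gradual fading.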
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on the evidence at hand.
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • For Strevens, it is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
Javier E

How to Remember Everything You Want From Non-Fiction Books | by Eva Keiffenheim, MSc | ... - 0 views

  • A Bachelor’s degree taught me how to learn to ace exams. But it didn’t teach me how to learn to remember.
  • 65% to 80% of students answered “no” to the question “Do you study the way you do because somebody taught you to study that way?”
  • the most-popular Coursera course of all time: Dr. Barbara Oakley’s free course on “Learning how to Learn.” So did I. And while this course taught me about chunking, recalling, and interleaving
  • ...66 more annotations...
  • I learned something more useful: the existence of non-fiction literature that can teach you anything.
  • something felt odd. Whenever a conversation revolved around a serious non-fiction book I read, such as ‘Sapiens’ or ‘Thinking Fast and Slow,’ I could never remember much. Turns out, I hadn’t absorbed as much information as I’d believed. Since I couldn’t remember much, I felt as though reading wasn’t an investment in knowledge but mere entertainment.
  • When I opened up about my struggles, many others confessed they also can’t remember most of what they read, as if forgetting is a character flaw. But it isn’t.
  • It’s the way we work with books that’s flawed.
  • there’s a better way to read. Most people rely on techniques like highlighting, rereading, or, worst of all, completely passive reading, which are highly ineffective.
  • Since I started applying evidence-based learning strategies to reading non-fiction books, many things have changed. I can explain complex ideas during dinner conversations. I can recall interesting concepts and link them in my writing or podcasts. As a result, people come to me for all kinds of advice.
  • What’s the Architecture of Human Learning and Memory?
  • Human brains don’t work like recording devices. We don’t absorb information and knowledge by reading sentences.
  • we store new information in terms of its meaning to our existing memory
  • we give new information meaning by actively participating in the learning process — we interpret, connect, interrelate, or elaborate
  • To remember new information, we not only need to know it but also to know how it relates to what we already know.
  • Learning is dependent on memory processes because previously-stored knowledge functions as a framework in which newly learned information can be linked.”
  • Human memory works in three stages: acquisition, retention, and retrieval. In the acquisition phase, we link new information to existing knowledge; in the retention phase, we store it, and in the retrieval phase, we get information out of our memory.
  • Retrieval, the third stage, is cue dependent. This means the more mental links you’re generating during stage one, the acquisition phase, the easier you can access and use your knowledge.
  • we need to understand that the three phases interrelate
  • creating durable and flexible access to to-be-learned information is partly a matter of achieving a meaningful encoding of that information and partly a matter of exercising the retrieval process.”
  • Next, we’ll look at the learning strategies that work best for our brains (elaboration, retrieval, spaced repetition, interleaving, self-testing) and see how we can apply those insights to reading non-fiction books.
  • The strategies that follow are rooted in research from professors of Psychological & Brain Science around Henry Roediger and Mark McDaniel. Both scientists spent ten years bridging the gap between cognitive psychology and education fields. Harvard University Press published their findings in the book ‘Make It Stick.
  • #1 Elaboration
  • “Elaboration is the process of giving new material meaning by expressing it in your own words and connecting it with what you already know.”
  • Why elaboration works: Elaborative rehearsal encodes information into your long-term memory more effectively. The more details and the stronger you connect new knowledge to what you already know, the better because you’ll be generating more cues. And the more cues they have, the easier you can retrieve your knowledge.
  • How I apply elaboration: Whenever I read an interesting section, I pause and ask myself about the real-life connection and potential application. The process is invisible, and my inner monologues sound like: “This idea reminds me of…, This insight conflicts with…, I don’t really understand how…, ” etc.
  • For example, when I learned about A/B testing in ‘The Lean Startup,’ I thought about applying this method to my startup. I added a note on the site stating we should try it in user testing next Wednesday. Thereby the book had an immediate application benefit to my life, and I will always remember how the methodology works.
  • How you can apply elaboration: Elaborate while you read by asking yourself meta-learning questions like “How does this relate to my life? In which situation will I make use of this knowledge? How does it relate to other insights I have on the topic?”
  • While pausing and asking yourself these questions, you’re generating important memory cues. If you take some notes, don’t transcribe the author’s words but try to summarize, synthesize, and analyze.
  • #2 Retrieval
  • With retrieval, you try to recall something you’ve learned in the past from your memory. While retrieval practice can take many forms — take a test, write an essay, do a multiple-choice test, practice with flashcards
  • the authors of ‘Make It Stick’ state: “While any kind of retrieval practice generally benefits learning, the implication seems to be that where more cognitive effort is required for retrieval, greater retention results.”
  • Whatever you settle for, be careful not to copy/paste the words from the author. If you don’t do the brain work yourself, you’ll skip the learning benefits of retrieval
  • Retrieval strengthens your memory and interrupts forgetting, and, as other researchers have replicated, the act of retrieving information is, as a learning event, considerably more potent than an additional study opportunity, particularly in terms of facilitating long-term recall.
  • How I apply retrieval: I retrieve a book’s content from my memory by writing a book summary for every book I want to remember. I ask myself questions like: “How would you summarize the book in three sentences? Which concepts do you want to keep in mind or apply? How does the book relate to what you already know?”
  • I then publish my summaries on Goodreads or write an article about my favorite insights
  • How you can apply retrieval: You can come up with your own questions or use mine. If you don’t want to publish your summaries in public, you can write a summary into your journal, start a book club, create a private blog, or initiate a WhatsApp group for sharing book summaries.
  • a few days after we learn something, forgetting sets in
  • #3 Spaced Repetition
  • With spaced repetition, you repeat the same piece of information across increasing intervals.
  • The harder it feels to recall the information, the stronger the learning effect. “Spaced practice, which allows some forgetting to occur between sessions, strengthens both the learning and the cues and routes for fast retrieval,”
  • Why it works: It might sound counterintuitive, but forgetting is essential for learning. Spacing out practice might feel less productive than rereading a text because you’ll realize what you forgot. Your brain has to work harder to retrieve your knowledge, which is a good indicator of effective learning.
  • How I apply spaced repetition: After some weeks, I revisit a book and look at the summary questions (see #2). I try to come up with my answer before I look up my actual summary. I can often only remember a fraction of what I wrote and have to look at the rest.
  • “Knowledge trapped in books neatly stacked is meaningless and powerless until applied for the betterment of life.”
  • How you can apply spaced repetition: You can revisit your book summary medium of choice and test yourself on what you remember. What were your action points from the book? Have you applied them? If not, what hindered you?
  • By testing yourself in varying intervals on your book summaries, you’ll strengthen both learning and cues for fast retrieval.
  • Why interleaving works: Alternating between different problems feels more difficult because it, again, facilitates forgetting.
  • How I apply interleaving: I read different books at the same time.
  • 1) Highlight everything you want to remember
  • #5 Self-Testing
  • While reading often falsely tricks us into perceived mastery, testing shows us whether we truly mastered the subject at hand. Self-testing helps you identify knowledge gaps and brings weak areas to the light
  • “It’s better to solve a problem than to memorize a solution.”
  • Why it works: Self-testing helps you overcome the illusion of knowledge. “One of the best habits a learner can instill in herself is regular self-quizzing to recalibrate her understanding of what she does and does not know.”
  • How I apply self-testing: I explain the key lessons from non-fiction books I want to remember to others. Thereby, I test whether I really got the concept. Often, I didn’t
  • instead of feeling frustrated, cognitive science made me realize that identifying knowledge gaps is a desirable and necessary effect for long-term remembering.
  • How you can apply self-testing: Teaching your lessons learned from a non-fiction book is a great way to test yourself. Before you explain a topic to somebody, you have to combine several mental tasks: filter relevant information, organize this information, and articulate it using your own vocabulary.
  • Now that I discovered how to use my Kindle as a learning device, I wouldn’t trade it for a paper book anymore. Here are the four steps it takes to enrich your e-reading experience
  • How you can apply interleaving: Your brain can handle reading different books simultaneously, and it’s effective to do so. You can start a new book before you finish the one you’re reading. Starting again into a topic you partly forgot feels difficult first, but as you know by now, that’s the effect you want to achieve.
  • it won’t surprise you that researchers have shown highlighting to be ineffective. It’s passive and doesn’t create memory cues.
  • 2) Cut down your highlights in your browser
  • After you finished reading the book, you want to reduce your highlights to the essential part. Visit your Kindle Notes page to find a list of all your highlights. Using your desktop browser is faster and more convenient than editing your highlights on your e-reading device.
  • Now, browse through your highlights, delete what you no longer need, and add notes to the ones you really like. By adding notes to the highlights, you’ll connect the new information to your existing knowledge
  • 3) Use software to practice spaced repetition. This part is the main reason for e-books beating printed books. While you can do all of the above with a little extra time on your physical books, there’s no way to systemize your repetition practice.
  • Readwise is the best software to combine spaced repetition with your e-books. It’s an online service that connects to your Kindle account and imports all your Kindle highlights. Then, it creates flashcards of your highlights and allows you to export your highlights to your favorite note-taking app.
  • Common Learning Myths DebunkedWhile reading and studying evidence-based learning techniques I also came across some things I wrongly believed to be true.
  • #2 Effective learning should feel easy. We think learning works best when it feels productive. That’s why we continue to use ineffective techniques like rereading or highlighting. But learning works best when it feels hard, or as the authors of ‘Make It Stick’ write: “Learning that’s easy is like writing in sand, here today and gone tomorrow.”
  • In Conclusion
  • I developed and adjusted these strategies over two years, and they’re still a work in progress.
  • Try all of them but don’t force yourself through anything that doesn’t feel right for you. I encourage you to do your own research, add further techniques, and skip what doesn’t serve you
  • “In the case of good books, the point is not to see how many of them you can get through, but rather how many can get through to you.”— Mortimer J. Adler
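The expanding review intervals described under spaced repetition can be sketched in a few lines. This is an illustrative toy scheduler, not the SuperMemo or Readwise algorithm; the multiplier, the reset-on-failure behavior, and all function names are assumptions chosen to mirror the article's advice that recall should get harder as intervals grow and that some forgetting between sessions is desirable:

```python
from datetime import date, timedelta

def next_review(interval_days: int, recalled: bool,
                factor: float = 2.5, first_interval: int = 1) -> int:
    """Return the next review interval in days (toy expanding-interval rule)."""
    if not recalled:
        return first_interval                      # forgot: start over
    return max(1, round(interval_days * factor))   # remembered: space it out

def schedule(start: date, outcomes: list[bool]) -> list[date]:
    """Dates on which a book summary would come up for review,
    given the recall outcome of each successive session."""
    interval, day, dates = 1, start, []
    for recalled in outcomes:
        day += timedelta(days=interval)
        dates.append(day)
        interval = next_review(interval, recalled)
    return dates

# Two successful recalls widen the gap; a failure pulls the next review close again.
reviews = schedule(date(2021, 1, 1), [True, True, False, True])
print([d.isoformat() for d in reviews])
```

The point of the sketch is the shape of the schedule, not the exact numbers: each success multiplies the gap (letting some forgetting occur, which strengthens retrieval), and each failure shrinks it back to daily review.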
Javier E

We should know by now that progress isn't guaranteed - and often backfires - The Washin... - 1 views

  • We assume that progress is the natural order of things. Problems are meant to be solved. History is an upward curve of well-being. But what if all this is a fantasy?
  • our most powerful disruptions shared one characteristic: They were not widely foreseen
  • This was true of the terrorism of 9/11; the financial crisis of 2008-2009 and the parallel Great Recession; and now the coronavirus pandemic
  • ...13 more annotations...
  • In each case, there was a failure of imagination, as Tom Friedman has noted. Warnings found little receptiveness among the public or government officials. We didn’t think what happened could happen. The presumption of progress bred complacency.
  • We fooled ourselves into thinking we had engineered permanent improvements in our social and economic systems.
  • To be fair, progress as it’s commonly understood — higher living standards — has not been at a standstill. Many advances have made life better
  • Similar inconsistencies and ambiguities attach to economic growth. It raises some up and pushes others down.
  • What we should have learned by now is that progress is often grudging, incomplete or contradictory.
  • the lesson of both economic growth and technologies is that they are double-edged swords and must be judged as such.
  • Sure, the Internet enables marvelous things. But it also imposes huge costs on society
  • Global warming is another example. It is largely a result of the burning of fossil fuels, which has been the engine of our progress. Now, it is anti-progress.
  • Still, the setbacks loom ever larger. Our governmental debt is high, and economic stability is low. Many of the claims of progress turn out to be exaggerated, superficial, delusional or unattainable,
  • What connects these various problems is the belief that the future can be orchestrated.
  • The reality is that our control over the future is modest at best, nonexistent at worst. We react more to events than lead them.
  • We worship at the altar of progress without adequately acknowledging its limits.
  • it does mean that we should be more candid about what is possible. If not, we might yet again wander over the “border between reality and impossibility.”
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment.Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
johnsonel7

Human intelligence: have we reached the limit of knowledge? - 0 views

  • Not only have scientists failed to find the Holy Grail of physics – unifying the very large (general relativity) with the very small (quantum mechanics) – they still don’t know what the vast majority of the universe is made up of. The sought after Theory of Everything continues to elude us.
  • Human brains are the product of blind and unguided evolution. They were designed to solve practical problems impinging on our survival and reproduction, not to unravel the fabric of the universe. This realisation has led some philosophers to embrace a curious form of pessimism, arguing there are bound to be things we will never understand.
  • the late philosopher Jerry Fodor claimed that there are bound to be “thoughts that we are unequipped to think”.
  • ...4 more annotations...
  • McGinn suspects that the reason why philosophical conundrums such as the mind/body problem – how physical processes in our brain give rise to consciousness – prove to be intractable is that their true solutions are simply inaccessible to the human mind.
  • Is a question still a “mystery” if you have arrived at the correct answer, but you have no idea what it means or cannot wrap your head around it? Mysterians often conflate those two possibilities.
  • Most importantly, we can extend our own minds to those of our fellow human beings. What makes our species unique is that we are capable of culture, in particular cumulative cultural knowledge. A population of human brains is much smarter than any individual brain in isolation.
  • It is quite true that we can never rule out the possibility that there are such unknown unknowns, and that some of them will forever remain unknown, because for some (unknown) reason human intelligence is not up to the task. But the important thing to note about these unknown unknowns is that nothing can be said about them. To presume from the outset that some unknown unknowns will always remain unknown, as mysterians do, is not modesty – it’s arrogance.
sanderk

Council Post: The Seven Key Steps Of Critical Thinking - 0 views

  • the effort we put into growing our workforce, we often forget the one person who is in constant need of development: ourselves. In particular, we neglect the soft skills that are vital to becoming the best professional possible — one of them being critical thinking.
  • In short, the ability to think critically is the art of analyzing and evaluating data for a practical approach to understanding the data, then determining what to believe and how to act.
  • There are times where an answer just needs to be given and given right now. But that doesn't mean you should make a decision just to make one. Sometimes, quick decisions can fall flat
  • ...5 more annotations...
  • “Don’t just do something, stand there.” Sometimes, taking a minute to be systematic and follow an organized approach makes all the difference. This is where critical thinking meets problem solving. Define the problem, come up with a list of solutions, then select the best answer, implement it, create an evaluation tool and fine-tune as needed.
  • Evaluate information factually. Recognizing predispositions of those involved is a challenging task at times. It is your responsibility to weigh the information from all sources and come to your own conclusions.
  • Be open-minded and consider all points of view. This is a good time to pull the team into finding the best solution. This point will allow you to develop the critical-thinking skills of those you lead.
  • Communicate your findings and results. This is a crucial yet often overlooked component. Failing to do so can cause much confusion in the organization.
  • Developing your critical-thinking skills is fundamental to your leadership success.
Javier E

How to Talk About Climate Change Across the Political Divide | The New Yorker - 0 views

  • “It was really moving to Texas that set me on this path of figuring out how to communicate about climate change,” she told me. “I was the only climate scientist within two hundred miles.”
  • She records the questions she is asked afterward, using an app, and the two most frequent are: “What gives you hope?” and “How do I talk to my [blank] about climate change?”
  • In the late nineties, a Gallup poll found that forty-six per cent of Democrats and forty-seven per cent of Republicans agreed that the effects of global warming had already begun.
  • ...30 more annotations...
  • In her new book, “Saving Us,” which comes out in September, Hayhoe sets out to answer these questions. Chapter by chapter, she lays out effective strategies for communicating about the urgency of climate change across America’s political divide.
  • She breaks out categories—originally defined by her colleague Anthony Leiserowitz, at the Yale Program on Climate Change Communication, and other researchers—of attitudes toward global warming: alarmed, concerned, cautious, disengaged, and doubtful. Only the remaining eight per cent of Americans fall into the final category, dismissive.
  • In the past decade, though, as the scope of the crisis became clear, Democrats began pressing for policies to cut U.S. reliance on fossil fuels, and Republicans were reluctant to commit. Energy companies stepped into the stalemate and began aggressively lobbying politicians, and injecting doubt into the public discourse, to stop such policies from taking effect. “Industry swung into motion to activate the political system in their favor,” Hayhoe said.
  • “In a study of fifty-six countries, researchers found people’s opinions on climate change to be most strongly correlated not with education and knowledge, but rather with ‘values, ideologies, worldviews and political orientation,’ ”
  • One salient problem is an aspect of human behavior that researchers have termed “solution aversion.” Solving the climate crisis will require ending our reliance on fossil fuels, which people believe would involve major sacrifice.
  • “If there’s a problem and we’re not going to fix it, then that makes us bad people,” Hayhoe said. “No one wants to be a bad person.” So instead people are happy to seize on excuses not to take action.
  • Most are what she calls “science-y sounding objections, and, in the U.S., religious-y sounding objections.”
  • Hayhoe often hears that the Earth has always heated and cooled according to its own intrinsic cycle, or that God, not humanity, controls the fate of the planet. These objections can then harden into aspects of our political identity.
  • Hayhoe eschews the term “climate denier,” saying that she has “seen it applied all too often to shut down discussion rather than encourage it.”
  • So much of this is not about the facts,” Leiserowitz told me later. “It’s about trusting the person the facts come from.”
  • research has shown her that dismissives are nearly impossible to influence. They are also few enough that it should be possible to build political will around fighting climate change by focussing on others.
  • “It’s not about the loudest voices,” Hayhoe told me. “It’s about everyone else who doesn’t understand why climate change matters or what they can do about it.”
  • Leiserowitz told me. His work has revealed, for example, that conversations about the climate tend to be more effective if both speakers share a core value or an aspect of their identity. The most effective climate communicators to conservatives are often people of faith, members of the military, and Republicans who are nevertheless committed to the climate.
  • “That’s why it’s so important to seek out like-minded groups: winter athletes, parents, fellow birders or Rotarians, or people who share our faith.”
  • There is a long history within evangelicalism of advocating “creation care,” the belief that God charged humanity with caring for the earth. The Evangelical Environmental Network, which Hayhoe advises, argues that evangelicals should follow a “Biblical mandate to care for creation,”
  • Hayhoe believes that emphasizing the care of plants and animals is less effective than highlighting the potential dangers for our fellow human beings. “It’s not about saving the planet—it’s about saving us,”
  • One of her communication strategies is to talk to people about their own observations, which help them connect the realities of their lives to the abstraction of climate change.
  • With farmers, Hayhoe avoids using the term “climate change,” since the phenomenon is frequently seen as a liberal hoax. “We use the words ‘climate variability’ and ‘long-term trends,’ ” she said.
  • Scott’s work served another purpose. By showing success with his climate-conscious farming techniques, he might persuade other farmers to join in, potentially becoming the center of what Hayhoe calls a cluster. “I preach to my friends about how well it’s doing,” he said.
  • “I don’t accost people in diners,” she wrote me later. “I wait until they come to me.”
  • “As recently as 2008, former speaker of the house Newt Gingrich, a Republican, and current House speaker Nancy Pelosi, a Democrat, cozied up on a love seat in front of the U.S. Capitol to film a commercial about climate change,”
  • She then directed the conversation to Republican-led free market initiatives to combat climate change by putting a price on carbon emissions. Companies passed their costs onto the rest of us by putting the carbon into the atmosphere, she told Dale, “but what if they had to pay for it? What if, when someone’s house burned down because of a forest fire, the companies making money from selling carbon had to pay a homeowner back?” Dale responded, “Well, I’m in favor of that.”
  • “It’s so important to educate kids about what’s going on, not to frighten them but to show them they can have a hand in solutions.”
  • Through the years, she’s developed a system to manage trolls. “It’s been trial and error, error, error,” she said. She now responds once, offering a link to resources.
  • Most fire back with gendered insults, often plays on her last name, after which she blocks the sender.
  • Hayhoe doesn’t urge guilt on her listeners. She only urges that we change our trajectory. “That’s all repentance means,” she said. “To turn.”
  • the most important aspect of fighting climate change is pushing for policies that will cut our reliance on fossil fuels. She urges the alarmed to get involved in politics, beginning with lobbying politicians at the local and state level.
  • she’d come across a book, “Scientists as Prophets: A Rhetorical Genealogy,” that examined the role of prophets in society, beginning with the oracle at Delphi, stretching through the Old Testament, and culminating in the work of modern-day scientists.
  • Studies show that early adopters help shift the norms of their communities.
  • By Eliza Griswold, September 16, 2021
Javier E

Opinion | You Are the Object of Facebook's Secret Extraction Operation - The New York T... - 0 views

  • Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data
  • Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.
  • These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law.
  • ...56 more annotations...
  • The result has been a hidden revolution in how information is produced, circulated and acted upon
  • The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit.
  • The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.
  • These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.
  • There is no way to escape the machine systems that surveil us.
  • All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.
  • Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”?
  • Will we confront the fundamental but long ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?
  • Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.
  • By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.”
  • Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
  • Prediction was the first imperative that determined the second imperative: extraction.
  • Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors
  • User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.
  • When asked “What is Google?” the co-founder Larry Page laid it out in 2001,
  • “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”
  • Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data
  • Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.
  • As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.
  • This is the economic context in which disinformation wins
  • In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.
  • A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained
  • The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”
  • Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.
  • Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.
  • Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models.
  • In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.
  • As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information
  • Today all apps and software, no matter how benign they appear, are designed to maximize data collection.
  • Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic
  • The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.
  • Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.
  • With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.
  • The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub, ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.
  • Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement
  • Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.
  • Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance.
  • when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.
  • All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations.
  • We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications
  • The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.
  • Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth.
  • like global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.
  • Where does that leave us?
  • Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution.
  • instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.
  • We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes
  • This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces
  • Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.
  • Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.”
  • No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests
  • the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions:
  • How should we structure and govern information, connection and communication in a democratic digital century?
  • What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society?
  • What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?
  • The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.
Javier E

Reality is your brain's best guess - Big Think - 0 views

  • Andy Clark admits it’s strange that he took up “predictive processing,” an ambitious leading theory of how the brain works. A philosopher of mind at the University of Sussex, he has devoted his career to how thinking doesn’t occur just between the ears—that it flows through our bodies, tools, and environments. “The external world is functioning as part of our cognitive machinery.”
  • But 15 years ago, he realized that it had to come back to the center of the system: the brain. And he found that predictive processing provided the essential links among the brain, body, and world.
  • There’s a traditional view that goes back at least to Descartes that perception was about the imprinting of the outside world onto the sense organs. In 20th-century artificial intelligence and neuroscience, vision was a feed-forward process in which you took in pixel-level information, refined it into a two and a half–dimensional sketch, and then refined that into a full world model.
  • ...9 more annotations...
  • a new book, The Experience Machine: How Our Minds Predict and Shape Reality, which is remarkable for how it connects the high-level concepts to everyday examples of how our brains make predictions, how that process can lead us astray, and what we can do about it.
  • being driven to stay within your own viability envelope is crucial to the kind of intelligence that we know about—the kind of intelligence that we are
  • If you ask what is a predictive brain for, the answer has to be: staying alive. Predictive brains are a way of staying within your viability envelope as an embodied biological organism: getting food when you need it, getting water when you need it.
  • in predictive processing, perception is structured around prediction. Perception is about the brain having a guess at what’s most likely to be out there and then using sensory information to refine the guess.
  • artificial curiosity. Predictive-processing systems automatically have that. They’re set up so that they predict the conditions of their own survival, and they’re always trying to get rid of prediction errors. But if they’ve solved all their practical problems and they’ve got nothing else to do, then they’ll just explore. Getting rid of any error is going to be a good thing for them. If you’re a creature like that, you’re going to be a really good learning system. You’re going to love to inhabit the environments that you can learn most from, where the problems are not too simple, not too hard, but just right.
  • It’s an effect that you also see in Marieke Jepma et al.’s work on pain. They showed that if you predict intense pain, the signal that you get will be interpreted as more painful than it would otherwise be, and vice versa. Then they asked why you don’t correct your misimpression. If it’s my expectation that is making it feel more painful, why don’t I get prediction errors that correct it?
  • The reason is that there are no errors. You’re expecting a certain level of pain, and your prediction helps bring that level about; there is nothing for you to correct. In fact, you’ve got confirmation of your own prediction. So it can be a vicious circle
  • Do you think this self-fulfilling loop in psychosis and pain perception helps to account for misinformation in our society’s and people’s susceptibility to certain narratives?Absolutely. We all have these vulnerabilities and self-fulfilling cycles. We look at the places that tend to support the models that we already have, because that’s often how we judge whether the information is good or not
  • Given that we know we’re vulnerable to self-fulfilling information loops, how can we make sure we don’t get locked into a belief?Unfortunately, it’s really difficult. The most potent intervention is to remind ourselves that we sample the world in ways that are guided by the models that we’ve currently got. The structures of science are there to push back against our natural tendency to cherry-pick.
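  • The self-fulfilling loop described in the Jepma et al. pain discussion can be sketched as a toy simulation (my own illustration, not the researchers’ actual model — the weighting scheme and all numbers are arbitrary assumptions): perception is treated as a precision-weighted blend of the brain’s prediction and the raw sensory signal, so a strong expectation of pain pulls the felt intensity toward the prediction and shrinks the very error signal that could correct it.

```python
# Toy predictive-processing sketch: percept = weighted blend of
# top-down prediction and bottom-up sensory input. Illustrative only.

def perceive(prediction, sensory, prior_weight):
    """Blend prediction with sensory evidence.

    prior_weight in [0, 1] is how much the brain trusts its own
    prediction relative to the senses. Returns the resulting percept
    and the residual prediction error (percept minus prediction).
    """
    percept = prior_weight * prediction + (1 - prior_weight) * sensory
    error = percept - prediction
    return percept, error

true_pain = 4.0   # the same nociceptive signal in both cases (arbitrary units)

for label, expectation in [("high expectation", 8.0),
                           ("low expectation", 2.0)]:
    percept, error = perceive(expectation, true_pain, prior_weight=0.6)
    print(f"{label}: felt {percept:.1f}, prediction error {error:.1f}")
```

  With identical input, the high-expectation case is felt as markedly more painful, and the residual error is much smaller than the true gap between prediction and signal — which is why, as Clark notes, the misimpression generates little pressure to correct itself.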
Javier E

Will ChatGPT Kill the Student Essay? - The Atlantic - 0 views

  • Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.
  • The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up
  • “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.”
  • ...18 more annotations...
  • In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before.
  • The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.
  • These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences
  • Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.
  • As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide.
  • As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone
  • the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.
  • Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine.
  • the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations.
  • The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?
  • Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.
  • now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems
  • Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?
  • despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
  • The humanists will need to understand natural-language processing because it’s the future of language
  • that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.
  • But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.
Javier E

Avoidance, not anxiety, may be sabotaging your life - The Washington Post - 0 views

  • Anxiety, for many people, is like an unwelcome houseguest — a lingering presence that causes tension, clouds the mind with endless “what ifs” and shows up as various physical sensations.
  • About 12 percent of U.S. adults regularly felt worry, nervousness or anxiety, according to a National Health Interview Survey conducted between October and December 2022.
  • Anxiety, though, is not the puppeteer pulling the strings in many of our lives. There is a more subtle and insidious marionette, and it’s called psychological avoidance. When we avoid certain situations and decisions, it can lead to heightened anxiety and more problems.
  • ...23 more annotations...
  • Psychological avoidance is akin to an ostrich burying its head in the sand, choosing ignorance over confrontation, all while a storm brews in the background.
  • depression and anxiety disorders cost the global economy $1 trillion each year in lost productivity.
  • avoidance, a strategy that not only fails to solve problems but fuels them.
  • Psychological avoidance isn’t about the actions we take or don’t take, but the intentions behind them. If our actions aim to squash discomfort hastily, then we’re probably avoiding.
  • the three ways people tend to practice psychological avoidance.
  • Reacting
  • It’s when we reply hastily to an email that upsets us or raise our voices without considering the consequences.
  • Reacting is any response that seeks to eliminate the source of discomfort
  • Retreating
  • Retreating is the act of moving away or pulling back from anxiety-inducing situations
  • For example, my client with the fear of public speaking took a different job to avoid it. Others may reach for a glass of wine to numb out.
  • Remaining
  • Remaining is sticking to the status quo to avoid the discomfort of change.
  • Psychological avoidance is a powerful enemy, but there are three science-based skills to fight it.
  • Shifting involves checking in with your thoughts, especially when anxiety comes knocking. In those moments, we often have black-and-white, distorted thoughts, just like my client, who was worried about being in a romantic relationship, telling himself, “I will never be in a good relationship.”
  • Shifting is taking off dark, monochrome glasses and seeing the world in color again. Challenge your thoughts, clean out your lenses, by asking yourself, “Would I say this to my best friend in this scenario?”
  • Approaching
  • taking a step that feels manageable.
  • The opposite of avoiding is approaching
  • Ask yourself: What is one small step I can take toward my fears and anxiety to overcome my avoidance?
  • Aligning
  • Aligning is living a values-driven life, where our daily actions are aligned with what matters the most to us: our values.
  • This is the opposite of what most of us do while anxious. In moments of intense anxiety, we tend to let our emotions, not our values, dictate our actions. To live a values-driven life, we need to first identify our values, whether that is health, family, work or something else. Then we need to dedicate time and effort to our values.
Javier E

If We Knew Then What We Know Now About Covid, What Would We Have Done Differently? - WSJ - 0 views

  • For much of 2020, doctors and public-health officials thought the virus was transmitted through droplets emitted from one person’s mouth and touched or inhaled by another person nearby. We were advised to stay at least 6 feet away from each other to avoid the droplets
  • A small cadre of aerosol scientists had a different theory. They suspected that Covid-19 was transmitted not so much by droplets but by smaller infectious aerosol particles that could travel on air currents way farther than 6 feet and linger in the air for hours. Some of the aerosol particles, they believed, were small enough to penetrate the cloth masks widely used at the time.
  • The group had a hard time getting public-health officials to embrace their theory. For one thing, many of them were engineers, not doctors.
  • ...37 more annotations...
  • “My first and biggest wish is that we had known early that Covid-19 was airborne,”
  • , “Once you’ve realized that, it informs an entirely different strategy for protection.” Masking, ventilation and air cleaning become key, as well as avoiding high-risk encounters with strangers, he says.
  • Instead of washing our produce and wearing hand-sewn cloth masks, we could have made sure to avoid superspreader events and worn more-effective N95 masks or their equivalent. “We could have made more of an effort to develop and distribute N95s to everyone,” says Dr. Volckens. “We could have had an Operation Warp Speed for masks.”
  • We didn’t realize how important clear, straight talk would be to maintaining public trust. If we had, we could have explained the biological nature of a virus and warned that Covid-19 would change in unpredictable ways.  
  • We didn’t know how difficult it would be to get the basic data needed to make good public-health and medical decisions. If we’d had the data, we could have more effectively allocated scarce resources
  • In the face of a pandemic, he says, the public needs an early basic and blunt lesson in virology
  • and mutates, and since we’ve never seen this particular virus before, we will need to take unprecedented actions and we will make mistakes, he says.
  • Since the public wasn’t prepared, “people weren’t able to pivot when the knowledge changed,”
  • By the time the vaccines became available, public trust had been eroded by myriad contradictory messages—about the usefulness of masks, the ways in which the virus could be spread, and whether the virus would have an end date.
  • The absence of a single, trusted source of clear information meant that many people gave up on trying to stay current or dismissed the different points of advice as partisan and untrustworthy.
  • “The science is really important, but if you don’t get the trust and communication right, it can only take you so far,”
  • People didn’t know whether it was OK to visit elderly relatives or go to a dinner party.
  • Doctors didn’t know what medicines worked. Governors and mayors didn’t have the information they needed to know whether to require masks. School officials lacked the information needed to know whether it was safe to open schools.
  • Had we known that even a mild case of Covid-19 could result in long Covid and other serious chronic health problems, we might have calculated our own personal risk differently and taken more care.
  • Just months before the outbreak of the pandemic, the Council of State and Territorial Epidemiologists released a white paper detailing the urgent need to modernize the nation’s public-health system, which was still reliant on manual data-collection methods: paper records, phone calls, spreadsheets and faxes.
  • While the U.K. and Israel were collecting and disseminating Covid case data promptly, in the U.S. the CDC couldn’t. It didn’t have a centralized health-data collection system like those countries did, but rather relied on voluntary reporting by underfunded state and local public-health systems and hospitals.
  • Doctors and scientists say they had to depend on information from Israel, the U.K. and South Africa to understand the nature of new variants and the effectiveness of treatments and vaccines. They relied heavily on private data collection efforts such as a dashboard at Johns Hopkins University’s Coronavirus Resource Center that tallied cases, deaths and vaccine rates globally.
  • For much of the pandemic, doctors, epidemiologists, and state and local governments had no way to find out in real time how many people were contracting Covid-19, getting hospitalized and dying.
  • To solve the data problem, Dr. Ranney says, we need to build a public-health system that can collect and disseminate data and act like an electrical grid. The power company sees a storm coming and lines up repair crews.
  • If we’d known how damaging lockdowns would be to mental health, physical health and the economy, we could have taken a more strategic approach to closing businesses and keeping people at home.
  • But many doctors say lockdowns were crucial at the start of the pandemic, giving doctors and hospitals a chance to figure out how to accommodate and treat the avalanche of very sick patients.
  • The measures reduced deaths, according to many studies—but at a steep cost.
  • The lockdowns didn’t have to be so harmful, some scientists say. They could have been more carefully tailored to protect the most vulnerable, such as those in nursing homes and retirement communities, and to minimize widespread disruption.
  • Lockdowns could, during Covid-19 surges, close places such as bars and restaurants where the virus is most likely to spread, while allowing other businesses to stay open with safety precautions like masking and ventilation in place.  
  • The key isn’t to have the lockdowns last a long time, but to deploy them earlier.
  • If England’s March 23, 2020, lockdown had begun one week earlier, the measure would have nearly halved the estimated 48,600 deaths in the first wave of England’s pandemic.
  • If the lockdown had begun a week later, deaths in the same period would have more than doubled.
  • It is possible to avoid lockdowns altogether. Taiwan, South Korea and Hong Kong, all experienced at handling disease outbreaks such as SARS in 2003 and MERS, avoided lockdowns through widespread masking, tracking the spread of the virus with testing and contact tracing, and quarantining infected individuals.
  • With good data, Dr. Ranney says, she could have better managed staffing and taken steps to alleviate the strain on doctors and nurses by arranging child care for them.
  • Early in the pandemic, public-health officials were clear: the people at increased risk for severe Covid-19 illness were those who were older or immunocompromised, or who had chronic kidney disease, Type 2 diabetes or serious heart conditions.
  • It had the unfortunate effect of giving a false sense of security to people who weren’t in those high-risk categories. Once case rates dropped, vaccines became available and fear of the virus wore off, many people let their guard down, ditching masks and spending time in crowded indoor places.
  • It has become clear that even people with mild cases of Covid-19 can develop long-term serious and debilitating diseases. Long Covid, whose symptoms include months of persistent fatigue, shortness of breath, muscle aches and brain fog, hasn’t been the virus’s only nasty surprise.
  • In February 2022, a study found that, for at least a year, people who had Covid-19 had a substantially increased risk of heart disease and respiratory conditions, even people who were younger and had not been hospitalized.
  • Some scientists now suspect that Covid-19 might be capable of affecting nearly every organ system in the body. It may play a role in the activation of dormant viruses and latent autoimmune conditions people didn’t know they had.
  •  A blood test, he says, would tell people if they are at higher risk of long Covid and whether they should have antivirals on hand to take right away should they contract Covid-19.
  • If the risks of long Covid had been known, would people have reacted differently, especially given the confusion over masks and lockdowns and variants? Perhaps. At the least, many people might not have assumed they were out of the woods just because they didn’t have any of the risk factors.
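The England lockdown-timing estimates in the notes above are consistent with simple exponential growth. A minimal sketch, assuming infections double weekly before a lockdown — the doubling time here is an assumption chosen to match the cited "nearly halved" and "more than doubled" figures, not a parameter taken from the study itself:

```python
import math

# Assumed doubling time: chosen so a one-week shift in lockdown date
# changes the toll by a factor of two, matching the cited estimates.
DOUBLING_TIME_DAYS = 7.0
GROWTH_RATE = math.log(2) / DOUBLING_TIME_DAYS

def relative_toll(shift_days):
    """Deaths relative to the actual lockdown date, for a lockdown
    shifted by shift_days (negative = earlier, positive = later)."""
    return math.exp(GROWTH_RATE * shift_days)

baseline_deaths = 48_600  # estimated first-wave deaths in England, cited above
earlier = round(baseline_deaths * relative_toll(-7))  # lockdown one week earlier
later = round(baseline_deaths * relative_toll(+7))    # lockdown one week later
print(earlier, later)
```

Under that assumption, moving the lockdown a week earlier cuts the estimated toll to about 24,300, and a week later raises it to about 97,200 — which is why, as the notes say, deploying lockdowns earlier matters more than making them longer.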
Emily Horwitz

True Blue Stands Out in an Earthy Crowd - NYTimes.com - 0 views

  • blue was the only color with enough strength of character to remain blue “in all its tones.”
  • Scientists, too, have lately been bullish on blue, captivated by its optical purity, complexity and metaphorical fluency.
  • Still other researchers are tracing the history of blue pigments in human culture, and the role those pigments have played in shaping our notions of virtue, authority, divinity and social class. “Blue pigments played an outstanding role in human development,” said Heinz Berke, an emeritus professor of chemistry at the University of Zurich. For some cultures, he said, they were as valuable as gold.
  • ...8 more annotations...
  • Ask people their favorite color, and in most parts of the world roughly half will say blue, a figure three to four times the support accorded common second-place finishers like purple or green.
  • Young patients preferred nurses wearing blue uniforms to those in white or yellow.
  • blue’s basic emotional valence is calmness and open-endedness, in contrast to the aggressive specificity associated with red. Blue is sea and sky, a pocket-size vacation.
  • Computer-screen color affected participants’ ability to solve either creative or analytical problems.
  • blue can also imply coldness, sorrow and death. On learning of a good friend’s suicide in 1901, Pablo Picasso fell into a severe depression, and he began painting images of beggars, drunks, the poor and the halt, all famously rendered in a palette of blue.
  • The association arose from the look of the body when it’s in a low-energy, low-oxygen state. “The lips turn blue, there’s a blue pallor to the complexion,” she said. “It’s the opposite of the warm flushing of the skin that we associate with love, kindness and affection.”
  • “A blue glow makes food look very unappetizing.”
  • That blue can connote coolness and tranquillity is one of nature’s little inside jokes. Blue light is on the high-energy end of the visible spectrum, and the comparative shortness of its wavelengths explains why the blue portion of the white light from the sun is easily scattered by the nitrogen and oxygen molecules in our atmosphere, and thus why the sky looks blue.
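The scattering claim in that last note can be made quantitative: Rayleigh scattering intensity scales as the inverse fourth power of wavelength. A minimal sketch — the 450 nm and 650 nm values are illustrative wavelengths for blue and red light, not figures from the article:

```python
# Rayleigh scattering intensity ~ 1 / wavelength**4, so shorter (bluer)
# wavelengths scatter far more strongly off air molecules than longer
# (redder) ones -- which is why scattered skylight looks blue.
BLUE_NM = 450.0  # illustrative wavelength for blue light
RED_NM = 650.0   # illustrative wavelength for red light

ratio = (RED_NM / BLUE_NM) ** 4
print(f"Blue light is scattered about {ratio:.1f}x more strongly than red")
```

That factor of roughly four explains why the blue portion of sunlight dominates the light scattered toward our eyes by atmospheric nitrogen and oxygen.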