
TOK Friends: Group items tagged "generative"


Javier E

In Defense of Naïve Reading - NYTimes.com - 1 views

  • Clearly, poems and novels and paintings were not produced as objects for future academic study; there is no a priori reason to think that they could be suitable objects of  “research.” By and large they were produced for the pleasure and enlightenment of those who enjoyed them.
  • But just as clearly, the teaching of literature in universities, especially after the 19th-century research model of Humboldt University of Berlin was widely copied, needed a justification consistent with the aims of that academic setting.
  • The main aim was research: the creation, accumulation and transmission of knowledge. And the main model was the natural science model of collaborative research: define problems, break them down into manageable parts, create sub-disciplines and sub-sub-disciplines for the study of these, train students for such research specialties and share everything. With that model, what literature and all the arts needed was something like a general “science of meaning” that could eventually fit that sort of aspiration. Texts or art works could be analyzed as exemplifying and so helping establish such a science. Results could be published in scholarly journals, disputed by others, consensus would eventually emerge and so on.
  • ...3 more annotations...
  • literature study in a university education requires some method of evaluation of whether the student has done well or poorly. Students’ papers must be graded and no faculty member wants to face the inevitable “that’s just your opinion” unarmed, as it were. Learning how to use a research methodology, providing evidence that one has understood and can apply such a method, is understandably an appealing pedagogy
  • Literature and the arts have a dimension unique in the academy, not shared by the objects studied, or “researched” by our scientific brethren. They invite or invoke, at a kind of “first level,” an aesthetic experience that is by its nature resistant to restatement in more formalized, theoretical or generalizing language. This response can certainly be enriched by knowledge of context and history, but the objects express a first-person or subjective view of human concerns that is falsified if wholly transposed to a more “sideways on” or third person view.
  • such works also can directly deliver a kind of practical knowledge and self-understanding not available from a third person or more general formulation of such knowledge. There is no reason to think that such knowledge — exemplified in what Aristotle said about the practically wise man (the phronimos) or in what Pascal meant by the difference between l’esprit géométrique and l’esprit de finesse — is any less knowledge because it cannot be so formalized or even taught as such.
sissij

Bacon Shortage? Calm Down. It's Fake News. - The New York Times - 2 views

  • The alarming headlines came quickly Wednesday morning: “Now It’s Getting Serious: 2017 Could See a Bacon Shortage.”
  • The source of the anxiety was a recent report from the U.S.D.A., boosted by the Ohio Pork Council, which reported that the country’s frozen pork belly inventory was at its lowest point in half a century.
  • To create a panic “was not our intent,” Mr. Deaton added with a laugh. “We can’t control how the news is interpreted.”
  • With the development of the Internet and social media, we find the news on websites, in papers, and on TV more unreliable. This is partly because we can easily find alternative accounts that point out the flaws, but mostly because news today relies on exaggeration to grab the attention of the general population. Media should consider the impact and panic a story can cause in society before reporting it. Although freedom of speech is appreciated, that doesn't mean the media can put aside its responsibility to guide the general population in a good direction. I remember a piece of fake news after the big earthquake in Japan claiming that salt can prevent nuclear radiation; people panicked and bought salt. It was very funny that in some places people were even fighting over a pack of salt. The media should make sure that people won't misunderstand the message before they publish. --Sissi (2/1/2017)
Javier E

How Calls for Privacy May Upend Business for Facebook and Google - The New York Times - 0 views

  • People detailed their interests and obsessions on Facebook and Google, generating a river of data that could be collected and harnessed for advertising. The companies became very rich. Users seemed happy. Privacy was deemed obsolete, like bloodletting and milkmen
  • It has been many months of allegations and arguments that the internet in general and social media in particular are pulling society down instead of lifting it up.
  • That has inspired a good deal of debate about more restrictive futures for Facebook and Google. At the furthest extreme, some dream of the companies becoming public utilities.
  • ...20 more annotations...
  • There are other avenues still, said Jascha Kaykas-Wolff, the chief marketing officer of Mozilla, the nonprofit organization behind the popular Firefox browser, including advertisers and large tech platforms collecting vastly less user data and still effectively customizing ads to consumers.
  • The greatest likelihood is that the internet companies, frightened by the tumult, will accept a few more rules and work a little harder for transparency.
  • The Cambridge Analytica case, said Vera Jourova, the European Union commissioner for justice, consumers and gender equality, was not just a breach of private data. “This is much more serious, because here we witness the threat to democracy, to democratic plurality,” she said.
  • Although many people had a general understanding that free online services used their personal details to customize the ads they saw, the latest controversy starkly exposed the machinery.
  • Consumers’ seemingly benign activities — their likes — could be used to covertly categorize and influence their behavior. And not just by unknown third parties. Facebook itself has worked directly with presidential campaigns on ad targeting, describing its services in a company case study as “influencing voters.”
  • “If your personal information can help sway elections, which affects everyone’s life and societal well-being, maybe privacy does matter after all.”
  • some trade group executives also warned that any attempt to curb the use of consumer data would put the business model of the ad-supported internet at risk.
  • “You’re undermining a fundamental concept in advertising: reaching consumers who are interested in a particular product,”
  • If suspicion of Facebook and Google is a relatively new feeling in the United States, it has been embedded in Europe for historical and cultural reasons that date back to the Nazi Gestapo, the Soviet occupation of Eastern Europe and the Cold War.
  • “We’re at an inflection point, when the great wave of optimism about tech is giving way to growing alarm,” said Heather Grabbe, director of the Open Society European Policy Institute. “This is the moment when Europeans turn to the state for protection and answers, and are less likely than Americans to rely on the market to sort out imbalances.”
  • In May, the European Union is instituting a comprehensive new privacy law, called the General Data Protection Regulation. The new rules treat personal data as proprietary, owned by an individual, and any use of that data must be accompanied by permission — opting in rather than opting out — after receiving a request written in clear language, not legalese.
  • the protection rules will have more teeth than the current 1995 directive. For example, a company experiencing a data breach involving individuals must notify the data protection authority within 72 hours and would be subject to fines of up to 20 million euros or 4 percent of its annual revenue.
  • “With the new European law, regulators for the first time have real enforcement tools,” said Jeffrey Chester, the executive director of the Center for Digital Democracy, a nonprofit group in Washington. “We now have a way to hold these companies accountable.”
  • Privacy advocates and even some United States regulators have long been concerned about the ability of online services to track consumers and make inferences about their financial status, health concerns and other intimate details to show them behavior-based ads. They warned that such microtargeting could unfairly categorize or exclude certain people.
  • the Do Not Track effort and the privacy bill were both stymied. Industry groups successfully argued that collecting personal details posed no harm to consumers and that efforts to hinder data collection would chill innovation.
  • “If it can be shown that the current situation is actually a market failure and not an individual-company failure, then there’s a case to be made for federal regulation” under certain circumstances
  • The business practices of Facebook and Google were reinforced by the fact that no privacy flap lasted longer than a news cycle or two. Nor did people flee for other services. That convinced the companies that digital privacy was a dead issue.
  • If the current furor dies down without meaningful change, critics worry that the problems might become even more entrenched. When the tech industry follows its natural impulses, it becomes even less transparent.
  • “To know the real interaction between populism and Facebook, you need to give much more access to researchers, not less,” said Paul-Jasper Dittrich, a German research fellow
  • There’s another reason Silicon Valley tends to be reluctant to share information about what it is doing. It believes so deeply in itself that it does not even think there is a need for discussion. The technology world’s remedy for any problem is always more technology
Javier E

The meaning of life in a world without work | Technology | The Guardian - 0 views

  • As artificial intelligence outperforms humans in more and more tasks, it will replace humans in more and more jobs.
  • Many new professions are likely to appear: virtual-world designers, for example. But such professions will probably require more creativity and flexibility, and it is unclear whether 40-year-old unemployed taxi drivers or insurance agents will be able to reinvent themselves as virtual-world designers
  • The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms. Consequently, by 2050 a new class of people might emerge – the useless class. People who are not just unemployed, but unemployable.
  • ...15 more annotations...
  • The same technology that renders humans useless might also make it feasible to feed and support the unemployable masses through some scheme of universal basic income.
  • The real problem will then be to keep the masses occupied and content. People must engage in purposeful activities, or they go crazy. So what will the useless class do all day?
  • One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside.
  • This, in fact, is a very old solution. For thousands of years, billions of people have found meaning in playing virtual reality games. In the past, we have called these virtual reality games “religions”.
  • Muslims and Christians go through life trying to gain points in their favorite virtual reality game. If you pray every day, you get points. If you forget to pray, you lose points. If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).
  • As religions show us, the virtual reality need not be encased inside an isolated box. Rather, it can be superimposed on the physical reality. In the past this was done with the human imagination and with sacred books, and in the 21st century it can be done with smartphones.
  • Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.
  • we saw two other kids on the street who were hunting the same Pokémon, and we almost got into a fight with them. It struck me how similar the situation was to the conflict between Jews and Muslims about the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smartbooks (such as the Bible and the Qur’an), you see holy places and angels everywhere.
  • In the end, the real action always takes place inside the human brain. Does it matter whether the neurons are stimulated by observing pixels on a computer screen, by looking outside the windows of a Caribbean resort, or by seeing heaven in our mind’s eyes?
  • Indeed, one particularly interesting section of Israeli society provides a unique laboratory for how to live a contented life in a post-work world. In Israel, a significant percentage of ultra-orthodox Jewish men never work. They spend their entire lives studying holy scriptures and performing religious rituals. They and their families don’t starve to death partly because the wives often work, and partly because the government provides them with generous subsidies. Though they usually live in poverty, government support means that they never lack for the basic necessities of life.
  • That’s universal basic income in action. Though they are poor and never work, in survey after survey these ultra-orthodox Jewish men report higher levels of life-satisfaction than any other section of Israeli society.
  • Hence virtual realities are likely to be key to providing meaning to the useless class of the post-work world. Maybe these virtual realities will be generated inside computers. Maybe they will be generated outside computers, in the shape of new religions and ideologies. Maybe it will be a combination of the two. The possibilities are endless
  • In any case, the end of work will not necessarily mean the end of meaning, because meaning is generated by imagining rather than by working.
  • People in 2050 will probably be able to play deeper games and to construct more complex virtual worlds than in any previous time in history.
  • But what about truth? What about reality? Do we really want to live in a world in which billions of people are immersed in fantasies, pursuing make-believe goals and obeying imaginary laws? Well, like it or not, that’s the world we have been living in for thousands of years already.
caelengrubb

Opinion | History is repeating itself - right before our eyes - 1 views

  • History has a tendency to repeat itself. As memory fades, events from the past can become events of the present.
  • this is due to the cyclical nature of history — history repeats itself and flows based on the generations
  • According to these theorists, four generations must cycle through before similar events begin to recur, which would put the coming of age of the millennial generation in parallel with the events of the early 20th century.
  • ...9 more annotations...
  • Hate crime reports increased 17 percent in the United States in 2017 according to the FBI, increasing for the third consecutive year.
  • It is not just LGBTQ+ hate crime that is on the rise. 2018 saw a 99 percent increase in anti-Semitic incidents versus 2015, according to the Anti-Defamation League. When it strictly came to race/ethnicity/ancestry motivated crimes, the increase was 18.4 percent between 2016 and 2017. It is a dangerous time if you are not a cisgender, white Christian in America, but that is not new.
  • A hundred years ago, in 1920, the National Socialist German Workers’ (Nazi) Party was founded in Germany. It started a generation of Germans that came of age around World War II, meaning they were young adults in 1939.
  • This is not really surprising. History repeats itself. And people forget about history.
  • The Anti-Defamation League says it like it is: Anti-Semitism in the U.S. is as bad as it was in the 1930s
  • The Nazis held a rally in New York City, where they were protected from protesters by the NYPD. This occurred a full six years after the concentration camps started in Germany. American history sometimes casually likes to omit those events in its recounting of World War II. Americans were undoubtedly the good guys of World War II, saving many countries and millions of people worldwide from fascism, but the country has also done a poor job of ensuring these fascist ideas stay out in recent years.
  • How can we protect history and avoid making the same mistakes we made in the past when we forget what happened?
  • In the same survey, 93 percent of respondents said that students should learn about the Holocaust in school. Americans understand the importance of passing down the knowledge of this dark past, but we have a government that still refuses to condemn groups promoting the same ideas that tore the world apart 80 years ago.
  • Those events took so many lives, led to a collective awakening to the plight of the Jewish people and now, 80 years later, we are falling back into old patterns.
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on the evidence at hand.
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
Javier E

How to Remember Everything You Want From Non-Fiction Books | by Eva Keiffenheim, MSc | ... - 0 views

  • A Bachelor’s degree taught me how to learn to ace exams. But it didn’t teach me how to learn to remember.
  • 65% to 80% of students answered “no” to the question “Do you study the way you do because somebody taught you to study that way?”
  • the most-popular Coursera course of all time: Dr. Barbara Oakley’s free course on “Learning How to Learn.” So did I. And while this course taught me about chunking, recalling, and interleaving
  • ...66 more annotations...
  • I learned something more useful: the existence of non-fiction literature that can teach you anything.
  • something felt odd. Whenever a conversation revolved around a serious non-fiction book I read, such as ‘Sapiens’ or ‘Thinking Fast and Slow,’ I could never remember much. Turns out, I hadn’t absorbed as much information as I’d believed. Since I couldn’t remember much, I felt as though reading wasn’t an investment in knowledge but mere entertainment.
  • When I opened up about my struggles, many others confessed they also can’t remember most of what they read, as if forgetting is a character flaw. But it isn’t.
  • It’s the way we work with books that’s flawed.
  • there’s a better way to read. Most people rely on techniques like highlighting, rereading, or, worst of all, completely passive reading, which are highly ineffective.
  • Since I started applying evidence-based learning strategies to reading non-fiction books, many things have changed. I can explain complex ideas during dinner conversations. I can recall interesting concepts and link them in my writing or podcasts. As a result, people come to me for all kinds of advice.
  • What’s the Architecture of Human Learning and Memory?
  • Human brains don’t work like recording devices. We don’t absorb information and knowledge by reading sentences.
  • we store new information in terms of its meaning to our existing memory
  • we give new information meaning by actively participating in the learning process — we interpret, connect, interrelate, or elaborate
  • To remember new information, we not only need to know it but also to know how it relates to what we already know.
  • Learning is dependent on memory processes because previously-stored knowledge functions as a framework in which newly learned information can be linked.”
  • Human memory works in three stages: acquisition, retention, and retrieval. In the acquisition phase, we link new information to existing knowledge; in the retention phase, we store it, and in the retrieval phase, we get information out of our memory.
  • Retrieval, the third stage, is cue dependent. This means the more mental links you’re generating during stage one, the acquisition phase, the easier you can access and use your knowledge.
  • we need to understand that the three phases interrelate
  • creating durable and flexible access to to-be-learned information is partly a matter of achieving a meaningful encoding of that information and partly a matter of exercising the retrieval process.”
  • Next, we’ll look at the learning strategies that work best for our brains (elaboration, retrieval, spaced repetition, interleaving, self-testing) and see how we can apply those insights to reading non-fiction books.
  • The strategies that follow are rooted in research from professors of psychological and brain science, led by Henry Roediger and Mark McDaniel. Both scientists spent ten years bridging the gap between cognitive psychology and education. Harvard University Press published their findings in the book ‘Make It Stick.’
  • #1 Elaboration
  • “Elaboration is the process of giving new material meaning by expressing it in your own words and connecting it with what you already know.”
  • Why elaboration works: Elaborative rehearsal encodes information into your long-term memory more effectively. The more details and the stronger the connections you make between new knowledge and what you already know, the better, because you’ll be generating more cues. And the more cues you have, the easier you can retrieve your knowledge.
  • How I apply elaboration: Whenever I read an interesting section, I pause and ask myself about the real-life connection and potential application. The process is invisible, and my inner monologues sound like: “This idea reminds me of…, This insight conflicts with…, I don’t really understand how…, ” etc.
  • For example, when I learned about A/B testing in ‘The Lean Startup,’ I thought about applying this method to my startup. I added a note on the site stating we should try it in user testing next Wednesday. Thereby the book had an immediate application benefit to my life, and I will always remember how the methodology works.
  • How you can apply elaboration: Elaborate while you read by asking yourself meta-learning questions like “How does this relate to my life? In which situation will I make use of this knowledge? How does it relate to other insights I have on the topic?”
  • While pausing and asking yourself these questions, you’re generating important memory cues. If you take some notes, don’t transcribe the author’s words but try to summarize, synthesize, and analyze.
  • #2 Retrieval
  • With retrieval, you try to recall something you’ve learned in the past from your memory. While retrieval practice can take many forms — take a test, write an essay, do a multiple-choice test, practice with flashcards
  • the authors of ‘Make It Stick’ state: “While any kind of retrieval practice generally benefits learning, the implication seems to be that where more cognitive effort is required for retrieval, greater retention results.”
  • Whatever you settle for, be careful not to copy/paste the words from the author. If you don’t do the brain work yourself, you’ll skip the learning benefits of retrieval
  • Retrieval strengthens your memory and interrupts forgetting. As other researchers have replicated, the act of retrieving information is, as a learning event, considerably more potent than an additional study opportunity, particularly in terms of facilitating long-term recall.
  • How I apply retrieval: I retrieve a book’s content from my memory by writing a book summary for every book I want to remember. I ask myself questions like: “How would you summarize the book in three sentences? Which concepts do you want to keep in mind or apply? How does the book relate to what you already know?”
  • I then publish my summaries on Goodreads or write an article about my favorite insights
  • How you can apply retrieval: You can come up with your own questions or use mine. If you don’t want to publish your summaries in public, you can write a summary into your journal, start a book club, create a private blog, or initiate a WhatsApp group for sharing book summaries.
  • a few days after we learn something, forgetting sets in
  • #3 Spaced Repetition
  • With spaced repetition, you repeat the same piece of information across increasing intervals. (A minimal scheduler sketch follows at the end of this list.)
  • The harder it feels to recall the information, the stronger the learning effect. “Spaced practice, which allows some forgetting to occur between sessions, strengthens both the learning and the cues and routes for fast retrieval,”
  • Why it works: It might sound counterintuitive, but forgetting is essential for learning. Spacing out practice might feel less productive than rereading a text because you’ll realize what you forgot. Your brain has to work harder to retrieve your knowledge, which is a good indicator of effective learning.
  • How I apply spaced repetition: After some weeks, I revisit a book and look at the summary questions (see #2). I try to come up with my answer before I look up my actual summary. I can often only remember a fraction of what I wrote and have to look at the rest.
  • “Knowledge trapped in books neatly stacked is meaningless and powerless until applied for the betterment of life.”
  • How you can apply spaced repetition: You can revisit your book summary medium of choice and test yourself on what you remember. What were your action points from the book? Have you applied them? If not, what hindered you?
  • By testing yourself in varying intervals on your book summaries, you’ll strengthen both learning and cues for fast retrieval.
  • Why interleaving works: Alternating between different problems feels more difficult because it, again, facilitates forgetting.
  • How I apply interleaving: I read different books at the same time.
  • 1) Highlight everything you want to remember
  • #5 Self-Testing
  • While reading often falsely tricks us into perceived mastery, testing shows us whether we truly mastered the subject at hand. Self-testing helps you identify knowledge gaps and brings weak areas to the light
  • “It’s better to solve a problem than to memorize a solution.”
  • Why it works: Self-testing helps you overcome the illusion of knowledge. “One of the best habits a learner can instill in herself is regular self-quizzing to recalibrate her understanding of what she does and does not know.”
  • How I apply self-testing: I explain the key lessons from non-fiction books I want to remember to others. Thereby, I test whether I really got the concept. Often, I didn’t
  • instead of feeling frustrated, cognitive science made me realize that identifying knowledge gaps is a desirable and necessary effect for long-term remembering.
  • How you can apply self-testing: Teaching your lessons learned from a non-fiction book is a great way to test yourself. Before you explain a topic to somebody, you have to combine several mental tasks: filter relevant information, organize this information, and articulate it using your own vocabulary.
  • Now that I discovered how to use my Kindle as a learning device, I wouldn’t trade it for a paper book anymore. Here are the four steps it takes to enrich your e-reading experience
  • How you can apply interleaving: Your brain can handle reading different books simultaneously, and it’s effective to do so. You can start a new book before you finish the one you’re reading. Starting again into a topic you partly forgot feels difficult first, but as you know by now, that’s the effect you want to achieve.
  • it won’t surprise you that researchers proved highlighting to be ineffective. It’s passive and doesn’t create memory cues.
  • 2) Cut down your highlights in your browser
  • After you finished reading the book, you want to reduce your highlights to the essential part. Visit your Kindle Notes page to find a list of all your highlights. Using your desktop browser is faster and more convenient than editing your highlights on your e-reading device.
  • Now, browse through your highlights, delete what you no longer need, and add notes to the ones you really like. By adding notes to the highlights, you’ll connect the new information to your existing knowledge
  • 3) Use software to practice spaced repetition. This part is the main reason e-books beat printed books. While you can do all of the above with a little extra time on your physical books, there’s no way to systemize your repetition practice.
  • Readwise is the best software to combine spaced repetition with your e-books. It’s an online service that connects to your Kindle account and imports all your Kindle highlights. Then, it creates flashcards of your highlights and allows you to export your highlights to your favorite note-taking app.
  • Common Learning Myths Debunked: While reading and studying evidence-based learning techniques, I also came across some things I wrongly believed to be true.
  • #2 Effective learning should feel easy. We think learning works best when it feels productive. That’s why we continue to use ineffective techniques like rereading or highlighting. But learning works best when it feels hard, or as the authors of ‘Make It Stick’ write: “Learning that’s easy is like writing in sand, here today and gone tomorrow.”
  • In Conclusion
  • I developed and adjusted these strategies over two years, and they’re still a work in progress.
  • Try all of them but don’t force yourself through anything that doesn’t feel right for you. I encourage you to do your own research, add further techniques, and skip what doesn’t serve you
  • “In the case of good books, the point is not to see how many of them you can get through, but rather how many can get through to you.”— Mortimer J. Adler
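
The "increasing intervals" logic described under #3 can be made concrete in a few lines of code. Below is a minimal sketch of a spaced-repetition scheduler in Python, loosely inspired by the SM-2 family of algorithms; the Card class, the 2.5 growth factor, and the one-day reset are illustrative assumptions, not the behavior of Readwise or any other specific app.

```python
# A minimal spaced-repetition scheduler: a sketch of the "increasing
# intervals" idea above, loosely inspired by the SM-2 family of algorithms.
# The 2.5 growth factor and one-day reset are illustrative assumptions.

from datetime import date, timedelta

class Card:
    def __init__(self, prompt: str):
        self.prompt = prompt
        self.interval_days = 1  # first review: one day after learning
        self.due = date.today() + timedelta(days=self.interval_days)

    def review(self, recalled: bool) -> None:
        """Lengthen the interval on successful recall; reset it on failure."""
        if recalled:
            # Each success lets more forgetting happen before the next review,
            # which is the "desirable difficulty" the article describes.
            self.interval_days = round(self.interval_days * 2.5)
        else:
            self.interval_days = 1  # forgotten: climb the ladder again
        self.due = date.today() + timedelta(days=self.interval_days)

# Usage: quiz yourself on a book-summary question, then reschedule it.
card = Card("How would you summarize 'Make It Stick' in three sentences?")
card.review(recalled=True)   # interval grows to 2 days
card.review(recalled=True)   # then 5 days, then 12, ...
print(card.prompt, "-> next due", card.due)
```

After each successful recall the gap between sessions grows (1 day, then 2, then 5, then 12), mirroring the article's point that allowing some forgetting between sessions strengthens both the learning and the retrieval cues.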
runlai_jiang

You Asked About CES 2018. We Answered. - The New York Times - 0 views

  • At the International Consumer Electronics Show this week in Las Vegas, thousands of tech companies showcased some of the hottest new innovations: artificial intelligence, self-driving car tech, the smart home, voice-controlled accessories, fifth-generation cellular connectivity and more.
  • Curious about the new products and how they will affect your personal technology? Readers asked Brian X. Chen, our lead consumer technology writer who attended the trade show, their questions about wireless, TV and the Internet of Things.
Javier E

Covid-19 expert Karl Friston: 'Germany may have more immunological "dark matter"' | Wor... - 0 views

  • Our approach, which borrows from physics and in particular the work of Richard Feynman, goes under the bonnet. It attempts to capture the mathematical structure of the phenomenon – in this case, the pandemic – and to understand the causes of what is observed. Since we don’t know all the causes, we have to infer them. But that inference, and implicit uncertainty, is built into the models
  • That’s why we call them generative models, because they contain everything you need to know to generate the data. As more data comes in, you adjust your beliefs about the causes, until your model simulates the data as accurately and as simply as possible.
  • A common type of epidemiological model used today is the SEIR model, which considers that people must be in one of four states – susceptible (S), exposed (E), infected (I) or recovered (R). Unfortunately, reality doesn’t break them down so neatly. For example, what does it mean to be recovered? (A minimal SEIR simulation sketch follows at the end of this list.)
  • ...12 more annotations...
  • SEIR models start to fall apart when you think about the underlying causes of the data. You need models that can allow for all possible states, and assess which ones matter for shaping the pandemic’s trajectory over time.
  • These techniques have enjoyed enormous success ever since they moved out of physics. They’ve been running your iPhone and nuclear power stations for a long time. In my field, neurobiology, we call the approach dynamic causal modelling (DCM). We can’t see brain states directly, but we can infer them given brain imaging data
  • Epidemiologists currently tackle the inference problem by number-crunching on a huge scale, making use of high-performance computers. Imagine you want to simulate an outbreak in Scotland. Using conventional approaches, this would take you a day or longer with today’s computing resources. And that’s just to simulate one model or hypothesis – one set of parameters and one set of starting conditions.
  • Using DCM, you can do the same thing in a minute. That allows you to score different hypotheses quickly and easily, and so to home in sooner on the best one.
  • This is like dark matter in the universe: we can’t see it, but we know it must be there to account for what we can see. Knowing it exists is useful for our preparations for any second wave, because it suggests that targeted testing of those at high risk of exposure to Covid-19 might be a better approach than non-selective testing of the whole population.
  • Our response as individuals – and as a society – becomes part of the epidemiological process, part of one big self-organising, self-monitoring system. That means it is possible to predict not only numbers of cases and deaths in the future, but also societal and institutional responses – and to attach precise dates to those predictions.
  • How well have your predictions been borne out in this first wave of infections? For London, we predicted that hospital admissions would peak on 5 April, deaths would peak five days later, and critical care unit occupancy would not exceed capacity – meaning the Nightingale hospitals would not be required. We also predicted that improvements would be seen in the capital by 8 May that might allow social distancing measures to be relaxed – which they were in the prime minister’s announcement on 10 May. To date our predictions have been accurate to within a day or two, so there is a predictive validity to our models that the conventional ones lack.
  • What do your models say about the risk of a second wave? The models support the idea that what happens in the next few weeks is not going to have a great impact in terms of triggering a rebound – because the population is protected to some extent by immunity acquired during the first wave. The real worry is that a second wave could erupt some months down the line when that immunity wears off.
  • the important message is that we have a window of opportunity now, to get test-and-trace protocols in place ahead of that putative second wave. If these are implemented coherently, we could potentially defer that wave beyond a time horizon where treatments or a vaccine become available, in a way that we weren’t able to before the first one.
  • We’ve been comparing the UK and Germany to try to explain the comparatively low fatality rates in Germany. The answers are sometimes counterintuitive. For example, it looks as if the low German fatality rate is not due to their superior testing capacity, but rather to the fact that the average German is less likely to get infected and die than the average Brit. Why? There are various possible explanations, but one that looks increasingly likely is that Germany has more immunological “dark matter” – people who are impervious to infection, perhaps because they are geographically isolated or have some kind of natural resistance
  • Any other advantages?Yes. With conventional SEIR models, interventions and surveillance are something you add to the model – tweaks or perturbations – so that you can see their effect on morbidity and mortality. But with a generative model these things are built into the model itself, along with everything else that matters.
  • Are generative models the future of disease modelling?That’s a question for the epidemiologists – they’re the experts. But I would be very surprised if at least some part of the epidemiological community didn’t become more committed to this approach in future, given the impact that Feynman’s ideas have had in so many other disciplines.
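  A minimal sketch of the conventional SEIR dynamics that the interview contrasts with generative-model inversion: one compartmental model, one parameter set, one set of starting conditions per run. The rates, the daily stepping, and the roughly Scotland-sized population are illustrative assumptions, not the models the interview describes.

      # Susceptible -> Exposed -> Infectious -> Recovered, stepped once per day.
      def seir(days, beta=0.3, sigma=0.2, gamma=0.1, n=5.5e6, e0=100.0):
          s, e, i, r = n - e0, e0, 0.0, 0.0
          history = []
          for _ in range(days):
              new_exposed = beta * s * i / n   # contacts that transmit
              new_infectious = sigma * e       # incubation period ending
              new_recovered = gamma * i        # recovery or removal
              s -= new_exposed
              e += new_exposed - new_infectious
              i += new_infectious - new_recovered
              r += new_recovered
              history.append((s, e, i, r))
          return history

      # One call is one hypothesis. Scoring many hypotheses this way means many
      # full simulations, which is the cost the interview says DCM avoids.
      trajectory = seir(days=300)
      peak_day = max(range(300), key=lambda d: trajectory[d][2])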
peterconnelly

Craig Nason : I survived the Columbine school shooting - then watched Uvalde ... - 0 views

  • In April 1999, I survived the Columbine shooting. At just 17 years old, I was forced to process the murder of my friends, the trauma of my community, and the unique attention the world paid to my experience. At the time, Columbine was considered a once-in-a-generation type of tragedy — one that few other people in our country would ever have to contend with. 
  • According to the organization Everytown for Gun Safety, the U.S. has experienced 274 mass shootings since 2009 alone. Thousands of survivors are now part of a club that nobody wants to join.
  • I think about how we vowed to “never forget” Columbine. How we would make sure the next generation would be safer. The opposite has happened.
  • ...8 more annotations...
  • You likely have memories associated with some of these shootings. What you were doing when you heard. Who you were with. How it affected you. When the headlines break, I often hear from friends, family, and colleagues: “I’m so sorry you have to relive this again.” The truth is, we are all reliving it again on some level. The steady cadence of shock, grief and pain is our collective story.
  • So now that the “thoughts and prayers” have been shared, what can we do together? 
  • Politicians love to tell us not to “politicize these tragedies” following mass shootings.
  • Would you care if an earlier tragedy was politicized if it meant getting your son or daughter back? Of course not. Grief doesn’t have a political affiliation. 
  • Then, change the culture. It’s impossible to ignore the roles white supremacy, misogyny and extremism play in so many mass gun violence events. In many mass shootings, the shooter has exhibited dangerous warning signs before the shooting.
  • Because it’s about the guns. The United States has roughly 5 percent of the world’s population and over 30 percent of the world’s mass shootings.
  • And while politicians love to act like this is an impossibly polarizing issue, perhaps we’re not as divided as we think. The majority of Americans actually believe Congress should pass more extensive gun legislation. According to a 2015 Public Policy Polling survey, 83 percent of gun owners support expanded background checks.
  • A new generation has grown up since Columbine; 310,000 more students in the U.S. have experienced gun violence since that day in 1999 when I escaped with my life.
Javier E

Where We Went Wrong | Harvard Magazine - 0 views

  • John Kenneth Galbraith assessed the trajectory of America’s increasingly “affluent society.” His outlook was not a happy one. The nation’s increasingly evident material prosperity was not making its citizens any more satisfied. Nor, at least in its existing form, was it likely to do so.
  • One reason, Galbraith argued, was the glaring imbalance between the opulence in consumption of private goods and the poverty, often squalor, of public services like schools and parks.
  • Another was that even the bountifully supplied private goods often satisfied no genuine need, or even desire; a vast advertising apparatus generated artificial demand for them, and satisfying this demand failed to provide meaningful or lasting satisfaction.
  • ...28 more annotations...
  • economist J. Bradford DeLong ’82, Ph.D. ’87, looking back on the twentieth century two decades after its end, comes to a similar conclusion but on different grounds.
  • DeLong, professor of economics at Berkeley, looks to matters of “contingency” and “choice”: at key junctures the economy suffered “bad luck,” and the actions taken by the responsible policymakers were “incompetent.”
  • these were “the most consequential years of all humanity’s centuries.” The changes they saw, while in the first instance economic, also “shaped and transformed nearly everything sociological, political, and cultural.”
  • DeLong’s look back over the twentieth century energetically encompasses political and social trends as well; nor is his scope limited to the United States. The result is a work of strikingly expansive breadth and scope
  • labeling the book an economic history fails to convey its sweeping frame.
  • The century that is DeLong’s focus is what he calls the “long twentieth century,” running from just after the Civil War to the end of the 2000s when a series of events, including the biggest financial crisis since the 1930s followed by likewise the most severe business downturn, finally rendered the advanced Western economies “unable to resume economic growth at anything near the average pace that had been the rule since 1870.”
  • And behind those missteps in policy stood not just failures of economic thinking but a voting public that reacted perversely, even if understandably, to the frustrations poor economic outcomes had brought them.
  • Within this 140-year span, DeLong identifies two eras of “El Dorado” economic growth, each facilitated by expanding globalization, and each driven by rapid advances in technology and changes in business organization for applying technology to economic ends:
  • from 1870 to World War I, and again from World War II to 1973.
  • fellow economist Robert J. Gordon ’62, who in his monumental treatise on The Rise and Fall of American Economic Growth (reviewed in “How America Grew,” May-June 2016, page 68) hailed 1870-1970 as a “special century” in this regard (interrupted midway by the disaster of the 1930s).
  • Gordon highlighted the role of a cluster of once-for-all-time technological advances—the steam engine, railroads, electrification, the internal combustion engine, radio and television, powered flight.
  • Pessimistic that future technological advances (most obviously, the computer and electronics revolutions) will generate productivity gains to match those of the special century, Gordon therefore saw little prospect of a return to the rapid growth of those halcyon days.
  • DeLong instead points to a series of noneconomic (and non-technological) events that slowed growth, followed by a perverse turn in economic policy triggered in part by public frustration: In 1973 the OPEC cartel tripled the price of oil, and then quadrupled it yet again six years later.
  • For all too many Americans (and citizens of other countries too), the combination of high inflation and sluggish growth meant that “social democracy was no longer delivering the rapid progress toward utopia that it had delivered in the first post-World War II generation.”
  • Frustration over these and other ills in turn spawned what DeLong calls the “neoliberal turn” in public attitudes and economic policy. The new economic policies introduced under this rubric “did not end the slowdown in productivity growth but reinforced it.”
  • The tax and regulatory changes enacted in this new climate channeled most of what economic gains there were to people already at the top of the income scale.
  • Meanwhile, progressive “inclusion” of women and African Americans in the economy (and in American society more broadly) meant that middle- and lower-income white men saw even smaller gains—and, perversely, reacted by providing still greater support for policies like tax cuts for those with far higher incomes than their own.
  • Daniel Bell’s argument in his 1976 classic The Cultural Contradictions of Capitalism. Bell famously suggested that the very success of a capitalist economy would eventually undermine a society’s commitment to the values and institutions that made capitalism possible in the first place.
  • In DeLong’s view, the “greatest cause” of the neoliberal turn was “the extraordinary pace of rising prosperity during the Thirty Glorious Years, which raised the bar that a political-economic order had to surpass in order to generate broad acceptance.” At the same time, “the fading memory of the Great Depression led to the fading of the belief, or rather recognition, by the middle class that they, as well as the working class, needed social insurance.”
  • what the economy delivered to “hard-working white men” no longer matched what they saw as their just deserts: in their eyes, “the rich got richer, the unworthy and minority poor got handouts.”
  • As Bell would have put it, the politics of entitlement, bred by years of economic success that so many people had come to take for granted, squeezed out the politics of opportunity and ambition, giving rise to the politics of resentment.
  • The new era therefore became “a time to question the bourgeois virtues of hard, regular work and thrift in pursuit of material abundance.”
  • DeLong’s unspoken agenda would surely include rolling back many of the changes made in the U.S. tax code over the past half-century, as well as reinvigorating antitrust policy to blunt the dominance, and therefore outsize profits, of the mega-firms that now tower over key sectors of the economy.
  • He would also surely reverse the recent trend moving away from free trade. Central bankers should certainly behave like Paul Volcker (appointed by President Carter), whose decisive action finally broke the 1970s inflation even at considerable economic cost.
  • Not only Galbraith’s main themes but many of his more specific observations as well seem as pertinent, and important, today as they did then.
  • What will future readers of Slouching Towards Utopia conclude?
  • If anything, DeLong’s narratives will become more valuable as those events fade into the past. Alas, his description of fascism as having at its center “a contempt for limits, especially those implied by reason-based arguments; a belief that reality could be altered by the will; and an exaltation of the violent assertion of that will as the ultimate argument” will likely strike a nerve with many Americans not just today but in years to come.
  • what about DeLong’s core explanation of what went wrong in the latter third of his, and our, “long century”? I predict that it too will still look right, and important.
Javier E

Psychological nativism - Wikipedia - 0 views

  • In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs.
  • Some nativists believe that specific beliefs or preferences are "hard-wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.
  • Nativism has a history in philosophy, particularly as a reaction to the straightforward empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.
  • ...14 more annotations...
  • The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.
  • Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language.
  • For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language.
  • In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).
  • A number of other theorists[1][2][3] have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.[4]
  • Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success.[20] Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis).[14][21][22][23]
  • The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.
  • This evidence is all the more impressive when one considers that most children do not receive reliable corrections for grammatical errors.[9] Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly.[10] The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.
  • Researchers at Blue Brain discovered a network of about fifty neurons which they believed were building blocks of more complex knowledge but contained basic innate knowledge that could be combined in different more complex ways to give way to acquired knowledge, like memory.[11]
  • If knowledge were acquired purely through experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established prior to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.[11]
  • Modern nativist theory makes little in the way of specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". Influential psychologist Henry L. Roediger III remarked that "Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".[13]
  • Chomsky's poverty of the stimulus argument is controversial within linguistics.[14][15][16][17][18][19]
  • Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.[6]
  • Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and the intended outcome is how an organism is meant to develop.[24]
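  The similarity-based generalization mentioned above can be made concrete with a toy distributional sketch: represent each word by counts of its neighboring words, then measure overlap, so usage learned for a familiar word can be extended to a new word with a similar vector. The corpus, window size, and function names are illustrative assumptions, not any cited study's method.

      from collections import Counter
      from math import sqrt

      corpus = "the cat sat on the mat the dog sat on the rug a cat chased a dog".split()

      def context_vector(word, window=1):
          # Count the words appearing within `window` positions of `word`.
          counts = Counter()
          for idx, w in enumerate(corpus):
              if w == word:
                  for j in range(max(0, idx - window), min(len(corpus), idx + window + 1)):
                      if j != idx:
                          counts[corpus[j]] += 1
          return counts

      def cosine(a, b):
          dot = sum(a[k] * b[k] for k in a if k in b)
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      # "cat" and "dog" occur in similar contexts, so a learner who knows how
      # "cat" is used can generalize that usage to "dog".
      print(cosine(context_vector("cat"), context_vector("dog")))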
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence.
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • Rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous.
  • Celeste Kidd, a professor of psychology at the University of California, Berkeley, studies how people acquire knowledge.
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now.”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and the new Bing.
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts.
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence.
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • That’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models.
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
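  A hedged sketch of the human-feedback step described above: a rater compares two candidate responses to the same prompt, and the winning/losing pair becomes training data for a reward model that scores future outputs. The names and structures are illustrative of the general technique, not either company's actual pipeline.

      from dataclasses import dataclass

      @dataclass
      class PreferencePair:
          prompt: str
          chosen: str      # the response the rater judged better
          rejected: str    # the response judged worse, or not acceptable at all

      def collect_feedback(prompt, response_a, response_b, rater):
          # `rater` stands in for a paid human trainer returning "a" or "b".
          if rater(prompt, response_a, response_b) == "a":
              return PreferencePair(prompt, response_a, response_b)
          return PreferencePair(prompt, response_b, response_a)

      # Pairs like these train a reward model to score "chosen" above
      # "rejected"; the chatbot is then tuned to maximize that score.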
Javier E

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post - 0 views

  • GPT-4, in contrast, is a state-of-the-art system capable not just of creating words but of describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things
  • ...22 more annotations...
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • it could lead to business models and creative ventures no one can predict.
  • sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • the company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests.
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and the AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what it’s saying or when it’s wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique in 2017 known as the transformer that rapidly advanced how AI systems can analyze patterns in human speech and imagery.
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time.
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, the humans themselves.
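  The pattern-learning loop described above, shrunk to a toy bigram model as an illustrative sketch: count which word follows which, then generate one word at a time by sampling from those counts. Real transformers condition on far longer contexts, but the predict-the-next-token loop has the same shape.

      import random
      from collections import Counter, defaultdict

      text = "the cat sat on the mat and the cat ran after the dog".split()
      follows = defaultdict(Counter)
      for prev, nxt in zip(text, text[1:]):
          follows[prev][nxt] += 1          # learn which words follow which

      def generate(start, length=8):
          out = [start]
          for _ in range(length):
              options = follows.get(out[-1])
              if not options:              # dead end: no observed successor
                  break
              words, counts = zip(*options.items())
              out.append(random.choices(words, weights=counts)[0])
          return " ".join(out)

      print(generate("the"))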
Javier E

Generative AI Brings Cost of Creation Close to Zero, Andreessen Horowitz's Martin Casad... - 0 views

  • The value of ChatGPT-like technology comes from bringing the cost of producing images, text and other creative projects close to zero.
  • With only a few prompts, generative AI technology—such as the giant language models underlying the viral ChatGPT chatbot—can enable companies to create sales and marketing materials from scratch quickly for a fraction of the price of using current software tools, and paying designers, photographers and copywriters, among other expenses.
  • “That’s very rare in my 20 years of experience in doing just frontier tech, to have four or five orders of magnitude of improvement on something people care about.”
  • ...4 more annotations...
  • many corporate technology chiefs have taken a wait-and-see approach to the technology, which has developed a reputation for producing false, misleading and unintelligible results—dubbed AI ‘hallucinations’. 
  • Though ChatGPT, which is available free online, is considered a consumer app, OpenAI has encouraged companies and startups to build apps on top of its language models—in part by providing access to the underlying computer code for a fee.
  • There are “certain spaces where it’s clearly directly applicable,” such as summarizing documents or responding to customer queries. Many startups are racing to apply the technology to a wider set of enterprise use cases.
  • “I think it’s going to creep into our lives in ways we least expect it,” Mr. Casado said.
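  A sketch of what building on the language models for a fee can look like in practice: one HTTPS call to OpenAI's public chat-completions endpoint drafts ad copy. The endpoint URL and payload shape follow OpenAI's published API; the model name, prompt, and helper function are illustrative assumptions.

      import os
      import requests

      def draft_copy(product: str) -> str:
          resp = requests.post(
              "https://api.openai.com/v1/chat/completions",
              headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
              json={
                  "model": "gpt-3.5-turbo",   # illustrative model choice
                  "messages": [{
                      "role": "user",
                      "content": f"Write a two-sentence ad for {product}.",
                  }],
              },
              timeout=30,
          )
          resp.raise_for_status()
          return resp.json()["choices"][0]["message"]["content"]

      # One prompt replaces hours of design and copywriting work, which is
      # the cost collapse the annotations describe.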
mcginnisca

Republicans have a candidate who could take back the White House. They're just not voti... - 0 views

  • John Kasich may be running a distant third in the primary, but he's the Republican presidential candidate best positioned to beat Hillary Clinton in a general election matchup.
  • The researchers then took the results of those interviews and combined them with voter demographics and economic data to forecast an outcome in each state. These models, as expected, show Clinton pretty easily winning Electoral College majorities over Trump and Cruz, the two Republican frontrunners.
  • Yet she still loses overwhelmingly — she trails Kasich in a couple of traditional swing states (Colorado and Kasich's home state of Ohio), and even narrowly trails him in bluer states like Pennsylvania, Michigan, Wisconsin, Minnesota, Maine, and Oregon.
  • ...3 more annotations...
  • By contrast, both Ted Cruz and Donald Trump would lose to Clinton in a general election battle, according to the Morning Consult projections — though, interestingly, not in historic blowouts but instead by similar margins to Mitt Romney's 2012 loss
  • "If the election were held today, John Kasich would receive 304 electoral votes to Hillary Clinton’s 234, largely due to strong performances in the Midwest and mid-Atlantic,
  • Trump would get just 210 electoral votes, and Cruz would get 206.
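  The electoral arithmetic in these projections is internally consistent with the 538-vote Electoral College, and a short check makes the Romney comparison concrete (Romney lost 332 to 206 in 2012). The lines below assume, as the projections do, that every electoral vote goes to one of the two candidates.

      TOTAL = 538
      assert 304 + 234 == TOTAL   # Kasich 304, Clinton 234: all votes accounted for
      print(TOTAL - 210)          # Clinton vs. Trump: 328 to 210
      print(TOTAL - 206)          # Clinton vs. Cruz: 332 to 206, Romney's exact 2012 result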
lenaurick

Your Facial Bone Structure Has a Big Influence on How People See You - Scientific American - 0 views

  • New research shows that although we perceive character traits like trustworthiness based on a person’s facial expressions, our perceptions of abilities like strength are influenced by facial structure.
  • A face resembling a happy expression, with upturned eyebrows and upward curving mouth, is likely to be seen as trustworthy while one resembling an angry expression, with downturned eyebrows, is likely to be seen as untrustworthy.
  • Wider faces are seen as more competent.
  • ...15 more annotations...
  • For those of us seeking to appear friendly and trustworthy to others, a new study underscores an old, chipper piece of advice: Put on a happy face.
  • the relevance of facial expressions to perceptions of characteristics such as trustworthiness and friendliness.
  • for those faces lacking structural cues, people could no longer perceive strength but could still perceive personality traits based on facial expressions.
  • An analysis revealed that participants generally ranked people with a happy expression as friendly and trustworthy but not those with angry expressions.
  • Participants did not rank faces as indicative of physical strength based on facial expression, but graded faces that were very broad as those of strong individuals.
  • In the first variation, for faces lacking emotional cues, people could no longer perceive personality traits but could still perceive strength based on width
  • perceptions of abilities such as physical strength are not dependent on facial expressions but rather on facial bone structure.
  • As might be expected, participants picked faces with happier expressions as financial advisors and selected broader faces as belonging to power-lifting champs.
  • Most of the participants found the computer-generated averages to be good representations of trustworthiness or strength — and generally saw the average “financial advisor” face as more trustworthy and the “power-lifter” face as stronger.
  • The findings suggest facial expressions strongly influence perception of traits such as trustworthiness, friendliness or warmth, but not ability (strength, in these experiments).
  • Facial structure influences the perception of physical ability but not intentions (such as friendliness and trustworthiness, in this instance).
  • This new work reveals how perceptions of the same person can vary greatly depending on that person’s facial expression in any given moment.
  • The findings above come with a big caveat: Only male faces were shown to subjects.
  • Studies of facial width and height in females have shown mixed results, so presenting study subjects with a mix of male and female faces would have yielded inconclusive results.
  • In our everyday lives this study and others make clear that although we might try to influence others’ perceptions of us with photos showing us donning sharp attire or displaying a self-assured attitude, the most important determinant of others' perception of and consequent behavior toward us is our faces.
Javier E

No matter who wins the presidential election, Nate Silver was right - The Washington Post - 1 views

  • I don’t fault Silver for his caution. It’s honest. What it really says is he doesn’t know with much confidence what’s going to happen.
  • That’s because there’s a lot of human caprice and whim in electoral behavior that can’t always be explained or predicted with scientific precision. Politics ain’t moneyball. Good-quality polls give an accurate sense of where a political race is at a point in time, but they don’t predict the future.
  • Predictive models, generally based on historical patterns, work until they don’t.
  • ...2 more annotations...
  • In his hedged forecasts this time, Silver appears to be acknowledging that polling and historical patterns don’t always capture what John Maynard Keynes, in his classic 1936 General Theory, described as “animal spirits.”
  • There is, Keynes wrote, “the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than on a mathematical expectation, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as a result of animal spirits — of a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.”
Javier E

Marie Kondo and the Ruthless War on Stuff - The New York Times - 1 views

  • the method outlined in Kondo’s book. It includes something called a “once-in-a-lifetime tidying marathon,” which means piling five categories of material possessions — clothing, books, papers, miscellaneous items and sentimental items, including photos, in that order — one at a time, surveying how much of each you have, seeing that it’s way too much and then holding each item to see if it sparks joy in your body. The ones that spark joy get to stay. The ones that don’t get a heartfelt and generous goodbye, via actual verbal communication, and are then sent on their way to their next life.
  • She is often mistaken for someone who thinks you shouldn’t own anything, but that’s wrong. Rather, she thinks you can own as much or as little as you like, as long as every possession brings you true joy.
  • By the time her book arrived, America had entered a time of peak stuff, when we had accumulated a mountain of disposable goods — from Costco toilet paper to Isaac Mizrahi swimwear by Target — but hadn’t (and still haven’t) learned how to dispose of them. We were caught between an older generation that bought a princess phone in 1970 for $25 that was still working and a generation that bought $600 iPhones, knowing they would have to replace them within two years. We had the princess phone and the iPhone, and we couldn’t dispose of either. We were burdened by our stuff; we were drowning in it.
  • ...16 more annotations...
  • A woman named Diana, who wore star-and-flower earrings, said that before she tidied, her life was out of control. Her job had been recently eliminated when she found the book. “It’s a powerful message for women that you should be surrounded by things that make you happy,”
  • “I found the opposite of happiness is not sadness,” Diana told us. “It’s chaos.”
  • Another woman said she KonMaried a bad boyfriend. Having tidied everything in her home and finding she still distinctly lacked happiness, she held her boyfriend in her hands, realized he no longer sparked joy and got rid of him.
  • She realized that the work she was doing as a tidying consultant was far more psychological than it was practical. Tidying wasn’t just a function of your physical space; it was a function of your soul.
  • She wants you to override the instinct to keep a certain thing because an HGTV show or a home-design magazine or a Pinterest page said it would brighten up your room or make your life better. She wants you to possess your possessions on your own terms, not theirs.
  • she would say to him what she said to me, that yes, America is a little different from Japan, but ultimately it’s all the same. We’re all the same in that we’re enticed into the false illusion of happiness through material purchase.
  • She leaves room for something that people don’t often give her credit for: that the KonMari method might not be your speed. “I think it’s good to have different types of organizing methods,” she continued, “because my method might not spark joy with some people, but his method might.
  • Conference was different from the KonMari events that I attended. Whereas Kondo does not believe that you need to buy anything in order to organize and that storage systems provide only the illusion of tidiness, the women of Conference traded recon on timesaving apps, label makers, the best kind of Sharpie, the best tool they own (“supersticky notes,” “drawer dividers”)
  • They don’t like that you have to get rid of all of your papers, which is actually a misnomer: Kondo just says you should limit them because they’re incapable of sparking joy, and you should confine them to three folders: needs immediate attention, must be kept for now, must be kept forever.
  • each organizer I spoke with said that she had the same fundamental plan that Kondo did, that the client should purge (they cry “purge” for what Kondo gently calls “discarding”) what is no longer needed or wanted; somehow the extra step of thanking the object or folding it a little differently enrages them. This rage hides behind the notion that things are different here in America, that our lives are more complicated and our stuff is more burdensome and our decisions are harder to make.
  • Ultimately, the women of NAPO said that Kondo’s methods were too draconian and that the clients they knew couldn’t live in Kondo’s world. They had jobs and children, and they needed baby steps and hand-holding and maintenance plans. They needed someone to do for them what they couldn’t naturally do for themselves.
  • the most potent difference between Kondo and the NAPO women is that the NAPO women seek to make a client’s life good by organizing their stuff; Kondo, on the other hand, leads with her spiritual mission, to change their lives through magic.
  • She went to work in finance, but she found the work empty and meaningless. She would come home and find herself overwhelmed by her stuff. So she began searching for “minimalism” on the internet almost constantly, happening on Pinterest pages of beautiful, empty bathrooms and kitchens, and she began to imagine that it was her stuff that was weighing her down. She read philosophy blogs about materialism and the accumulation of objects. “They just all talked about feeling lighter,”
  • “I never knew how to get here from there,” she said. Ning looked around her apartment, which is spare. She loves it here now, but that seemed impossible just a couple of years ago.
  • She found Kondo’s book, and she felt better immediately, just having read it. She began tidying, and immediately she lost three pounds. She had been trying to lose weight forever, and then suddenly, without effort, three pounds, just gone.
  • when it comes to stuff, we are all the same. Once we’ve divided all the drawers and eliminated that which does not bring us joy and categorized ourselves within an inch of our lives, we’ll find that the person lying beneath all the stuff was still just plain old us. We are all a mess, even when we’re done tidying.
Javier E

Think Less, Think Better - The New York Times - 1 views

  • the capacity for original and creative thinking is markedly stymied by stray thoughts, obsessive ruminations and other forms of “mental load.”
  • Many psychologists assume that the mind, left to its own devices, is inclined to follow a well-worn path of familiar associations. But our findings suggest that innovative thinking, not routine ideation, is our default cognitive mode when our minds are clear.
  • We found that a high mental load consistently diminished the originality and creativity of the response: Participants with seven digits to recall resorted to the most statistically common responses (e.g., white/black), whereas participants with two digits gave less typical, more varied pairings (e.g., white/cloud).
  • ...8 more annotations...
  • In another experiment, we found that longer response times were correlated with less diverse responses, ruling out the possibility that participants with low mental loads simply took more time to generate an interesting response.
  • it seems that with a high mental load, you need more time to generate even a conventional thought. These experiments suggest that the mind’s natural tendency is to explore and to favor novelty, but when occupied it looks for the most familiar and inevitably least interesting solution.
  • In general, there is a tension in our brains between exploration and exploitation. When we are exploratory, we attend to things with a wide scope, curious and desiring to learn. Other times, we rely on, or “exploit,” what we already know, leaning on our expectations, trusting the comfort of a predictable environment
  • Much of our lives are spent somewhere between those extremes. There are functional benefits to both modes: If we were not exploratory, we would never have ventured out of the caves; if we did not exploit the certainty of the familiar, we would have taken too many risks and gone extinct. But there needs to be a healthy balance.
  • All these loads can consume mental capacity, leading to dull thought and anhedonia — a flattened ability to experience pleasure.
  • Ancient meditative practice helps free the mind to have richer experiences of the present.
  • Your life leaves too much room for your mind to wander. As a result, only a small fraction of your mental capacity remains engaged in what is before it, and mind-wandering and ruminations become a tax on the quality of your life.
  • Honing an ability to unburden the load on your mind, be it through meditation or some other practice, can bring with it a wonderfully magnified experience of the world — and, as our study suggests, of your own mind.