TOK Friends: Group items matching "origin" in title, tags, annotations or url

Faith vs. Facts - NYTimes.com - 0 views

  • a broad group of scholars is beginning to demonstrate that religious belief and factual belief are indeed different kinds of mental creatures.
  • People process evidence differently when they think with a factual mind-set rather than with a religious mind-set
  • Even what they count as evidence is different
  • And they are motivated differently, based on what they conclude.
  • On what grounds do scholars make such claims?
  • the very language people use changes when they talk about religious beings, and the changes mean that they think about their realness differently.
  • to say, “I believe that Jesus Christ is alive” signals that you know that other people might not think so. It also asserts reverence and piety
  • We seem to regard religious beliefs and factual beliefs with what the philosopher Neil Van Leeuwen calls different “cognitive attitudes.”
  • when people consider the truth of a religious belief, what the belief does for their lives matters more than, well, the facts.
  • We evaluate religious beliefs more with our sense of destiny, purpose and the way we think the world should be
  • religious and factual beliefs play different roles in interpreting the same events. Religious beliefs explain why, rather than how
  • people’s reliance on supernatural explanations increases as they age.
  • It’s the young kids who seem skeptical when researchers ask them about gods and ancestors, and the adults who seem clear and firm. It seems that supernatural ideas do things for adults they do not yet do for children.
  • people don’t use rational, instrumental reasoning when they deal with religious beliefs
  • sacred values are immune to the normal cost-benefit trade-offs that govern other dimensions of our lives.
  • The danger point seems to be when people feel themselves to be completely fused with a group defined by its sacred value.
  • One of the interesting things about sacred values, however, is that they are both general (“I am a true Christian”) and particular (“I believe that abortion is murder”)
  • It is possible that this is the key to effective negotiation, because the ambiguity allows the sacred value to be reframed without losing its essential truth
  • these new ideas about religious belief should shape the way people negotiate about ownership of the land, just as they should shape the way we think about climate change deniers and vaccine avoiders. People aren’t dumb in not recognizing the facts. They are using a reasoning process that responds to moral arguments more than scientific ones, and we should understand that when we engage.

Among the Disrupted - NYTimes.com - 0 views

  • Writers hover between a decent poverty and an indecent one; they are expected to render the fruits of their labors for little and even for nothing, and all the miracles of electronic dissemination somehow do not suffice for compensation, either of the fiscal or the spiritual kind.
  • Journalistic institutions slowly transform themselves into silent sweatshops in which words cannot wait for thoughts, and first responses are promoted into best responses, and patience is a professional liability.
  • the discussion of culture is being steadily absorbed into the discussion of business. There are “metrics” for phenomena that cannot be metrically measured. Numerical values are assigned to things that cannot be captured by numbers. Economic concepts go rampaging through noneconomic realms:
  • Quantification is the most overwhelming influence upon the contemporary American understanding of, well, everything. It is enabled by the idolatry of data, which has itself been enabled by the almost unimaginable data-generating capabilities of the new technology
  • The distinction between knowledge and information is a thing of the past, and there is no greater disgrace than to be a thing of the past.
  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”

How Poor Are the Poor? - NYTimes.com - 0 views

  • “Anyone who studies the issue seriously understands that material poverty has continued to fall in the U.S. in recent decades, primarily due to the success of anti-poverty programs” and the declining cost of “food, air-conditioning, communications, transportation, and entertainment,”
  • Despite the rising optimism, there are disagreements over how many poor people there are and the conditions they live under. There are also questions about the problem of relative poverty, what we are now calling inequality
  • Jencks argues that the actual poverty rate has dropped over the past five decades – far below the official government level — if poverty estimates are adjusted for food and housing benefits, refundable tax credits and a better method of determining inflation rates. In Jencks’s view, the war on poverty worked.
  • Democratic supporters of safety net programs can use Jencks’s finding that poverty has dropped below 5 percent as evidence that the war on poverty has been successful.
  • At the same time liberals are wary of positive news because, as Jencks notes: It is easier to rally support for such an agenda by saying that the problem in question is getting worse
  • The plus side for conservatives of Jencks’s low estimate of the poverty rate is the implication that severe poverty has largely abated, which then provides justification for allowing enemies of government entitlement programs to further cut social spending.
  • At the same time, however, Jencks’s data undermines Republican claims that the war on poverty has been a failure – a claim exemplified by Ronald Reagan’s famous 1987 quip: “In the sixties we waged a war on poverty, and poverty won.”
  • Jencks’s conclusion: “The absolute poverty rate has declined dramatically since President Johnson launched his war on poverty in 1964.” At 4.8 percent, Jencks’s calculation is the lowest poverty estimate by a credible expert in the field.
  • his conclusion — that instead of the official count of 45.3 million people living in poverty, the number of poor people in America is just under 15 million — understates the scope of hardship in this country.
  • There are strong theoretical justifications for the use of a relative poverty measure. The Organization for Economic Cooperation and Development puts it this way: In order to participate fully in the social life of a community, individuals may need a level of resources that is not too inferior to the norms of a community. For example, the clothing budget that allows a child not to feel ashamed of his school attire is much more related to national living standards than to strict requirements for physical survival
  • using a relative measure shows that the United States lags well behind other developed countries: If you use the O.E.C.D. standard of 50 percent of median income as a poverty line, the United States looks pretty bad in cross-national relief. We have a relative poverty rate exceeded only by Chile, Turkey, Mexico and Israel (which has seen a big increase in inequality in recent years). And that rate in 2010 was essentially where it was in 1995. [A toy calculation of this 50-percent-of-median measure appears after this list of annotations.]
  • While the United States “has achieved real progress in reducing absolute poverty over the past 50 years,” according to Burtless, “the country may have made no progress at all in reducing the relative economic deprivation of folks at the bottom.”
  • the heart of the dispute: How severe is the problem of poverty?
  • Kathryn Edin, a professor of sociology at Johns Hopkins, and Luke Schaefer, a professor of social work at the University of Michigan, contend that the poverty debate overlooks crucial changes that have taken place within the population of the poor.
  • welfare reform, signed into law by President Clinton in 1996 (the Personal Responsibility and Work Opportunity Act), which limited eligibility for welfare benefits to five years. The limitation has forced many of the poor off welfare: over the past 19 years, the percentage of families falling under the official poverty line who receive welfare benefits has fallen to 26 percent from 68 percent. Currently, three-quarters of those in poverty, under the official definition, receive no welfare payments.
  • The enactment of expanded benefits for the working poor through the earned-income tax credit and the child tax credit. According to Edin and Schaefer, the consequence of these changes, taken together, has been to divide the poor who no longer receive welfare into two groups. The first group is made up of those who have gone to work and have qualified for tax credits. Expanded tax credits lifted about 3.2 million children out of poverty in 2013
  • The second group, though, has really suffered. These are the very poor who are without work, part of a population that is struggling desperately. Edin and Schaefer write that among the losers are an estimated 3.4 million “children who over the course of a year live for at least three months under a $2 per person per day threshold.”
  • Focusing on these findings, Mishel argues, diverts attention from the more serious problem of “the failure of the labor market to adequately reward low-wage workers.” To support his case, Mishel points out that hourly pay for those in the bottom fifth grew only 7.7 percent from 1979 to 2007, while productivity grew by 64 percent, and education levels among workers in this quintile substantially improved.
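
A minimal sketch of the O.E.C.D.-style relative measure quoted above: the poverty line is set at 50 percent of median income, so it tracks national living standards rather than a fixed subsistence basket. The figures below are invented for illustration, and real O.E.C.D. statistics use equivalised household income rather than a plain per-person list.

    from statistics import median

    def relative_poverty_rate(incomes, threshold=0.5):
        """Share of people whose income falls below `threshold` x median income."""
        line = threshold * median(incomes)
        return sum(1 for income in incomes if income < line) / len(incomes)

    # Toy data only: eight hypothetical annual incomes.
    incomes = [8_000, 15_000, 22_000, 30_000, 41_000, 55_000, 70_000, 120_000]
    print(relative_poverty_rate(incomes))  # 0.25 -> two of the eight fall below half the median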

André Glucksmann, French Philosopher Who Renounced Marxism, Dies at 78 - The ... - 0 views

  • In 1975, in “The Cook and the Cannibal,” Mr. Glucksmann subjected Marxism to a scalding critique. Two years later, he broadened his attack in his most influential work, “The Master Thinkers,” which drew a direct line from the philosophies of Marx, Hegel, Fichte and Nietzsche to the enormities of Nazism and Soviet Communism. It was they, he wrote in his conclusion, who “erected the mental apparatus which is indispensable for launching the grand final solutions of the 20th century.”
  • An instant best seller, the book put him in the company of several like-minded former radicals, notably Bernard-Henri Lévy and Pascal Bruckner. Known as the nouveaux philosophes, a term coined by Mr. Lévy, they became some of France’s most prominent public intellectuals, somewhat analogous to the neoconservatives in the United States, but with a lingering leftist orientation.
  • Their apostasy sent shock waves through French intellectual life, and onward to Moscow, which depended on the cachet afforded by Jean-Paul Sartre and other leftist philosophers
  • “It was André Glucksmann who dealt the decisive blow to Communism in France,”
  • “In the West, he presented the anti-totalitarian case more starkly and more passionately than anyone else in modern times,
  • “He was a passionate defender of the superoppressed, whether it was the prisoners of the Gulag, the Bosnians and Kosovars, gays during the height of the AIDS crisis, the Chechens under Putin or the Iraqis under Saddam,” he said. “When he turned against Communism, it was because he realized that Communists were not on the same side.”
  • After earning the teaching degree known as an agrégation from the École Normale Supérieure de Saint-Cloud in 1961, Mr. Glucksmann enrolled in the National Center for Scientific Research to pursue a doctorate under Raymond Aron — an odd matchup because Aron was France’s leading anti-Marxist intellectual.
  • His subsequent turn away from Marxism made him a reviled figure on the left, and former comrades looked on aghast as he became one of France’s most outspoken defenders of the United States. He argued for President Ronald Reagan’s policy of nuclear deterrence toward the Soviet Union, intervention in the Balkans and both American invasions of Iraq. In 2007, he supported the candidacy of Nicolas Sarkozy for the French presidency.
  • “There is the Glucksmann who was right and the Glucksmann who could — with the same fervor, the same feeling of being in the right — be wrong,” Mr. Lévy wrote in a posthumous appreciation for Le Monde. “What set him apart from others under such circumstances is that he would admit his error, and when he came around he was fanatical about studying his mistake, mulling it over, understanding it.”
  • In his most recent book, “Voltaire Counterattacks,” published this year, he positioned France’s greatest philosopher, long out of favor, as a penetrating voice perfectly suited to the present moment.
  • “I think thought is an individual action, not one of a party,” Mr. Glucksmann told The Chicago Tribune in 1991. “First you think. And if that corresponds with the Left, then you are of the Left; if Right, then you are of the Right. But this idea of thinking Left or Right is a sin against the spirit and an illusion.”

The Sheltering Campus: Why College Is Not Home - The New York Times - 2 views

  • The college years — a time for important growth in autonomy and the consolidation of adult identity and life goals — have evolved into an extended period of adolescence during which many of today’s students are not saddled with adult responsibilities.
  • For previous generations, college was a decisive break from parental supervision; guidance and support needed to come from peers and from within.
  • In the past two decades, however, continued family contact and dependence, thanks to cellphones, email and social media, have increased significantly — some parents go so far as to help with coursework.
  • At the same time, the rise in protective committees and procedures (like trigger warnings) ensures that students will not be confronted with course materials that might upset them (even classics like Ovid’s “Metamorphoses” and “The Adventures of Huckleberry Finn”).
  • universities like Yale have given in to the implicit notion that they should provide the equivalent of the home environment.
  • But college is a different kind of community than a family, and its primary job is education of the student and adaptation to independent community living.
  • To prepare for increased autonomy and responsibility, college needs to be a time of exploration and experimentation. This process entails “trying on” new ways of thinking about oneself both intellectually and personally, which is possible only if a certain degree of freedom is allowed.
  • While we should provide “safe spaces” within colleges for marginalized groups, we must also make it safe for all community members to express opinions and challenge majority views. Intellectual growth and flexibility are fostered by rigorous debate and questioning.
  • It is not surprising that young people are prone to lash out, particularly when there are sociologic reasons to do so. Our generation rallied around clear issues: the war in Vietnam and governmentally sanctioned discrimination based on race and gender. ISIS, Newtown, a changing economy with fewer good jobs and stable career paths create anxiety without generating a unifying moral vision
  • The encroachment of behavioral guidelines into the social and even intellectual spheres comes at a cost. Every college discussion about community values, social climate and behavior should also include recognition of the developmental importance of student autonomy and self-regulation, of the necessary tension between safety and self-discovery.

How Meditation Changes the Brain and Body - The New York Times - 0 views

  • a study published in Biological Psychiatry brings scientific thoroughness to mindfulness meditation and for the first time shows that, unlike a placebo, it can change the brains of ordinary people and potentially improve their health.
  • One difficulty of investigating meditation has been the placebo problem. In rigorous studies, some participants receive treatment while others get a placebo: They believe they are getting the same treatment when they are not. But people can usually tell if they are meditating. Dr. Creswell, working with scientists from a number of other universities, managed to fake mindfulness.
  • Half the subjects were then taught formal mindfulness meditation at a residential retreat center; the rest completed a kind of sham mindfulness meditation that was focused on relaxation and distracting oneself from worries and stress.
  • Dr. Creswell and his colleagues believe that the changes in the brain contributed to the subsequent reduction in inflammation, although precisely how remains unknown.
  • follow-up brain scans showed differences in only those who underwent mindfulness meditation. There was more activity, or communication, among the portions of their brains that process stress-related reactions and other areas related to focus and calm. Four months later, those who had practiced mindfulness showed much lower levels in their blood of a marker of unhealthy inflammation than the relaxation group, even though few were still meditating.
  • When it comes to how much mindfulness is needed to improve health, Dr. Creswell says, “we still have no idea about the ideal dose.”

History News Network | An Open Letter to the Harvard Business School Dean Who Gave Hist... - 0 views

  • I would like to make some gratuitous curricular and pedagogical suggestions for business schools.
  • Foremost, business schools, at least those that purport to mold leaders, should stop training and start educating. Their graduates should be able to think and problem-solve for themselves, not just copy the latest fad.
  • Business schools generally do not cultivate or even select for general intelligence and breadth of understanding but instead breed shrewdness and encourage narrow technical knowledge.
  • To try to cover up the obvious shortcomings of their profoundly antisocial pedagogical model, many business schools tack on courses in ethics, corporate social responsibility, and the like, then shrug their collective shoulders when their graduates behave in ways that would make Vikings and pirates blush.
  • The only truly socially responsible management curriculum would be one built from the ground up out of the liberal arts – economics, of course, but also history, philosophy, political science, psychology, and sociology – because those are the core disciplines of social scientific and humanistic inquiry.
  • Properly understood, they are not “subjects” but ways of thinking about human beings, their behaviors, their institutions (of which businesses are just a small subset), and the ways they interact with the natural world. Only intelligent people with broad and deep backgrounds in the liberal arts can consistently make ethical decisions that are good for stakeholders, consumers, and the world they share.
  • Precisely because they are not deeply rooted in the liberal arts, many business schools try to inculcate messages into the brains of their students that are unscientific, mere fashions that cycle into and then out of popularity.
  • No one can seriously purport to understand corporate X (finance, formation, governance, social responsibility, etc.) today who does not understand X’s origins and subsequent development. Often, then, the historian of corporate X is the real expert, not the business school professor who did a little consulting, a few interviews, and a survey.
  • Lurking somewhere in the background of most great business leaders, ones who helped their companies, their customers, and the world, is a liberal arts education.
  • Instead of forcing students to choose between a broad liberal arts degree or a business career, business schools and liberal arts departments ought to work together to integrate all methods of knowing into a seamless whole focused on key questions and problems important to us all
  • There is not a single question of importance in the business world that does not have economic, historical, philosophical, political, psychological, and sociological components that are absolutely crucial to making good (right and moral) decisions. So why continue to divide understanding of the real world into hoary compartments

Can You Get Smarter? - 0 views

  • This article discusses cognitive developments and declines, relating to our focus during Unit 1 on the neocortex and evolution of the human brain.

How Humans Ended Up With Freakishly Huge Brains | WIRED - 0 views

  • paleontologists documented one of the most dramatic transitions in human evolution. We might call it the Brain Boom. Humans, chimps and bonobos split from their last common ancestor between 6 and 8 million years ago.
  • Starting around 3 million years ago, however, the hominin brain began a massive expansion. By the time our species, Homo sapiens, emerged about 200,000 years ago, the human brain had swelled from about 350 grams to more than 1,300 grams.
  • In that 3-million-year sprint, the human brain almost quadrupled the size its predecessors had attained over the previous 60 million years of primate evolution.
  • There are plenty of theories, of course, especially regarding why: increasingly complex social networks, a culture built around tool use and collaboration, the challenge of adapting to a mercurial and often harsh climate
  • Although these possibilities are fascinating, they are extremely difficult to test.
  • “What kinds of mutations occurred, and what did they do? We’re starting to get answers and a deeper appreciation for just how complicated this process was.”
  • contrary to long-standing assumptions, larger mammalian brains do not always have more neurons, and the ones they do have are not always distributed in the same way.
  • The human brain has 86 billion neurons in all: 69 billion in the cerebellum, a dense lump at the back of the brain that helps orchestrate basic bodily functions and movement; 16 billion in the cerebral cortex, the brain’s thick corona and the seat of our most sophisticated mental talents, such as self-awareness, language, problem solving and abstract thought; and 1 billion in the brain stem and its extensions into the core of the brain
  • In contrast, the elephant brain, which is three times the size of our own, has 251 billion neurons in its cerebellum, which helps manage a giant, versatile trunk, and only 5.6 billion in its cortex
  • primates evolved a way to pack far more neurons into the cerebral cortex than other mammals did
  • The great apes are tiny compared to elephants and whales, yet their cortices are far denser: Orangutans and gorillas have 9 billion cortical neurons, and chimps have 6 billion. Of all the great apes, we have the largest brains, so we come out on top with our 16 billion neurons in the cortex.
  • Although it makes up only 2 percent of body weight, the human brain consumes a whopping 20 percent of the body’s total energy at rest. In contrast, the chimpanzee brain needs only half that.
  • there was a strong evolutionary pressure to modify the human regulatory regions in a way that sapped energy from muscle and channeled it to the brain.
  • Accounting for body size and weight, the chimps and macaques were twice as strong as the humans. It’s not entirely clear why, but it is possible that our primate cousins get more power out of their muscles than we get out of ours because they feed their muscles more energy. “Compared to other primates, we lost muscle power in favor of sparing energy for our brains,” Bozek said. “It doesn’t mean that our muscles are inherently weaker. We might just have a different metabolism.”
  • a pioneering experiment. Not only were they going to identify relevant genetic mutations from our brain’s evolutionary past, they were also going to weave those mutations into the genomes of lab mice and observe the consequences.
  • Silver and Wray introduced the chimpanzee copy of HARE5 into one group of mice and the human edition into a separate group. They then observed how the embryonic mice brains grew.
  • After nine days of development, mice embryos begin to form a cortex, the outer wrinkly layer of the brain associated with the most sophisticated mental talents. On day 10, the human version of HARE5 was much more active in the budding mice brains than the chimp copy, ultimately producing a brain that was 12 percent larger
  • “It wasn’t just a couple mutations and—bam!—you get a bigger brain. As we learn more about the changes between human and chimp brains, we realize there will be lots and lots of genes involved, each contributing a piece to that. The door is now open to get in there and really start understanding. The brain is modified in so many subtle and nonobvious ways.”
  • As recent research on whale and elephant brains makes clear, size is not everything, but it certainly counts for something. The reason we have so many more cortical neurons than our great-ape cousins is not that we have denser brains, but rather that we evolved ways to support brains that are large enough to accommodate all those extra cells.
  • There’s a danger, though, in becoming too enamored with our own big heads. Yes, a large brain packed with neurons is essential to what we consider high intelligence. But it’s not sufficient
  • No matter how large the human brain grew, or how much energy we lavished upon it, it would have been useless without the right body. Three particularly crucial adaptations worked in tandem with our burgeoning brain to dramatically increase our overall intelligence: bipedalism, which freed up our hands for tool making, fire building and hunting; manual dexterity surpassing that of any other animal; and a vocal tract that allowed us to speak and sing.
  • Human intelligence, then, cannot be traced to a single organ, no matter how large; it emerged from a serendipitous confluence of adaptations throughout the body. Despite our ongoing obsession with the size of our noggins, the fact is that our intelligence has always been so much bigger than our brain.

How to Cultivate the Art of Serendipity - The New York Times - 0 views

  • A surprising number of the conveniences of modern life were invented when someone stumbled upon a discovery or capitalized on an accident
  • wonder whether we can train ourselves to become more serendipitous. How do we cultivate the art of finding what we’re not seeking?
  • Croatian has no word to capture the thrill of the unexpected discovery, so she was delighted when — after moving to the United States on a Fulbright scholarship in the 1980s — she learned the English word “serendipity.”
  • Today we think of serendipity as something like dumb luck. But its original meaning was very different.
  • suggested that this old tale contained a crucial idea about human genius: “As their highnesses travelled, they were always making discoveries, by accident and sagacity, of things which they were not in quest of.” And he proposed a new word — “serendipity” — to describe this princely talent for detective work. At its birth, serendipity meant a skill rather than a random stroke of good fortune.
  • sees serendipity as something people do. In the mid-1990s, she began a study of about 100 people to find out how they created their own serendipity, or failed to do so.
  • As people dredge the unknown, they are engaging in a highly creative act. What an inventor “finds” is always an expression of him- or herself.
  • subjects fell into three distinct groups. Some she called “non-encounterers”; they saw through a tight focus, a kind of chink hole, and they tended to stick to their to-do lists when searching for information rather than wandering off into the margins. Other people were “occasional encounterers,” who stumbled into moments of serendipity now and then. Most interesting were the “super-encounterers,” who reported that happy surprises popped up wherever they looked.
  • “gathering string” is just another way of talking about super-encountering. After all, “string” is the stuff that accumulates in a journalist’s pocket. It’s the note you jot down in your car after the interview, the knickknack you notice on someone’s shelf, or the anomaly that jumps out at you in Appendix B of an otherwise boring research study.
  • came up with the term super-encounterer to give us a way to talk about the people rather than just the discoveries. Without such words, we tend to become dazzled by the happy accident itself, to think of it as something that exists independent of an observer.
  • We can slip into a twisted logic in which we half-believe the penicillin picked Alexander Fleming to be its emissary, or that the moons of Jupiter wanted to be seen by Galileo. But discoveries are products of the human mind.
  • You become a super-encounterer, according to Dr. Erdelez, in part because you believe that you are one — it helps to assume that you possess special powers of perception
  • One survey of patent holders (the PatVal study of European inventors, published in 2005) found that an incredible 50 percent of patents resulted from what could be described as a serendipitous process. Thousands of survey respondents reported that their idea evolved when they were working on an unrelated project — and often when they weren’t even trying to invent anything.
  • need to develop a new, interdisciplinary field — call it serendipity studies — that can help us create a taxonomy of discoveries in the chemistry lab, the newsroom, the forest, the classroom, the particle accelerator and the hospital. By observing and documenting the many different “species” of super-encounterers, we might begin to understand their minds.
  • Of course, even if we do organize the study of serendipity, it will always be a whimsical undertaking, given that the phenomenon is difficult to define, amazingly variable and hard to capture in data. The clues will no doubt emerge where we least expect them

The Legacy of Karl Marx: Or, the Inheritance We Dare Not Squander - Unpublished Manuscr... - 0 views

  • That led him to ask the right questions. First: how does free enterprise capitalism work in practice? Second: what is the relationship between economic power and government policy? Third: what is the interrelationship between such economic power and the life of the mind - otherwise known as education and the more general public awareness and perception of reality? Fourth: what questions do and do not get asked within a capitalist society? Fifth: what will be the nature of the next new system?
  • To honor his commitment to the dialectical process, I will offer relatively short commentaries on those questions so that we will have time to engage each other in dialogue. One. Marx built upon the foundation laid by Adam Smith, Thomas Malthus, and David Ricardo; but he transcended their explanation of the capitalist process
  • Two. Marx once called the government of capitalist political economies little more than the executive committee of the high bourgeoisie.
  • Three. Marx has often been criticized for oversimplifying the relationship between what he called the economic base and the intellectual superstructure. There is some truth to the charge, but it is largely a case of misplaced concreteness. He was quite aware that ideas influence the relations of production, as witness his comments about various reform movements. But he was primarily concerned with attacking the proposition that ideas had an ethereal and independent origin, and lived a self-contained existence
  • Four. Two questions take precedence under capitalism: how to turn a short-term profit, and how to keep the system from generating the very opposition that Marx predicted. Those two imperatives have so limited and inhibited the life of the mind, let alone the quality of life, that they amount to a denial of the human spirit. Let us agree that Adam Smith was correct in arguing that entrepreneurial capitalism freed human beings from the constraints of medieval and mercantilist cultures. We must also insist that Marx was correct in explaining how corporate capitalism forged new fetters.
  • Five. There is much in Lenin's thought and practice that I disagree with, but he understood at least one essential element of Marx's thought. Marx realized that capitalism increasingly defined human beings by their function in the marketplace - and so demeaned and stunted their lives, turning them inwards upon themselves as possessors of trivia.
  • permit me my summary of Marx's legacy. First, the weaknesses. I have mentioned some of those in passing, so here I will concentrate on the major items.
  • First, Marx never broke free of the capitalist conception of time defined as short-run success
  • On balance, however, the strengths of Marx outweigh his weaknesses. We can begin with his detailed, tough and persistent inquiry into the dynamics of capitalism that produced a strategy of intellectual inquiry that has provided a sustained tradition of exciting and consequential scholarship in almost every field of knowledge.
  • Then we must salute Marx's deep commitment to the proposition that, since human beings make their own history, we can devise and create a more human and equitable life as members of a community. We can change the world. That is indeed the ultimate purpose of knowledge - to change the world for the better.
  • Next we must acknowledge that the people who have acted on those Marxian contributions have indeed improved the lives of millions of non- and even anti-Marxists

The Joy of Psyching Myself Out­ - The New York Times - 0 views

  • that neat separation is not just unwarranted; it’s destructive
  • Although it’s often presented as a dichotomy (the apparent subjectivity of the writer versus the seeming objectivity of the psychologist), it need not be.
  • Is it possible to think scientifically and creatively at once? Can you be both a psychologist and a writer?
  • “A writer must be as objective as a chemist,” Anton Chekhov wrote in 1887. “He must abandon the subjective line; he must know that dung heaps play a very reasonable part in a landscape.”
  • At the turn of the century, psychology was a field quite unlike what it is now. The theoretical musings of William James were the norm (a wry commenter once noted that William James was the writer, and his brother Henry, the psychologist)
  • Freud was a breed of psychologist that hardly exists anymore: someone who saw the world as both writer and psychologist, and for whom there was no conflict between the two. That boundary melding allowed him to posit the existence of cognitive mechanisms that wouldn’t be empirically proved for decades,
  • Freud got it brilliantly right and brilliantly wrong. The rightness is as good a justification as any of the benefits, the necessity even, of knowing how to look through the eyes of a writer. The wrongness is part of the reason that the distinction between writing and experimental psychology has grown far more rigid than it was a century ago.
  • the signs people associate with liars often have little empirical evidence to support them. Therein lies the psychologist’s distinct role and her necessity. As a writer, you look in order to describe, but you remain free to use that description however you see fit. As a psychologist, you look to describe, yes, but also to verify.
  • Without verification, we can’t always trust what we see — or rather, what we think we see.
  • The desire for the world to be what it ought to be and not what it is permeates experimental psychology as much as writing, though. There’s experimental bias and the problem known in the field as “demand characteristics” — when researchers end up finding what they want to find by cuing participants to act a certain way.
  • In 1932, when he was in his 70s, Freud gave a series of lectures on psychoanalysis. In his final talk, “A Philosophy of Life,” he focused on clarifying an important caveat to his research: His followers should not be confused by the seemingly internal, and thus possibly subjective, nature of his work. “There is no other source of knowledge of the universe but the intellectual manipulation of carefully verified observations,” he said.
  • That is what both the psychologist and the writer should strive for: a self-knowledge that allows you to look in order to discover, without agenda, without preconception, without knowing or caring if what you’re seeing is wrong or right in your scheme of the world. It’s harder than it sounds. For one thing, you have to possess the self-knowledge that will allow you to admit when you’re wrong.
  • Even with the best intentions, objectivity can prove a difficult companion. I left psychology behind because I found its structural demands overly hampering. I couldn’t just pursue interesting lines of inquiry; I had to devise a set of experiments, see how feasible they were, both technically and financially, consider how they would reflect on my career. That meant that most new inquiries never happened — in a sense, it meant that objectivity was more an ideal than a reality. Each study was selected for a reason other than intrinsic interest.

Bile, venom and lies: How I was trolled on the Internet - The Washington Post - 0 views

  • Thomas Jefferson often argued that an educated public was crucial for the survival of self-government
  • We now live in an age in which that education takes place mostly through relatively new platforms. Social networks — Facebook, Twitter, Instagram, etc. — are the main mechanisms by which people receive and share facts, ideas and opinions. But what if they encourage misinformation, rumors and lies?
  • In a comprehensive new study of Facebook that analyzed posts made between 2010 and 2014, a group of scholars found that people mainly shared information that confirmed their prejudices, paying little attention to facts and veracity. (Hat tip to Cass Sunstein, the leading expert on this topic.) The result, the report says, is the “proliferation of biased narratives fomented by unsubstantiated rumors, mistrust and paranoia.”
  • The authors specifically studied trolling — the creation of highly provocative, often false information, with the hope of spreading it widely. The report says that “many mechanisms cause false information to gain acceptance, which in turn generate false beliefs that, once adopted by an individual, are highly resistant to correction.”
  • in recent weeks I was the target of a trolling campaign and saw exactly how it works. It started when an obscure website published a post titled “CNN host Fareed Zakaria calls for jihad rape of white women.” The story claimed that in my “private blog” I had urged the use of American women as “sex slaves” to depopulate the white race. The post further claimed that on my Twitter account, I had written the following line: “Every death of a white person brings tears of joy to my eyes.”
  • Disgusting. So much so that the item would collapse from its own weightlessness, right? Wrong. Here is what happened next: Hundreds of people began linking to it, tweeting and retweeting it, and adding their comments, which are too vulgar or racist to repeat. A few ultra-right-wing websites reprinted the story as fact. With each new cycle, the levels of hysteria rose, and people started demanding that I be fired, deported or killed. For a few days, the digital intimidation veered out into the real world. Some people called my house late one night and woke up and threatened my daughters, who are 7 and 12.
  • The people spreading this story were not interested in the facts; they were interested in feeding prejudice. The original story was cleverly written to provide conspiracy theorists with enough ammunition to ignore evidence. It claimed that I had taken down the post after a few hours when I realized it “receive[d] negative attention.” So, when the occasional debunker would point out that there was no evidence of the post anywhere, it made little difference. When confronted with evidence that the story was utterly false, it only convinced many that there was a conspiracy and coverup.
  • conversations on Facebook are somewhat more civil, because people generally have to reveal their identities. But on Twitter and in other places — the online comments section of The Post, for example — people can be anonymous or have pseudonyms. And that is where bile and venom flow freely.
  • an experiment performed by two psychologists in 1970. They divided students into two groups based on their answers to a questionnaire: high prejudice and low prejudice. Each group was told to discuss controversial issues such as school busing and integrated housing. Then the questions were asked again. “The surveys revealed a striking pattern,” Kolbert noted. “Simply by talking to one another, the bigoted students had become more bigoted and the tolerant more tolerant.”
  • This “group polarization” is now taking place at hyper speed, around the world. It is how radicalization happens and extremism spreads.

In Science, It's Never 'Just a Theory' - The New York Times - 0 views

  • In everyday conversation, we tend to use the word “theory” to mean a hunch, an idle speculation, or a crackpot notion.
  • That’s not what “theory” means to scientists. “In science, the word theory isn’t applied lightly,” Kenneth R. Miller, a cell biologist at Brown University, said. “It doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.”
  • In 2002, the board of education in Cobb County, Ga., adopted the textbook but also required science teachers to put a warning sticker inside the cover of every copy. “Evolution is a theory, not a fact, regarding the origin of living things,” the sticker read, in part. In 2004, several Cobb County parents filed a lawsuit against the county board of education to have the stickers removed. They called Dr. Miller, who testified for about two hours, explaining, among other things, the strength of evidence for the theory of evolution.
  • It’s helpful, he argues, to think about theories as being like maps. “To say something is a map is not to say it’s a hunch,” said Dr. Godfrey-Smith, a professor at the City University of New York and the University of Sydney. “It’s an attempt to represent some territory.” A theory, likewise, represents a territory of science. Instead of rivers, hills, and towns, the pieces of the territory are facts. “To call something a map is not to say anything about how good it is,” Dr. Godfrey-Smith added. “There are fantastically good maps where there’s not a shred of doubt about their accuracy. And there are maps that are speculative.”
  • To judge a map’s quality, we can see how well it guides us through its territory. In a similar way, scientists test out new theories against evidence. Just as many maps have proven to be unreliable, many theories have been cast aside. But other theories have become the foundation of modern science, such as the theory of evolution, the general theory of relativity, the theory of plate tectonics, the theory that the sun is at the center of the solar system, and the germ theory of disease. “To the best of our ability, we’ve tested them, and they’ve held up,” said Dr. Miller. “And that’s why we’ve held on to these things.”

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It can not, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. [A toy sketch of such a weighted closeness score appears after this list of annotations.]
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
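A rough, hedged sketch of how a “coefficient”-style relationship score could be computed from weighted interactions, as described in the notes above. The interaction types, weights, and simple linear scoring are illustrative assumptions for the example, not Facebook’s actual (non-public) formula.

    # Illustrative sketch only: interaction types and weights are assumptions,
    # not Facebook's real (non-public) metric.
    from collections import Counter

    INTERACTION_WEIGHTS = {
        "message": 5.0,       # messaging treated as the strongest signal
        "comment": 3.0,
        "like": 1.0,
        "profile_view": 0.5,  # passive signals count for less
    }

    def coefficient(interactions):
        """Score the strength of a tie from a list of interaction-type strings."""
        counts = Counter(interactions)
        return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

    # A feed ranker could then sort friends' posts by each friend's coefficient.
    alice = coefficient(["message", "message", "comment", "like"])
    bob = coefficient(["like", "profile_view"])
    print(alice, bob, alice > bob)  # 14.0 1.5 True

The only point of the sketch is that data-rich interactions (messages, comments) push such a score up much faster than passive ones, which is why a platform optimising for “valuable” time would favour them.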
3More

The National Book Awards Haul Translators Out of Obscurity - The Atlantic - 0 views

  • In 2018, American literature no longer means literature written by Americans, for Americans, about America. It means literature that, wherever it comes from, whatever nation it describes, American readers recognize as relevant to them, as familiar. Foreign is no longer foreign
  • the question of how “foreign” a translation should “feel” provokes fierce disagreement. When you open a translated novel from overseas, do you want to sense its author’s French, German, Swedish, Spanish or Italian sensibility, even if that breaks the spell of your reading experience? Or do you want to feel as if the book had magically converted itself into flawless, easeful English, attuned to your own idiom? (This is called the “foreignization vs. domestication” debate.)
  • And should a translation hew closely to the language and structure of the original, or should it recraft the language to appeal to the target audience? (This is the “faithfulness” question.) Hardly anyone agrees—not editors, not scholars, not translators, and not readers.
9More

The Yoda of Silicon Valley - The New York Times - 0 views

  • Of course, all the algorithmic rigmarole is also causing real-world problems. Algorithms written by humans — tackling harder and harder problems, but producing code embedded with bugs and biases — are troubling enough
  • More worrisome, perhaps, are the algorithms that are not written by humans, algorithms written by the machine, as it learns.
  • Programmers still train the machine, and, crucially, feed it data
  • ...6 more annotations...
  • However, as Kevin Slavin, a research affiliate at M.I.T.’s Media Lab, said, “We are now writing algorithms we cannot read. That makes this a unique moment in history, in that we are subject to ideas and actions and efforts by a set of physics that have human origins without human comprehension.”
  • As Slavin has often noted, “It’s a bright future, if you’re an algorithm.”
  • “Today, programmers use stuff that Knuth, and others, have done as components of their algorithms, and then they combine that together with all the other stuff they need,”
  • “With A.I., we have the same thing. It’s just that the combining-together part will be done automatically, based on the data, rather than based on a programmer’s work. You want A.I. to be able to combine components to get a good answer based on the data
  • But you have to decide what those components are. It could happen that each component is a page or chapter out of Knuth, because that’s the best possible way to do some task.” (A toy sketch of this data-driven selection idea follows these notes.)
  • “I am worried that algorithms are getting too prominent in the world,” he added. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”
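A hedged toy sketch of the idea in the quote above: hand-written components are combined (here, simply selected) automatically on the basis of data, by benchmarking each candidate. The candidate routines and the pick-the-fastest rule are assumptions invented for the illustration, not Knuth’s method or any specific AI system.

    # Toy illustration: choose among hand-written components based on the data,
    # rather than by a programmer's fixed decision. Not any real system's method.
    import timeit

    def insertion_sort(xs):
        xs = list(xs)
        for i in range(1, len(xs)):
            j = i
            while j > 0 and xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
        return xs

    COMPONENTS = {"insertion_sort": insertion_sort, "builtin_sort": sorted}

    def pick_component(sample):
        """Return the name of the component that runs fastest on the sample data."""
        timings = {name: timeit.timeit(lambda f=fn: f(sample), number=10)
                   for name, fn in COMPONENTS.items()}
        return min(timings, key=timings.get)

    print(pick_component(list(range(1000, 0, -1))))  # likely "builtin_sort"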
15More

Geology's Timekeepers Are Feuding - The Atlantic - 0 views

  • In 2000, the Nobel Prize-winning chemist Paul Crutzen won permanent fame for stratigraphy. He proposed that humans had so thoroughly altered the fundamental processes of the planet—through agriculture, climate change, nuclear testing, and other phenomena—that a new geological epoch had commenced: the Anthropocene, the age of humans.
  • Zalasiewicz should know. He is the chair of the Anthropocene working group, which the ICS established in 2009 to investigate whether the new epoch deserved a place in stratigraphic time.
  • In 2015, the group announced that the Anthropocene was a plausible new layer and that it should likely follow the Holocene. But the team has yet to propose a “golden spike” for the epoch: a boundary in the sedimentary rock record where the Anthropocene clearly begins.
  • ...12 more annotations...
  • Officially, the Holocene is still running today. You have lived your entire life in the Holocene, and the Holocene has constituted the geological “present” for as long as there have been geologists. But if we now live in a new epoch, the Anthropocene, then the ICS will have to chop the Holocene somewhere. It will have to choose when the Holocene ended, and it will move some amount of time out of the purview of the Holocene working group and into that of the Anthropocene working group.
  • This is politically difficult. And right now, the Anthropocene working group seems intent on not carving too deep into the Holocene. In a paper published earlier this year in Earth-Science Reviews, the Anthropocene working group’s members strongly imply that they will propose starting the new epoch in the mid-20th century.
  • Some geologists argue that the Anthropocene started even earlier: perhaps 4,000 or 6,000 years ago, as farmers began to remake the land surface. “Most of the world’s forests that were going to be converted to cropland and agriculture were already cleared well before 1950,” says Bill Ruddiman, a geology professor at the University of Virginia and an advocate of this extremely early Anthropocene.
  • “Most of the world’s prairies and steppes that were going to be cleared for crops were already gone, by then. How can you argue the Anthropocene started in 1950 when all of the major things that affect Earth’s surface were already over?” Van der Pluijm agreed that the Anthropocene working group was picking 1950 for “not very good reasons.” “Agriculture was the revolution that allowed society to develop,” he said. “That was really when people started to force the land to work for them. That massive land movement—it’s like a landslide, except it’s a humanslide. And it is not, of course, as dramatic as today’s motion of land, but it starts the clock.”
  • This muddle had to stop. The Holocene comes up constantly in discussions of modern global warming. Geologists and climate scientists did not make their jobs any easier by slicing it in different ways and telling contradictory stories about it.
  • This process started almost 10 years ago. For this reason, Zalasiewicz, the chair of the Anthropocene working group, said he wasn’t blindsided by the new subdivisions at all. In fact, he voted to adopt them as a member of the Quaternary working group. “Whether the Anthropocene works with a unified Holocene or one that’s in three parts makes for very little difference,” he told me. In fact, it had made the Anthropocene group’s work easier. “It has been useful to compare the scale of the two climate events that mark the new boundaries [within the Holocene] with the kind of changes that we’re assessing in the Anthropocene. It has been quite useful to have the compare and contrast,” he said. “Our view is that some of the changes in the Anthropocene are rather bigger.”
  • Zalasiewicz said that he and his colleagues were going as fast as they could. When the working group began its work in 2009, it was “really starting from scratch,” he told me. While other working groups have a large body of stratigraphic research to consider, the Anthropocene working group had nothing. “We had to spend a fair bit of time deciding whether the Anthropocene was geology at all,” he said. Then they had to decide where its signal could show up. Now, they’re looking for evidence that shows it.
  • This cycle of “glacials” and “interglacials” has played out about 50 times over the last several million years. When the Holocene began, it was only another interglacial—albeit the one we live in. Until recently, glaciers were still on schedule to descend in another 30,000 years or so. Yet geologists still call the Holocene an epoch, even though they do not bestow this term on any of the previous 49 interglacials. It gets special treatment because we live in it.
  • Much of this science is now moot. Humanity’s vast emissions of greenhouse gas have now so warmed the climate that they have offset the next glaciation. They may even knock us out of the ongoing cycle of Ice Ages, sending the Earth hurtling back toward a “greenhouse” climate after the more amenable “icehouse” climate during which humans evolved. For this reason, van der Pluijm wants the Anthropocene to supplant the Holocene entirely. Humans made their first great change to the environment at the close of the last glaciation, when they seem to have hunted the world’s largest mammals—the woolly mammoth, the saber-toothed tiger—to extinction. Why not start the Anthropocene then? He would even rename the pre-1800 period “the Holocene Age” as a consolation prize.
  • Zalasiewicz said he would not start the Anthropocene too early in time, as it would be too work-intensive for the field to rename such a vast swath of time. “The early-Anthropocene idea would crosscut against the Holocene as it’s seen by Holocene workers,” he said. If other academics didn’t like this, they could create their own timescales and start the Anthropocene Epoch where they choose. “We have no jurisdiction over the word Anthropocene,” he said.
  • Ruddiman, the University of Virginia professor who first argued for a very early Anthropocene, now makes an even broader case. He’s not sure it makes sense to formally define the Anthropocene at all. In a paper published this week, he objects to designating the Anthropocene as starting in the 1950s—and then he objects to delineating the Anthropocene, or indeed any new geological epoch, by name. “Keep the use of the term informal,” he told me. “Don’t make it rigid. Keep it informal so people can say the early-agricultural Anthropocene, or the industrial-era Anthropocene.”
  • “This is the age of geochemical dating,” he said. Geologists have stopped looking to the ICS to place each rock sample into the rock sequence. Instead, field geologists use laboratory techniques to get a precise year or century of origin for each rock sample. “The community just doesn’t care about these definitions,” he said.