TOK Friends / Group items tagged: drawing

Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently.
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact (a toy sketch of this kind of weighted scoring follows at the end of this list)
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
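
The "coefficient" annotations above sketch an algorithm in prose: log every interaction between two users, weight each interaction type, and sum. Here is a minimal toy version in Python. The weight values and the names INTERACTION_WEIGHTS and coefficient are assumptions for illustration; Facebook's actual weightings are not public. The only details grounded in the article are that messaging is the strongest signal and that different activities are weighted differently.

```python
# Toy model of a "coefficient"-style closeness score between two users.
# Weights are illustrative assumptions; the article says only that
# different activities are weighted differently, with messaging strongest.
INTERACTION_WEIGHTS = {
    "message": 5.0,       # strongest signal, per the article
    "comment": 2.0,
    "profile_view": 1.0,
    "like": 0.5,          # weakest signal
}

def coefficient(interactions):
    """Sum weighted counts of (interaction_type, count) pairs."""
    return sum(
        INTERACTION_WEIGHTS.get(kind, 0.0) * count
        for kind, count in interactions
    )

print(coefficient([("message", 40), ("comment", 5)]))  # 210.0 -> "close"
print(coefficient([("like", 1)]))                      # 0.5  -> "distant"
```

A feed-ranking pass that sorts candidate posts by the author's coefficient with the viewer would surface "more meaningful" interactions first, which is how the "time well spent" pivot and richer data extraction can reinforce each other.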
krystalxu

Why Study Philosophy? 'To Challenge Your Own Point of View' - The Atlantic - 1 views

  • Goldstein’s forthcoming book, Plato at the Googleplex: Why Philosophy Won’t Go Away, offers insight into the significant—and often invisible—progress that philosophy has made. I spoke with Goldstein about her take on the science vs. philosophy debates, how we can measure philosophy’s advances, and why an understanding of philosophy is critical to our lives today.
  • One of the things about philosophy is that you don’t have to give up on any other field. Whatever field there is, there’s a corresponding field of philosophy. Philosophy of language, philosophy of politics, philosophy of math. All the things I wanted to know about I could still study within a philosophical framework.
  • There’s a peer pressure that sets in at a certain age. They so much want to be like everybody else. But what I’ve found is that if you instill this joy of thinking, the sheer intellectual fun, it will survive even the adolescent years and come back in fighting form. It’s empowering.
  • One thing that’s changed tremendously is the presence of women and the change in focus because of that. There’s a lot of interest in literature and philosophy, and using literature as a philosophical examination. It makes me so happy! Because I was seen as a hard-core analytic philosopher, and when I first began to write novels people thought, Oh, and we thought she was serious! But that’s changed entirely. People take literature seriously, especially in moral philosophy, as thought experiments. A lot of the most developed and effective thought experiments come from novels. Also, novels contribute to making moral progress, changing people’s emotions.
  • The other thing that’s changed is that there’s more applied philosophy. Let’s apply philosophical theory to real-life problems, like medical ethics, environmental ethics, gender issues. This is a real change from when I was in school and it was only theory.
  • There's a lot of philosophical progress, it's just progress that's very hard to see. It's very hard to see because we see with it. We incorporate philosophical progress into our own way of viewing the world.
  • Plato would be constantly surprised by what we know. And not only what we know scientifically, or by our technology, but what we know ethically. We take a lot for granted. It’s obvious to us, for example, that individuals’ ethical truths are equally important.
  • it’s usually philosophical arguments that first introduce the very outlandish idea that we need to extend rights. And it takes more, it takes a movement, and activism, and emotions, to effect real social change. It starts with an argument, but then it becomes obvious. The tracks of philosophy’s work are erased because it becomes intuitively obvious
  • The arguments against slavery, against cruel and unusual punishment, against unjust wars, against treating children cruelly—these all took arguments.
  • About 30 years ago, the philosopher Peter Singer started to argue about the way animals are treated in our factory farms. Everybody thought he was nuts. But I’ve watched this movement grow; I’ve watched it become emotional. It has to become emotional. You have to draw empathy into it. But here it is, right in our time—a philosopher making the argument, everyone dismissing it, but then people start discussing it. Even criticizing it, or saying it’s not valid, is taking it seriously
  • This is what we have to teach our children. Even things that go against their intuition they need to take seriously. What was intuition two generations ago is no longer intuition; and it’s arguments that change it.
  • We are very inertial creatures. We do not like to change our thinking, especially if it’s inconvenient for us. And certainly the people in power never want to wonder whether they should hold power.
  • I’m really trying to draw the students out, make them think for themselves. The more they challenge me, the more successful I feel as a teacher. It has to be very active
  • Plato used the metaphor that in teaching philosophy, there needs to be a fire in the teacher, and the sheer heat will help the fire grow in the student. It’s something that’s kindled because of the proximity to the heat.
  • how can you make the case that they should study philosophy?
  • It enriches your inner life. You have lots of frameworks to apply to problems, and so many ways to interpret things. It makes life so much more interesting. It’s us at our most human. And it helps us increase our humanity. No matter what you do, that’s an asset.
  • What do you think are the biggest philosophical issues of our time? The growth in scientific knowledge presents new philosophical issues.
  • The idea of the multiverse. Where are we in the universe? Physics is blowing our minds about this.
  • The question of whether some of these scientific theories are really even scientific. Can we get predictions out of them?
  • And with the growth in cognitive science and neuroscience. We’re going into the brain and getting these images of the brain. Are we discovering what we really are? Are we solving the problem of free will? Are we learning that there isn’t any free will? How much do the advances in neuroscience tell us about the deep philosophical issues?
  • With the decline of religion is there a sense of the meaninglessness of life and the easy consumerist answer that’s filling the space religion used to occupy? This is something that philosophers ought to be addressing.
Javier E

Denying Genetics Isn't Shutting Down Racism, It's Fueling It - 0 views

  • For many on the academic and journalistic left, genetics are deemed largely irrelevant when it comes to humans. Our large brains and the societies we have constructed with them, many argue, swamp almost all genetic influences.
  • Humans, in this view, are the only species on Earth largely unaffected by recent (or ancient) evolution, the only species where, for example, the natural division of labor between male and female has no salience at all, the only species, in fact, where natural variations are almost entirely social constructions, subject to reinvention.
  • if we assume genetics play no role, and base our policy prescriptions on something untrue, we are likely to overshoot and over-promise in social policy, and see our rhetoric on race become ever more extreme and divisive.
  • Reich simply points out that this utopian fiction is in danger of collapse because it is not true and because genetic research is increasingly proving it untrue.
  • “You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work.” Which means to say that the differences could be (and actually are) substantial.
  • If you don’t establish a reasonable forum for debate on this, Reich argues, if you don’t establish the principle that we do not have to be afraid of any of this, it will be monopolized by truly unreasonable and indeed dangerous racists. And those racists will have the added prestige for their followers of revealing forbidden knowledge.
  • so there are two arguments against the suppression of this truth and the stigmatization of its defenders: that it’s intellectually dishonest and politically counterproductive.
  • Klein seems to back a truly extreme position: that only the environment affects IQ scores, and genes play no part in group differences in human intelligence. To this end, he cites the “Flynn effect,” which does indeed show that IQ levels have increased over the years, and are environmentally malleable up to a point. In other words, culture, politics, and economics do matter.
  • But Klein does not address the crucial point that even with increases in IQ across all races over time, the racial gap is still frustratingly persistent, that, past a certain level, IQ measurements have actually begun to fall in many developed nations, and that Flynn himself acknowledges that the effect does not account for other genetic influences on intelligence.
  • In an email exchange with me, in which I sought clarification, Klein stopped short of denying genetic influences altogether, but argued that, given rising levels of IQ, and given how brutal the history of racism against African-Americans has been, we should nonetheless assume “right now” that genes are irrelevant.
  • My own brilliant conclusion: Group differences in IQ are indeed explicable through both environmental and genetic factors and we don’t yet know quite what the balance is.
  • We are, in this worldview, alone on the planet, born as blank slates, to be written on solely by culture. All differences between men and women are a function of this social effect; as are all differences between the races. If, in the aggregate, any differences in outcome between groups emerge, it is entirely because of oppression, patriarchy, white supremacy, etc. And it is a matter of great urgency that we use whatever power we have to combat these inequalities.
  • Liberalism has never promised equality of outcomes, merely equality of rights. It’s a procedural political philosophy rooted in means, not a substantive one justified by achieving certain ends.
  • A more nuanced understanding of race, genetics, and environment would temper this polarization, and allow for more unifying, practical efforts to improve equality of opportunity, while never guaranteeing or expecting equality of outcomes.
  • In some ways, this is just a replay of the broader liberal-conservative argument. Leftists tend to believe that all inequality is created; liberals tend to believe we can constantly improve the world in every generation, forever perfecting our societies.
  • Rightists believe that human nature is utterly unchanging; conservatives tend to see the world as less plastic than liberals, and attempts to remake it wholesale dangerous and often counterproductive.
  • I think the genius of the West lies in having all these strands in our politics competing with one another.
  • Where I do draw the line is the attempt to smear legitimate conservative ideas and serious scientific arguments as the equivalent of peddling white supremacy and bigotry. And Klein actively contributes to that stigmatization and demonization. He calls the science of this “race science” as if it were some kind of illicit and illegitimate activity, rather than simply “science.”
  • He goes on to equate the work of these scientists with the “most ancient justification for bigotry and racial inequality.” He even uses racism to dismiss Murray and Harris: they are, after all, “two white men.”
  • He still refuses to believe that Murray’s views on this are perfectly within the academic mainstream in studies of intelligence, as they were in 1994.
  • Klein cannot seem to hold the following two thoughts in his brain at the same time: that past racism and sexism are foul, disgusting, and have wrought enormous damage and pain and that unavoidable natural differences between races and genders can still exist.
  • It matters that we establish a liberalism that is immune to such genetic revelations, that can strive for equality of opportunity, and can affirm the moral and civic equality of every human being on the planet.
  • We may even embrace racial discrimination, as in affirmative action, that fuels deeper divides. All of which, it seems to me, is happening — and actively hampering racial progress, as the left defines the most multiracial and multicultural society in human history as simply “white supremacy” unchanged since slavery; and as the right viscerally responds by embracing increasingly racist white identity politics.
  • liberalism is integral to our future as a free society — and it should not falsely be made contingent on something that can be empirically disproven. It must allow for the truth of genetics to be embraced, while drawing the firmest of lines against any moral or political abuse of it
Javier E

The Price of the Coronavirus Pandemic | The New Yorker - 0 views

  • “You don’t know anyone who has made as much money out of this as I have,” he said over the phone. No argument here. He wouldn’t specify an amount, but reckoned that he was up almost two thousand per cent on the year.
  • He bought a big stake in Alpha Pro Tech, one of the few North American manufacturers of N95 surgical masks, with the expectation that when the virus made it across the Pacific the company would get government contracts to produce more. The stock was trading at about three dollars and fifty cents a share, and so, for cents on the dollar, he bought options to purchase the shares at a future date for ten dollars: he was betting that it would go up much more than that. By the end of February, the stock was trading at twenty-five dollars a share (a worked version of this option payoff follows at the end of this list)
  • He quickly put some money to work
  • He shorted oil and, as a proxy for oil, the Canadian dollar. (That is, he bet against both.) Finally, he shorted U.S. equities.
  • Last October, he listened to an audiobook by the Hardcore History podcaster, Dan Carlin, called “The End Is Always Near.” “So I had pandemics and plagues in my head,” the Australian said. “In December, I started seeing the first articles about this wet-market thing going on in China, and then in early January there was a lot on Twitter about the shit in Wuhan.” He was in Switzerland on a ski holiday with his family, and he bought all the surgical masks and gloves he could find.
  • The Australian, who spoke on the condition that his name not be used, is a voluble redhead just shy of fifty.
  • The problem, he said, was that, perhaps more now than ever, Americans lack what he called “social cohesion,” and thus the collective will, to commit to such a path.
  • perhaps the government should reward each citizen who strictly observed the quarantine with fifty thousand dollars. “The virus would burn out after four weeks,” he said. The U.S. had all the food and water and fuel it would need to survive months, if not years, of total isolation from the world. “If you don’t trade with China, they’re screwed,” he said. “You’d win this war. Let the rest of the world burn.
  • I’d been eavesdropping for a week on the friend’s WhatsApp conversation with dozens of his acquaintances and colleagues (he called them the Fokkers, for an acronym involving his name), all of them men, most of them expensively educated financial professionals, some of them very rich, a few with connections in high places. The general disposition of the participants, with exceptions, was the opposite of the Australian’s
  • they expressed the belief, with a conviction that occasionally tipped into stridency or mockery, that the media, the modellers, and the markets were overreacting to the threat of the coronavirus
  • They mocked Jim Cramer, the host of the market program “Mad Money,” on CNBC, for predicting a great depression and wondering if anyone would ever board an airplane again. Anecdotes, hyperbole: the talking chuckleheads sowing and selling fear.
  • it’s hard for a coldhearted capitalist to know just how cold the heart must go. Public-health professionals make a cost-benefit calculation, too, with different weightings.
  • This brutal shock is attacking a body that was already vulnerable. In the event of a global depression, a postmortem might identify COVID-19 as the cause of death, but, as with so many of the virus’s victims, the economy had a preëxisting condition—debt, instead of pulmonary disease.
  • “It’s as if the virus is almost beside the point,” a trader I know told me. “This was all set up to happen.”
  • the “smart money,” like the giant asset-management firms Blackstone and the Carlyle Group, was now telling companies to draw down their bank lines, and borrow as much as they could, in case the lenders went out of business or found ways to say no. Sure enough, by March’s end, corporations had reportedly tapped a record two hundred and eight billion dollars from their revolving-credit lines
  • In a world where we talk, suddenly, of trillions, two hundred billion may not seem like a lot, but it is: in 2007, the subprime-mortgage lender Countrywide Financial, in drawing down “just” $11.5 billion, helped bring the system to its knees.
  • It is hard to navigate out of the debt trap. Creditors can forgive debtors, but that process, especially at this level, would be almost impossibly laborious and fraught. Meanwhile, defaults flood the market with collateral, be it buildings, stocks, or aircraft. The price of that collateral collapses—haircuts for baldheads—leading to more defaults.
  • In New York State, where nearly half a million new claims had been filed in two weeks, the unemployment-insurance trust began to teeter toward insolvency. Come summer, there would be no money left to pay unemployment benefits.
  • As April arrived, businesses, large and small, decided not to pay rent, either because they didn’t have the cash on hand or because, with a recession looming, they wanted to preserve what cash they had. Furloughed or fired employees, meanwhile, faced similar decisions
  • On March 20th, Goldman Sachs spooked the world, by predicting a twenty-four-per-cent decline in G.D.P. in the second quarter, a falloff in activity that seemed at once both unthinkable and inevitable. Subsequent predictions grew even more dismal.
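
The Alpha Pro Tech trade in the annotations above is easier to follow with the arithmetic worked out. A minimal sketch, assuming an illustrative premium of $0.25 per share; the article says only that the options cost "cents on the dollar".

```python
# Worked example of the call-option bet described in the annotations.
premium = 0.25   # assumed cost per share of the $10 call ("cents on the dollar")
strike = 10.00   # contractual price at which the shares can be bought
spot = 25.00     # roughly where the stock traded by the end of February

intrinsic = max(spot - strike, 0.0)  # value of the right to buy at $10
multiple = intrinsic / premium       # gross return per dollar of premium
print(intrinsic, multiple)           # 15.0 60.0 -> a 60x gross return
```

The exact premium changes the multiple but not the asymmetry: a small premium buys the entire move above the strike, which is how a portfolio of such bets can be up "almost two thousand per cent on the year".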
sissij

Nimuno Loops Invents LEGO Sticky Tape So You Can Build Vertical or Even Defy Gravity | ... - 1 views

  • The creators of the Nimuno Loops tape have done some genius inventing bringing us a product that makes you wonder why no one else has come up with it before.
  • They have created the world's first toy block compatible tape — simple, versatile, cheap, and promising unlimited creative possibilities.
  •  
    LEGO is one of my favorite toys from when I was little. The creativity in those building blocks inspired my mind. Different series have different building blocks, but they are all very creative and interesting. And now, they have come up with a new idea: LEGO-compatible tape. I think this is a genius idea. This invention is a combination of tape and LEGO building boards, and I am amazed at people’s ability to draw useful connections between totally different objects. I think it shows how making connections in the mind benefits our lives and mindsets. --Sissi (3/31/2017)
Javier E

The Problem With History Classes - The Atlantic - 3 views

  • The passion and urgency with which these battles are fought reflect the misguided way history is taught in schools. Currently, most students learn history as a set narrative—a process that reinforces the mistaken idea that the past can be synthesized into a single, standardized chronicle of several hundred pages. This teaching pretends that there is a uniform collective story, which is akin to saying everyone remembers events the same.
  • Yet, history is anything but agreeable. It is not a collection of facts deemed to be "official" by scholars on high. It is a collection of historians exchanging different, often conflicting analyses.
  • rather than vainly seeking to transcend the inevitable clash of memories, American students would be better served by descending into the bog of conflict and learning the many "histories" that compose the American national story.
  • Perhaps Fisher offers the nation an opportunity to divorce, once and for all, memory from history. History may be an attempt to memorialize and preserve the past, but it is not memory; memories can serve as primary sources, but they do not stand alone as history. A history is essentially a collection of memories, analyzed and reduced into meaningful conclusions—but that collection depends on the memories chosen.
  • Memories make for a risky foundation: As events recede further into the past, the facts are distorted or augmented by entirely new details
  • people construct unique memories while informing perfectly valid histories. Just as there is a plurality of memories, so, too, is there a plurality of histories.
  • Scholars who read a diverse set of historians who are all focused on the same specific period or event are engaging in historiography
  • This approach exposes textbooks as nothing more than a compilation of histories that the authors deemed to be most relevant and useful.
  • In historiography, the barrier between historian and student is dropped, exposing a conflict-ridden landscape. A diplomatic historian approaches an event from the perspective of the most influential statesmen (who are most often white males), analyzing the context, motives, and consequences of their decisions. A cultural historian peels back the objects, sights, and sounds of a period to uncover humanity’s underlying emotions and anxieties. A Marxist historian adopts the lens of class conflict to explain the progression of events. There are intellectual historians, social historians, and gender historians, among many others. Historians studying the same topic will draw different interpretations—sometimes radically so, depending on the sources they draw from
  • Jacoba Urist points out that history is "about explaining and interpreting past events analytically." If students are really to learn and master these analytical tools, then it is absolutely essential that they read a diverse set of historians and learn how brilliant men and women who are scrutinizing the same topic can reach different conclusions
  • Rather than constructing a curriculum based on the muddled consensus of boards, legislatures, and think tanks, schools should teach students history through historiography. The shortcomings of one historian become apparent after reading the work of another one on the list.
  • Although, as Urist notes, the AP course is "designed to teach students to think like historians," my own experience in that class suggests that it fails to achieve that goal.
  • The course’s framework has always served as an outline of important concepts aiming to allow educators flexibility in how to teach; it makes no reference to historiographical conflicts. Historiography was an epiphany for me because I had never before come face-to-face with how historians think and reason
  • When I took AP U.S. History, I jumbled these diverse histories into one indistinct narrative. Although the test involved open-ended essay questions, I was taught that graders were looking for a firm thesis—forcing students to adopt a side. The AP test also, unsurprisingly, rewards students who cite a wealth of supporting details
  • By the time I took the test in 2009, I was a master at "checking boxes," weighing political factors equally against those involving socioeconomics and ensuring that previously neglected populations like women and ethnic minorities received their due. I did not know that I was pulling ideas from different historiographical traditions. I still subscribed to the idea of a prevailing national narrative and served as an unwitting sponsor of synthesis, oblivious to the academic battles that made such synthesis impossible.
  • Although there may be an inclination to seek to establish order where there is chaos, that urge must be resisted in teaching history. Public controversies over memory are hardly new. Students must be prepared to confront divisiveness, not conditioned to shoehorn agreement into situations where none is possible
  • When conflict is accepted rather than resisted, it becomes possible for different conceptions of American history to co-exist. There is no longer a need to appoint a victor.
  • More importantly, the historiographical approach avoids pursuing truth for the sake of satisfying a national myth
  • The country’s founding fathers crafted some of the finest expressions of personal liberty and representative government the world has ever seen; many of them also held fellow humans in bondage. This paradox is only a problem if the goal is to view the founding fathers as faultless, perfect individuals. If multiple histories are embraced, no one needs to fear that one history will be lost.
  • History is not indoctrination. It is a wrestling match. For too long, the emphasis has been on pinning the opponent. It is time to shift the focus to the struggle itself
  • There is no better way to use the past to inform the present than by accepting the impossibility of a definitive history—and by ensuring that current students are equipped to grapple with the contested memories in their midst.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight.”
Javier E

Meet DALL-E, the A.I. That Draws Anything at Your Command - The New York Times - 0 views

  • A half decade ago, the world’s leading A.I. labs built systems that could identify objects in digital images and even generate images on their own, including flowers, dogs, cars and faces. A few years later, they built systems that could do much the same with written language, summarizing articles, answering questions, generating tweets and even writing blog posts.
  • DALL-E is a notable step forward because it juggles both language and images and, in some cases, grasps the relationship between the two
  • “We can now use multiple, intersecting streams of information to create better and better technology,”
  • ...5 more annotations...
  • when Mr. Nichol tweaked his requests a little, adding or subtracting a few words here or there, it provided what he wanted. When he asked for “a piano in a living room filled with sand,” the image looked more like a beach in a living room.
  • DALL-E is what artificial intelligence researchers call a neural network, which is a mathematical system loosely modeled on the network of neurons in the brain.
  • the same technology that recognizes the commands spoken into smartphones and identifies the presence of pedestrians as self-driving cars navigate city streets.
  • A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of avocado photos, for example, it can learn to recognize an avocado.
  • DALL-E looks for patterns as it analyzes millions of digital images as well as text captions that describe what each image depicts. In this way, it learns to recognize the links between the images and the words.
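The two highlights above describe pattern learning only in general terms; a tiny concrete sketch may help. What follows is not OpenAI's code, just a minimal PyTorch-style image classifier, with random tensors standing in for a real photo dataset, showing what "learning by pinpointing patterns in labeled images" amounts to in practice. The dataset, layer sizes, and the avocado/not-avocado labels are all illustrative assumptions.

```python
# Minimal sketch of "a neural network learns skills by analyzing large amounts
# of data." Not DALL-E: a toy image classifier, with random tensors standing in
# for thousands of labeled photos (an illustrative assumption).
import torch
import torch.nn as nn

# Stand-in dataset: 256 fake 32x32 RGB "photos", labeled 0 (avocado) or 1 (not).
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 2, (256,))

# A small convolutional network; its weights are the "patterns" it learns.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),  # two output scores: avocado / not avocado
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Learning" means repeatedly nudging the weights so predictions match labels.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

DALL-E extends the same mechanism by training on image-caption pairs rather than a fixed label set, so the patterns it extracts tie words to visual features.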
Javier E

The Great PowerPoint Panic of 2003 - The Atlantic - 0 views

  • if all of those bad presentations really led to broad societal ills, the proof is hard to find.
  • Some scientists have tried to take a formal measure of the alleged PowerPoint Effect, asking whether the software really influences our ability to process information. Sebastian Kernbach, a professor of creativity and design at the University of St. Gallen, in Switzerland, has co-authored multiple reviews synthesizing this literature. On the whole, he told me, the research suggests that Tufte was partly right, partly wrong. PowerPoint doesn’t seem to make us stupid—there is no evidence of lower information retention or generalized cognitive decline, for example, among those who use it—but it does impose a set of assumptions about how information ought to be conveyed: loosely, in bullet points, and delivered by presenters to an audience of passive listeners. These assumptions have even reshaped the physical environment for the slide-deck age, Kernbach said: Seminar tables, once configured in a circle, have been bent, post-PowerPoint, into a U-shape to accommodate presenters.
  • When I spoke with Kernbach, he was preparing for a talk on different methods of visual thinking to a group of employees at a large governmental organization. He said he planned to use a flip chart, draw on blank slides like a white board, and perhaps even have audience members do some drawing of their own. But he was also gearing up to use regular old PowerPoint slides. Doing so, he told me, would “signal preparation and professionalism” for his audience. The organization was NASA.
  • ...3 more annotations...
  • The fact that the American space agency still uses PowerPoint should not be surprising. Despite the backlash it inspired in the press, and the bile that it raised in billionaires, and the red alert it caused within the military, the corporate-presentation juggernaut rolls on. The program has more monthly users than ever before, according to Shawn Villaron, Microsoft’s vice president of product for PowerPoint—well into the hundreds of millions. If anything, its use cases have proliferated. During lockdown, people threw PowerPoint parties on Zoom. Kids now make PowerPoint presentations for their parents when they want to get a puppy or quit soccer or attend a Niall Horan meet and greet. If PowerPoint is evil, then evil rules the world.
  • it’s tempting to entertain counterfactuals and wonder how things might have played out if Tufte and the rest of us had worried about social media back in 2003 instead of presentation software. Perhaps a timely pamphlet on The Cognitive Style of Friendster or a Wired headline asserting that “LinkedIn Is Evil” would have changed the course of history. If the social-media backlash of the past few years had been present from the start, maybe Facebook would never have grown into the behemoth it is now, and the country would never have become so hopelessly divided.
  • it could be that nothing whatsoever would have changed. No matter what their timing, and regardless of their aptness, concerns about new media rarely seem to make a difference. Objections get steamrolled. The new technology takes over. And years later, when we look back and think, How strange that we were so perturbed, the effects of that technology may well be invisible.
Javier E

Opinion | The Alt-Right Manipulated My Comic. Then A.I. Claimed It. - The New York Times - 1 views

  • Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then used its nonprofit research arm to create A.I. generators first via Stable Diffusion and then commercialized in a new model called DreamStudio.
  • What makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication.
  • Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood.
  • ...4 more annotations...
  • Greg Rutkowski, a hugely popular concept artist, has been used in a prompt for Stable Diffusion upward of 100,000 times. Now, his name is no longer attached to just his own work, but it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourself that looked similar but that was uncanny, twisted in a way to which we didn’t consent.
  • In theory, everyone is at risk for their work or image to become a vulgarity with A.I., but I suspect those who will be the most hurt are those who are already facing the consequences of improving technology, namely members of marginalized groups.
  • In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
  • I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable; the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape. I see a monster forming.
Javier E

A Leading Memory Researcher Explains How to Make Precious Moments Last - The New York T... - 0 views

  • Our memories form the bedrock of who we are. Those recollections, in turn, are built on one very simple assumption: This happened. But things are not quite so simple
  • “We update our memories through the act of remembering,” says Charan Ranganath, a professor of psychology and neuroscience at the University of California, Davis, and the author of the illuminating new book “Why We Remember.” “So it creates all these weird biases and infiltrates our decision making. It affects our sense of who we are.
  • Rather than being photo-accurate repositories of past experience, Ranganath argues, our memories function more like active interpreters, working to help us navigate the present and future. The implication is that who we are, and the memories we draw on to determine that, are far less fixed than you might think. “Our identities,” Ranganath says, “are built on shifting sand.”
  • ...24 more annotations...
  • What is the most common misconception about memory?
  • People believe that memory should be effortless, but their expectations for how much they should remember are totally out of whack with how much they’re capable of remembering.
  • Another misconception is that memory is supposed to be an archive of the past. We expect that we should be able to replay the past like a movie in our heads.
  • we don’t replay the past as it happened; we do it through a lens of interpretation and imagination.
  • How much are we capable of remembering, from both an episodic standpoint (episodic memory is the term for the memory of life experiences) and a semantic one (semantic memory is the term for the memory of facts and knowledge about the world)?
  • I would argue that we’re all everyday-memory experts, because we have this exceptional semantic memory, which is the scaffold for episodic memory.
  • If what we’re remembering, or the emotional tenor of what we’re remembering, is dictated by how we’re thinking in a present moment, what can we really say about the truth of a memory?
  • But if memories are malleable, what are the implications for how we understand our “true” selves?
  • your question gets to a major purpose of memory, which is to give us an illusion of stability in a world that is always changing. Because if we look for memories, we’ll reshape them into our beliefs of what’s happening right now. We’ll be biased in terms of how we sample the past. We have these illusions of stability, but we are always changing
  • And depending on what memories we draw upon, those life narratives can change.
  • I know it sounds squirmy to say, “Well, I can’t answer the question of how much we remember,” but I don’t want readers to walk away thinking memory is all made up.
  • One thing that makes the human brain so sophisticated is that we have a longer timeline in which we can integrate information than many other species. That gives us the ability to say: “Hey, I’m walking up and giving money to the cashier at the cafe. The barista is going to hand me a cup of coffee in about a minute or two.”
  • There is this illusion that we know exactly what’s going to happen, but the fact is we don’t. Memory can overdo it: Somebody lied to us once, so they are a liar; somebody shoplifted once, they are a thief.
  • If people have a vivid memory of something that sticks out, that will overshadow all their knowledge about the way things work. So there’s kind of an illusion.
  • we have this illusion that much of the world is cause and effect. But the reason, in my opinion, that we have that illusion is that our brain is constantly trying to find the patterns
  • I think of memory more like a painting than a photograph. There’s often photorealistic aspects of a painting, but there’s also interpretation. As a painter evolves, they could revisit the same subject over and over and paint differently based on who they are now. We’re capable of remembering things in extraordinary detail, but we infuse meaning into what we remember. We’re designed to extract meaning from the past, and that meaning should have truth in it. But it also has knowledge and imagination and, sometimes, wisdom.
  • memory, often, is educated guesses by the brain about what’s important. So what’s important? Things that are scary, things that get your desire going, things that are surprising. Maybe you were attracted to this person, and your eyes dilated, your pulse went up. Maybe you were working on something in this high state of excitement, and your dopamine was up.
  • It could be any of those things, but they’re all important in some way, because if you’re a brain, you want to take what’s surprising, you want to take what’s motivationally important for survival, what’s new.
  • On the more intentional side, are there things that we might be able to do in the moment to make events last in our memories? In some sense, it’s about being mindful. If we want to form a new memory, focus on aspects of the experience you want to take with you.
  • If you’re with your kid, you’re at a park, focus on the parts of it that are great, not the parts that are kind of annoying. Then you want to focus on the sights, the sounds, the smells, because those will give you rich detail later on
  • Another part of it, too, is that we kill ourselves by inducing distractions in our world. We have alerts on our phones. We check email habitually.
  • When we go on trips, I take candid shots. These are the things that bring you back to moments. If you capture the feelings and the sights and the sounds that bring you to the moment, as opposed to the facts of what happened, that is a huge part of getting the best of memory.
  • this goes back to the question of whether the factual truth of a memory matters to how we interpret it. I think it matters to have some truth, but then again, many of the truths we cling to depend on our own perspective.
  • There’s a great experiment on this. These researchers had people read this story about a house. (The study was “Recall of Previously Unrecallable Information Following a Shift in Perspective,” by Richard C. Anderson and James W. Pichert.) One group of subjects is told, I want you to read this story from the perspective of a prospective home buyer. When they remember it, they remember all the features of the house that are described in the thing. Another group is told, I want you to remember this from the perspective of a burglar. Those people tend to remember the valuables in the house and things that you would want to take. But what was interesting was then they switched the groups around. All of a sudden, people could pull up a number of details that they didn’t pull up before. It was always there, but they just didn’t approach it from that mind-set. So we do have a lot of information that we can get if we change our perspective, and this ability to change our perspective is exceptionally important for being accurate. It’s exceptionally important for being able to grow and modify our beliefs.
Javier E

The G.O.P.'s Demographic Excuse - NYTimes.com - 0 views

  • What the party really needs, much more than a better identity-politics pitch, is an economic message that would appeal across demographic lines — reaching both downscale white voters turned off by Romney’s Bain Capital background and upwardly mobile Latino voters who don’t relate to the current G.O.P. fixation on upper-bracket tax cuts. As the American Enterprise Institute’s Henry Olsen writes, it should be possible for Republicans to oppose an overweening and intrusive state while still recognizing that “government can give average people a hand up to achieve the American Dream.” It should be possible for the party to reform and streamline government while also addressing middle-class anxieties about wages, health care, education and more. The good news is that such an agenda already exists, at least in embryonic form. Thanks to four years of intellectual ferment, Republicans seeking policy renewal have a host of thinkers and ideas to draw from: Luigi Zingales and Jim Pethokoukis on crony capitalism, Ramesh Ponnuru and Robert Stein on tax policy, Frederick Hess on education reform, James Capretta on alternatives to Obamacare, and many more.
Emily Horwitz

Struggle For Smarts? How Eastern And Western Cultures Tackle Learning : Shots - Health ... - 1 views

  • In 1979, when Jim Stigler was still a graduate student at the University of Michigan, he went to Japan to research teaching methods and found himself sitting in the back row of a crowded fourth grade math class.
  • The teacher was trying to teach the class how to draw three-dimensional cubes on paper, and one kid was just totally having trouble with it. His cube looked all cockeyed, so the teacher said to him, 'Why don't you go put yours on the board?' So right there I thought, 'That's interesting! He took the one who can't do it and told him to go and put it on the board.'"
  • the kid didn't break into tears. Stigler says the child continued to draw his cube with equanimity. "And at the end of the class, he did make his cube look right! And the teacher said to the class, 'How does that look, class?' And they all looked up and said, 'He did it!' And they broke into applause." The kid smiled a huge smile and sat down, clearly proud of himself.
  • ...12 more annotations...
  • "From very early ages we [in America] see struggle as an indicator that you're just not very smart," Stigler says. "It's a sign of low ability — people who are smart don't struggle, they just naturally get it, that's our folk theory. Whereas in Asian cultures they tend to see struggle more as an opportunity."
  • For the most part in American culture, intellectual struggle in schoolchildren is seen as an indicator of weakness, while in Eastern cultures it is not only tolerated, it is often used to measure emotional strength.
  • to understand why these two cultures view struggle so differently, it's good to step back and examine how they think about where academic excellence comes from.
  • American mother is communicating to her son that the cause of his success in school is his intelligence. He's smart — which, Li says, is a common American view.
  • Our children are not creative. Our children do not have individuality. They're just robots. You hear the educators from Asian countries express that concern a lot.
  • "So the focus is on the process of persisting through it despite the challenges, not giving up, and that's what leads to success," Li says.
  • Obviously if struggle indicates weakness — a lack of intelligence — it makes you feel bad, and so you're less likely to put up with it. But if struggle indicates strength — an ability to face down the challenges that inevitably occur when you are trying to learn something — you're more willing to accept it.
  • American students "worked on it less than 30 seconds on average and then they basically looked at us and said, 'We haven't had this,'" he says.
  • Japanese students worked for the entire hour on the impossible problem.
  • Westerners tend to worry that their kids won't be able to compete against Asian kids who excel in many areas but especially in math and science. Jin Li says that educators from Asian countries have their own set of worries.
  • “The idea of intelligence is believed in the West as a cause,” Li explains. “She is telling him that there is something in him, in his mind, that enables him to do what he does.”
  • in the Japanese classrooms that he's studied, teachers consciously design tasks that are slightly beyond the capabilities of the students they teach, so the students can actually experience struggling with something just outside their reach. Then, once the task is mastered, the teachers actively point out that the student was able to accomplish it through the student's hard work and struggle.
  • An interesting look into the differences between how Eastern and Western cultures see academic struggle.
Javier E

Obscurity: A Better Way to Think About Your Data Than 'Privacy' - Woodrow Hartzog and E... - 1 views

  • Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn't mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.
  • Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too
  • What obscurity draws our attention to is that while the records were accessible to any member of the public prior to the rise of big data, more effort was required to obtain, aggregate, and publish them. In that prior context, technological constraints implicitly protected privacy interests.
  • ...9 more annotations...
  • the "you choose who to let in" narrative is powerful because it trades on traditional notions of space and boundary regulation, and further appeals to our heightened sense of individual responsibility, and, possibly even vanity. The basic message is that so long as we exercise good judgment when selecting our friends, no privacy problems will arise
  • What this appeal to status quo relations and existing privacy settings conceals is the transformative potential of Graph: new types of searching can emerge that, due to enhanced frequency and newly created associations between data points, weaken, and possibly obliterate, obscurity.
  • The other dominant narrative emerging is that the Graph will simplify "stalking."
  • The stalker frame, though, muddies the concept, implying that the problem is people with bad intentions getting our information. Determined stalkers certainly pose a threat to the obscurity of information because they represent an increased likelihood that obscure information will be found and understood.
  • Well-intentioned searches can be problematic, too.
  • It is not a stretch to assume Graph could enable searching through the content of posts a user has liked or commented on and generating categories of interests from it. For example, users could search which of their friends are interested in politics, or, perhaps, specifically, in left-wing politics.
  • In this scenario, a user who wasn't a fan of political groups or causes, didn't list political groups or causes as interests, and didn't post political stories, could still be identified as political (a toy sketch of such an inference follows this list).
  • In a system that purportedly relies upon user control, it is still unclear how and if users will be able to detect when their personal information is no longer obscure. How will they be able to anticipate the numerous different queries that might expose previously obscure information? Will users even be aware of all the composite results including their information?
  • Obscurity is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine how much obscurity citizens need to thrive.
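The worry in the highlights above can be made concrete with a toy sketch. Facebook's actual Graph Search internals are not public, so everything below (the users, the liked posts, the keyword list, and the threshold) is an invented assumption; the point is only to show how an interest category can be derived from likes a user never declared.

```python
# Hypothetical sketch: inferring an "interested in politics" label from liked
# posts. Users, posts, keywords, and threshold are invented for illustration;
# this is not Facebook's actual Graph Search logic.
POLITICAL_KEYWORDS = {"election", "senator", "policy", "left-wing", "ballot"}

likes_by_user = {
    "alice": ["great election coverage", "new ballot measure explained", "cute dog"],
    "bob": ["cute dog", "pasta recipe", "vacation photos"],
}

def looks_political(liked_posts, keywords, threshold=2):
    """Flag a user once enough liked posts mention any of the keywords."""
    hits = sum(any(k in post.lower() for k in keywords) for post in liked_posts)
    return hits >= threshold

for user, posts in likes_by_user.items():
    if looks_political(posts, POLITICAL_KEYWORDS):
        print(f"{user}: inferred interest in politics (never self-declared)")
```

Even at this toy scale the obscurity problem is visible: alice never listed politics as an interest, yet an aggregate query over her likes surfaces her as political.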
Javier E

Computer Algorithms Rely Increasingly on Human Helpers - NYTimes.com - 0 views

  • Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning.
  • And so, while programming experts still write the step-by-step instructions of computer code, additional people are needed to make more subtle contributions as the work the computers do has become more involved. People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
  • Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing. Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
  • ...3 more annotations...
  • When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages. “There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
  • “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Mr. Huffman said.
  • Ben Taylor, 25, is a product manager at FindTheBest, a fast-growing start-up in Santa Barbara, Calif. The company calls itself a “comparison engine” for finding and comparing more than 100 topics and products, from universities to nursing homes, smartphones to dog breeds. Its Web site went up in 2010, and the company now has 60 full-time employees. Mr. Taylor helps design and edit the site’s education pages. He is not an engineer, but an English major who has become a self-taught expert in the arcane data found in Education Department studies and elsewhere. His research methods include talking to and e-mailing educators. He is an information sleuth.
Javier E

One of Us - Lapham's Quarterly - 0 views

  • On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly
  • an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
  • Only with the Greeks does there enter the notion of a formal divide between our species, our animal, and every other on earth.
  • ...7 more annotations...
  • there’s that exquisite verse, one of the most beautiful in the Bible, the one that says if God cares deeply about sparrows, don’t you think He cares about you? One is so accustomed to dwelling on the second, human, half of the equation, the comforting part, but when you put your hand over that and consider only the first, it’s a little startling: God cares deeply about the sparrows. Not just that, He cares about them individually. “Are not five sparrows sold for two pennies?” Jesus says. “Yet not one of them is forgotten in God’s sight.”
  • The modern conversation on animal consciousness proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata
  • In On the Origin of Species, Charles Darwin made the intriguing claim that among the naturalists he knew it was consistently the case that the better a researcher got to know a certain species, the more each individual animal’s actions appeared attributable to “reason and the less to unlearnt instinct.” The more you knew, the more you suspected that they were rational. That marks an important pivot, that thought, insofar as it took place in the mind of someone devoted to extremely close and meticulous study of living animals, a mind that had trained itself not to sentimentalize.
  • The sheer number and variety of experiments carried out in the twentieth century—and with, if anything, a renewed intensity in the twenty-first—exceeds summary. Reasoning, language, neurology, the science of emotions—every chamber where “consciousness” is thought to hide has been probed. Birds and chimps and dolphins have been made to look at themselves in mirrors—to observe whether, on the basis of what they see, they groom or preen (a measure, if somewhat arbitrary, of self-awareness). Dolphins have been found to grieve. Primates have learned symbolic or sign languages and then been interrogated with them. Their answers show thinking but have proved stubbornly open to interpretation on the issue of “consciousness,” with critics warning, as always, about the dangers of anthropomorphism, animal-rights bias, etc.
  • If we put aside the self-awareness standard—and really, how arbitrary and arrogant is that, to take the attribute of consciousness we happen to possess over all creatures and set it atop the hierarchy, proclaiming it the very definition of consciousness (Georg Christoph Lichtenberg wrote something wise in his notebooks, to the effect of: only a man can draw a self-portrait, but only a man wants to)—it becomes possible to say at least the following: the overwhelming tendency of all this scientific work, of its results, has been toward more consciousness. More species having it, and species having more of it than assumed.
  • The animal kingdom is symphonic with mental activity, and of its millions of wavelengths, we’re born able to understand the minutest sliver. The least we can do is have a proper respect for our ignorance.
  • The philosopher Thomas Nagel wrote an essay in 1974 titled, “What Is It Like To Be a Bat?”, in which he put forward perhaps the least overweening, most useful definition of “animal consciousness” ever written, one that channels Spinoza’s phrase about “that nature belonging to him wherein he has his being.” Animal consciousness occurs, Nagel wrote, when “there is something that it is to be that organism—something it is like for the organism.” The strangeness of his syntax carries the genuine texture of the problem. We’ll probably never be able to step far enough outside of our species-reality to say much about what is going on with them, beyond saying how like or unlike us they are. Many things are conscious on the earth, and we are one, and our consciousness feels like this; one of the things it causes us to do is doubt the existence of the consciousness of the other millions of species. But it also allows us to imagine a time when we might stop doing that.
Javier E

Girls Outnumbered in New York's Elite Public Schools - NYTimes.com - 0 views

  • the gap at the elite schools could be as elemental as their perception as havens for science, technology, engineering or math, making them a natural magnet for boys, just as girls might gravitate to schools known for humanities.
  • Mr. Finn, who, with Jessica A. Hockett, wrote the recent book, “Exam Schools: Inside America’s Most Selective Public High Schools.” “I think you’re looking at habit, culture, perceptions, tradition and curricular emphasis.”
  • enrollment in highly competitive high schools is 55 percent female. “The big gender-related chasm in American education these days is how much worse boys are doing than girls.”
  • ...1 more annotation...
  • Of the 3,060 students who applied to his school this year, 44 percent were boys. To help rank the candidates, he said, he simply adjusted the focus of student interviews to more effectively draw boys out in describing their own strengths. This year he offered seats to 136 boys and 134 girls. “Are we worried about getting unqualified boys?” asked Dr. Lerner. “No, not at all.”
Javier E

Yelp and the Wisdom of 'The Lonely Crowd' : The New Yorker - 1 views

  • David Riesman spent the first half of his career writing one of the most important books of the twentieth century. He spent the second half correcting its pervasive misprision. “The Lonely Crowd,” an analysis of the varieties of social character that examined the new American middle class
  • the “profound misinterpretation” of the book as a simplistic critique of epidemic American postwar conformity via its description of the contours of the “other-directed character,” whose identity and behavior is shaped by its relationships.
  • he never meant to suggest that Americans now were any more conformist than they ever had been, or that there’s even such a thing as social structure without conformist consensus.
  • ...17 more annotations...
  • In this past weekend’s Styles section of the New York Times, Siegel uses “The Lonely Crowd” to analyze the putative “Yelpification” of contemporary life: according to Siegel, Riesman’s view was that “people went from being ‘inner-directed’ to ‘outer-directed,’ from heeding their own instincts and judgment to depending on the judgments and opinions of tastemakers and trendsetters.” The “conformist power of the crowd” and its delighted ability to write online reviews led Siegel down a sad path to a lackluster expensive dinner.
  • What Riesman actually suggested was that we think of social organization in terms of a series of “ideal types” along a spectrum of increasingly loose authority
  • On one end of the spectrum is a “tradition-directed” community, where we all understand that what we’re supposed to do is what we’re supposed to do because it’s just the thing that one does; authority is unequivocal, and there’s neither the room nor the desire for autonomous action
  • In the middle of the spectrum, as one moves toward a freer distribution of, and response to, authority, is “inner-direction.” The inner-directed character is concerned not with “what one does” but with “what people like us do.” Which is to say that she looks to her own internalizations of past authorities to get a sense for how to conduct her affairs.
  • Contemporary society, Riesman thought, was best understood as chiefly “other-directed,” where the inculcated authority of the vertical (one’s lineage) gives way to the muddled authority of the horizontal (one’s peers).
  • The inner-directed person orients herself by an internal “gyroscope,” while the other-directed person orients herself by “radar.”
  • It’s not that the inner-directed person consults some deep, subjective, romantically sui generis oracle. It’s that the inner-directed person consults the internalized voices of a mostly dead lineage, while her other-directed counterpart heeds the external voices of her living contemporaries.
  • “the gyroscopic mechanism allows the inner-directed person to appear far more independent than he really is: he is no less a conformist to others than the other-directed person, but the voices to which he listens are more distant, of an older generation, their cues internalized in his childhood.” The inner-directed person is, simply, “somewhat less concerned than the other-directed person with continuously obtaining from contemporaries (or their stand-ins: the mass media) a flow of guidance, expectation, and approbation.”
  • Riesman drew no moral from the transition from a community of primarily inner-directed people to a community of the other-directed. Instead, he saw that each ideal type had different advantages and faced different problems
  • As Riesman understood it, the primary disciplining emotion under tradition direction is shame, the threat of ostracism and exile that enforces traditional action. Inner-directed people experience not shame but guilt, or the fear that one’s behavior won’t be commensurate with the imago within. And, finally, other-directed folks experience not guilt but a “contagious, highly diffuse” anxiety—the possibility that, now that authority itself is diffuse and ambiguous, we might be doing the wrong thing all the time.
  • Siegel is right to make the inference, if wayward in his conclusions. It makes sense to associate the anxiety of how to relate to livingly diffuse authorities with the Internet, which presents the greatest signal-to-noise-ratio problem in human history.
  • The problem with Yelp is not the role it plays, for Siegel, in the proliferation of monoculture; most people of my generation have learned to ignore Yelp entirely. It’s the fact that, after about a year of usefulness, Yelp very quickly became a terrible source of information.
  • There are several reasons for this. The first is the nature of an algorithmic response to the world. As Jaron Lanier points out in “Who Owns the Future?,” the hubris behind each new algorithm is the idea that its predictive and evaluatory structure is game-proof; but the minute any given algorithm gains real currency, all the smart and devious people devote themselves to gaming it. On Yelp, the obvious case would be garnering positive reviews by any means necessary.
  • A second problem with Yelp’s algorithmic ranking is in the very idea of using online reviews; as anybody with a book on Amazon knows, they tend to draw more contributions from people who feel very strongly about something, positively or negatively. This undermines the statistical relevance of their recommendations (a small simulation after this list illustrates the mechanism).
  • the biggest problem with Yelp is not that it’s a popularity contest. It’s not even that it’s an exploitable popularity contest.
  • it’s the fact that Yelp makes money by selling ads and prime placements to the very businesses it lists under ostensibly neutral third-party review
  • But Yelp’s valuations are always possibly in bad faith, even if its authority is dressed up as the distilled algorithmic wisdom of a crowd. For Riesman, that’s the worst of all possible worlds: a manipulated consumer certainty that only shores up the authority of an unchosen, hidden source. In that world, cold monkfish is the least of our problems.
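This is the small simulation referenced above. The opinion distribution and the review-propensity curve are assumptions invented for illustration, not measurements of Yelp or Amazon; the mechanism, not the numbers, is the point.

```python
# Toy simulation of review selection bias: people who feel strongly are more
# likely to post, so reviews overrepresent extreme opinions. The distribution
# and propensity curve below are illustrative assumptions.
import random

random.seed(0)

# True opinions on a 1-5 scale: most customers are mildly positive.
opinions = [min(5.0, max(1.0, random.gauss(3.5, 0.8))) for _ in range(100_000)]

def leaves_review(opinion):
    """Assume the chance of posting grows with the strength of feeling."""
    intensity = abs(opinion - 3.5)  # 0 = lukewarm
    return random.random() < 0.02 + 0.3 * intensity

reviews = [o for o in opinions if leaves_review(o)]

def strong(o):
    return abs(o - 3.5) > 1.2  # "feels very strongly", positively or negatively

print(f"strong opinions in the population: {sum(map(strong, opinions)) / len(opinions):.1%}")
print(f"strong opinions among the reviews: {sum(map(strong, reviews)) / len(reviews):.1%}")
```

Because the posting probability rises monotonically with intensity, strong opinions are guaranteed to be overrepresented among the posted reviews, which is the precise sense in which a review average loses statistical relevance for the typical customer.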
Javier E

C. S. Lewis, Evangelical Rock Star - NYTimes.com - 0 views

  • the text for which Lewis is best known is his “Chronicles of Narnia.” And what “Narnia” offers is not theological simplicity, but complexity. The God represented in these books is not quite real (it’s fiction) and yet more real than the books pretend (that’s not a lion, it’s God).
  • In “Mere Christianity,” Lewis wrote that to pretend helps one to experience God as real. In “Narnia” he offered a way to pretend — by depicting a God who is so explicitly not a God from an ordinary human church. Aslan keeps God safe from human clumsiness and error.
  • What does it mean that our society places such a premium on fantasy and imagination?
  • ...6 more annotations...
  • This suggests that we imagine a complex reality in which things might be true — materially, spiritually, psychologically
  • “Inventive pretend,” in which children pretend the fantastic or impossible (enchanted princesses, dragon hunters) “is rarely — if ever — observed in non-industrialized or traditional cultures.”
  • Westerners, by contrast, not only tolerate fantasy play but actively encourage it, for adults as well as for children. We are novel readers, movie watchers and game players.
  • Science leads us to draw a sharp line between what is real and what is unreal. At the same time, we live in an age in which we are exquisitely aware that there are many theories, both religious and scientific, to explain the world, and many ways to be human.
  • Probably fiction does for us what the vision of Aslan did for Bob: it helps us to learn what we find emotionally true in the face of irreconcilable contradictions.
  • fiction teaches us how to think about what we take to be true. In the cacophony of an information-soaked age, we need it.
Javier E

The Importance of Doing Recent History | History News Network - 1 views

  • We argue that writing contemporary history is different from the role historians might play as public intellectuals who draw on their expertise to comment on recent events in the media. Instead, the writing of recent history shifts the boundaries of what might be considered a legitimate topic of historical study. The very definition of “history” has hinged on the sense of a break between past and present that allows for critical perspective. The historians’ traditional task has been to bring a “dead,” absent past back into the present. However, those doing recent history recognize that their subject matter is not fully past, or as Renee Romano puts it in our edited collection about recent history, it’s “not dead yet.”
  • studying the recent past presents real methodological challenges. It untethers the academic historian from the aspects of our practice that give us all, regardless of field or political bent, a sense of common enterprise: objectivity, perspective, a defined archive, and a secondary literature that is there to be argued with, corrected and leaned upon.