
TOK Friends: Group items tagged “collaboration”


Javier E

The Singular Mind of Terry Tao - The New York Times - 0 views

  • reflecting on his career so far, Tao told me that his view of mathematics has utterly changed since childhood. ‘‘When I was growing up, I knew I wanted to be a mathematician, but I had no idea what that entailed,’’ he said in a lilting Australian accent. ‘‘I sort of imagined a committee would hand me problems to solve or something.’’
  • But it turned out that the work of real mathematicians bears little resemblance to the manipulations and memorization of the math student. Even those who experience great success through their college years may turn out not to have what it takes. The ancient art of mathematics, Tao has discovered, does not reward speed so much as patience, cunning and, perhaps most surprising of all, the sort of gift for collaboration and improvisation that characterizes the best jazz musicians
  • Tao now believes that his younger self, the prodigy who wowed the math world, wasn’t truly doing math at all. ‘‘It’s as if your only experience with music were practicing scales or learning music theory,’’ he said, looking into light pouring from his window. ‘‘I didn’t learn the deeper meaning of the subject until much later.’’
  • The true work of the mathematician is not experienced until the later parts of graduate school, when the student is challenged to create knowledge in the form of a novel proof. It is common to fill page after page with an attempt, the seasons turning, only to arrive precisely where you began, empty-handed — or to realize that a subtle flaw of logic doomed the whole enterprise from its outset. The steady state of mathematical research is to be completely stuck. It is a process that Charles Fefferman of Princeton, himself a onetime math prodigy turned Fields medalist, likens to ‘‘playing chess with the devil.’’ The rules of the devil’s game are special, though: The devil is vastly superior at chess, but, Fefferman explained, you may take back as many moves as you like, and the devil may not. You play a first game, and, of course, ‘‘he crushes you.’’ So you take back moves and try something different, and he crushes you again, ‘‘in much the same way.’’ If you are sufficiently wily, you will eventually discover a move that forces the devil to shift strategy; you still lose, but — aha! — you have your first clue.
  • Tao has emerged as one of the field’s great bridge-­builders. At the time of his Fields Medal, he had already made discoveries with more than 30 different collaborators. Since then, he has also become a prolific math blogger with a decidedly non-­Gaussian ebullience: He celebrates the work of others, shares favorite tricks, documents his progress and delights at any corrections that follow in the comments. He has organized cooperative online efforts to work on problems. ‘‘Terry is what a great 21st-­century mathematician looks like,’’ Jordan Ellenberg, a mathematician at the University of Wisconsin, Madison, who has collaborated with Tao, told me. He is ‘‘part of a network, always communicating, always connecting what he is doing with what other people are doing.’’
  • Most mathematicians tend to specialize, but Tao ranges widely, learning from others and then working with them to make discoveries. Markus Keel, a longtime collaborator and close friend, reaches to science fiction to explain Tao’s ability to rapidly digest and employ mathematical ideas: Seeing Tao in action, Keel told me, reminds him of the scene in ‘‘The Matrix’’ when Neo has martial arts downloaded into his brain and then, opening his eyes, declares, ‘‘I know kung fu.’’ The citation for Tao’s Fields Medal, awarded in 2006, is a litany of boundary hopping and notes particularly ‘‘beautiful work’’ on Horn’s conjecture, which Tao completed with a friend he had played foosball with in graduate school. It was a new area of mathematics for Tao, at a great remove from his known stamping grounds. ‘‘This is akin,’’ the citation read, ‘‘to a leading English-­language novelist suddenly producing the definitive Russian novel.’’
  • For their work, Tao and Green salvaged a crucial bit from an earlier proof done by others, which had been discarded as incorrect, and aimed at a different goal. Other maneuvers came from masterful proofs by Timothy Gowers of England and Endre Szemeredi of Hungary. Their work, in turn, relied on contributions from Erdos, Klaus Roth and Frank Ramsey, an Englishman who died at age 26 in 1930, and on and on, into history. Ask mathematicians about their experience of the craft, and most will talk about an intense feeling of intellectual camaraderie. ‘‘A very central part of any mathematician’s life is this sense of connection to other minds, alive today and going back to Pythagoras,’’ said Steven Strogatz, a professor of mathematics at Cornell University. ‘‘We are having this conversation with each other going over the millennia.’’
  • As a group, the people drawn to mathematics tend to value certainty and logic and a neatness of outcome, so this game becomes a special kind of torture. And yet this is what any ­would-be mathematician must summon the courage to face down: weeks, months, years on a problem that may or may not even be possible to unlock. You find yourself sitting in a room without doors or windows, and you can shout and carry on all you want, but no one is listening.
  • An effort to prove that 1 equals 0 is not likely to yield much fruit, it’s true, but the hacker’s mind-set can be extremely useful when doing math. Long ago, mathematicians invented a number that when multiplied by itself equals negative 1, an idea that seemed to break the basic rules of multiplication. It was so far outside what mathematicians were doing at the time that they called it ‘‘imaginary.’’ Yet imaginary numbers proved a powerful invention, and modern physics and engineering could not function without them.
  • Early encounters with math can be misleading. The subject seems to be about learning rules — how and when to apply ancient tricks to arrive at an answer. Four cookies remain in the cookie jar; the ball moves at 12.5 feet per second. Really, though, to be a mathematician is to experiment. Mathematical research is a fundamentally creative act. Lore has it that when David Hilbert, arguably the most influential mathematician of fin de siècle Europe, heard that a colleague had left to pursue fiction, he quipped: ‘‘He did not have enough imagination for mathematics.’’
  • Many people think that substantial progress on Navier-­Stokes may be impossible, and years ago, Tao told me, he wrote a blog post concurring with this view. Now he has some small bit of hope. The twin-prime conjecture had the same feel, a sense of breaking through the wall of intimidation that has scared off many aspirants. Outside the world of mathematics, both Navier-­Stokes and the twin-prime conjecture are described as problems. But for Tao and others in the field, they are more like opponents. Tao’s opponent has been known to taunt him, convincing him that he is overlooking the obvious, or to fight back, making quick escapes when none should be possible. Now the opponent appears to have revealed a weakness. But Tao said he has been here before, thinking he has found a way through the defenses, when in fact he was being led into an ambush. ‘‘You learn to get suspicious,’’ Tao said. ‘‘You learn to be on the lookout.’’
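The “imaginary” number described in the highlights above — a number that, multiplied by itself, equals negative 1 — is no longer exotic: it is a built-in numeric type in most programming languages. A minimal Python sketch (the specific values shown are just illustrations):

```python
import cmath  # complex-number math from the standard library

# Python writes the imaginary unit i as 1j; squaring it gives -1.
i = 1j
print(i * i)  # (-1+0j)

# Imaginary and real parts combine into complex numbers, which modern
# physics and engineering rely on, e.g. Euler's identity e^(i*pi) = -1:
print(cmath.exp(1j * cmath.pi))  # approximately -1, up to floating-point error
```

The tiny imaginary residue in the second result comes from floating-point rounding, not from the mathematics.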
caelengrubb

A new collaborative approach to investigate what happens in the brain when it makes a d... - 0 views

  • Different areas process sounds, sights or pertinent prior knowledge.
  • neuroscience has operated in a traditional science research model: Individual labs work on their own, usually focusing on one or a few brain areas.
  • The BRAIN Initiative, which the Obama administration launched in 2013, started to encourage the kind of collaboration that neuroscience needs.
  • The first question the collaboration is tackling focuses on decision-making by the brain.
  • By collating information from the eyes, the ears and so on, the association areas may give a more coherent, big-picture view of what’s happening in the world.
  • Then, there’s the frontal cortex, which is believed to play a role in controlling voluntary action. Research suggests it’s involved in committing to a particular action once enough incoming information has arrived.
  • Research on decision-making may also inform treatment of patients with other disorders, such as substance abuse and addiction. Indeed, addiction is perhaps a prime example of how decision-making can go very wrong.
  • In real-world decisions, you’re combining lots of different pieces of information – your sensory signals, your internal knowledge about what’s rewarding, what’s risky.
  • Even in the simplest, very earliest stage we’re looking at, where the animals are just making voluntary movements, they’re deciding when to make a movement to harvest a reward
Javier E

Daniel Kahneman | Profile on TED.com - 1 views

  • rather than stating the optimal, rational answer, as an economist of the time might have, they quantified how most real people, consistently, make a less-rational choice. Their work treated economics not as a perfect or self-correcting machine, but as a system prey to quirks of human perception. The field of behavioral economics was born.
  • He recalls his partnership with Tversky and calls for a new form of academic cooperation, marked not by turf battles but by "adversarial collaboration," a good-faith effort by unlike minds to conduct joint research, critiquing each other in the service of an ideal of truth to which both can contribute.
Javier E

Teachers - Will We Ever Learn? - NYTimes.com - 0 views

  • America’s overall performance in K-12 education remains stubbornly mediocre.
  • The debate over school reform has become a false polarization
  • teaching is a complex activity that is hard to direct and improve from afar. The factory model is appropriate to simple work that is easy to standardize; it is ill suited to disciplines like teaching that require considerable skill and discretion.
  • In the nations that lead the international rankings — Singapore, Japan, South Korea, Finland, Canada — teachers are drawn from the top third of college graduates, rather than the bottom 60 percent as is the case in the United States. Training in these countries is more rigorous, more tied to classroom practice and more often financed by the government than in America. There are also many fewer teacher-training institutions, with much higher standards.
  • By these criteria, American education is a failed profession. There is no widely agreed-upon knowledge base, training is brief or nonexistent, the criteria for passing licensing exams are much lower than in other fields, and there is little continuous professional guidance. It is not surprising, then, that researchers find wide variation in teaching skills across classrooms; in the absence of a system devoted to developing consistent expertise, we have teachers essentially winging it as they go along, with predictably uneven results.
  • Teaching requires a professional model, like we have in medicine, law, engineering, accounting, architecture and many other fields. In these professions, consistency of quality is created less by holding individual practitioners accountable and more by building a body of knowledge, carefully training people in that knowledge, requiring them to show expertise before they become licensed, and then using their professions’ standards to guide their work.
  • Teachers in leading nations’ schools also teach much less than ours do. High school teachers provide 1,080 hours per year of instruction in America, compared with fewer than 600 in South Korea and Japan, where the balance of teachers’ time is spent collaboratively on developing and refining lesson plans
  • These countries also have much stronger welfare states; by providing more support for students’ social, psychological and physical needs, they make it easier for teachers to focus on their academic needs.
  • These elements create a virtuous cycle: strong academic performance leads to schools with greater autonomy and more public financing, which in turn makes education an attractive profession for talented people.
  • In America, both major teachers’ unions and the organization representing state education officials have, in the past year, called for raising the bar for entering teachers; one of the unions, the American Federation of Teachers, advocates a “bar exam.”
  • Ideally the exam should not be a one-time paper-and-pencil test, like legal bar exams, but a phased set of milestones to be attained over the first few years of teaching. Akin to medical boards, they would require prospective teachers to demonstrate subject and pedagogical knowledge — as well as actual teaching skill.
  • We let doctors operate, pilots fly, and engineers build because their fields have developed effective ways of certifying that they can do these things. Teaching, on the whole, lacks this specialized knowledge base; teachers teach based mostly on what they have picked up from experience and from their colleagues.
  • other fields spend 5 percent to 15 percent of their budgets on research and development, while in education, it is around 0.25 percent
  • Education-school researchers publish for fellow academics; teachers develop practical knowledge but do not evaluate or share it; commercial curriculum designers make what districts and states will buy, with little regard for quality.
  • Early- to mid-career teachers need time to collaborate and explore new directions — having mastered the basics, this is the stage when they can refine their skills. The system should reward master teachers with salaries commensurate with leading professionals in other fields.
  • research suggests that the labels don’t matter — there are good and bad programs of all types, including university-based ones. The best programs draw people who majored as undergraduates in the subjects they wanted to teach; focus on extensive clinical practice rather than on classroom theory; are selective in choosing their applicants rather than treating students as a revenue stream; and use data about how their students fare as teachers to assess and revise their practice.
Javier E

Great Scientists Don't Need Math - WSJ - 0 views

  • Without advanced math, how can you do serious work in the sciences? Well, I have a professional secret to share: Many of the most successful scientists in the world today are mathematically no more than semiliterate.
  • I was reassured by the discovery that superior mathematical ability is similar to fluency in foreign languages. I might have become fluent with more effort and sessions talking with the natives, but being swept up with field and laboratory research, I advanced only by a small amount.
  • Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition.
  • exceptional mathematical fluency is required in only a few disciplines, such as particle physics, astrophysics and information theory
  • When something new is encountered, the follow-up steps usually require mathematical and statistical methods to move the analysis forward. If that step proves too technically difficult for the person who made the discovery, a mathematician or statistician can be added as a collaborator
  • Ideas in science emerge most readily when some part of the world is studied for its own sake. They follow from thorough, well-organized knowledge of all that is known or can be imagined of real entities and processes within that fragment of existence
  • Ramped up and disciplined, fantasies are the fountainhead of all creative thinking. Newton dreamed, Darwin dreamed, you dream. The images evoked are at first vague. They may shift in form and fade in and out. They grow a bit firmer when sketched as diagrams on pads of paper, and they take on life as real examples are sought and found.
  • Over the years, I have co-written many papers with mathematicians and statisticians, so I can offer the following principle with confidence. Call it Wilson's Principle No. 1: It is far easier for scientists to acquire needed collaboration from mathematicians and statisticians than it is for mathematicians and statisticians to find scientists able to make use of their equations.
  • If your level of mathematical competence is low, plan to raise it, but meanwhile, know that you can do outstanding scientific work with what you have. Think twice, though, about specializing in fields that require a close alternation of experiment and quantitative analysis. These include most of physics and chemistry, as well as a few specialties in molecular biology.
  • Newton invented calculus in order to give substance to his imagination
  • Darwin had little or no mathematical ability, but with the masses of information he had accumulated, he was able to conceive a process to which mathematics was later applied.
  • For aspiring scientists, a key first step is to find a subject that interests them deeply and focus on it. In doing so, they should keep in mind Wilson's Principle No. 2: For every scientist, there exists a discipline for which his or her level of mathematical competence is enough to achieve excellence.
charlottedonoho

Who's to blame when fake science gets published? - 1 views

  • The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn't going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.
  • When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.
  • But in May, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating "irregularities" - statistical quirks and troubling patterns - that strongly implied that the whole LaCour/Green study was based upon made-up data.
  • The data hit the fan, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he'd really done was help LaCour write up the findings.
  • Science magazine didn't shoulder any blame, either. In a statement, editor in chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: "No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud."
  • This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his adviser's eyes - or vice versa. And if close collaborators aren't going to catch the problem, it's no surprise that outside reviewers dragooned into critiquing the research for a journal won't catch it either. A modern science article rests on a foundation of trust.
  • If the process can't catch such obvious fraud - a hoax the perpetrators probably thought wouldn't work - it's no wonder that so many scientists feel emboldened to sneak a plagiarised passage or two past the gatekeepers.
  • Major peer-review journals tend to accept big, surprising, headline-grabbing results when those are precisely the ones that are most likely to be wrong.
  • Despite the artful passing of the buck by LaCour's senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists - even when they don't take an active part in deception - must recognise that they are ultimately responsible for the culture producing the steady drip-drip-drip of falsification, exaggeration and outright fabrication eroding the discipline they serve.
grayton downing

Retracing Steps | The Scientist Magazine® - 1 views

  • growing body of research has highlighted scientists’ inability to reproduce one another’s results, including a 2012 study that found only 11 percent of “landmark” cancer studies investigated could be independently confirmed.
  • “Some communities have standards requiring raw data to be deposited at or before publication, but the computer code is generally not made available, typically due to the time it takes to prepare it for release,”
  • Sage’s solution? An open-source computational platform, called Synapse, which enables seamless collaboration among geographically dispersed scientific teams—providing them with the tools to share data, source code, and analysis methods on specific research projects or on any of the 10,000 datasets in the organization’s massive data corpus. Key to these collaborations are tools embedded in Synapse that allow for everything from data “freezing” and versioning controls to graphical provenance records—delineating who did what to which dataset, for example.
  • “It was indeed the connecting data framework that held the entire project together,” said Josh Stuart, professor of biomolecular engineering at the University of California, Santa Cruz, who is part of the TCGA-led project.
  • “It provides a framework for the science to be extended upon, instead of publication as a finite endpoint for research,”
katedriscoll

Making Sense of the World, Several Senses at a Time - Scientific American - 0 views

  • Our five senses–sight, hearing, touch, taste and smell–seem to operate independently, as five distinct modes of perceiving the world. In reality, however, they collaborate closely to enable the mind to better understand its surroundings. We can become aware of this collaboration under special circumstances.
  • In some cases, a sense may covertly influence the one we think is dominant. When visual information clashes with that from sound, sensory crosstalk can cause what we see to alter what we hear. When one sense drops out, another can pick up the slack.
  • People with synesthesia have a particularly curious cross wiring of the senses, in which activating one sense spontaneously triggers another.
  • During speech perception, our brain integrates information from our ears with that from our eyes. Because this integration happens early in the perceptual process, visual cues influence what we think we are hearing. That is, what we see can actually shape what we "hear."
  • When visual information clashes with that from sound, sensory crosstalk can cause what we see to alter what we hear
  • Perceptual systems, particularly smell, connect with memory and emotion centers to enable sensory cues to trigger feelings and recollections, and to be incorporated within them
  • What might life be like if you had synesthesia? Here is one artist's rendition of the experience of a synaesthete. In this surreal world, music records smell like different colors, foods taste like specific noises, and sound comes in all varieties of textures and shapes
  • This article describes how our senses work together and how we piece together the small amounts of information we take in to create an image.
Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
peterconnelly

Virtual meetings can crush creativity, new study finds - CNN - 0 views

  • (CNN) Collaboration has been behind some of humanity's greatest achievements -- the Beatles' biggest hits, putting a man on the moon, the smartphone.
  • Yes, according to new research published Wednesday that found it's easier to come up with creative ideas in person.
  • "We initially started the project (in 2016) because we heard from managers and executives that innovation was one of the biggest challenges with video interaction. And I'll admit, I was initially skeptical," said Melanie Brucks
  • "When we innovate, we have to depart from existing solutions and come up with new ideas by drawing broadly from our knowledge. Coming up with alternative ways to use known objects requires the same psychological process," she explained.
  • Researchers also used eye-tracking software, which found that virtual participants spent more time looking directly at their partner, as opposed to gazing around the room.
  • "This visual focus on the screen narrows cognition. In other words, people are more focused when interacting on video, which hurts the broad, expansive idea generation process," Brucks said.
  • "Objects in the room can prompt new associations easier than trying to generate them all internally,"
  • "The field study shows that the negative effects of videoconferencing on idea generation is not limited to simplistic tasks and can play out in more complicated and high-tech brainstorming sessions as well," she said.
  • The study found that videoconferencing didn't hinder all collaborative work
  • However, she said it was a mistake to conclude that creativity and videoconferencing are incompatible.
  • "Perhaps many of us make friends faster in person than over Zoom, and creativity flourishes when we're relaxed. But when Zooming from home, people are probably more relaxed than when in an experiment," she added.
  • "I wouldn't want to see a company double their in-person meetings hoping to improve their innovation, if this also means doubling the commute time resulting in less happy -- and perhaps less creative -- employees."
Javier E

Pupils at elite Welsh school to get lessons in climate change - 0 views

  • For the first time in 50 years, the directors of the IB are collaborating with schools to create an updated version of the qualification taken by 16 to 19-year-olds as an alternative to A-levels
  • A cohort of 20 pupils at UWC will drop a third of the traditional subjects usually studied within the IB. Instead they will spend 300 hours, over two years of study, on the new areas. Forms of assessment have yet to be finalised but there will be no exams on these subjects.
  • Learning will be project-based, collaborative and designed to tackle “multiple global crises” around the world. IB says it is creating the new qualification because there is a growing disconnect between the education children are receiving and the education they need.
  • Olli-Pekka Heinonen, director-general of the IB, told The Times that teenagers taking the new diploma would be “uniquely empowered” on leaving school.
  • Traditional assessment often led to pupils forgetting what they had learnt once exams were over, he said. “The aim of this type of assessment is to strengthen deep learning and understanding, something you learn that stays with you and is part of you. We’re experimenting with non-exam assessment and finding the best ways to evaluate.
  • “Biodiversity, food, migration and energy are meaningful and attractive to young people. They’re appealing and relevant.” “We’re looking at these areas through the lens of systems leadership — ie, how it’s possible to make change happen.”
  • The school hosts pupils from 80 different countries and charges fees of £37,000 a year. It is set in St Donat’s Castle, in 122 acres; its grounds include woodland, farmland, a valley and its own seafront.
  • Naheed Bardai, principal of UWC Atlantic, said he felt dissatisfied with the state of education in the world today as pupils should be prepared to become leaders of organisations and government in the 2040s and 2050s.
  • He said the new qualification would help them understand the root causes of problems but also become compassionate world leaders that can create “transformative solutions”.
  • “Our core policies are peace, sustainability and experiential learning but there’s very little education for young people at the intersection of problems, particularly where the human meets the natural world,”
  • “We need people who can take constructive action as inequality is growing at a tremendous rate.”
  • Bardai said the school had been given a free hand in curriculum writing and was looking at more radical forms of assessment. This may include interviews, peer assessment and portfolios.
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • Most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • He said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias.”
  • Even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
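The batting-average puzzle from Nisbett's survey is easy to check by simulation. A rough sketch (the .300 true rate, sample sizes, and trial counts are illustrative assumptions, not figures from the article):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def observed_average(true_rate: float, at_bats: int) -> float:
    """Simulate the batting average a hitter shows over a given number of at bats."""
    hits = sum(random.random() < true_rate for _ in range(at_bats))
    return hits / at_bats

# Many simulated hitters, all with a true .300 ability.
early_season = [observed_average(0.300, 20) for _ in range(5_000)]   # ~20 at bats
full_season = [observed_average(0.300, 500) for _ in range(5_000)]   # ~500 at bats

# Small samples routinely throw up .450+ "phenoms"; large samples never do.
print("best early-season average:", max(early_season))
print("best full-season average:", max(full_season))
```

The early-season maximum reliably tops .450 while the full-season maximum stays near .300, which is exactly the law-of-large-numbers point the survey tests.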
karenmcgregor

Solving the Puzzle: Network Design Assignment Helpers Unleashed - 0 views

Welcome to https://www.computernetworkassignmenthelp.com, where we unravel the complexities of network design assignments and bring you a team of expert network design assignment helpers ready to a...

#networkdesignassignmenthelper #assignmenthelpservices #onlinelearning #elearning #student #education technology knowledge education

started by karenmcgregor on 08 Dec 23 no follow-up yet
Javier E

Forecasting Fox - NYTimes.com - 0 views

  • Intelligence Advanced Research Projects Activity, to hold a forecasting tournament to see if competition could spur better predictions.
  • In the fall of 2011, the agency asked a series of short-term questions about foreign affairs, such as whether certain countries will leave the euro, whether North Korea will re-enter arms talks, or whether Vladimir Putin and Dmitri Medvedev would switch jobs. They hired a consulting firm to run an experimental control group against which the competitors could be benchmarked.
  • Tetlock and his wife, the decision scientist Barbara Mellers, helped form a Penn/Berkeley team, which bested the competition and surpassed the benchmarks by 60 percent in Year 1. How did they make such accurate predictions? In the first place, they identified better forecasters. It turns out you can give people tests that usefully measure how open-minded they are.
  • ...5 more annotations...
  • The teams with training that engaged in probabilistic thinking performed best. The training involved learning some of the lessons included in Daniel Kahneman’s great work, “Thinking, Fast and Slow.” For example, they were taught to alternate between taking the inside view and the outside view.
  • Most important, participants were taught to turn hunches into probabilities. Then they had online discussions with members of their team adjusting the probabilities, as often as every day
  • In these discussions, hedgehogs disappeared and foxes prospered. That is, having grand theories about, say, the nature of modern China was not useful. Being able to look at a narrow question from many vantage points and quickly readjust the probabilities was tremendously useful.
  • In the second year of the tournament, Tetlock and collaborators skimmed off the top 2 percent of forecasters across experimental conditions, identifying 60 top performers and randomly assigning them into five teams of 12 each. These “super forecasters” also delivered a far-above-average performance in Year 2. Apparently, forecasting skill cannot only be taught, it can be replicated.
  • He believes that this kind of process may help depolarize politics. If you take Republicans and Democrats and ask them to make a series of narrow predictions, they’ll have to put aside their grand notions and think clearly about the imminently falsifiable.
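Turning hunches into numbers also makes forecasters scorable. Tournaments of this kind are typically graded with the Brier score, the mean squared error between stated probabilities and what actually happened (the column does not name the scoring rule, so take this as an illustrative sketch):

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0.0 is perfect; hedging every question at 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical questions that resolved yes, yes, no.
sharp = brier_score([0.9, 0.8, 0.1], [1, 1, 0])    # commits to real probabilities
hedged = brier_score([0.5, 0.5, 0.5], [1, 1, 0])   # refuses to go beyond 50/50

print(sharp)   # ≈ 0.02 (lower is better)
print(hedged)  # 0.25
```

A rule like this rewards the fox's habit of committing to a probability and readjusting it, and penalizes both overconfidence and permanent 50/50 hedging.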
Javier E

What's the secret to learning a second language? - Salon.com - 0 views

  • “Arabic is a language of memorization,” he said. “You just have to drill the words into your head, which unfortunately takes a lot of time.” He thought, “How can I maximize the number of words I learn in the minimum amount of time?”
  • Siebert started studying the science of memory and second-language acquisition and found two concepts that went hand in hand to make learning easier: selective learning and spaced repetition. With selective learning, you spend more time on the things you don’t know, rather than on the things you already do
  • Siebert designed his software to use spaced repetition. If you get cup right, the program will make the interval between seeing the word cup longer and longer, but it will cycle cup back in just when you’re about to forget it. If you’ve forgotten cup entirely, the cycle starts again. This system moves the words from your brain’s short-term memory into long-term memory and maximizes the number of words you can learn effectively in a period. You don’t have to cram
  • ...8 more annotations...
  • ARABIC IS ONE of the languages the U.S. Department of State dubs “extremely hard.” Chinese, Japanese, and Korean are the others. These languages’ structures are vastly different from that of English, and they are memorization-driven.
  • To help meet its language-learning goals, in 2003 the Department of Defense established the University of Maryland Center for Advanced Study of Language.
  • MICHAEL GEISLER, a vice president at Middlebury College, which runs the foremost language-immersion school in the country, was blunt: “The drill-and-kill approach we used 20 years ago doesn’t work.” He added, “The typical approach that most programs take these days—Rosetta Stone is one example—is scripted dialogue and picture association. You have a picture of the Eiffel Tower, and you have a sentence to go with it. But that’s not going to teach you the language.”
  • According to Geisler, you need four things to learn a language. First, you have to use it. Second, you have to use it for a purpose. Research shows that doing something while learning a language—preparing a cooking demonstration, creating an art project, putting on a play—stimulates an exchange of meaning that goes beyond using the language for the sake of learning it.Third, you have to use the language in context. This is where Geisler says all programs have fallen short.
  • Fourth, you have to use language in interaction with others. In a 2009 study led by Andrew Meltzoff at the University of Washington, researchers found that young children easily learned a second language from live human interaction while playing and reading books. But audio and DVD approaches with the same material, without the live interaction, fostered no learning progress at all. Two people in conversation constantly give each other feedback that can be used to make changes in how they respond.
  • “Our research shows that the ideal model is a blended one,” one that blends technology and a teacher. “Our latest research shows that with the proper use of technology and cognitive neuroscience, we can make language learning more efficient.”
  • The school released its first two online programs, for French and Spanish, last year. The new courses use computer avatars for virtual collaboration; rich video of authentic, unscripted conversations with native speakers; and 3-D role-playing games in which students explore life in a city square, acting as servers and taking orders from customers in a café setting. The goal at the end of the day, as Geisler put it, is for you to “actually be able to interact with a native speaker in his own language and have him understand you, understand him, and, critically, negotiate when you don’t understand what he is saying.” 
  • The program includes the usual vocabulary lists and lessons in how to conjugate verbs, but students are also consistently immersed in images, audio, and video of people from different countries speaking with different accents. Access to actual teachers is another critical component.
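The scheduling rule Siebert describes (lengthen the gap each time a word is recalled, restart it when the word is forgotten) can be sketched in a few lines. This is a minimal illustration, not his actual software; the doubling factor and day-based scheduling are assumed stand-ins:

```python
from dataclasses import dataclass

@dataclass
class Card:
    word: str
    interval: int = 1  # days to wait before the next review
    due: int = 0       # day index on which the card is next shown

def review(card: Card, today: int, correct: bool) -> None:
    """Reschedule a card after a review, spaced-repetition style:
    each success pushes the word further out; a miss restarts the cycle."""
    if correct:
        card.interval *= 2  # recalled: lengthen the gap
    else:
        card.interval = 1   # forgotten: start the cycle again
    card.due = today + card.interval

card = Card("cup")
review(card, today=0, correct=True)   # next seen on day 2
review(card, today=2, correct=True)   # next seen on day 6
review(card, today=6, correct=False)  # forgotten, so back the next day
```

Selective learning falls out of the same loop: words you keep missing stay on short intervals and therefore absorb most of the review time, while words you know drift toward ever-longer gaps.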
Javier E

[Six Questions] | Astra Taylor on The People's Platform: Taking Back Power and Culture ... - 1 views

  • Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource.
  • Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story.
  • why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.
  • ...15 more annotations...
  • How do we reconcile our enjoyment of social media even as we understand that the corporations who control them aren’t always acting in our best interests?
  • That was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous
  • the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors
  • I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse
  • these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.
  • I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently
  • social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?
  • That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.)
  • just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem.
  • What steps can we take to encourage better representation of independent and non-commercial media? We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.
  • You advocate for greater government regulation of the Internet. Why is this important?
  • I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight.
  • I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real.
  • there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content
  • So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
Javier E

Social Media and the Devolution of Friendship: Full Essay (Pts I & II) » Cybo... - 1 views

  • social networking sites create pressure to put time and effort into tending weak ties, and how it can be impossible to keep up with them all. Personally, I also find it difficult to keep up with my strong ties. I’m a great “pick up where we left off” friend, as are most of the people closest to me (makes sense, right?). I’m decidedly sub-awesome, however, at being in constant contact with more than a few people at a time.
  • the devolution of friendship. As I explain over the course of this essay, I link the devolution of friendship to—but do not “blame” it on—the affordances of various social networking platforms, especially (but not exclusively) so-called “frictionless sharing” features.
  • I’m using the word here in the same way that people use it to talk about the devolution of health care. One example of devolution of health care is some outpatient surgeries: patients are allowed to go home after their operations, but they still require a good deal of post-operative care such as changing bandages, irrigating wounds, administering medications, etc. Whereas before these patients would stay in the hospital and nurses would perform the care-labor necessary for their recoveries, patients must now find their own caregivers (usually family members or friends; sometimes themselves) to perform free care-labor. In this context, devolution marks the shift of labor and responsibility away from the medical establishment and onto the patient; within the patient-medical establishment collaboration, the patient must now provide a greater portion of the necessary work. Similarly, in some ways, we now expect our friends to do a greater portion of the work of being friends with us.
  • ...13 more annotations...
  • Through social media, “sharing with friends” is rationalized to the point of relentless efficiency. The current apex of such rationalization is frictionless sharing: we no longer need to perform the labor of telling our individual friends about what we read online, or of copy-pasting links and emailing them to “the list,” or of clicking a button for one-step posting of links on our Facebook walls. With frictionless sharing, all we have to do is look, or listen; what we’ve read or watched or listened to is then “shared” or “scrobbled” to our Facebook, Twitter, Tumblr, or whatever other online profiles. Whether we share content actively or passively, however, we feel as though we’ve done our half of the friendship-labor by ‘pushing’ the information to our walls, streams, and tumblelogs. It’s then up to our friends to perform their halves of the friendship-labor by ‘pulling’ the information we share from those platforms.
  • We’re busy people; we like the idea of making one announcement on Facebook and being done with it, rather than having to repeat the same story over and over again to different friends individually. We also like not always having to think about which friends might like which stories or songs; we like the idea of sharing with all of our friends at once, and then letting them sort out amongst themselves who is and isn’t interested. Though social media can create burdensome expectations to keep up with strong ties, weak ties, and everyone in between, social media platforms can also be very efficient. Using the same moment of friendship-labor to tend multiple friendships at once kills more birds with fewer stones.
  • sometimes we like the devolution of friendship. When we have to ‘pull’ friendship-content instead of receiving it in a ‘push’, we can pick and choose which content items to pull. We can ignore the baby pictures, or the pet pictures, or the sushi pictures—whatever it is our friends post that we only pretend to care about
  • I’ve been thinking since, however, on what it means to view our friends as “generalized others.” I may now feel less like a “creepy stalker” when I click on a song in someone’s Spotify feed, but I don’t exactly feel ‘shared with’ either. Far as I know, I’ve never been SpotiVaguebooked (or SubSpotified?); I have no reason to think anyone is speaking to me personally as they listen to music, or as they choose not to disable scrobbling (if they make that choice consciously at all). I may have been granted the opportunity to view something, but it doesn’t follow that what I’m viewing has anything to do with me unless I choose to make it about me. Devolved friendship means it’s not up to us to interact with our friends personally; instead it’s now up to our friends to make our generalized broadcasts personal.
  • While I won’t go so far as to say they’re definitely ‘problems,’ there are two major things about devolved friendship that I think are worth noting. The first is the non-uniform rationalization of friendship-labor, and the second is the depersonalization of friendship-labor.
  • In short, “sharing” has become a lot easier and a lot more efficient, but “being shared with” has become much more time-consuming, demanding, and inefficient (especially if we don’t ignore most of our friends most of the time). Given this, expecting our friends to keep up with our social media content isn’t expecting them to meet us halfway; it’s asking them to take on the lion’s share of staying in touch with us. Our jobs (in this role) have gotten easier; our friends’ jobs have gotten harder.
  • The second thing worth noting is that devolved friendship is also depersonalized friendship.
  • Personal interaction doesn’t just happen on Spotify, and since I was hoping Spotify would be the New Porch, I initially found Spotify to be somewhat lonely-making. It’s the mutual awareness of presence that gives companionate silence its warmth, whether in person or across distance. The silence within Spotify’s many sounds, on the other hand, felt more like being on the outside looking in. This isn’t to say that Spotify can’t be social in a more personal way; once I started sending tracks to my friends, a few of them started sending tracks in return. But it took a lot more work to get to that point, which gets back to the devolution of friendship (as I explain below).
  • Within devolved friendship interactions, it takes less effort to be polite while secretly waiting for someone to please just stop talking.
  • When we consider the lopsided rationalization of ‘sharing’ and ‘shared with,’ as well as the depersonalization of frictionless sharing and generalized broadcasting, what becomes clear is this: the social media deck is stacked in such a way as to make being ‘a self’ easier and more rewarding than being ‘a friend.’
  • It’s easy to share, to broadcast, to put our selves and our tastes and our identity performances out into the world for others to consume; what feedback and friendship we get in return comes in response to comparatively little effort and investment from us. It takes a lot more work, however, to do the consumption, to sift through everything all (or even just some) of our friends produce, to do the work of connecting to our friends’ generalized broadcasts so that we can convert their depersonalized shares into meaningful friendship-labor.
  • We may be prosumers of social media, but the reward structures of social media sites encourage us to place greater emphasis on our roles as share-producers—even though many of us probably spend more time consuming shared content than producing it. There’s a reason for this, of course; the content we produce (for free) is what fuels every last ‘Web 2.0’ machine, and its attendant self-centered sociality is the linchpin of the peculiarly Silicon Valley concept of “Social” (something Nathan Jurgenson and I discuss together in greater detail here). It’s not super-rewarding to be one of ten people who “like” your friend’s shared link, but it can feel rewarding to get 10 “likes” on something you’ve shared—even if you have hundreds or thousands of ‘friends.’ Sharing is easy; dealing with all that shared content is hard.
  • I wonder sometimes if the shifts in expectation that accompany devolved friendship don’t migrate across platforms and contexts in ways we don’t always see or acknowledge. Social media affects how we see the world—and how we feel about being seen in the world—even when we’re not engaged directly with social media websites. It’s not a stretch, then, to imagine that the affordances of social media platforms might also affect how we see friendship and our obligations as friends most generally.
Javier E

Quitters Never Win: The Costs of Leaving Social Media - Woodrow Hartzog and Evan Seling... - 2 views

  • Manjoo offers this security-centric path for folks who are anxious about the service being "one of the most intrusive technologies ever built," and believe that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Bottom line: stop tuning in and start dropping out if you suspect that the culture of oversharing, digital narcissism, and, above all, big-data-hungry corporate profiteering will trump privacy settings.
  • Angwin plans on keeping a bare-bones profile. She'll maintain just enough presence to send private messages, review tagged photos, and be easy for readers to find. Others might try similar experiments, perhaps keeping friends, but reducing their communication to banal and innocuous expressions. But, would such disclosures be compelling or sincere enough to retain the technology's utility?
  • The other unattractive option is for social web users to willingly pay for connectivity with extreme publicity.
  • ...9 more annotations...
  • go this route if you believe privacy is dead, but find social networking too good to miss out on.
  • While we should be attuned to constraints and their consequences, there are at least four problems with conceptualizing the social media user's dilemma as a version of "if you can't stand the heat, get out of the kitchen".
  • First, the efficacy of abandoning social media can be questioned when others are free to share information about you on a platform long after you've left.
  • Second, while abandoning a single social technology might seem easy, this "love it or leave it" strategy -- which demands extreme caution and foresight from users and punishes them for their naivete -- isn't sustainable without great cost in the aggregate. If we look past the consequences of opting out of a specific service (like Facebook), we find a disconcerting and more far-reaching possibility: behavior that justifies a never-ending strategy of abandoning every social technology that threatens privacy -- a can being kicked down the road in perpetuity without us resolving the hard question of whether a satisfying balance between protection and publicity can be found online
  • if your current social network has no obligation to respect the obscurity of your information, what justifies believing other companies will continue to be trustworthy over time?
  • Sticking with the opt-out procedure turns digital life into a paranoid game of whack-a-mole where the goal is to stay ahead of the crushing mallet. Unfortunately, this path of perilously transferring risk from one medium to another is the direction we're headed if social media users can't make reasonable decisions based on the current context of obscurity, but instead are asked to assume all online social interaction can or will eventually lose its obscurity protection.
  • The fourth problem with the "leave if you're unhappy" ethos is that it is overly individualistic. If a critical mass participates in the "Opt-Out Revolution," what would happen to the struggling, the lonely, the curious, the caring, and the collaborative if the social web went dark?
  • Our point is that there is a middle ground between reclusion and widespread publicity, and the reduction of user options to quitting or coping, which are both problematic, need not be inevitable, especially when we can continue exploring ways to alleviate the user burden of retreat and the societal cost of a dark social web.
  • it is easy to presume that "even if you unfriend everybody on Facebook, and you never join Twitter, and you don't have a LinkedIn profile or an About.me page or much else in the way of online presence, you're still going to end up being mapped and charted and slotted in to your rightful place in the global social network that is life." But so long as it remains possible to create obscurity through privacy enhancing technology, effective regulation, contextually appropriate privacy settings, circumspect behavior, and a clear understanding of how our data can be accessed and processed, that fatalism isn't justified.
kushnerha

Our Natural History, Endangered - The New York Times - 0 views

  • Worse, this rumored dustiness reinforces the widespread notion that natural history museums are about the past — just a place to display bugs and brontosaurs. Visitors may go there to be entertained, or even awe-struck, but they are often completely unaware that curators behind the scenes are conducting research into climate change, species extinction and other pressing concerns of our day. That lack of awareness is one reason these museums are now routinely being pushed to the brink. Even the National Science Foundation, long a stalwart of federal support for these museums, announced this month that it was suspending funding for natural history collections as it conducts a yearlong budget review.
  • It gets worse: A new Republican governor last year shut down the renowned Illinois State Museum, ostensibly to save the state $4.8 million a year. The museum pointed out that this would actually cost $33 million a year in lost tourism revenue and an untold amount in grants. But the closing went through, endangering a trove of 10 million artifacts, from mastodon bones to Native American tools, collected over 138 years, and now just languishing in the shuttered building. Eric Grimm, the museum’s director of science, characterized it as an act of “political corruption and malevolent anti-intellectualism.”
  • Other museums have survived by shifting their focus from research to something like entertainment.
  • ...9 more annotations...
  • The pandering can be insidious, too. The Perot Museum of Nature and Science in Dallas, which treats visitors to a virtual ride down a hydraulic fracturing well, recently made headlines for avoiding explicit references to climate change. Other museums omit scientific information on evolution. “We don’t need people to come in here and reject us,”
  • Even the best natural history museums have been obliged to reduce their scientific staff in the face of government cutbacks and the decline in donations following the 2008 economic crash. They still have their collections, and their public still comes through the door. But they no longer employ enough scientists to interpret those collections adequately for visitors or the world at large. Hence the journal Nature last year characterized natural history collections as “the endangered dead.”
  • these collections are less about the past than about our world and how it is changing. Sediment cores like the ones at the Illinois State Museum, for instance, may not sound terribly important, but the pollen in them reveals how past climates changed, what species lived and died as a result, and thus how our own future may be rapidly unfolding.
  • Natural history museums are so focused on the future that they have for centuries routinely preserved such specimens to answer questions they didn’t yet know how to ask, requiring methodologies that had not yet been invented, to make discoveries that would have been, for the original collectors, inconceivable.
  • THE people who first put gigantic mammoth and mastodon specimens in museums, for instance, did so mainly out of dumb wonderment. But those specimens soon led to the stunning 18th-century recognition that parts of God’s creation could become extinct. The heretical idea of extinction then became an essential preamble to Darwin, whose understanding of evolution by natural selection depended in turn on the detailed study of barnacle specimens collected and preserved over long periods and for no particular reason. Today, those same specimens continue to answer new questions with the help of genome sequencing, CT scans, stable isotope analysis and other technologies.
  • These museums also play a critical role in protecting what’s left of the natural world, in part because they often combine biological and botanical knowledge with broad anthropological experience.
  • “You have no nationality. You are scientists. You speak for nature.” Just since 1999, according to the Field Museum, inventories by its curators and their collaborators have been a key factor in the protection of 26.6 million acres of wilderness, mainly in the headwaters of the Amazon.
  • It may be optimistic to say that natural history museums have saved the world. It may even be too late for that. But they provide one other critical service that can save us, and our sense of wonder: Almost everybody in this country — even children in Denver who have never been to the Rocky Mountains, or people in San Francisco who have never walked on a Pacific Ocean beach — goes to a natural history museum at some point in his life, and these visits influence us in deep and unpredictable ways.
  • we dimly begin to understand the passage of time and cultures, and how our own species fits amid millions of others. We start to understand the strangeness and splendor of the only planet where we will ever have the great pleasure of living.