
TOK Friends / Group items tagged: academy


carolinewren

National Secular Society - Forced academisation could impose religion on pupils - 0 views

  • The National Secular Society has warned that proposals to force struggling local authority schools in England to become academies could increase the proportion of faith based schools.
  • The Government's new Education and Adoption Bill will force councils and governing bodies to actively progress the conversion of failing schools into academies. Education Secretary Nicky Morgan says the tough new measures intended to turn around failing schools will "sweep away bureaucratic and legal loopholes" that previously prevented schools from being improved.
  • without adequate safeguards, schools joining faith academy chains could acquire a religious designation or faith ethos upon conversion with no opportunity for parents to object or even be consulted.
  • ...4 more annotations...
  • "Plans to scrap the requirement for academy sponsors to consult with school communities, including parents, could result in a faith based education being imposed on parents and young people against their wishes.
  • "Given England's religiously diverse population – around half of which self-identify as non-religious, any increase in the proportion of religiously designated or faith ethos schools is likely to impede parents' ability to secure an education that doesn't run counter to their beliefs.
  • "Forcing a religious ethos on young people through their education would in many cases disrespect their parents' wishes and be at odds with principles of fairness and equality.
  • the academisation of local authority controlled schools would increase the risk of faith-based organisations gaining greater control over school curriculums, admissions arrangements and employment practices – leading to even greater discrimination in our education system than already exists.
Javier E

The New History Wars - The Atlantic - 0 views

  • Critical historians who thought they were winning the fight for control within the academy now face dire retaliation from outside the academy. The dizzying turn from seeming triumph in 2020 to imminent threat in 2022 has unnerved many practitioners of the new history. Against this background, they did not welcome it when their association’s president suggested that maybe their opponents had a smidgen of a point.
  • a background reality of the humanities in the contemporary academy: a struggle over who is entitled to speak about what. Nowhere does this struggle rage more fiercely than in anything to do with the continent of Africa. Who should speak? What may be said? Who will be hired?
  • One obvious escape route from the generational divide in the academy—and the way the different approaches to history, presentist and antiquarian, tend to map onto it—is for some people, especially those on the older and whiter side of the divide, to keep their mouths shut about sensitive issues
  • ...15 more annotations...
  • The political and methodological stresses within the historical profession are intensified by economic troubles. For a long time, but especially since the economic crisis of 2008, university students have turned away from the humanities, preferring to major in fields that seem to offer more certain and lucrative employment. Consequently, academic jobs in the humanities and especially in history have become radically more precarious for younger faculty—even as universities have sought to meet diversity goals in their next-generation hiring by expanding offerings in history-adjacent specialties, such as gender and ethnic studies.
  • The result has produced a generational divide. Younger scholars feel oppressed and exploited by universities pressing them to do more labor for worse pay with less security than their elders; older scholars feel that overeager juniors are poised to pounce on the least infraction as an occasion to end an elder’s career and seize a job opening for themselves. Add racial difference as an accelerant, and what was intended as an interesting methodological discussion in a faculty newsletter can explode into a national culture war.
  • One of the greatest American Africanists was the late Philip Curtin. He wrote one of the first attempts to tally the exact number of persons trafficked by the transatlantic slave trade. Upon publication in 1972, his book was acclaimed as a truly pioneering work of history. By 1995, however, he was moved to protest against trends in the discipline at that time in an article in the Chronicle of Higher Education: “I am troubled by increasing evidence of the use of racial criteria in filling faculty posts in the field of African history … This form of intellectual apartheid has been around for several decades, but it appears to have become much more serious in the past few years, to the extent that white scholars trained in African history now have a hard time finding jobs.”
  • Much of academia is governed these days by a joke from the Soviet Union: “If you think it, don’t speak it. If you speak it, don’t write it. If you write it, don’t sign it. But if you do think it, speak it, write it, and sign it—don’t be surprised.”
  • Yet this silence has consequences, too. One of the most unsettling is the displacement of history by mythmaking
  • mythmaking is spreading from “just the movies” to more formal and institutional forms of public memory. If old heroes “must fall,” their disappearance opens voids for new heroes to be inserted in their place—and that insertion sometimes requires that new history be fabricated altogether, the “bad history” that Sweet tried to warn against.
  • If it is not the job of the president of the American Historical Association to confront those questions, then whose is it?
  • Sweet used a play on words—“Is History History?”—for the title of his complacency-shaking essay. But he was asking not whether history is finished, done with, but Is history still history? Is it continuing to do what history is supposed to do? Or is it being annexed for other purposes, ideological rather than historical ones?
  • Advocates of studying the more distant past to disturb and challenge our ideas about the present may accuse their academic rivals of “presentism.”
  • In real life, of course, almost everybody who cares about history believes in a little of each option. But how much of each? What’s the right balance? That’s the kind of thing that historians do argue about, and in the arguing, they have developed some dismissive labels for one another
  • Those who look to the more recent past to guide the future may accuse the other camp of “antiquarianism.”
  • The accusation of presentism hurts because it implies that the historian is sacrificing scholarly objectivity for ideological or political purposes. The accusation of antiquarianism stings because it implies that the historian is burrowing into the dust for no useful purpose at all.
  • In his mind, he was merely reopening one of the most familiar debates in professional history: the debate over why? What is the value of studying the past? To reduce the many available answers to a stark choice: Should we study the more distant past to explore its strangeness—and thereby jolt ourselves out of easy assumptions that the world we know is the only possible one?
  • Or should we study the more recent past to understand how our world came into being—and thereby learn some lessons for shaping the future?
  • The August edition of the association’s monthly magazine featured, as usual, a short essay by the association’s president, James H. Sweet, a professor at the University of Wisconsin at Madison. Within hours of its publication, an outrage volcano erupted on social media. A professor at Cornell vented about the author’s “white gaze.”
Javier E

Politics in the Academy: The Same Old Song - NYTimes.com - 0 views

  • I have been arguing for years that academic activity is distinctive and that it is wrong, both descriptively and normatively, to mix it up with other kinds of activity, especially with politics. In response, I am often asked how a discussion, either in a class or at a conference like this one, could be kept purely academic if the subject matter on the table is itself inherently political. The answer is really quite simple:  Political subject matter could be the starting point of an intellectual inquiry in the course of which the political urgency originally attached to it is replaced by the academic urgency of getting something right, where “right” means accurately described or analyzed or rendered more problematic. (In the academic world not reaching a resolution is a good thing.)
  • The urgency presiding over the occasion was not the urgency of doing something, but of understanding something, and everyone agreed that the project of understanding was an endless one and that the best we could do in two days would be to expose areas of concern that had not yet been recognized or adequately formulated. The comment heard most often at the end of a session was, “You’ve given me a lot to think about.”
  • that’s what we do when we’re doing the job properly — talking (or writing about) issues without ever coming up with a policy proposal or with an argument for supporting a party or a candidate.
  • ...1 more annotation...
  • Why should funds be authorized to bring 29 scholars to San Diego so that they can sit around all day gabbing and eating unhealthful pastry? There is no good answer to that question, nor should there be.
charlottedonoho

Innovation and equity in an age of gene editing | Science | The Guardian - 1 views

  • “A Gathering of Global Thought Leaders to Reach Consensus on the Direction of Biotechnology for the 21st Century”, in Atlanta, coincided with the announcement by the National Academy of Sciences and National Academy of Medicine of an initiative to look into “promising new treatments for disease,” given that “recent experiments to attempt to edit human genes also have raised important questions about the potential risks and ethical concerns of altering the human germline.”
  • On the other hand, there were others in the room, ourselves included, who argue in our work that “promising new treatments for disease” should not be pre-emptively morally centered, as to do so leaves out too many of the ethical issues at stake. As we pursue promising treatments, we should also be asking what we are trying to treat; whether it is best treated biomedically; who is included as funders, patients, donors, and scientists; who is left out; who profits; and whether or not the treatment masks, depoliticizes, or exacerbates political and social inequality.
  • Yesterday’s National Academies framing morally centered “promising new treatments for disease” and pitted “risks and ethical concerns” against those potential treatments.
  • ...2 more annotations...
  • Many of the world’s greatest medical challenges stem from poverty, inequality, discrimination, pollution, and environmental devastation, as critical race, gender, and decolonization scholars have shown us.
  • Let’s be alert to the ways in which risks and ethical issues present themselves at every stage of developing and implementing promising new treatments. Scientists themselves do not have to do this work. That is what social scientists do.
Javier E

Scholarship and Politics - The Case of Noam Chomsky - NYTimes.com - 0 views

  • (1) The academy is a world of its own, complete with rules, protocols, systems of evaluation, recognized achievements, agreed-on goals, a roster of heroes and a list of tasks yet to be done.
  • (2) Academic work proceeds within the confines of that world, within, that is, a professional, not a public, space, although its performance may be, and often is, public.
  • (3) academic work is only tangentially, not essentially, political; politics may attend the formation of academic units and the selection of academic personnel, but political concerns and pressures have no place in the unfolding of academic argument, except as objects of its distinctive forms of attention
  • ...16 more annotations...
  • This is as good as it gets. There is “no evolution in our capacity for language.”
  • The answer given in the first lecture — “What is Language?” — is that we are creatures with language, and that language as a uniquely human biological capacity appeared suddenly and quite late in the evolutionary story, perhaps 75,000 years ago.
  • Chomsky gave three lectures under the general title “What Kind of Creatures are We?”
  • Language, then, does not arise from the social/cultural environment, although the environment provides the stuff or input it works on. That input is “impoverished”; it can’t account for the creativity of language performance, which has its source not in the empirical world, but in an innate ability that is more powerful than the stimuli it utilizes and plays with. It follows that if you want to understand language, you shouldn’t look to linguistic behavior but to the internal mechanism — the Universal Grammar — of which particular linguistic behaviors are a non-exhaustive expression. (The capacity exceeds the empirical resources it might deploy.)
  • In his second lecture (“What Can We Understand?”), Chomsky took up the question of what humans are capable of understanding and his answer, generally, was that we can understand what we can understand, and that means that we can’t understand what is beyond our innate mental capacities
  • This does not mean, he said, that what we can’t understand is not real: “What is mysterious to me is not an argument that it does not exist.” It’s just that while language is powerful and creative, its power and creativity have limits; and since language is thought rather than an addition to or clothing of thought, the limits of language are the limits of what we can fruitfully think about
  • (4) The academic views of a professor are independent of his or her real-world political views; academic disputes don’t track partisan disputes or vice versa; you can’t reason from an academic’s disciplinary views to the positions he or she would take in the public sphere; they are independent variables.
  • These assertions are offered as a counter to what Chomsky sees as the over-optimistic Enlightenment belief — common to many empiricist philosophies — that ours is a “limitless explanatory power” and that “we can do anything.”
  • In the third lecture (“What is the Common Good?”) Chomsky turned from the philosophy of mind and language to political philosophy and the question of what constitutes a truly democratic society
  • He likened dogmatic intellectual structures that interfere with free inquiry to coercive political structures that stifle the individual’s creative independence and fail to encourage humanity’s “richest diversity.”
  • He asserted that any institution marked by domination and hierarchy must rise to the challenge of justifying itself, and if it cannot meet the challenge, it should be dismantled.
  • He contrasted two accounts of democracy: one — associated by him with James Madison — distrusts the “unwashed” populace and puts its faith in representative government where those doing the representing (and the voting and the distributing of goods) constitute a moneyed and propertied elite
  • the other — associated by him with Adam Smith (in one of his moods), J. S. Mill, the 1960s and a tradition of anarchist writing — seeks to expand the franchise and multiply choices in the realms of thought, politics and economics. The impulse of this second, libertarian, strain of democracy, is “to free society from economic or theological guardianship,” and by “theological” Chomsky meant not formal religion as such but any assumed and frozen ideology that blocked inquiry and limited participation. There can’t, in short, be “too much democracy.”
  • It was thought of the highest order performed by a thinker, now 85 years old, who by and large eschewed rhetorical flourishes (he has called his own speaking style “boring” and says he likes it that way) and just did it, where “it” was the patient exploration of deep issues that had been explored before him by a succession of predecessors, fully acknowledged, in a conversation that is forever being continued and forever being replenished.
  • Yes, I said to myself, this is what we — those of us who bought a ticket on this particular train — do; we think about problems and puzzles and try to advance the understanding of them; and we do that kind of thinking because its pleasures are, in a strong sense, athletic and provide for us, at least on occasion, the experience of fully realizing whatever capabilities we might have. And we do it in order to have that experience, and to share it with colleagues and students of like mind, and not to make a moral or political point.
  • The term “master class” is a bit overused, but I feel no hesitation in using it here. It was a master class taught by a master, and if someone were to ask me what exactly is it that academics do, I would point to these lectures and say, simply, here it is, the thing itself.
Javier E

How Memory Works: Interview with Psychologist Daniel L. Schacter | History News Network - 2 views

  • knowledge from a scientific perspective of how human memory works can be instructive to historians.
  • Memory is much more than a simple retrieval system, as Dr. Schacter has demonstrated in his research. Rather, the nature of memory is constructive and influenced by a person’s current state as well as intervening emotions, beliefs, events and other factors since a recalled event.
  • Dr. Schacter is William R. Kenan, Jr. Professor of Psychology at Harvard University. His books include Searching for Memory: The Brain, The Mind, and The Past, and The Seven Sins of Memory: How the Mind Forgets and Remembers, both winners of the American Psychological Association’s William James Book Award, and Forgotten Ideas, Neglected Pioneers: Richard Semon and the Story of Memory. He also has written hundreds of articles on memory and related matters. He was elected a Fellow of the American Academy of Arts and Sciences in 1996 and the National Academy of Sciences in 2013.
  • ...16 more annotations...
  • that memory is not a video recorder [but that] it’s a constructive activity that is in many ways accurate but prone to interesting errors and distortions. It’s the constructive side of memory that is most relevant to historians.
  • Is it the case then that our memories constantly change every time we access them?
  • That certainly can happen depending on how you recount a memory. What you emphasize. What you exaggerate. What you don’t talk about. All of those things will shape and sculpt the memory for future use. Certainly the potential is there.
  • Research on memory shows that the more distant in time the event, the more prone to inaccuracy the memory. There are several experiments when subjects recorded impressions of an event soon afterward, then a year later and then a few years later, and the memory changed. Yes. It’s not that the information is lost but, as the memory weakens, you become more prone to incorporating other kinds of information or mixing up elements of other events. This has been seen, for example, in the study of flashbulb memories. Where were you when Kennedy was shot? Where were you when you heard about 9/11?
  • Isn’t there a tendency to add details or information that may make the story more convincing or interesting later? Yes. That’s more a social function of memory. It may be that you draw on your general knowledge and probable information from your memory in a social context where there may be social demands that lead you to distort the memory.
  • What are the different memory systems?
  • What is the difference between working memory and permanent memory? Working memory is really a temporary memory buffer where you hold onto information, manipulate information, use it, and it’s partly a gateway to long-term memory and also a buffer that you use when you’re retrieving information from long-term memory and that information temporarily resides in working memory, so to speak.
  • Your discussion of the testimony of White House Counsel John Dean about Watergate is illuminating. There was a perception that Dean had a photographic memory and he testified in rich detail about events. Yet later studies of White House tape recordings revealed that he was often inaccurate.
  • He was perceived because of all the detail with which he reported events and the great confidence to be something analogous to a human tape recorder. Yet there was interesting work done by psychologist Ulric Neisser who went back and analyzed what Dean said at the hearings as compared to available information on the White House taping system and basically found many and significant discrepancies between what Dean remembered and what was actually said. He usually had the gist and the meaning and overall significance right, but the exact details were often quite different in his memory than what actually was said.
  • That seems to get into the area of false memories and how they present problems in the legal system. We know from DNA exonerations of people wrongfully convicted of crimes that a large majority of those cases -- one of the more recent estimates is that in the first 250 cases of 2011 DNA exonerations, roughly 70 to 75 percent of those individuals were convicted on the basis of faulty eyewitness memory.
  • One of the interesting recent lines of research that my lab has been involved in over the past few years has been looking at similarities between what goes on between the brain and mind when we remember past events on the one hand and imagine events that might occur in the future or might have occurred in the past. What we have found, particularly with brain scanning studies, is that you get very similar brain networks coming online when you remember past events and imagine future events, for example. Many of the same brain regions or network of structures come online, and this has helped us understand more why, for example, imagining events that might have occurred can be so harmful to memory accuracy because when you imagine, you’re recruiting many of the same brain regions as accessed when you actually remember. So it’s not surprising that some of these imagined events can actually turn into false memories under the right circumstances.
  • One reasonably well accepted distinction involves episodic memory, the memory for personal experience; semantic memory, the memory for general knowledge; and procedural memory, the memory for skills and unconscious forms of memory. Those are three of the major kinds of memory and they all have different neural substrates.
  • One of the points from that Ross Perot study is that his supporters often misremembered what they felt like at the time he reported he had dropped out of the race. The nature of that misremembering depended on their state at the time they were remembering and what decisions they had made about Perot in the interim affected how they reconstructed their earlier memories. Again, that makes nicely the point that our current emotions and current appraisals of a situation can feed back into our reconstruction of the past and sometimes lead us to distort our memories so that they better support our current emotions and our current selves. We’re often using memories to justify what we currently know, believe and feel.
  • memory doesn’t work like a video camera or tape recorder. That is the main point. Our latest thinking on this is the idea that one of the major functions of memory is to support our ability to plan for the future, to imagine the future, and to use our past experiences in a flexible way to simulate different outcomes of events.
  • flexibility of memory is something that makes it useful to support this very important ability to run simulations of future events. But that very flexibility might be something that contributes to some of the memory distortion we talked about. That has been prominent in the last few years in my thinking about the constructive nature of memory.
  • The historian Daniel Aaron told his students “we remember what’s important.” What do you think of that comment? I think that generally holds true. Certainly, again, more important memories tend to be more significant with more emotional arousal and may elicit “deeper processing”, as we call it in cognitive psychology
qkirkpatrick

US science leaders to tackle ethics of gene-editing technology - BuenosAiresHerald.com - 1 views

  • The leading US scientific organization, responding to concerns expressed by scientists and ethicists, has launched an ambitious initiative to recommend guidelines for new genetic technology that has the potential to create "designer babies."
  • The technology, called CRISPR-Cas9, allows scientists to edit virtually any gene they target
  • Although the embryos were not viable and could not have developed into babies, the announcement ignited an outcry from scientists warning that such a step, which could alter human genomes for generations, was just a matter of time.
  • ...2 more annotations...
  • In response, the National Academy of Sciences (NAS) and its Institute of Medicine will convene an international summit this fall where researchers and other experts will "explore the scientific, ethical, and policy issues associated with human gene-editing research," the academies said in a statement
  • It is a step reminiscent of one in 1975, when NAS convened the Asilomar Conference. That led to guidelines and federal regulations of recombinant DNA, the gene-splicing technology that underlay the founding of Genentech and other biotech companies and revolutionized the production of many pharmaceuticals
  • ethics in science of designing own baby.
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it [a one-line illustration of such a bit flip appears after these annotations]
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. [A minimal code sketch of such a state model follows these annotations.]
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • The practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy [a toy illustration of this kind of exhaustive checking follows these annotations]
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
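
The single-bit-flip point in the Toyota excerpt above is easy to make concrete. The sketch below assumes a hypothetical one-byte throttle command (0 = fully closed, 255 = wide open); flipping one high-order bit of an "idle" command turns it into a command for roughly half throttle. The encoding and variable names are invented for illustration and are not Toyota's actual memory layout.

```python
# Hypothetical one-byte throttle command: 0 = fully closed, 255 = wide open.
# The encoding is an illustrative assumption, not Toyota's real layout.
throttle_command = 0b00000000              # idle
corrupted = throttle_command ^ (1 << 7)    # a memory fault flips bit 7
print(corrupted, f"-> about {corrupted / 255:.0%} throttle")  # 128 -> about 50% throttle
```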
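
To make the model-based-design excerpts above concrete, here is a minimal Python sketch of the elevator example: the legal states and transitions are written down as one small declarative table, and the code refuses any action the model does not allow. The state names, action names, and table are assumptions invented for illustration; Bantegnie's tools work from graphical models and generate the code, which this sketch does not attempt.

```python
# Minimal sketch of the elevator "model": every rule lives in this one table,
# so it can be read, reviewed, or checked without tracing imperative logic.
# States, actions, and transitions are illustrative assumptions.
ALLOWED_TRANSITIONS = {
    "door_open":   {"close_door": "door_closed"},
    "door_closed": {"start_moving": "moving", "open_door": "door_open"},
    "moving":      {"stop": "door_closed"},
}

class Elevator:
    def __init__(self, state="door_open"):
        self.state = state

    def do(self, action):
        """Apply an action, refusing anything the model does not allow."""
        try:
            self.state = ALLOWED_TRANSITIONS[self.state][action]
        except KeyError:
            raise ValueError(f"{action!r} is not allowed in state {self.state!r}")
        return self.state

if __name__ == "__main__":
    e = Elevator()
    for step in ["close_door", "start_moving", "stop", "open_door"]:
        print(step, "->", e.do(step))
    # Calling e.do("open_door") while in the "moving" state would raise:
    # the table simply contains no way to open the door while moving.
```

Reading the table alone shows what the excerpt describes: the only way to get the elevator moving is to close the door first, and the only way to get the door open again is to stop.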
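
The TLA+ excerpts describe writing a specification and having a computer check it exhaustively rather than by sampling test cases. TLA+ is its own language, so the sketch below is only an analogy in Python: it enumerates every reachable state of a toy door-and-motor model and tests one invariant ("never moving with the door open") in all of them. The model, the invariant, and the breadth-first search are assumptions made up for illustration; they are not how TLC, the TLA+ model checker, is implemented.

```python
from collections import deque

# Toy analogy for exhaustive checking: system state is (door, moving).
# next_states() lists every legal successor; check() visits every reachable
# state breadth-first and tests the invariant in each one.
def next_states(state):
    door, moving = state
    succ = []
    if door == "open" and not moving:
        succ.append(("closed", False))   # close the door
    if door == "closed" and not moving:
        succ.append(("closed", True))    # start moving
        succ.append(("open", False))     # open the door
    if moving:
        succ.append((door, False))       # stop
    return succ

def invariant(state):
    door, moving = state
    return not (moving and door == "open")   # never moving with the door open

def check(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return f"invariant violated in {state}"
        for s in next_states(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return f"invariant holds in all {len(seen)} reachable states"

if __name__ == "__main__":
    print(check(("open", False)))   # this tiny model has only 3 reachable states
```

A real checker explores astronomically larger state spaces and can verify temporal properties as well, but the principle the excerpt describes is the same: the model is explored exhaustively before any production code exists.
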
katherineharron

Lawmakers around the nation are proposing bills for -- and against -- vaccinations - CNN - 0 views

  • At a time when almost everything is politicized, vaccination has planted itself squarely on the national stage.
  • On one side of the debate are parents who are rebelling against settled science and calling on states to broaden vaccine exemptions. They cite their faith or believe vaccines pose danger to their children, even though no major religion opposes them and claims of a link between vaccines and autism have long been debunked.
  • "I won't be surprised if we see many pro-vaccine bills this year," said Dr. Sean O'Leary, a member of the American Academy of Pediatrics Committee on Infectious Diseases. "The measles outbreaks were really a wake-up call, showing legislators that maintaining high vaccination rates is not just a theoretical goal."
  • ...10 more annotations...
  • An overwhelming majority of American adults (88%) say the benefits of the measles, mumps and rubella (MMR) vaccine outweigh the risks, according to a new Pew Research Center survey. And last year, 14 states proposed eliminating religious exemptions for vaccines -- a marked increase from years past, according to the American Academy of Pediatrics.
  • "When you choose not to vaccinate, you're putting your child at risk of disease, but you're also putting other people at risk," O'Leary said.
  • "We need to have the ability in our country, if we find a commercial pharmaceutical product is not as safe and effective as we're being told it is, we should have the right to make informed consent to use the product," she said.
  • "When vaccination rates fall, we see disease, and people suffer. Protecting children in schools is a worthy goal of government, regardless of political affiliation," he said. "There's really no good reason to exempt your child from vaccination -- only medical."
  • "Science is really on the side of vaccinations," said O'Leary, who is an associate professor of pediatrics at the University of Colorado School of Medicine. "They're one of the best public health interventions in history in terms of the numbers of lives saved. The benefits far outweigh the risk."
  • New York, California and Washington state took action after massive measles outbreaks in 2019, a year that saw the highest reported measles cases since the disease was declared eliminated nationwide in 2000.
  • Many of the religious exemption laws are not new. Several states first passed them in the 1960s and 1970s, thanks to an influx of lobbyists from the Christian Science Church, which doesn't ban members from using vaccines but encourages healing through prayer.
  • Supporters of vaccine exemptions see laws like those passed in New York and Washington as "fundamentally a threat to their ability to make informed consent about vaccinations," said Fisher, president of the National Vaccine Information Center.
  • Proponents on both sides of the debate have found allies across the political spectrum. Republican lawmakers have sponsored stricter bills, and Democratic governors have drawn the line at mandating vaccines.
  • "It's a tough balance, but you're using a public -- and private -- resource in conjunction with lots of other kids," Harris told CNN. "There are other venues where they can be educated, they can still have their freedom, but they're not going into a public school and spread their disease."
Javier E

Doubts about Johns Hopkins research have gone unanswered, scientist says - The Washingt... - 0 views

  • Over and over, Daniel Yuan, a medical doctor and statistician, couldn’t understand the results coming out of the lab, a prestigious facility at Johns Hopkins Medical School funded by millions from the National Institutes of Health. He raised questions with the lab’s director. He reran the calculations on his own. He looked askance at the articles arising from the research, which were published in distinguished journals. He told his colleagues: This doesn’t make sense. “At first, it was like, ‘Okay — but I don’t really see it,’ ” Yuan recalled. “Then it started to smell bad.”
  • The passions of scientific debate are probably not much different from those that drive achievement in other fields, so a tragic, even deadly dispute might not be surprising. But science, creeping ahead experiment by experiment, paper by paper, depends also on institutions investigating errors and correcting them if need be, especially if they are made in its most respected journals. If the apparent suicide and Yuan’s detailed complaints provoked second thoughts about the Nature paper, though, there were scant signs of it. The journal initially showed interest in publishing Yuan’s criticism and told him that a correction was “probably” going to be written, according to e-mail records. That was almost six months ago. The paper has not been corrected. The university had already fired Yuan in December 2011, after 10 years at the lab. He had been raising questions about the research for years. He was escorted from his desk by two security guards.
  • Fang said retractions may be rising because it is simply easier to cheat in an era of digital images, which can be easily manipulated. But he said the increase is caused at least in part by the growing competition for publication and for NIH grant money. He noted that in the 1960s, about two out of three NIH grant requests were funded; today, the success rate for applicants for research funding is about one in five. At the same time, getting work published in the most esteemed journals, such as Nature, has become a “fetish” for some scientists, Fang said.
  • ...3 more annotations...
  • Last year, research published in the Proceedings of the National Academy of Sciences found that the percentage of scientific articles retracted because of fraud had increased tenfold since 1975. The same analysis reviewed more than 2,000 retracted biomedical papers and found that 67 percent of the retractions were attributable to misconduct, mainly fraud or suspected fraud.
  • many observers note that universities and journals, while sometimes agreeable to admitting small mistakes, are at times loath to reveal that the essence of published work was simply wrong.“The reader of scientific information is at the mercy of the scientific institution to investigate or not,” said Adam Marcus, who with Ivan Oransky founded the blog Retraction Watch in 2010. In this case, Marcus said, “if Hopkins doesn’t want to move, we may not find out what is happening for two or three years.”
  • The trouble is that a delayed response — or none at all — leaves other scientists to build upon shaky work. Fang said he has talked to researchers who have lost months by relying on results that proved impossible to reproduce. Moreover, as Marcus and Oransky have noted, much of the research is funded by taxpayers. Yet when retractions are done, they are done quietly and “live in obscurity,” meaning taxpayers are unlikely to find out that their money may have been wasted.
Javier E

Drones, Ethics and the Armchair Soldier - NYTimes.com - 0 views

  • the difference between humans and robots is precisely the ability to think and reflect, in Immanuel Kant’s words, to set and pursue ends for themselves. And these ends cannot be set beforehand in some hard and fast way
  • Working one’s way through the complexities of “just war” and moral theory makes it perfectly clear that ethics is not about arriving easily at a single right answer, but rather coming to understand the profound difficulty of doing so. Experiencing this difficulty is what philosophers call existential responsibility.
  • One of the jobs of philosophy, at least as I understand it, is neither to help people to avoid these difficulties nor to exaggerate them, but rather to face them in resolute and creative ways.
  • ...6 more annotations...
  • ground troops, unfortunately, had more pressing concerns than existential responsibility. They did not have leisure, unlike their commanders, who also often had the philosophical training to think through the complexities of their jobs.
  • This training was not simply a degree requirement at Officer Candidate School or one of the United States military academies, but a sustained, ongoing, and rigorous engagement with a philosophical tradition. Alexander lived with Aristotle.
  • Jeff McMahan argued that traditional “just war theory” should be reworked in several important ways. He suggested that the tenets of a revised theory apply not only to governments, traditionally represented by commanders and heads of state, but also to individual soldiers. This is a significant revision since it broadens the scope of responsibility for warfare
  • McMahan believes that individuals are to bear at least some responsibility in upholding “just cause” requirements. McMahan expects more of soldiers and, in this age of drones and leisure, he is right to do so.
  • while drones are to be applauded for keeping these soldiers out of harm’s way physically, we would do well to remember that they do not keep them out of harm’s way morally or psychologically. The high rates of “burnout” should drive this home. Supporting our troops requires ensuring that they are provided not just with training and physical armor, but with the intellectual tools to navigate these new difficulties.
  • Just as was the case in the invasion of Iraq 10 years ago, the most important questions we should be asking should not be directed to armchair soldiers but to those of us in armchairs at home: What wars are being fought in our name? On what grounds are they being fought?
Javier E

The American Scholar: Hardwired for Talk? - Jessica Love - 0 views

  • during the last decade, the pendulum of scientific thought has begun its inevitable swing in the other direction. These days, general cognitive mechanisms, not language-specific ones, are all the rage. We humans are really smart. We’re fantastic at recognizing patterns in our environments—patterns that may have nothing to do with language. Who says that the same abilities that allow us to play the violin aren’t also sufficient for learning subject-verb agreement? Perhaps speech isn’t genetically privileged so much as babies are just really motivated to learn to communicate.
  • If the brain did evolve for language, how did it do so? An idea favored by some scholars is that better communicators may also have been more reproductively successful. Gradually, as the prevalence of these smooth talkers’ offspring increased in the population, the concentration of genes favorable to linguistic communication may have increased as well.
  • two recent articles, one published in 2009 in the Proceedings of the National Academy of Sciences and a 2012 follow-up in PLOS ONE (freely available), rebut this approach
  • ...4 more annotations...
  • Over the course of many generations, the gene pool thickens with helpful alleles until—voila!—the overwhelming number of these alleles are helpful and learners’ guesses are so uncannily accurate as to seem instinctual. Makes sense, no? But now consider that languages change. (And in the real world they do—quickly.) If the language’s principles switch often, many of those helpfully biased alleles are suddenly not so helpful at all. For fast-changing languages, the model finds, neutral alleles win out: [a crude toy version of such a simulation follows these annotations]
  • when the language is programmed to hardly mutate at all, the genes have a chance to adapt to the new language. The two populations become genetically distinct, their alleles heavily biased toward the idiosyncrasies of their local language—precisely what we don’t see in the real world
  • when the language is programmed to change quickly, neutral alleles are again favored.
  • maybe our brains couldn’t have evolved to handle language’s more arbitrary properties, because languages never stay the same and, as far as we know, they never have. What goes unspoken here is that the simulations seem to suggest that truly universal properties—such as language’s hierarchical nature—could have been encoded in our brains.
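The excerpts above describe a gene-culture coevolution simulation. The sketch below is a minimal illustration in that spirit, not the authors' published model: the population size, mutation rates, and payoff values are assumptions chosen so the effect shows up in a short run (in particular, a mismatched innate bias is assumed to hurt more than a matched one helps). With these settings, the slow-change run should typically end with far more biased alleles than the fast-change run.

```python
# A minimal sketch of the kind of gene-culture coevolution simulation the
# excerpts describe (in the spirit of the PNAS / PLOS ONE models, but NOT the
# authors' code). Population size, mutation rates, and payoff values are
# illustrative assumptions; the mismatch penalty is deliberately larger than
# the match bonus so the effect is visible in a short run.

import random

N_PRINCIPLES = 5             # binary "principles" that make up the language
POP_SIZE = 300
GENERATIONS = 1500
GENE_MUTATION_RATE = 0.002   # per-allele chance of mutating each generation
ALLELES = ("neutral", "bias0", "bias1")

def learning_success(allele, principle):
    """Chance a learner acquires one principle, given its innate allele."""
    if allele == "neutral":
        return 0.6                       # unbiased, general-purpose learning
    matched = (allele == "bias0" and principle == 0) or \
              (allele == "bias1" and principle == 1)
    return 0.9 if matched else 0.2       # helpful bias vs. misleading bias

def fitness(genome, language):
    """Average learning success across all principles of the current language."""
    return sum(learning_success(a, p) for a, p in zip(genome, language)) / N_PRINCIPLES

def run(language_change_rate):
    language = [random.randint(0, 1) for _ in range(N_PRINCIPLES)]
    population = [[random.choice(ALLELES) for _ in range(N_PRINCIPLES)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Fitness-proportional reproduction.
        weights = [fitness(g, language) for g in population]
        parents = random.choices(population, weights=weights, k=POP_SIZE)
        population = [list(p) for p in parents]
        # Genetic mutation.
        for genome in population:
            for i in range(N_PRINCIPLES):
                if random.random() < GENE_MUTATION_RATE:
                    genome[i] = random.choice(ALLELES)
        # Cultural change: each principle of the language may flip.
        for i in range(N_PRINCIPLES):
            if random.random() < language_change_rate:
                language[i] = 1 - language[i]
    biased = sum(a != "neutral" for g in population for a in g)
    return biased / (POP_SIZE * N_PRINCIPLES)

if __name__ == "__main__":
    # Nearly static language: biased alleles can track it and accumulate.
    print("biased fraction, slow change:", run(language_change_rate=0.0005))
    # Fast-changing language: yesterday's helpful bias is today's handicap,
    # so neutral alleles tend to win out.
    print("biased fraction, fast change:", run(language_change_rate=0.05))
```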
lenaurick

Being sleep-deprived makes people much more likely to give false confessions - Vox - 0 views

  • According to the Innocence Project, one in four people who have been exonerated for crimes they didn't commit confessed to that crime.
  • Psychologists have documented several reasons this might occur. The big one is that interrogating police officers can impose their suggestions on suspects: "We have evidence proving you were there!" "Your fingerprints were found!"
  • Only about 18 percent of the well-rested participants signed the form (such is the baseline power of an authority figure demanding guilt). But the results were more dramatic in the sleep-deprived condition. "That 18 percent now has risen to 50 percent," Loftus says.
  • ...5 more annotations...
  • According to Loftus's study, the majority of false confessions occur when interrogations last more than 12 hours.
  • Law enforcement "really needs to be super careful when a person is being interrogated after they have been up a long time," says Elizabeth Loftus, a co-author on a new study on sleep deprivation and false confessions in the Proceedings of the National Academy of Sciences.
  • When they were told a second time to sign the form and admit their guilt, 68 percent of sleep-deprived participants gave in. (On the second request, 38 percent of the rested participants signed.)
  • "It would probably be scientifically prudent to go out and demonstrate it again with a more serious paradigm," Loftus admits. But there are also ethical limits to how far researchers can manipulate participants into thinking they've done something horrible.
  • She's also found that through subtle suggestions, people can be made to recall childhood memories that never happened.
Javier E

How 'Concept Creep' Made Americans So Sensitive to Harm - The Atlantic - 0 views

  • How did American culture arrive at these moments? A new research paper by Nick Haslam, a professor of psychology at the University of Melbourne, Australia, offers as useful a framework for understanding what’s going on as any I’ve seen. In “Concept Creep: Psychology's Expanding Concepts of Harm and Pathology,”
  • concepts like abuse, bullying, trauma, mental disorder, addiction, and prejudice, “now encompass a much broader range of phenomena than before,” expanded meanings that reflect “an ever-increasing sensitivity to harm.”
  • “they also have potentially damaging ramifications for society and psychology that cannot be ignored.”
  • ...20 more annotations...
  • He calls these expansions of meaning “concept creep.”
  • critics may hold concept creep responsible for damaging cultural trends, he writes, “such as supposed cultures of fear, therapy, and victimhood, the shifts I present have some positive implications.”
  • Concept creep is inevitable and vital if society is to make good use of new information. But why has the direction of concept creep, across so many different concepts, trended toward greater sensitivity to harm as opposed to lesser sensitivity?
  • The concept of abuse expanded too far.
  • Classically, psychological investigations recognized two forms of child abuse, physical and sexual, Haslam writes. In more recent decades, however, the concept of abuse has witnessed “horizontal creep” as new forms of abuse were recognized or studied. For example, “emotional abuse” was added as a new subtype of abuse. Neglect, traditionally a separate category, came to be seen as a type of abuse, too.
  • Meanwhile, the concept of abuse underwent “vertical creep.” That is, the behavior seen as qualifying for a given kind of abuse became steadily less extreme. Some now regard any spanking as physical abuse. Within psychology, “the boundary of neglect is indistinct,” Haslam writes. “As a consequence, the concept of neglect can become over-inclusive, identifying behavior as negligent that is substantially milder or more subtle than other forms of abuse. This is not to deny that some forms of neglect are profoundly damaging, merely to argue that the concept’s boundaries are sufficiently vague and elastic to encompass forms that are not severe.”
  • How did a working-class mom get arrested, lose her fast food job, and temporarily lose custody of her 9-year-old for letting the child play alone at a nearby park?
  • One concerns the field of psychology and its incentives. “It could be argued that just as successful species increase their territory, invading and adapting to new habitats, successful concepts and disciplines also expand their range into new semantic niches,” he theorizes. “Concepts that successfully attract the attention of researchers and practitioners are more likely to be applied in new ways and new contexts than those that do not.”
  • Concept creep can be necessary or needless. It can align concepts more or less closely with underlying realities. It can change society for better or worse. Yet many who push for more sensitivity to harm seem unaware of how oversensitivity can do harm.
  • The other theory posits an ideological explanation. “Psychology has played a role in the liberal agenda of sensitivity to harm and responsiveness to the harmed,” he writes, “and its increased focus on negative phenomena—harms such as abuse, addiction, bullying, mental disorder, prejudice, and trauma—has been symptomatic of the success of that social agenda.”
  • Jonathan Haidt, who believes it has gone too far, offers a fourth theory. “If an increasingly left-leaning academy is staffed by people who are increasingly hostile to conservatives, then we can expect that their concepts will shift, via motivated scholarship, in ways that will help them and their allies (e.g., university administrators) to prosecute and condemn conservatives,
  • While Haslam and Haidt appear to have meaningfully different beliefs about why concept creep arose within academic psychology and spread throughout society, they were in sufficient agreement about its dangers to co-author a Guardian op-ed on the subject.
  • It focuses on how greater sensitivity to harm has affected college campuses.
  • “Of course young people need to be protected from some kinds of harm, but overprotection is harmful, too, for it causes fragility and hinders the development of resilience,” they wrote. “As Nassim Taleb pointed out in his book Antifragile, muscles need resistance to develop, bones need stress and shock to strengthen and the growing immune system needs to be exposed to pathogens in order to function. Similarly, he noted, children are by nature anti-fragile – they get stronger when they learn to recover from setbacks, failures and challenges to their cherished ideas.”
  • police officers fearing harm from dogs kill them by the hundreds or perhaps thousands every year in what the DOJ calls an epidemic.
  • After the terrorist attacks of September 11, 2001, the Bush Administration and many Americans grew increasingly sensitive to harms, real and imagined, from terrorism
  • Dick Cheney declared, “If there's a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It's not about our analysis ... It's about our response.” The invasion of Iraq was predicated, in part, on the idea that 9/11 “changed everything,”
  • Before 9/11, the notion of torturing prisoners was verboten. After the Bush Administration’s torture was made public, popular debate focused on mythical “ticking time bomb” scenarios, in which a whole city would be obliterated but for torture. Now Donald Trump suggests that torture should be used more generally against terrorists. Torture is, as well, an instance in which people within the field of psychology pushed concept creep in the direction of less sensitivity to harm,
  • Haslam endorses two theories
  • there are many reasons to be concerned about excessive sensitivity to harm:
fischerry

Quantum Physics | Physics | Khan Academy - 0 views

  •  
    Interesting
Javier E

Coursera Plans to Announce University Partners for Online Classes - NYTimes.com - 0 views

  • John Doerr, a Kleiner investment partner, said via e-mail that he saw a clear business model: “Yes. Even with free courses. From a community of millions of learners some should ‘opt in’ for valuable, premium services. Those revenues should fund investment in tools, technology and royalties to faculty and universities.”
  • He said he had previously been involved with Stanford’s effort to put academic lectures online for viewing. But he noted that there was evidence that the newer interactive systems provided much more effective learning experiences.
  • Coursera and Udacity are not alone in the rush to offer mostly free online educational alternatives. Start-up companies like Minerva and Udemy, and, separately, the Massachusetts Institute of Technology, have recently announced similar platforms.
  • ...4 more annotations...
  • Unlike previous video lectures, which offered a “static” learning model, the Coursera system breaks lectures into segments as short as 10 minutes and offers quick online quizzes as part of each segment.
  • Where essays are required, especially in the humanities and social sciences, the system relies on the students themselves to grade their fellow students’ work, in effect turning them into teaching assistants.
  • The Coursera system also offers an online feature that allows students to get support from a global student community. Dr. Ng said an early test of the system found that questions were typically answered within 22 minutes.
  • Dr. Koller said the educational approach was similar to that of the “flipped classroom,” pioneered by the Khan Academy, a creation of the educator Salman Khan. Students watch lectures at home and then work on problem-solving or “homework” in the classroom, either one-on-one with the teacher or in small groups.
Duncan H

Living in the Material World - NYTimes.com - 0 views

  • on a visit to the Academy of Sciences in Almaty some years ago I was presented with a souvenir meant to assure me that Central Asia was indeed still producing philosophy worthy of note. It was a collectively authored book entitled “The Development of Materialist Dialectics in Kazakhstan,” and I still display it proudly on my shelf. Its rough binding and paper bespeak economic hardship. It is packed with the traces of ideas, yet everything about the book announces its materiality. I had arrived in the Kazakh capital in 1994, just in time to encounter the last of a dying breed: the philosopher as party functionary (they are all by now retired, dead or defenestrated, or have simply given up on what they learned in school). The book, written by committee, was a collection of official talking points, and what passed for conversation there was something much closer to recitation.
  • The philosophical meaning of materialism may in the final analysis be traced back to a religious view of the world. On this view, to focus on the material side of existence is to turn away from the eternal and divine. Here, the category of the material is assimilated to that of sin or evil.
  • Yet in fact this feature of Marxist philosophical classification is one that, with some variations, continues to be shared by all philosophers, even in the West, even today
  • ...9 more annotations...
  • materialism is not the greedy desire for material goods, but rather the belief that the fundamental reality of the world is material;
  • idealism is not the aspiration toward lofty and laudable goals, but rather the belief that the fundamental reality of the world is mental or idea-like. English-speaking philosophers today tend to speak of “physicalism” or “naturalism” rather than materialism (perhaps to avoid confusion with the Wall Street sense of the term). At the same time, Anglo-American historians of philosophy continue to find the distinction between materialism and idealism a useful one in our attempts at categorizing past schools of thought. Democritus and La Mettrie were materialists; Hobbes was pretty close. Berkeley and Kant were idealists; Leibniz may have been.
  • And it was these paradoxes that led the Irish philosopher to conclude that talk of matter was but a case of multiplying entities beyond necessity. For Berkeley, all we can know are ideas, and for this reason it made sense to suppose that the world itself consists in ideas.
  • Central to this performance was the concept of  “materialism.” The entire history of philosophy, in fact, was portrayed in Soviet historiography as a series of matches between the materialist home-team and its “idealist” opponents, beginning roughly with Democritus (good) and Plato (bad), and culminating in the opposition between official party philosophy and logical positivism, the latter of which was portrayed as a shrouded variety of idealism. Thus from the “Short Philosophical Dictionary,” published in Moscow in 1951, we learn that the school of logical empiricism represented by Rudolf Carnap, Otto Neurath and others, “is a form of subjective idealism, characteristic of degenerating bourgeois philosophy in the epoch of the decline of capitalism.”Now the Soviet usage of this pair of terms appears to fly in the face of our ordinary, non-philosophical understanding of them (that, for example,  Wall Street values are “materialist,” while the Occupy movement is “idealist”). One might have thought that the communists should be flinging the “materialist” label at their capitalist enemies, rather than claiming it for themselves. One might also have thought that the Bolshevik Revolution and the subsequent failed project of building a workers’ utopia was nothing if not idealistic.
  • one great problem with the concept of materialism is that it says very little in itself. What is required in addition is an elaboration of what a given thinker takes matter, or ideas, to be. It may not be just the Marxist aftertaste, but also the fact that the old common-sense idea about matter as brute, given stuff has turned out to have so little to do with the way the physical world actually is, that has led Anglo-American philosophers to prefer to associate themselves with the “physical” or the “natural” rather than with the material.  Reality, they want to say, is just what is natural, while everything else is in turn “supernatural” (this distinction has its clarity going for it, but it also seems uncomfortably close to tautology). Not every philosopher has a solid grasp of subatomic physics, but most know enough to grasp that, even if reality is eventually exhaustively accounted for through an enumeration of the kinds of particles and a few basic forces, this reality will still look nothing like what your average person-in-the-street takes reality to be.
  • The 18th-century idealist philosopher George Berkeley strongly believed that matter was only a fiction contrived by philosophers in the first place, for which the real people had no need. For Berkeley, there was never anything common-sensical about matter. We did not need to arrive at the era of atom-splitting and wave-particle duality, then, in order for the paradoxes inherent in matter to make themselves known (is it infinitely divisible or isn’t it?
  • Soviet and Western Marxists alike, by stark contrast, and before them the French “vulgar” (i.e., non-dialectical) materialists of the 18th century, saw and see the material world as the base and cause of all mental activity, as both bringing ideas into existence, and also determining the form and character of a society’s ideas in accordance with the state of its technology, its methods of resource extraction and its organization of labor. So here to focus on the material is not to become distracted from the true source of being, but rather to zero right in on it.
  • Consider money. Though it might sometimes be represented by bank notes or coins, money is an immaterial thing par excellence, and to seek to acquire it is to move on the plane of ideas. Of course, money can also be converted into material things, yet it seems simplistic to suppose that we want money only in order to convert it into the material things we really want, since even these material things aren’t just material either: they are symbolically dense artifacts, and they convey to others certain ideas about their owners. This, principally, is why their owners want them, which is to say that materialists (in the everyday sense) are trading in ideas just as much as anyone else.
  • In the end no one really cares about stuff itself. Material acquisitions — even, or perhaps especially, material acquisitions of things like Rolls Royces and Rolexes — are maneuvers within a universe of materially instantiated ideas. This is human reality, and it is within this reality that mystics, scientists, and philosophers alike are constrained to pursue their various ends, no matter what they might take the ultimate nature of the external world to be.
  •  
    A very interesting article on the contrast between materialism and idealism.
Emily Freilich

50 Cliches Of Gray: In Defense Of Old Truisms : The Protojournalist : NPR - 1 views

  • The word police at Lake Superior State University in Michigan have been trying to strike the phrase from public discourse since 1999.
  • "English is a very dynamic language," says David F. Beer, a retired writing professor at the University of Texas at Austin, "and parts of it are always growing or dropping off. And we don't have an English Academy as the French do to tell us what is right and what is wrong in the language. Thus cliches such as 'at the end of the day' are to be found all over the language.
  • While some hoary sayings occasionally fall by the wayside — for lots of reasons, such as a rise in social awareness — others will be with us from here to eternity.
  • ...4 more annotations...
  • “Avoid cliches ... like the plague,” Toastmasters International, a worldwide group that works to improve communication skills, advises. Tongue-in-cheek, of course.
  • But the very fact that a word or phrase has become a cliche, “through popular use – and overuse,” the report continues, “suggests that the phrase has lost originality and ingenuity and, thus, impact.”
  • Cliches can cut through claptrap like a knife through butter. We can use them as a kind of societal shorthand.
  • A cliche can be as comfortable as an old shoe, as helpful as all get out. A cliche is like a long lost friend,
Javier E

The Berkeley Model - NYTimes.com - 0 views

  • Entitled “At Berkeley” — and running some four hours long — it attempts to do nothing less than capture the breadth of activities at the University of California, Berkeley, probably the finest public university in the country
  • “I hope they come away with a feeling that it is a great university, run by people of intelligence and sensitivity, and working hard to maintain standards and integrity.”
Javier E

Why Police Lineups Will Never Be Perfect - The Atlantic - 1 views

  • Eyewitness testimony is hugely influential in criminal cases. And yet, brain research has shown again and again that human memory is unreliable: Every time a memory is recalled it becomes vulnerable to change. Confirming feedback—such as a detective telling a witness she “did great”—seems to distort memories, making them feel more accurate with each recollection. Since the start of the Innocence Project 318 cases have been overturned thanks to DNA testing. Eyewitness mistakes played a part in nearly three-quarters of them.
  • psychology researchers have been searching for ways to make eyewitness identifications more reliable. Many studies have shown, for example, the value of “double-blind” lineups, meaning that neither the cop administering the lineup nor the witness knows which of the photos, if any, is the suspect.
  • 160 page report offers many concrete suggestions for carrying out eyewitness identifications. For example, the Academy recommends using double-blind lineups and standardized witness instructions, and training law enforcement officials on the fallibility of eyewitness memory.
  • ...1 more annotation...
  • In fact, Wells’s studies found that sequential lineups slashed the rate of false positives considerably. His first study showed a drop in incorrect accusations from 43 to 17 percent. Sequential lineups also slightly increased the number of missed identifications
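The safeguards discussed above (double-blind administration, sequential presentation, confidence recorded before any feedback) can be expressed as a simple procedure. The sketch below is an illustrative assumption of how such a protocol might be organized, not the Academy's prescribed procedure; the field names, wording, and callback are hypothetical.

```python
# A minimal sketch of a double-blind, sequential photo lineup, illustrating
# the safeguards discussed above. The data fields and bookkeeping are
# illustrative assumptions, not an official protocol.

import random
from dataclasses import dataclass

@dataclass
class LineupResponse:
    photo_id: str
    identified: bool
    confidence: str   # recorded immediately, before any feedback is given

def run_sequential_lineup(photo_ids, ask_witness):
    """Present photos one at a time, in random order, via a blind administrator.

    `photo_ids` lists the photos in the array; crucially, neither this function
    nor the person running it knows which photo (if any) shows the suspect, so
    no cue can leak to the witness. `ask_witness(photo_id)` must return an
    (identified, confidence) pair supplied by the witness.
    """
    order = list(photo_ids)
    random.shuffle(order)   # presentation order unknown to the case detective
    responses = []
    for photo_id in order:
        identified, confidence = ask_witness(photo_id)
        # Record confidence before the next photo and before anyone says
        # "you did great" -- the feedback the research above says can
        # inflate a witness's later certainty.
        responses.append(LineupResponse(photo_id, identified, confidence))
    return responses
```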