Home/ TOK Friends/ Group items tagged math education


Javier E

Write My Essay, Please! - Richard Gunderman - The Atlantic - 1 views

  • Why aren't the students who use these services crafting their own essays to begin with?
  • Here is where the real problem lies. The idea of paying someone else to do your work for you has become increasingly commonplace in our broader culture, even in the realm of writing. It is well known that many actors, athletes, politicians, and businesspeople have contracted with uncredited ghostwriters to produce their memoirs for them. There is no law against it.
  • At the same time, higher education has been transformed into an industry, another sphere of economic activity where goods and services are bought and sold. By this logic, a student who pays a fair market price for an essay has earned whatever grade it brings. In fact, many institutions of higher education market not the challenges provided by their course of study, but the ease with which busy students can complete it in the midst of other daily responsibilities.
  • ...2 more annotations...
  • ultimately, students who use essay-writing services are cheating no one more than themselves. They are depriving themselves of the opportunity to ask, "What new insights and perspectives might I gain in the process of writing this paper?" instead of "How can I check this box and get my credential?"
  • why stop with exams? Why not follow this path to its logical conclusion? If the entire course is online, why shouldn't students hire someone to enroll and complete all its requirements on their behalf? In fact, "Take-my-course.com" sites have already begun to appear. One site called My Math Genius promises to get customers a "guaranteed grade," with experts who will complete all assignments and "ace your final and midterm."
kushnerha

The rise of the 'gentleman's A' and the GPA arms race - The Washington Post - 2 views

  • A’s — once reserved for recognizing excellence and distinction — are today the most commonly awarded grades in America.
  • That’s true at both Ivy League institutions and community colleges, at huge flagship publics and tiny liberal arts schools, and in English, ethnic studies and engineering departments alike. Across the country, wherever and whatever they study, mediocre students are increasingly likely to receive supposedly superlative grades.
  • Analyzing 70 years of transcript records from more than 400 schools, the researchers found that the share of A grades has tripled, from just 15 percent of grades in 1940 to 45 percent in 2013. At private schools, A’s account for nearly a majority of grades awarded.
  • ...11 more annotations...
  • Students sometimes argue that their talents have improved so dramatically that they are deserving of higher grades. Past studies, however, have found little evidence of this.
  • While it’s true that top schools have become more selective, the overall universe of students attending college has gotten broader, reflecting a wider distribution of abilities and levels of preparation, especially at the bottom. College students today also study less and do not appear to be more literate than their predecessors were.
  • Plus, of course, even if students have gotten smarter, or at least more efficient at studying (hey, computers do help), grades are arguably also supposed to measure relative achievement among classmates.
  • Affirmative action also sometimes gets blamed for rising grades; supposedly, professors have been loath to hurt the feelings of underprepared minority students. Rojstaczer and Healy note, however, that much of the increase in minority enrollment occurred from the mid-1970s to mid-’80s, the only period in recent decades when average GPAs fell.
  • That first era, the researchers say, can be explained by changes in pedagogical philosophy (some professors began seeing grades as overly authoritarian and ineffective at motivating students) and mortal exigencies (male students needed higher grades to avoid the Vietnam draft).
  • The authors attribute today’s inflation to the consumerization of higher education. That is, students pay more in tuition, and expect more in return — better service, better facilities and better grades. Or at least a leg up in employment and graduate school admissions through stronger transcripts.
  • some universities have explicitly lifted their grading curves (sometimes retroactively) to make graduates more competitive in the job market, leading to a sort of grade inflation arms race
  • But rising tuition may not be the sole driver of students’ expectations for better grades, given that high school grades have also risen in recent decades. And rather than some top-down directive from administrators, grade inflation also seems related to a steady creep of pressure on professors to give higher grades in exchange for better teaching evaluations.
  • It’s unclear how the clustering of grades near the top is affecting student effort. But it certainly makes it harder to accurately measure how much students have learned. It also makes it more challenging for grad schools and employers to sort the superstars from the also-rans
  • Lax or at least inconsistent grading standards can also distort what students — especially women — choose to study, pushing them away from more stingily graded science, technology, engineering and math fields and into humanities, where high grades are easier to come by.
  • Without collective action — which means both standing up to students and publicly shaming other schools into adopting higher standards — the arms race will continue.
Javier E

The Navy's USS Gabrielle Giffords and the Future of Work - The Atlantic - 0 views

  • Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone.
  • Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles.
  • If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.”
  • ...40 more annotations...
  • “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.”
  • The phenomenon is sped by automation, which usurps routine tasks, leaving employees to handle the nonroutine and unanticipated—and the continued advance of which throws the skills employers value into flux
  • Or, for that matter, on the relevance of the question “What do you want to be when you grow up?”
  • By 2020, a 2016 World Economic Forum report predicted, “more than one-third of the desired core skill sets of most occupations” will not have been seen as crucial to the job when the report was published
  • I asked John Sullivan, a prominent Silicon Valley talent adviser, why anyone should take the time to master anything at all. “You shouldn’t!” he replied.
  • Minimal manning—and the evolution of the economy more generally—requires a different kind of worker, with not only different acquired skills but different inherent abilities
  • It has implications for the nature and utility of a college education, for the path of careers, for inequality and employability—even for the generational divide.
  • Then, in 2001, Donald Rumsfeld arrived at the Pentagon. The new secretary of defense carried with him a briefcase full of ideas from the corporate world: downsizing, reengineering, “transformational” technologies. Almost immediately, what had been an experimental concept became an article of faith
  • But once cadets got into actual command environments, which tend to be fluid and full of surprises, a different picture emerged. “Psychological hardiness”—a construct that includes, among other things, a willingness to explore “multiple possible response alternatives,” a tendency to “see all experience as interesting and meaningful,” and a strong sense of self-confidence—was a better predictor of leadership ability in officers after three years in the field.
  • Because there really is no such thing as multitasking—just a rapid switching of attention—I began to feel overstrained, put upon, and finally irked by the impossible set of concurrent demands. Shouldn’t someone be giving me a hand here? This, Hambrick explained, meant I was hitting the limits of working memory—basically, raw processing power—which is an important aspect of “fluid intelligence” and peaks in your early 20s. This is distinct from “crystallized intelligence”—the accumulated facts and know-how on your hard drive—which peaks in your 50s.
  • Others noticed the change but continued to devote equal attention to all four tasks. Their scores fell. This group, Hambrick found, was high in “conscientiousness”—a trait that’s normally an overwhelming predictor of positive job performance. We like conscientious people because they can be trusted to show up early, double-check the math, fill the gap in the presentation, and return your car gassed up even though the tank was nowhere near empty to begin with. What struck Hambrick as counterintuitive and interesting was that conscientiousness here seemed to correlate with poor performance.
  • he discovered another correlation in his test: The people who did best tended to score high on “openness to new experience”—a personality trait that is normally not a major job-performance predictor and that, in certain contexts, roughly translates to “distractibility.”
  • To borrow the management expert Peter Drucker’s formulation, people with this trait are less focused on doing things right, and more likely to wonder whether they’re doing the right things.
  • High in fluid intelligence, low in experience, not terribly conscientious, open to potential distraction—this is not the classic profile of a winning job candidate. But what if it is the profile of the winning job candidate of the future?
  • One concerns “grit”—a mind-set, much vaunted these days in educational and professional circles, that allows people to commit tenaciously to doing one thing well
  • These ideas are inherently appealing; they suggest that dedication can be more important than raw talent, that the dogged and conscientious will be rewarded in the end.
  • he studied West Point students and graduates.
  • Traditional measures such as SAT scores and high-school class rank “predicted leader performance in the stable, highly regulated environment of West Point” itself.
  • It would be supremely ironic if the advance of the knowledge economy had the effect of devaluing knowledge. But that’s what I heard, recurrently.
  • “Fluid, learning-intensive environments are going to require different traits than classical business environments,” I was told by Frida Polli, a co-founder of an AI-powered hiring platform called Pymetrics. “And they’re going to be things like ability to learn quickly from mistakes, use of trial and error, and comfort with ambiguity.”
  • “We’re starting to see a big shift,” says Guy Halfteck, a people-analytics expert. “Employers are looking less at what you know and more and more at your hidden potential” to learn new things
  • advice to employers? Stop hiring people based on their work experience. Because in these environments, expertise can become an obstacle.
  • “The Curse of Expertise.” The more we invest in building and embellishing a system of knowledge, they found, the more averse we become to unbuilding it.
  • All too often experts, like the mechanic in LePine’s garage, fail to inspect their knowledge structure for signs of decay. “It just didn’t occur to him,” LePine said, “that he was repeating the same mistake over and over.”
  • The devaluation of expertise opens up ample room for different sorts of mistakes—and sometimes creates a kind of helplessness.
  • Aboard littoral combat ships, the crew lacks the expertise to carry out some important tasks, and instead has to rely on civilian help
  • Meanwhile, the modular “plug and fight” configuration was not panning out as hoped. Converting a ship from sub-hunter to minesweeper or minesweeper to surface combatant, it turned out, was a logistical nightmare
  • So in 2016 the concept of interchangeability was scuttled for a “one ship, one mission” approach, in which the extra 20-plus sailors became permanent crew members
  • “As equipment breaks, [sailors] are required to fix it without any training,” a Defense Department Test and Evaluation employee told Congress. “Those are not my words. Those are the words of the sailors who were doing the best they could to try to accomplish the missions we gave them in testing.”
  • These results were, perhaps, predictable given the Navy’s initial, full-throttle approach to minimal manning—and are an object lesson on the dangers of embracing any radical concept without thinking hard enough about the downsides
  • a world in which mental agility and raw cognitive speed eclipse hard-won expertise is a world of greater exclusion: of older workers, slower learners, and the less socially adept.
  • if you keep going down this road, you end up with one really expensive ship with just a few people on it who are geniuses … That’s not a future we want to see, because you need a large enough crew to conduct multiple tasks in combat.
  • What does all this mean for those of us in the workforce, and those of us planning to enter it? It would be wrong to say that the 10,000-hours-of-deliberate-practice idea doesn’t hold up at all. In some situations, it clearly does.
  • A spinal surgery will not be performed by a brilliant dermatologist. A criminal-defense team will not be headed by a tax attorney. And in tech, the demand for specialized skills will continue to reward expertise handsomely.
  • But in many fields, the path to success isn’t so clear. The rules keep changing, which means that highly focused practice has a much lower return
  • In uncertain environments, Hambrick told me, “specialization is no longer the coin of the realm.”
  • It leaves us with lifelong learning,
  • I found myself the target of career suggestions. “You need to be a video guy, an audio guy!” the Silicon Valley talent adviser John Sullivan told me, alluding to the demise of print media
  • I found the prospect of starting over just plain exhausting. Building a professional identity takes a lot of resources—money, time, energy. After it’s built, we expect to reap gains from our investment, and—let’s be honest—even do a bit of coasting. Are we equipped to continually return to apprentice mode? Will this burn us out?
  • Everybody I met on the Giffords seemed to share that mentality. They regarded every minute on board—even during a routine transit back to port in San Diego Harbor—as a chance to learn something new.
Javier E

Girls Outnumbered in New York's Elite Public Schools - NYTimes.com - 0 views

  • the gap at the elite schools could be as elemental as their perception as havens for science, technology, engineering or math, making them a natural magnet for boys, just as girls might gravitate to schools known for humanities.
  • Mr. Finn, who, with Jessica A. Hockett, wrote the recent book, “Exam Schools: Inside America’s Most Selective Public High Schools.” “I think you’re looking at habit, culture, perceptions, tradition and curricular emphasis.”
  • enrollment in highly competitive high schools is 55 percent female. “The big gender-related chasm in American education these days is how much worse boys are doing than girls,”
  • ...1 more annotation...
  • Of the 3,060 students who applied to his school this year, 44 percent were boys. To help rank the candidates, he said, he simply adjusted the focus of student interviews to more effectively draw boys out in describing their own strengths. This year he offered seats to 136 boys and 134 girls. “Are we worried about getting unqualified boys?” asked Dr. Lerner. “No, not at all.”
Javier E

A New Kind of Tutoring Aims to Make Students Smarter - NYTimes.com - 1 views

  • the goal is to improve cognitive skills. LearningRx is one of a growing number of such commercial services — some online, others offered by psychologists. Unlike traditional tutoring services that seek to help students master a subject, brain training purports to enhance comprehension and the ability to analyze and mentally manipulate concepts, images, sounds and instructions. In a word, it seeks to make students smarter.
  • “The average gain on I.Q. is 15 points after 24 weeks of training, and 20 points in less than 32 weeks.”
  • “Our users have reported profound benefits that include: clearer and quicker thinking; faster problem-solving skills; increased alertness and awareness; better concentration at work or while driving; sharper memory for names, numbers and directions.”
  • ...8 more annotations...
  • “It used to take me an hour to memorize 20 words. Now I can learn, like, 40 new words in 20 minutes.”
  • “I don’t know if it makes you smarter. But when you get to each new level on the math and reading tasks, it definitely builds up your self-confidence.”
  • “What you care about is not an intelligence test score, but whether your ability to do an important task has really improved. That’s a chain of evidence that would be really great to have. I haven’t seen it.”
  • Still, a new and growing body of scientific evidence indicates that cognitive training can be effective, including that offered by commercial services.
  • He looked at 340 middle-school students who spent two hours a week for a semester using LearningRx exercises in their schools’ computer labs and an equal number of students who received no such training. Those who played the online games, Dr. Hill found, not only improved significantly on measures of cognitive abilities compared to their peers, but also on Virginia’s annual Standards of Learning exam.
  • I’ve had some kids who not only reported that they had very big changes in the classroom, but when we bring them back in the laboratory to do neuropsychological testing, we also see great changes. They show increases that would be highly unlikely to happen just by chance.”
  • where crosswords and Sudoku are intended to be a diversion, the games here give that same kind of reward, only they’re designed to improve your brain, your memory, your problem-solving skills.”
  • More than 40 games are offered by Lumosity. One, the N-back, is based on a task developed decades ago by psychologists. Created to test working memory, the N-back challenges users to keep track of a continuously updated list and remember which item appeared “n” times ago.
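
The n-back rule described in that excerpt is easy to state precisely. Here is a minimal sketch in Python — my own illustration of the task's scoring logic, not Lumosity's implementation — where each new item counts as a "hit" if it matches the item presented n steps earlier:

```python
from collections import deque

def n_back_hits(stream, n):
    """Score an n-back run: each item is a 'hit' if it matches the
    item presented n steps earlier in the stream."""
    history = deque(maxlen=n)  # sliding window of the last n items
    hits = []
    for item in stream:
        # A match is only possible once n items have already gone by;
        # history[0] is then the item from exactly n steps back.
        hits.append(len(history) == n and history[0] == item)
        history.append(item)
    return hits

# A 2-back run: positions 2 and 4 repeat the item from two steps earlier.
print(n_back_hits(["A", "B", "A", "C", "A"], 2))
# → [False, False, True, False, True]
```

The working-memory load comes from having to update this window continuously in your head while also judging each new item against the oldest entry.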
Javier E

Sleight of the 'Invisible Hand' - NYTimes.com - 1 views

  • The wealthy, says Smith, spend their days establishing an “economy of greatness,” one founded on “luxury and caprice” and fueled by “the gratification of their own vain and insatiable desires.” Any broader benefit that accrues from their striving is not the consequence of foresight or benevolence, but “in spite of their natural selfishness and rapacity.” They don’t do good, they are led to it.
  • Smith described this state of affairs as “the obvious and simple system of natural liberty,” and he knew that it made for the revolutionary implication of his work. It shifted the way we thought about the relationship between government action and economic growth, making “less means more” the rebuttable presumption of policy proposals.
  • What it did not do, however, was void any proposal outright, much less prove that all government activity was counterproductive. Smith held that the sovereign had a role supporting education, building infrastructure and public institutions, and providing security from foreign and domestic threats — initiatives that should be paid for, in part, by a progressive tax code and duties on luxury goods. He even believed the government had a “duty” to protect citizens from “oppression,” the inevitable tendency of the strong to take advantage of the ignorance and necessity of the weak.
  • ...4 more annotations...
  • In other words, the invisible hand did not solve the problem of politics by making politics altogether unnecessary. “We don’t think government can solve all our problems,” President Obama said in his convention address, “But we don’t think that government is the source of all our problems.” Smith would have appreciated this formulation. For him, whether government should get out of the way in any given matter, economic or otherwise, was a question for considered judgment abetted by scientific inquiry.
  • politics is a practical venture, and Smith distrusted those statesmen who confused their work with an exercise in speculative philosophy. Their proposals should be judged not by the delusive lights of the imagination, but by the metrics of science and experience, what President Obama described in the first presidential debate as “math, common sense and our history.”
  • John Paul Rollert teaches business ethics at the University of Chicago Booth School of Business and leadership at the Harvard Extension School.  He is the author of a recent paper on President Obama’s “Empathy Standard” for the Yale Law Journal Online.
Javier E

Why Are Hundreds of Harvard Students Studying Ancient Chinese Philosophy? - Christine G... - 0 views

  • Puett's course Classical Chinese Ethical and Political Theory has become the third most popular course at the university. The only classes with higher enrollment are Intro to Economics and Intro to Computer Science.
  • the class fulfills one of Harvard's more challenging core requirements, Ethical Reasoning. It's clear, though, that students are also lured in by Puett's bold promise: “This course will change your life.”
  • Puett uses Chinese philosophy as a way to give undergraduates concrete, counter-intuitive, and even revolutionary ideas, which teach them how to live a better life. 
  • ...18 more annotations...
  • Puett puts a fresh spin on the questions that Chinese scholars grappled with centuries ago. He requires his students to closely read original texts (in translation) such as Confucius’s Analects, the Mencius, and the Daodejing and then actively put the teachings into practice in their daily lives. His lectures use Chinese thought in the context of contemporary American life to help 18- and 19-year-olds who are struggling to find their place in the world figure out how to be good human beings; how to create a good society; how to have a flourishing life. 
  • Puett began offering his course to introduce his students not just to a completely different cultural worldview but also to a different set of tools. He told me he is seeing more students who are “feeling pushed onto a very specific path towards very concrete career goals”
  • Puett tells his students that being calculating and rationally deciding on plans is precisely the wrong way to make any sort of important life decision. The Chinese philosophers they are reading would say that this strategy makes it harder to remain open to other possibilities that don’t fit into that plan.
  • Students who do this “are not paying enough attention to the daily things that actually invigorate and inspire them, out of which could come a really fulfilling, exciting life,” he explains. If what excites a student is not the same as what he has decided is best for him, he becomes trapped on a misguided path, slated to begin an unfulfilling career.
  • He teaches them that: The smallest actions have the most profound ramifications.
  • From a Chinese philosophical point of view, these small daily experiences provide us endless opportunities to understand ourselves. When we notice and understand what makes us tick, react, feel joyful or angry, we develop a better sense of who we are that helps us when approaching new situations. Mencius, a late Confucian thinker (4th century B.C.E.), taught that if you cultivate your better nature in these small ways, you can become an extraordinary person with an incredible influence
  • Decisions are made from the heart. Americans tend to believe that humans are rational creatures who make decisions logically, using our brains. But in Chinese, the words for “mind” and “heart” are the same.
  • If the body leads, the mind will follow. Behaving kindly (even when you are not feeling kindly), or smiling at someone (even if you aren’t feeling particularly friendly at the moment) can cause actual differences in how you end up feeling and behaving, even ultimately changing the outcome of a situation.
  • In the same way that one deliberately practices the piano in order to eventually play it effortlessly, through our everyday activities we train ourselves to become more open to experiences and phenomena so that eventually the right responses and decisions come spontaneously, without angst, from the heart-mind.
  • Whenever we make decisions, from the prosaic to the profound (what to make for dinner; which courses to take next semester; what career path to follow; whom to marry), we will make better ones when we intuit how to integrate heart and mind and let our rational and emotional sides blend into one. 
  • Aristotle said, “We are what we repeatedly do,” a view shared by thinkers such as Confucius, who taught that the importance of rituals lies in how they inculcate a certain sensibility in a person.
  • “The Chinese philosophers we read taught that the way to really change lives for the better is from a very mundane level, changing the way people experience and respond to the world, so what I try to do is to hit them at that level. I’m not trying to give my students really big advice about what to do with their lives. I just want to give them a sense of what they can do daily to transform how they live.”
  • Their assignments are small ones: to first observe how they feel when they smile at a stranger, hold open a door for someone, engage in a hobby. He asks them to take note of what happens next: how every action, gesture, or word dramatically affects how others respond to them. Then Puett asks them to pursue more of the activities that they notice arouse positive, excited feelings.
  • Once they’ve understood themselves better and discovered what they love to do they can then work to become adept at those activities through ample practice and self-cultivation. Self-cultivation is related to another classical Chinese concept: that effort is what counts the most, more than talent or aptitude. We aren’t limited to our innate talents; we all have enormous potential to expand our abilities if we cultivate them
  • To be interconnected, focus on mundane, everyday practices, and understand that great things begin with the very smallest of acts are radical ideas for young people living in a society that pressures them to think big and achieve individual excellence.
  • One of Puett’s former students, Adam Mitchell, was a math and science whiz who went to Harvard intending to major in economics. At Harvard specifically and in society in general, he told me, “we’re expected to think of our future in this rational way: to add up the pros and cons and then make a decision. That leads you down the road of ‘Stick with what you’re good at’”—a road with little risk but little reward.
  • after his introduction to Chinese philosophy during his sophomore year, he realized this wasn’t the only way to think about the future. Instead, he tried courses he was drawn to but wasn’t naturally adroit at because he had learned how much value lies in working hard to become better at what you love. He became more aware of the way he was affected by those around him, and how they were affected by his own actions in turn. Mitchell threw himself into foreign language learning, feels his relationships have deepened, and is today working towards a master’s degree in regional studies.
  • “I can happily say that Professor Puett lived up to his promise, that the course did in fact change my life.”
Javier E

Delay Kindergarten at Your Child's Peril - NYTimes.com - 2 views

  • THIS fall, one in 11 kindergarten-age children in the United States will not be going to class. Parents of these children often delay school entry in an attempt to give them a leg up on peers, but this strategy is likely to be counterproductive.
  • Teachers may encourage redshirting because more mature children are easier to handle in the classroom and initially produce better test scores than their younger classmates.
  • This advantage fades by the end of elementary school, though, and disadvantages start to accumulate. In high school, redshirted children are less motivated and perform less well. By adulthood, they are no better off in wages or educational attainment — in fact, their lifetime earnings are reduced by one year.
  • ...9 more annotations...
  • The benefits of being younger are even greater for those who skip a grade, an option available to many high-achieving children. Compared with nonskippers of similar talent and motivation, these youngsters pursue advanced degrees and enter professional school more often. Acceleration is a powerful intervention, with effects on achievement that are twice as large as programs for the gifted.
  • Parents who want to give their young children an academic advantage have a powerful tool: school itself. In a large-scale study at 26 Canadian elementary schools, first graders who were young for their year made considerably more progress in reading and math than kindergartners who were old for their year
  • The question we should ask instead is: What approach gives children the greatest opportunity to learn?
  • school makes children smarter.
  • These differences may come from the increased challenges of a demanding environment. Learning is maximized not by getting all the answers right, but by making errors and correcting them quickly.
  • Some children, especially boys, are slow to mature emotionally, a process that may be aided by the presence of older children.
  • The benefits of interacting with older children may extend to empathetic abilities. Empathy requires the ability to reason about the beliefs of others. This capacity relies on brain maturation, but it is also influenced by interactions with other children. Having an older (but not younger) sibling speeds the onset of this capacity in 3- to 5-year-olds. The acceleration is large: up to half a year per sibling.
  • children are not on a fixed trajectory but learn actively from teachers — and classmates. It matters very much who a child’s peers are. Redshirted children begin school with others who are a little further behind them. Because learning is social, the real winners in that situation are their classmates.
  •  
    I had never realized how incredibly critical the first years of a child's life were. This situation seems almost like a win-lose one; the younger children are more challenged and thus more prepared later on in life, while the older ones will always be less motivated and less strong all-around. Does this mean that we must set up our classrooms to have some students be statistically advantaged in life while others might potentially suffer? ARE WE GONNA DO THAT?!
Javier E

Welcome to the Age of Denial - NYTimes.com - 1 views

  • instead of sending my students into a world that celebrates the latest science has to offer, I am delivering them into a society ambivalent, even skeptical, about the fruits of science.
  • The triumph of Western science led most of my professors to believe that progress was inevitable. While the bargain between science and political culture was at times challenged — the nuclear power debate of the 1970s, for example — the battles were fought using scientific evidence.
  • many of our leaders have abandoned the postwar bargain in favor of what the scientist Michael Mann calls the “scientization of politics.”
  • ...3 more annotations...
  • Today, however, it is politically effective, and socially acceptable, to deny scientific fact.
  • We face many daunting challenges as a society, and they won’t all be solved with more science and math education. But what has been lost is an understanding that science’s open-ended, evidence-based processes — rather than just its results — are essential to meeting those challenges.
  • My professors’ generation could respond to silliness like creationism with head-scratching bemusement. My students cannot afford that luxury. Instead they must become fierce champions of science in the marketplace of ideas.
carolinewren

US bringing up the middle on gender-science stereotyping - 0 views

  • Gender stereotyping in which men are more strongly associated with science than women has been found in some unlikely countries, with the Netherlands leading the list and the United States in the middle at 38th, according to research that surveyed more than 350,000 people in 66 countries through a website called Project Implicit.
  • asked those surveyed how much they associated science with men or women and how quickly they associated words like "math" or "physics" with words like "woman" or "man." But they were not asked whether men or women were more competent at science.
  • "Educators should present examples beyond Marie Curie to help shape students' beliefs about who pursues science," said Linn. "Students reconsider who pursues science when they can compare examples of female scientists and reflect on their beliefs."
  • ...4 more annotations...
  • only 26.6 percent of U.S. scientists are women, the 10th worst showing below No. 1 Japan with 12 percent women scientists.
  • Latvia, at No. 58, had the highest proportion, with 51.8 percent women scientists.
  • But change is on the way
  • Iran had the best showing in this category, at No. 60, with women representing 67.3 percent of college science majors.
Javier E

How Game Theory Helped Improve New York City's High School Application Process - NYTime... - 0 views

  • “It was an allocation problem,” explained Neil Dorosin, the director of high-school admissions at the time of the redesign. The city had a scarce resource — in this case, good schools — and had to work out an equitable way to distribute it. “But unlike a scarce resource like Rolling Stones tickets, where whoever’s willing to pay the most gets the tickets, here we can’t use price,”
  • In the early 1960s, the economists David Gale and Lloyd Shapley proved that it was theoretically possible to pair an unlimited number of men and women in stable marriages according to their preferences. In game theory, “stable” means that every player’s preferences are optimized; in this case, no man and no woman matched with another partner would both prefer to be with each other.
  • a “deferred acceptance algorithm.”
  • ...6 more annotations...
  • Here is how it works: Each suitor proposes to his first-choice mate; each woman has her own list of favorites. (The economists worked from the now-quaint premise that men only married women, and did the proposing.) She rejects all proposals except her favorite — but does not give him a firm answer. Each suitor rejected by his most beloved then proposes to his second choice, and each woman being wooed in this round again rejects all but her favorite.
  • Professor Abdulkadiroglu said he had fielded calls from anguished parents seeking advice on how their children could snare the best match. His advice: “Rank them in true preference order.”
  • The deferred acceptance algorithm, Professor Pathak said, is “one of the great ideas in economics.” It quickly became the basis for a standard lesson in graduate-level economics courses.
  • In the case of rejection, the algorithm looks to make a match with a student’s second-choice school, and so on. Like the brides and grooms of Professors Gale and Shapley, students and schools connect only tentatively until the very end of the process.
  • The courting continues until everyone is betrothed. But because each woman has waited to give her final answer (the “deferred acceptance”), she has the opportunity to accept a proposal later from a suitor whom she prefers to someone she had tentatively considered earlier. The later match is preferable for her, and therefore more stable.
  • It seems that most students prefer to go to school close to home, and if nearby schools are underperforming, students will choose them nevertheless. Researching other options is labor intensive, and poor and immigrant children in particular may not get the help they need to do it.
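The rounds of proposal, tentative acceptance, and rejection described in these highlights can be sketched in a few lines of Python. This is a minimal illustration of deferred acceptance, not the city's production system; the student and school names and preference lists are purely illustrative:

```python
def deferred_acceptance(suitor_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance. Suitors propose in preference
    order; each reviewer tentatively holds its favorite proposal so far."""
    # rank[r][s] = how reviewer r ranks suitor s (lower is better)
    rank = {r: {s: i for i, s in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(suitor_prefs)                   # suitors with no tentative match
    next_choice = {s: 0 for s in suitor_prefs}  # index of each suitor's next proposal
    engaged = {}                                # reviewer -> suitor (tentative)

    while free:
        s = free.pop()
        r = suitor_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if r not in engaged:
            engaged[r] = s                      # first proposal: hold it
        elif rank[r][s] < rank[r][engaged[r]]:
            free.append(engaged[r])             # reviewer trades up; old suitor freed
            engaged[r] = s
        else:
            free.append(s)                      # rejected; s proposes again next round
    return {s: r for r, s in engaged.items()}

# Toy instance (hypothetical preferences)
students = {"ana": ["X", "Y"], "ben": ["X", "Y"]}
schools  = {"X": ["ben", "ana"], "Y": ["ana", "ben"]}
match = deferred_acceptance(students, schools)
```

On this toy instance ben is matched to X and ana to Y; no student and school would both prefer each other over their assigned match, which is exactly Gale and Shapley's stability condition.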
Javier E

Vaccine Critics Turn Defensive Over Measles - NYTimes.com - 1 views

  • the parents at the heart of America’s anti-vaccine movement are being blamed for incubating an otherwise preventable public-health crisis.
  • officials scrambled to try to contain a wider spread of the highly contagious disease — which America declared vanquished 15 years ago, before a statistically significant number of parents started refusing to vaccinate their children.
  • The anti-vaccine movement can largely be traced to a 1998 report in a medical journal that suggested a link between vaccines and autism but was later proved fraudulent and retracted. Today, the waves of parents who shun vaccines include some who still believe in the link and some, like the Amish, who have religious objections to vaccines. Then there is a particular subculture of largely wealthy and well-educated families, many living in palmy enclaves around Los Angeles and San Francisco, who are trying to carve out “all-natural” lives for their children.
  • ...9 more annotations...
  • “Sometimes, I feel like we’re practicing in the 1950s,” said Dr. Eric Ball, a pediatrician in southern Orange County, where some schools report that 50 to 60 percent of their kindergartners are not fully vaccinated and that 20 to 40 percent of parents have sought a personal beliefs exemption to vaccination requirements. “It’s very frustrating. It’s hard to see a kid suffer for something that’s entirely preventable.”
  • Dr. Ball said he spent many days trying to persuade parents to vaccinate their children. He tries to alleviate their concerns. He shows parents his own children’s vaccine records. But it has not worked, and lately, as worries and anger over this outbreak have spread, some families who support vaccines have said they do not want to be in the same waiting room as unvaccinated families. The clinic where Dr. Ball works has treated unvaccinated children for years, but its staff is meeting next week to discuss a ban.“Our patients are really scared,” Dr. Ball said. “Our nightmare would be for someone to show up at our door with the measles.”
  • Norm Warren, the manager of the supermarket in Kearny, Gordon’s IGA, has changed his thinking toward those who do not vaccinate their children. “Before, I thought, ‘If you think your child will become autistic, fine.’ But now they’re pushing their beliefs on everybody, and I feel differently,” he said. “How many lives have been saved by vaccination?”
  • Members of the anti-vaccine movement said the public backlash had terrified many parents. “People are now afraid they’re going to be jailed,” said Barbara Loe Fisher, the president of the National Vaccine Information Center, a clearinghouse for resisters. “I can’t believe what I’m seeing. It’s gotten so out of hand, and it’s gotten so vicious.”
  • In San Geronimo, Calif., a mostly rural community of rolling hills and oak trees about 30 miles north of San Francisco, 40 percent of the students walking into Lagunitas Elementary School have not been inoculated against measles, according to the school’s figures. Twenty-five percent have not been vaccinated for polio. In all, the state says that 58 percent of Lagunitas kindergartners do not have up-to-date vaccine records.
  • “A lot of people here have personal beliefs that are faith based,” said John Carroll, the school superintendent, who sent a letter home to parents last week encouraging them to vaccinate their children. The faith, Mr. Carroll said, is not so much religious as it is a belief that “they raise their children in a natural, organic environment” and are suspicious of pharmaceutical companies and big business.
  • Some parents forgo shots altogether. Others split vaccine doses or stretch out their timeline, worried about somehow overwhelming their children’s immune system. Kelly McMenimen, a Lagunitas parent, said she “meditated on it a lot” before deciding not to vaccinate her son Tobias, 8, against even “deadly or deforming diseases.” She said she did not want “so many toxins” entering the slender body of a bright-eyed boy who loves math and geography.
  • Tobias has endured chickenpox and whooping cough, though Ms. McMenimen said the latter seemed more like a common cold. She considered a tetanus shot after he cut himself on a wire fence but decided against it: “He has such a strong immune system.”
  • “It’s good to explore alternatives rather than go with the panic of everyone around you,” she said. “Vaccines don’t feel right for me and my family.”
kushnerha

BBC - Future - The surprising downsides of being clever - 0 views

  • If ignorance is bliss, does a high IQ equal misery? Popular opinion would have it so. We tend to think of geniuses as being plagued by existential angst, frustration, and loneliness. Think of Virginia Woolf, Alan Turing, or Lisa Simpson – lone stars, isolated even as they burn their brightest. As Ernest Hemingway wrote: “Happiness in intelligent people is the rarest thing I know.”
  • Combing California’s schools for the creme de la creme, he selected 1,500 pupils with an IQ of 140 or more – 80 of whom had IQs above 170. Together, they became known as the “Termites”, and the highs and lows of their lives are still being studied to this day.
  • Termites’ average salary was twice that of the average white-collar job. But not all the group met Terman’s expectations – there were many who pursued more “humble” professions such as police officers, seafarers, and typists. For this reason, Terman concluded that “intellect and achievement are far from perfectly correlated”. Nor did their smarts endow personal happiness. Over the course of their lives, levels of divorce, alcoholism and suicide were about the same as the national average.
  • ...16 more annotations...
  • One possibility is that knowledge of your talents becomes something of a ball and chain. Indeed, during the 1990s, the surviving Termites were asked to look back at the events in their 80-year lifespan. Rather than basking in their successes, many reported that they had been plagued by the sense that they had somehow failed to live up to their youthful expectations.
  • The most notable, and sad, case concerns the maths prodigy Sufiah Yusof. Enrolled at Oxford University aged 12, she dropped out of her course before taking her finals and started waitressing. She later worked as a call girl, entertaining clients with her ability to recite equations during sexual acts.
  • Another common complaint, often heard in student bars and internet forums, is that smarter people somehow have a clearer vision of the world’s failings. Whereas the rest of us are blinkered from existential angst, smarter people lay awake agonising over the human condition or other people’s folly.
  • MacEwan University in Canada found that those with the higher IQ did indeed feel more anxiety throughout the day. Interestingly, most worries were mundane, day-to-day concerns, though; the high-IQ students were far more likely to be replaying an awkward conversation, than asking the “big questions”. “It’s not that their worries were more profound, but they are just worrying more often about more things,” says Penney. “If something negative happened, they thought about it more.”
  • seemed to correlate with verbal intelligence – the kind tested by word games in IQ tests, compared to prowess at spatial puzzles (which, in fact, seemed to reduce the risk of anxiety). He speculates that greater eloquence might also make you more likely to verbalise anxieties and ruminate over them. It’s not necessarily a disadvantage, though. “Maybe they were problem-solving a bit more than most people,” he says – which might help them to learn from their mistakes.
  • The harsh truth, however, is that greater intelligence does not equate to wiser decisions; in fact, in some cases it might make your choices a little more foolish.
  • we need to turn our minds to an age-old concept: “wisdom”. His approach is more scientific than it might at first sound. “The concept of wisdom has an ethereal quality to it,” he admits. “But if you look at the lay definition of wisdom, many people would agree it’s the idea of someone who can make good unbiased judgement.”
  • “my-side bias” – our tendency to be highly selective in the information we collect so that it reinforces our previous attitudes. The more enlightened approach would be to leave your assumptions at the door as you build your argument – but Stanovich found that smarter people are almost no more likely to do so than people with distinctly average IQs.
  • People who ace standard cognitive tests are in fact slightly more likely to have a “bias blind spot”. That is, they are less able to see their own flaws, even though they are quite capable of criticising the foibles of others. And they have a greater tendency to fall for the “gambler’s fallacy”.
  • A tendency to rely on gut instincts rather than rational thought might also explain why a surprisingly high number of Mensa members believe in the paranormal; or why someone with an IQ of 140 is about twice as likely to max out their credit card.
  • “The people pushing the anti-vaccination meme on parents and spreading misinformation on websites are generally of more than average intelligence and education.” Clearly, clever people can be dangerously, and foolishly, misguided.
  • spent the last decade building tests for rationality, and he has found that fair, unbiased decision-making is largely independent of IQ.
  • Crucially, Grossmann found that IQ was not related to any of these measures, and certainly didn’t predict greater wisdom. “People who are very sharp may generate, very quickly, arguments [for] why their claims are the correct ones – but may do it in a very biased fashion.”
  • employers may well begin to start testing these abilities in place of IQ; Google has already announced that it plans to screen candidates for qualities like intellectual humility, rather than sheer cognitive prowess.
  • He points out that we often find it easier to leave our biases behind when we consider other people, rather than ourselves. Along these lines, he has found that simply talking through your problems in the third person (“he” or “she”, rather than “I”) helps create the necessary emotional distance, reducing your prejudices and leading to wiser arguments.
  • If you’ve been able to rest on the laurels of your intelligence all your life, it could be very hard to accept that it has been blinding your judgement. As Socrates had it: the wisest person really may be the one who can admit he knows nothing.
krystalxu

Why Study Philosophy? 'To Challenge Your Own Point of View' - The Atlantic - 1 views

  • Goldstein’s forthcoming book, Plato at the Googleplex: Why Philosophy Won’t Go Away, offers insight into the significant—and often invisible—progress that philosophy has made. I spoke with Goldstein about her take on the science vs. philosophy debates, how we can measure philosophy’s advances, and why an understanding of philosophy is critical to our lives today.
  • One of the things about philosophy is that you don’t have to give up on any other field. Whatever field there is, there’s a corresponding field of philosophy. Philosophy of language, philosophy of politics, philosophy of math. All the things I wanted to know about I could still study within a philosophical framework.
  • There’s a peer pressure that sets in at a certain age. They so much want to be like everybody else. But what I’ve found is that if you instill this joy of thinking, the sheer intellectual fun, it will survive even the adolescent years and come back in fighting form. It’s empowering.
  • ...18 more annotations...
  • One thing that’s changed tremendously is the presence of women and the change in focus because of that. There’s a lot of interest in literature and philosophy, and using literature as a philosophical examination. It makes me so happy! Because I was seen as a hard-core analytic philosopher, and when I first began to write novels people thought, Oh, and we thought she was serious! But that’s changed entirely. People take literature seriously, especially in moral philosophy, as thought experiments. A lot of the most developed and effective thought experiments come from novels. Also, novels contribute to making moral progress, changing people’s emotions.
  • The other thing that’s changed is that there’s more applied philosophy. Let’s apply philosophical theory to real-life problems, like medical ethics, environmental ethics, gender issues. This is a real change from when I was in school and it was only theory.
  • here’s a lot of philosophical progress, it’s just a progress that’s very hard to see. It’s very hard to see because we see with it. We incorporate philosophical progress into our own way of viewing the world.
  • Plato would be constantly surprised by what we know. And not only what we know scientifically, or by our technology, but what we know ethically. We take a lot for granted. It’s obvious to us, for example, that individuals’ ethical truths are equally important.
  • it’s usually philosophical arguments that first introduce the very outlandish idea that we need to extend rights. And it takes more, it takes a movement, and activism, and emotions, to affect real social change. It starts with an argument, but then it becomes obvious. The tracks of philosophy’s work are erased because it becomes intuitively obvious
  • The arguments against slavery, against cruel and unusual punishment, against unjust wars, against treating children cruelly—these all took arguments.
  • About 30 years ago, the philosopher Peter Singer started to argue about the way animals are treated in our factory farms. Everybody thought he was nuts. But I’ve watched this movement grow; I’ve watched it become emotional. It has to become emotional. You have to draw empathy into it. But here it is, right in our time—a philosopher making the argument, everyone dismissing it, but then people start discussing it. Even criticizing it, or saying it’s not valid, is taking it seriously
  • The question of whether some of these scientific theories are really even scientific. Can we get predictions out of them?
  • We are very inertial creatures. We do not like to change our thinking, especially if it’s inconvenient for us. And certainly the people in power never want to wonder whether they should hold power.
  • I’m really trying to draw the students out, make them think for themselves. The more they challenge me, the more successful I feel as a teacher. It has to be very active
  • Plato used the metaphor that in teaching philosophy, there needs to be a fire in the teacher, and the sheer heat will help the fire grow in the student. It’s something that’s kindled because of the proximity to the heat.
  • how can you make the case that they should study philosophy?
  • It enriches your inner life. You have lots of frameworks to apply to problems, and so many ways to interpret things. It makes life so much more interesting. It’s us at our most human. And it helps us increase our humanity. No matter what you do, that’s an asset.
  • What do you think are the biggest philosophical issues of our time? The growth in scientific knowledge presents new philosophical issues.
  • The idea of the multiverse. Where are we in the universe? Physics is blowing our minds about this.
  • This is what we have to teach our children. Even things that go against their intuition they need to take seriously. What was intuition two generations ago is no longer intuition; and it’s arguments that change it.
  • And with the growth in cognitive science and neuroscience. We’re going into the brain and getting these images of the brain. Are we discovering what we really are? Are we solving the problem of free will? Are we learning that there isn’t any free will? How much do the advances in neuroscience tell us about the deep philosophical issues?
  • With the decline of religion is there a sense of the meaninglessness of life and the easy consumerist answer that’s filling the space religion used to occupy? This is something that philosophers ought to be addressing.
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
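The elevator example above—boxes for states like “door open,” “moving,” and “door closed,” with lines defining the legal moves between them—can be sketched in ordinary code. This is a minimal illustrative Python sketch of that idea, not output from any real model-based design tool; the state and event names are my own. The transition table *is* the model: the rules are data you can read at a glance, and the only way to change state is along a listed edge.

```python
# The elevator rules as an explicit state machine. Each state maps the
# events allowed in that state to the state they lead to; anything not
# listed is forbidden by construction.
TRANSITIONS = {
    "door_open":   {"close_door": "door_closed"},
    "door_closed": {"open_door": "door_open", "start": "moving"},
    "moving":      {"stop": "door_closed"},
}

class Elevator:
    def __init__(self):
        self.state = "door_open"

    def handle(self, event):
        allowed = TRANSITIONS[self.state]
        if event not in allowed:
            raise ValueError(f"'{event}' is not allowed in state '{self.state}'")
        self.state = allowed[event]
        return self.state

e = Elevator()
e.handle("close_door")   # -> "door_closed"
e.handle("start")        # -> "moving"
# e.handle("open_door") here would raise: the table makes it impossible
# to open the door while moving, which is the whole point.
```

Just by reading the table you can see what the diagram would show: the only way to get the elevator moving is to close the door first.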
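Newcombe’s point about exhaustive checking—that testing samples behaviors while a tool like TLA+ explores *all* of them—can be illustrated with a toy model checker. The sketch below is a hand-rolled breadth-first search over every reachable state of a tiny door/motor system, in the spirit of TLA+’s TLC checker but not using TLA+ itself; the system and invariant are illustrative assumptions, not from any real specification.

```python
from collections import deque

# Enumerate every reachable state of a small system and check an
# invariant in each one. A state is (door_open, moving).
def next_states(state):
    door_open, moving = state
    out = []
    if not moving:
        out.append((not door_open, moving))   # toggle the door (only while stopped)
    if not door_open:
        out.append((door_open, True))         # start moving (only with door closed)
    if moving:
        out.append((door_open, False))        # stop
    return out

def check(initial, invariant):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                          # counterexample: a reachable bad state
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None                               # invariant holds in every reachable state

# Invariant: the door is never open while the system is moving.
bad = check((False, False), lambda s: not (s[0] and s[1]))
# bad is None: the guards above rule out the unsafe combination entirely.
```

Because the search visits every reachable state, a “rare” bad combination cannot hide—exactly the failure mode Newcombe says human intuition misjudges at the scale of millions of requests per second. Real systems have astronomically more states, which is why TLA+ checks a concise model of the design rather than the code itself.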
krystalxu

Psychology's Role in Mathematics and Science Education - 0 views

  • as well as research advances in social and motivational issues and assessment, offer new opportunities to help bridge the gap between basic research and classroom practice.
Javier E

Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. - The New York Times - 0 views

  • Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions
  • Some are concerned about far-future risks that sound like science fiction.
  • Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now.
  • Some are motivated by potential business revenue, others by national security concerns.
  • Sometimes, they trade letters, opinion essays or social threads outlining their positions and attacking others’ in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I.
  • you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
  • It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
  • Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions
  • One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics.
  • The Doomsayers
  • These are the A.I. safety people, and their ranks include the “Godfathers of A.I.,” Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic capabilities of the human mind
  • Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future
  • Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
  • The technology historian David C. Brock calls these fears “wishful worries” — that is, “problems that it would be nice to have, in contrast to the actual agonies of the present.”
  • OpenAI’s Sam Altman and Meta’s Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups
  • the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course.
  • While we shouldn’t dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns.
  • The Reformers
  • While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there’s plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower
  • Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
  • Propagators of these A.I. ethics concerns — like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O’Neil — have been raising the alarm on inequities coded into A.I. for years. Although we don’t have a census, it’s noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q.
  • Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside — or even above — their self-interest. They point to social media companies’ failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards
  • reformers tend to push back hard against the doomsayers’ focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I. misinformation, surveillance and inequity.
  • Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
  • Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
  • Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security.
  • Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
  • they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
  • U.S. megacompanies pleaded to exempt their general purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, “The answer to our challenges is not to slow down technology but to accelerate it.”
  • The warriors’ narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly.
  • As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism
  • Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
  • By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.
  • Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.