
TOK Friends: Group items tagged mathematics


Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
Javier E

Archimedes - Separating Myth From Science - NYTimes.com - 0 views

  • A panoply of devices and ideas are named after Archimedes. Besides the Archimedes screw, there is the Archimedes principle, the law of buoyancy that states the upward force on a submerged object equals the weight of the liquid displaced. There is the Archimedes claw, a weapon that most likely did exist, grabbing onto Roman ships and tipping them over. And there is the Archimedes sphere, a forerunner of the planetarium — a hand-held globe that showed the constellations as well as the locations of the sun and the planets in the sky.
  • Dr. Rorres said the singular genius of Archimedes was that he not only was able to solve abstract mathematics problems, but also used mathematics to solve physics problems, and he then engineered devices to take advantage of the physics. “He came up with fundamental laws of nature, proved them mathematically and then was able to apply them,” Dr. Rorres said.
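The buoyancy law quoted above has a compact standard form. This is the textbook statement with conventional symbol names, not a formula taken from the article:

```latex
% Archimedes' principle: the buoyant force on a submerged body equals
% the weight of the fluid it displaces.
% F_b: buoyant force, \rho_f: fluid density, V: displaced volume, g: gravitational acceleration.
F_b = \rho_f \, V \, g
```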
Javier E

Book Review: Models Behaving Badly - WSJ.com - 1 views

  • Mr. Derman is perhaps a bit too harsh when he describes EMM—the so-called Efficient Market Model. EMM does not, as he claims, imply that prices are always correct and that price always equals value. Prices are always wrong. What EMM says is that we can never be sure if prices are too high or too low.
  • The Efficient Market Model does not suggest that any particular model of valuation—such as the Capital Asset Pricing Model—fully accounts for risk and uncertainty or that we should rely on it to predict security returns. EMM does not, as Mr. Derman says, "stubbornly assume that all uncertainty about the future is quantifiable." The basic lesson of EMM is that it is very difficult—well nigh impossible—to beat the market consistently.
  • Mr. Derman gives an eloquent description of James Clerk Maxwell's electromagnetic theory in a chapter titled "The Sublime." He writes: "The electromagnetic field is not like Maxwell's equations; it is Maxwell's equations."
  • He sums up his key points about how to keep models from going bad by quoting excerpts from his "Financial Modeler's Manifesto" (written with Paul Wilmott), a paper he published a couple of years ago. Among its admonitions: "I will always look over my shoulder and never forget that the model is not the world"; "I will not be overly impressed with mathematics"; "I will never sacrifice reality for elegance"; "I will not give the people who use my models false comfort about their accuracy"; "I understand that my work may have enormous effects on society and the economy, many beyond my apprehension."
  • As the collapse of the subprime collateralized debt market in 2008 made clear, it is a terrible mistake to put too much faith in models purporting to value financial instruments. "In crises," Mr. Derman writes, "the behavior of people changes and normal models fail. While quantum electrodynamics is a genuine theory of all reality, financial models are only mediocre metaphors for a part of it."
  • Although financial models employ the mathematics and style of physics, they are fundamentally different from the models that science produces. Physical models can provide an accurate description of reality. Financial models, despite their mathematical sophistication, can at best provide a vast oversimplification of reality. In the universe of finance, the behavior of individuals determines value—and, as he says, "people change their minds."
  • Bringing ethics into his analysis, Mr. Derman has no patience for coddling the folly of individuals and institutions who over-rely on faulty models and then seek to escape the consequences. He laments the aftermath of the 2008 financial meltdown, when banks rebounded "to record profits and bonuses" thanks to taxpayer bailouts. If you want to benefit from the seven fat years, he writes, "you must suffer the seven lean years too, even the catastrophically lean ones. We need free markets, but we need them to be principled."
sophie mester

Friends You Can Count On - NYTimes.com - 0 views

  • Mathematics explaining a phenomenon in sense perception: the feeling that your Facebook friends always have more friends than you do really is supported by mathematical truth, a result known as the “friendship paradox.” (A small simulation sketch follows.)
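A minimal simulation sketch of the friendship paradox. The random-graph model, the parameters, and the code are illustrative assumptions, not something from the bookmarked article:

```python
# Friendship paradox: on average, your friends have more friends than you do.
# Minimal sketch using a random (Erdos-Renyi) graph as a stand-in for a social network.
import random

random.seed(1)
n, p = 500, 0.02  # 500 people, 2% chance that any two of them are friends (illustrative)
friends = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            friends[i].add(j)
            friends[j].add(i)

with_friends = [i for i in range(n) if friends[i]]
avg_degree = sum(len(friends[i]) for i in with_friends) / len(with_friends)

# For each person, the mean friend-count of their friends; then average over people.
avg_friends_of_friends = sum(
    sum(len(friends[f]) for f in friends[i]) / len(friends[i]) for i in with_friends
) / len(with_friends)

print(f"average number of friends:             {avg_degree:.2f}")
print(f"average friend-count of one's friends: {avg_friends_of_friends:.2f}")
# The second number comes out larger: popular people are over-represented among
# anyone's friends. The gap widens in networks with more skewed degree distributions.
```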
Javier E

In 'Misbehaving,' an Economics Professor Isn't Afraid to Attack His Own - NYTimes.com - 0 views

  • the book is part memoir, part attack on a breed of economist who dominated the academy – particularly, the Chicago School that dominated economic theory at the University of Chicago – for much of the latter part of the 20th century.
  • economists have increasingly become the go-to experts on every manner of business and public policy issue facing society
  • rather than being a disgruntled former employee or otherwise easily marginalized whistle-blower, Mr. Thaler recently took the reins as president of the American Economic Association (and still teaches at Chicago’s graduate business program)
  • The economics profession that Mr. Thaler entered in the early 1970s was deeply invested in proving that it was more than a mere social science
  • But economic outcomes are the result of human decision-making. To achieve the same mathematical precision of hard sciences, economists made a radically simplifying assumption that people are “optimizers” whose behavior is as predictable as the speed of a physical body falling through space.
  • Early in his career, Professor Thaler created a list of observed behaviors that were obviously inconsistent with the predictions of established orthodoxy.
  • “Misbehaving” charts Mr. Thaler’s journey to document these anomalies in the face of economists’ increasingly desperate, and sometimes comical, efforts to deny their existence or relevance
  • After so-called behavioral economics began to go mainstream, Professor Thaler turned his attention to helping solve a variety of business and, increasingly, public policy issues. As these tools have been applied to practical problems, Professor Thaler has noted that there has been “very little actual economics involved.” Instead, the resulting insights have “come primarily from psychology and the other social sciences.”
  • To the extent that economists fought the integration of behavioral insights into economic analyses, it seems that their fears were founded. Rather than making the resulting work less rigorous, however, it simply made its economic underpinnings less relevant. Professor Thaler argues that it is actually “a slur on those other social sciences if people insist on calling any policy-related research some kind of economics.”
  • Professor Thaler’s narrative ultimately demonstrates that by trying to set itself as somehow above other social sciences, the “rationalist” school of economics actually ended up contributing far less than it could have. The group’s intellectual denial led to not just sloppy social science, but sloppy philosophy.
  • Economists would do well to embrace both their philosophical and social science roots. No amount of number-crunching can replace the need to confront the complexity of human existence.
  • It is not only in academics that the most difficult questions are avoided behind a mathematical smoke screen. When businesses use cost-benefit analysis, for instance, they are applying a moral philosophy known as utilitarianism, popularized by John Stuart Mill in the 19th century.
  • Compared against alternative moral philosophies, like those of Kant or Aristotle, Mill has relatively few contemporary adherents in professional philosophical circles. But utilitarianism does have the virtue of lending itself to mathematical calculation. By giving the contentious philosophy a benign bureaucratic name like “cost-benefit analysis,” corporations hope to circumvent the need to confront the profound ethical issues implicated.
  • The “misbehaving” of Professor Thaler’s title is supposed to refer to how human actions are inconsistent with rationalist economic theory
kushnerha

Philosophy's True Home - The New York Times - 0 views

  • We’ve all heard the argument that philosophy is isolated, an “ivory tower” discipline cut off from virtually every other progress-making pursuit of knowledge, including math and the sciences, as well as from the actual concerns of daily life. The reasons given for this are many. In a widely read essay in this series, “When Philosophy Lost Its Way,” Robert Frodeman and Adam Briggle claim that it was philosophy’s institutionalization in the university in the late 19th century that separated it from the study of humanity and nature, now the province of social and natural sciences.
  • This institutionalization, the authors claim, led it to betray its central aim of articulating the knowledge needed to live virtuous and rewarding lives. I have a different view: Philosophy isn’t separated from the social, natural or mathematical sciences, nor is it neglecting the study of goodness, justice and virtue, which was never its central aim.
  • identified philosophy with informal linguistic analysis. Fortunately, this narrow view didn’t stop them from contributing to the science of language and the study of law. Now long gone, neither movement defined the philosophy of its day and neither arose from locating it in universities.
  • The authors claim that philosophy abandoned its relationship to other disciplines by creating its own purified domain, accessible only to credentialed professionals. It is true that from roughly 1930 to 1950, some philosophers — logical empiricists, in particular — did speak of philosophy having its own exclusive subject matter. But since that subject matter was logical analysis aimed at unifying all of science, interdisciplinarity was front and center.
  • philosopher-mathematicians Gottlob Frege, Bertrand Russell, Kurt Gödel, Alonzo Church and Alan Turing invented symbolic logic, helped establish the set-theoretic foundations of mathematics, and gave us the formal theory of computation that ushered in the digital age
  • developed ideas relating logic to linguistic meaning that provided a framework for studying meaning in all human languages. Others, including Paul Grice and J.L. Austin, explained how linguistic meaning mixes with contextual information to enrich communicative contents and how certain linguistic performances change social facts. Today a new philosophical conception of the relationship between meaning and cognition adds a further dimension to linguistic science.
  • Decision theory — the science of rational norms governing action, belief and decision under uncertainty — was developed by the 20th-century philosophers Frank Ramsey, Rudolf Carnap, Richard Jeffrey and others. It plays a foundational role in political science and economics by telling us what rationality requires, given our evidence, priorities and the strength of our beliefs. Today, no area of philosophy is more successful in attracting top young minds. (The standard expected-utility rule is sketched after this list.)
  • Philosophy also assisted psychology in its long march away from narrow behaviorism and speculative Freudianism. The mid-20th-century functionalist perspective pioneered by Hilary Putnam was particularly important. According to it, pain, pleasure and belief are neither behavioral dispositions nor bare neurological states. They are interacting internal causes, capable of very different physical realizations, that serve the goals of individuals in specific ways. This view is now embedded in cognitive psychology and neuroscience.
  • Philosophy also played a role in 20th-century physics, influencing the great physicists Albert Einstein, Niels Bohr and Werner Heisenberg. The philosophers Moritz Schlick and Hans Reichenbach reciprocated that interest by assimilating the new physics into their philosophies.
  • Philosophy of biology is following a similar path. Today’s philosophy of science is less accessible than Aristotle’s natural philosophy chiefly because it systematizes a larger, more technically sophisticated body of knowledge.
  • Philosophy’s interaction with mathematics, linguistics, economics, political science, psychology and physics requires specialization. Far from fostering isolation, this specialization makes communication and cooperation among disciplines possible. This has always been so.
  • Nor did scientific progress rob philosophy of its former scientific subject matter, leaving it to concentrate on the broadly moral. In fact, philosophy thrives when enough is known to make progress conceivable, but it remains unachieved because of methodological confusion. Philosophy helps break the impasse by articulating new questions, posing possible solutions and forging new conceptual tools.
  • Our knowledge of the universe and ourselves expands like a ripple surrounding a pebble dropped in a pool. As we move away from the center of the spreading circle, its area, representing our secure knowledge, grows. But so does its circumference, representing the border where knowledge blurs into uncertainty and speculation, and methodological confusion returns. Philosophy patrols the border, trying to understand how we got there and to conceptualize our next move.  Its job is unending.
  • Although progress in ethics, political philosophy and the illumination of life’s meaning has been less impressive than advances in some other areas, it is accelerating.
  • the advances in our understanding because of careful formulation and critical evaluation of theories of goodness, rightness, justice and human flourishing by philosophers since 1970 compare well to the advances made by philosophers from Aristotle to 1970
  • The knowledge required to maintain philosophy’s continuing task, including its vital connection to other disciplines, is too vast to be held in one mind. Despite the often-repeated idea that philosophy’s true calling can only be fulfilled in the public square, philosophers actually function best in universities, where they acquire and share knowledge with their colleagues in other disciplines. It is also vital for philosophers to engage students — both those who major in the subject, and those who do not. Although philosophy has never had a mass audience, it remains remarkably accessible to the average student; unlike the natural sciences, its frontiers can be reached in a few undergraduate courses.
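For reference, the core rule of the decision theory mentioned above is expected-utility maximization. This is the standard textbook formulation, not a quotation from the essay:

```latex
% Expected utility of an act a, over possible states s with subjective
% probabilities p(s) and a utility function u over outcomes o(a, s):
\mathrm{EU}(a) = \sum_{s} p(s)\, u\bigl(o(a, s)\bigr)
% Rationality, on this account, means choosing the act that maximizes EU,
% given one's evidence (which fixes p) and priorities (which fix u).
```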
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (A minimal code sketch of this state machine appears after this list.)
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
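A minimal sketch of the elevator-door rules described in the model-based design passage above, written as an explicit state machine. The three states come from the article; the code itself is an illustrative stand-in, not the SCADE or TLA+ tooling the article discusses:

```python
# Elevator-door logic from the model-based-design example: three states
# ("door open", "moving", "door closed") and explicit rules for moving between them.

# Allowed transitions: from each (state, event) pair, where you end up.
TRANSITIONS = {
    ("door open", "close_door"): "door closed",
    ("door closed", "open_door"): "door open",
    ("door closed", "start"): "moving",   # the only way to get moving is with the door closed
    ("moving", "stop"): "door closed",    # the only way to open the door is to stop first
}

def step(state: str, event: str) -> str:
    """Return the next state, or raise if the event is not allowed in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} is not allowed in state {state!r}")

# The rules, not the plumbing, are what you inspect: there is no
# ("moving", "open_door") entry, so the door can never open while the elevator moves.
state = "door open"
for event in ["close_door", "start", "stop", "open_door"]:
    state = step(state, event)
    print(event, "->", state)
```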
katedriscoll

Is the Schrödinger Equation True? - Scientific American - 0 views

  • …shaped abstractions called vectors. Pondering Hilbert space makes me feel like a lump of dumb, decrepit flesh trapped in a squalid, 3-D prison. Far from exploring Hilbert space, I can’t even find a window through which to peer into it. I envision it as an immaterial paradise where luminescent cognoscenti glide to and fro, telepathically swapping witticisms about adjoint operators.
  • Reality, great sages have assured us, is essentially mathematical. Plato held that we and other things of this world are mere shadows of the sublime geometric forms that constitute reality. Galileo declared that “the great book of nature is written in mathematics.” We’re part of nature, aren’t we? So why does mathematics, once we get past natural numbers and basic arithmetic, feel so alien to most of us?
  • Physicists’ theories work. They predict the arc of planets and the flutter of electrons, and they have spawned smartphones, H-bombs and—well, what more do we need? But scientists, and especially physicists, aren’t just seeking practical advances. They’re after Truth. They want to believe that their theories are correct—exclusively correct—representations of nature. Physicists share this craving with religious folk, who need to believe that their path to salvation is the One True Path.
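For reference, the equation the essay is asking about is the time-dependent Schrödinger equation. This is its standard form, not a quotation from the piece:

```latex
% The state vector |psi(t)> lives in Hilbert space and evolves under the
% Hamiltonian operator H; hbar is the reduced Planck constant.
i\hbar \, \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle
```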
caelengrubb

Copernicus, the Revolutionary who Feared Changing the World | OpenMind - 0 views

  • The sages had placed the Earth at the centre of the universe for nearly two thousand years until Copernicus arrived on the scene and let it spin like a top around the Sun, as we know it today.
  • Nicholas Copernicus (1473-1543) was not the first to explain that everything revolves around the Sun, but he did it so thoroughly, in that book, that he initiated a scientific revolution against the universal order established by the greatest scholar ever known, the Greek philosopher Aristotle.
  • Aristotle said in the fourth century BC that a mystical force moved the Sun and the planets in perfect circles around the Earth. Although this was much to the taste of the Church, in order to fit this idea with the strange movements of the planets seen in the sky, astronomers had to resort to the mathematical juggling that another Greek, Ptolemy, invented in the second century AD.
  • Thus Copernicus started to look for something simpler, almost at the same time that Michelangelo undertook another great project, that of decorating the ceiling of the Sistine Chapel.
  • Copernicus had a full Renaissance résumé: studies in medicine, art, mathematics, canon law and philosophy; experience as an economist and a diplomat; and also a good position as an ecclesiastical official.
  • By 1514, he had already written a sketch of his theory, although he did not publish it for fear of being condemned as a heretic and also because he was a perfectionist. He spent 15 more years repeating his calculations, diagrams and observations with the naked eye, prior to the invention of the telescope.
  • Copernicus was the first to recite them in order: Mercury, Venus, Earth, Mars, Jupiter and Saturn, the 6 planets that were then known.
  • When Copernicus finally decided to publish his theory, the book’s publisher softened it in the prologue: he said that there were “only easier mathematics” for predicting the movements of the planets, and not a whole new way of looking at the reality of the universe. But this was understood as a challenge to Aristotle, to the Church, and to common sense.
  • It would be 150 years before the Copernican revolution triumphed, and the world finally admitted that the Earth was just one more spinning top.
mshilling1

Isaac Newton's Influence on Modern Science - 0 views

  • Aristotelian thought had dominated mathematics and astronomy for centuries, until revolutionaries like Nicolaus Copernicus and Galileo Galilei challenged those views.
  • The mathematization of physics was a crucial step in the advancement of science. It was realized that the mathematical tools we had at the time weren’t strong enough.
  • By trial and error, Kepler worked and worked until finally, he hit upon the shape that worked—elliptical orbits with the Sun at one focus. It turned out to perfectly fit the known observations.
  • They were stunning results, but no one knew why they would be true. Aristotle’s circular orbits had a philosophical basis—the perfection of the aether from which everything out there was made.
  • The basic concepts which ordered the universe and the picture of reality they gave rise to had become wobbly, but had not fallen.
  • So, the first law describes the behavior of an object subjected to no external force. The second law then describes the behavior of an object that is subjected to an external force.
  • The bigger the push, the more the change; the heavier the object, the less the change. An object is either subject to a force or it isn’t, so the first two laws are sufficient to describe the behavior of the object.
  • Again, if a person is on ice skates and someone pushes them, they accelerate forward because of the force and the other person goes backwards because of it. To every action there is always an equal, but opposite reaction.
  • When these three laws of mechanics and the law of universal gravitation are used together, we suddenly have an explanation for Kepler’s elliptical orbits. Not only that, we can explain the tides, the motion of cannonballs, virtually everything we see in the world around us. (The formulas are sketched after this list.)
  • When these three laws of mechanics and the law of universal gravitation are used together, it was not only successful in terms of explaining and predicting, but, theoretically, it also undermined the old foundation—Aristotle.
  • Newton’s law of universal gravitation is universal. It applies to everything equally. Aristotle’s worldview was enforced by the centralized power of the Catholic Church. Newton’s worldview came not from authority, but from observing, something anyone could do.
  • And so Newton’s success supercharged an intellectual movement developing around him, the Enlightenment. The picture of reality that emerged from the Enlightenment is one in which the universe is well-ordered according to principles that are accessible to the human mind.
  • We live in a world that we can understand. Humans are perfectly rational beings, made to understand the world we inhabit.
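The verbal descriptions above correspond to the standard formulas. These are the textbook statements, not quotations from the article:

```latex
% Second law: the bigger the push (F), the more the change in motion (a);
% the heavier the object (m), the less the change.
F = m\,a
% Third law: to every action there is an equal and opposite reaction.
F_{AB} = -F_{BA}
% Universal gravitation: one law for apples and planets alike; combined with
% the laws of motion it yields Kepler's elliptical orbits.
F = G\,\frac{m_1 m_2}{r^2}
```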
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on t
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • it is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an "iron rule" explain how they've changed the world anyway? By Joshua Rothman, September 28, 2020
manhefnawi

Opinion | Does Math Make You Smarter? - The New York Times - 0 views

  • Various studies point to the conclusion that subjecting the mind to formal discipline — as when studying geometry or Latin — does not, in general, engender a broad transfer of learning. There is no sweeping increase of a general capacity for tasks like writing a speech or balancing a checkbook.
  • Many reasons have been advanced for this poor showing, including the lack of relevance of such an abstract exercise to people’s daily lives.
  • Most people reflexively eliminate the cards not explicitly specified in the rule (the F and the 2) and then continue with slower, more analytic processing only for the E and the 5. In this, they rely on an initial snap judgment about superficial similarity, a tendency that some scholars speculate evolved in humans because in most real-world contexts, quickly detecting such similarities is a good strategy for survival.
  • ...1 more annotation...
  • I propose we start to teach the Wason selection task in mathematics courses at the high-school level and higher. The puzzle captures so much that is essential to mathematics: the nuts and bolts of inference, the difficulty of absorbing abstract concepts when removed from the context of real-world experience, the importance of a slow, deliberative cogitative process and the pitfalls of instant intuitive judgments.
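
The excerpt above does not quote the rule printed on the cards, so the sketch below assumes the standard version that fits the four cards named: "if a card has an E on one side, then it has a 5 on the other." The function name and the card encoding are illustrative choices, not anything from the article; the only point is to make the falsification logic explicit.

    # Hypothetical rule (assumed, not quoted in the excerpt): "if a card has an E
    # on one side, then it has a 5 on the other." A card is worth flipping only
    # if what is hidden on its back could falsify that rule.
    def must_flip(visible: str) -> bool:
        """Return True if turning this card over could reveal a counterexample."""
        if visible == "E":
            return True        # its back must show a 5; anything else breaks the rule
        if visible.isdigit() and visible != "5":
            return True        # a hidden E behind a non-5 number breaks the rule
        return False           # F and 5 can never produce a counterexample

    print([card for card in ["E", "F", "2", "5"] if must_flip(card)])
    # -> ['E', '2'], not the intuitively tempting E and 5

The logically necessary checks turn out to be the named letter and the unmentioned number, which is exactly the answer the snap "superficial similarity" judgment described above tends to miss.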
krystalxu

God, Mathematics and Psychology | Psychology Today - 0 views

  • This is an Epicurean (341–270 BC) belief that the gods are too busy to deal with the day-to-day running of the universe but they set it in motion using mathematics.
  • The same is true for language, art, music and other “Third World” constructs—these are incrementally evolving systems and form one of Karl Popper’s ontological tools (Carr, 1977).
krystalxu

What is Mathematical Psychology? - 0 views

  • Another similar type of psychology, psychometrics, differs in that it measures the behavior of populations while mathematics paired with psychology in this arena looks at the behavior of the “average individual.”
Javier E

The Faulty Logic of the 'Math Wars' - NYTimes.com - 0 views

  • The American philosopher Wilfrid Sellars was challenging this assumption when he spoke of “material inferences.” Sellars was interested in inferences that we can only recognize as valid if we possess certain bits of factual knowledge.
  • That the use of standard algorithms isn’t merely mechanical is not by itself a reason to teach them. It is important to teach them because, as we already noted, they are also the most elegant and powerful methods for specific operations. This means that they are our best representations of connections among mathematical concepts. Math instruction that does not teach both that these algorithms work and why they do is denying students insight into the very discipline it is supposed to be about. [A sketch of one such algorithm, column addition, follows this list.]
  • according to Wittgenstein, is why it is wrong to understand algorithm-based calculations as expressions of nothing more than “mental mechanisms.” Far from being genuinely mechanical, such calculations involve a distinctive kind of thought.
  • ...3 more annotations...
  • If we make room for such material inferences, we will be inclined to reject the view that individuals can reason well without any substantial knowledge of, say, the natural world and human affairs. We will also be inclined to regard the specifically factual content of subjects such as biology and history as integral to a progressive education.
  • There is a moral here for progressive education that reaches beyond the case of math. Even if we sympathize with progressivists in wanting schools to foster independence of mind, we shouldn’t assume that it is obvious how best to do this. Original thought ranges over many different domains, and it imposes divergent demands as it does so. Just as there is good reason to believe that in biology and history such thought requires significant factual knowledge, there is good reason to believe that in mathematics it requires understanding of and facility with the standard algorithms.
  • there is also good reason to believe that when we examine further areas of discourse we will come across yet further complexities. The upshot is that it would be naïve to assume that we can somehow promote original thinking in specific areas simply by calling for subject-related creative reasoning
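
As a concrete illustration of what "standard algorithm" means here, the sketch below implements grade-school column addition. The choice of addition is mine (the excerpt names no particular operation), but the carry step shows the kind of conceptual content the authors have in mind: every carry is place value made explicit.

    # Grade-school column addition, right to left with carries. Each carry is a
    # place-value fact (ten ones become one ten), which is the sort of conceptual
    # connection the excerpt argues the standard algorithms encode.
    def column_add(a: str, b: str) -> str:
        """Add two non-negative integers given as digit strings."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        digits, carry = [], 0
        for da, db in zip(reversed(a), reversed(b)):
            total = int(da) + int(db) + carry
            digits.append(str(total % 10))   # digit written in this column
            carry = total // 10              # amount carried to the next place
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(column_add("478", "356"))  # -> '834'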
Javier E

A Modest Proposal for More Back-Stabbing in Preschool - NYTimes.com - 0 views

  • I am a deluded throwback to carefree days, and in my attempt to raise a conscious, creative and socially and environmentally responsible child while lacking the means to also finance her conscious, creative and environmentally and socially responsible lifestyle forever, I’d accidentally gone and raised a hothouse serf. Oops.
  • Reich’s thesis is that some inequality is inevitable, even necessary, in a free-market system. But what makes an economy stable and prosperous is a strong, vibrant, growing middle class. In the three decades after World War II, a period that Reich calls “the great prosperity,” the G.I. Bill, the expansion of public universities and the rise of labor unions helped create the biggest, best-educated middle class in the world. Reich describes this as an example of a “virtuous circle” in which productivity grows, wages increase, workers buy more, companies hire more, tax revenues increase, government invests more, workers are better educated. On the flip side, when the middle class doesn’t share in the economic gains, it results over time in a downward vicious cycle: Wages stagnate, workers buy less, companies downsize, tax revenues decrease, government cuts programs, workers are less educated, unemployment rises, deficits grow. Since the crash that followed the deregulation of the financial markets, we have struggled to emerge from such a cycle.
  • What if the kid got it in her head that it was a good idea to go into public service, the helping professions, craftsmanship, scholarship or — God help her — the arts? Wouldn’t a greedier, more back-stabby style of early education be more valuable to the children of the shrinking middle class ­ — one suited to the world they are actually living in?
  • ...3 more annotations...
  • Are we feeding our children a bunch of dangerous illusions about fairness and hard work and level playing fields? Are ideals a luxury only the rich can afford?
  • I’m reminded of the quote by John Adams: “I must study politics and war, that my sons may have the liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history [and] naval architecture . . . in order to give their children a right to study painting, poetry, music, architecture, tapestry and porcelain.” For all intents and purposes, I guess I studied porcelain. The funny thing is that my parents came from a country (Peru) with a middle class so small that parents had to study business so that their children could study business. If I didn’t follow suit, it’s at least in part because I spent my childhood in the 1970s absorbing the nurturing message of a progressive pop culture that told me I could be anything I wanted, because this is America.
  • “When we see the contrast between the values we share and the realities we live in, that is the fundamental foundation for social change.”
Emily Horwitz

Nature Has A Formula That Tells Us When It's Time To Die : Krulwich Wonders... : NPR - 1 views

  • Every living thing is a pulse. We quicken, then we fade. There is a deep beauty in this, but deeper down, inside every plant, every leaf, inside every living thing (us included) sits a secret.
  • Everything alive will eventually die, we know that, but now we can read the pattern and see death coming. We have recently learned its logic, which "You can put into mathematics," says physicist Geoffrey West. It shows up with "extraordinary regularity," not just in plants, but in all animals, from slugs to giraffes. Death, it seems, is intimately related to size.
  • Life is short for small creatures, longer in big ones.
  • ...5 more annotations...
  • A 2007 paper checked 700 different kinds of plants, and almost every time they applied the formula, it correctly predicted lifespan. "This is universal. It cuts across the design of organisms," West says. "It applies to me, all mammals, and the trees sitting out there, even though we're completely different designs."
  • The formula is a simple quarter-power exercise: You take the mass of a plant or an animal, and its metabolic rate is equal to its mass taken to the three-fourths power. [A numeric sketch of this scaling follows this list.]
  • It's hard to believe that creatures as different as jellyfish and cheetahs, daisies and bats, are governed by the same mathematical logic, but size seems to predict lifespan.
  • It tells animals for example, that there's a universal limit to life, that though they come in different sizes, they have roughly a billion and a half heart beats; elephant hearts beat slowly, hummingbird hearts beat fast, but when your count is up, you are over.
  • In any big creature, animal or plant, there are so many more pathways, moving parts, so much more work to do, the big guys could wear out very quickly. So Geoffrey West and his colleagues found that nature gives larger creatures a gift: more efficient cells. Literally.
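
A rough numeric sketch of the scaling described above. Only the three-fourths exponent for metabolic rate comes from the excerpt; the quarter-power exponents for lifespan and heart rate are the standard allometric companions, and the units are arbitrary, so treat the numbers as illustrative rather than as values from West's work.

    # Quarter-power scaling (arbitrary units; exponents only, no fitted constants).
    def metabolic_rate(mass_kg: float) -> float:
        """Metabolic rate grows as mass to the 3/4 power (the excerpt's formula)."""
        return mass_kg ** 0.75

    def relative_lifespan(mass_kg: float) -> float:
        """Lifespan scales roughly as mass to the +1/4 power (assumed companion law)."""
        return mass_kg ** 0.25

    def relative_heart_rate(mass_kg: float) -> float:
        """Heart rate scales roughly as mass to the -1/4 power (assumed companion law)."""
        return mass_kg ** -0.25

    # Lifespan times heart rate is then independent of mass, which is one way to
    # read the "roughly a billion and a half heartbeats" claim quoted above.
    for animal, mass in [("hummingbird", 0.004), ("human", 70.0), ("elephant", 5000.0)]:
        beats = relative_lifespan(mass) * relative_heart_rate(mass)
        print(animal, round(metabolic_rate(mass), 2), round(beats, 2))

The product in the loop comes out to 1.0 for every mass, which is the point: size changes the pace of life and its length in compensating directions.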
Lindsay Lyon

Largest Prime Discovered | Mathematics | LiveScience - 0 views

  • The largest prime number yet has been discovered — and it's 17,425,170 digits long. The new prime number crushes the last one discovered in 2008, which was a paltry 12,978,189 digits long.
  • The number — 2 raised to the 57,885,161 power minus 1 — was discovered by University of Central Missouri mathematician Curtis Cooper as part of a giant network of volunteer computers
  • "It's analogous to climbing Mt. Everest," said George Woltman, the retired, Orlando, Fla.-based computer scientist who created GIMPS. "People enjoy it for the challenge of the discovery of finding something that's never been known before."
  • ...4 more annotations...
  • the number is the 48th example of a rare class of primes called Mersenne Primes. Mersenne primes take the form of 2 raised to the power of a prime number minus 1. Since they were first described by French monk Marin Mersenne 350 years ago, only 48 of these elusive numbers have been found, including the most recent discovery. [The Most Massive Numbers in the Universe]
  • mathematicians have devised a much cleverer strategy that dramatically reduces the time it takes to find primes. That method uses a formula to check far fewer numbers.
  •  
    An interesting article that reminded me of the discussions we had last year on whether math was invented or discovered. With regard to prime numbers, could we ever stop finding "new" ones? Can we ever find a formula to pinpoint every single prime number without dividing it by other numbers?
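
On the bookmarker's question about checking primes without dividing: for Mersenne candidates specifically there is such a method, the Lucas-Lehmer test, the classic check that projects like GIMPS build on. The test is not described in the excerpts above, so the sketch below is background knowledge rather than something from the article.

    # Lucas-Lehmer test: 2**p - 1 is prime (for an odd prime p) exactly when the
    # sequence s_0 = 4, s_{k+1} = s_k**2 - 2 (mod 2**p - 1) hits 0 after p - 2 steps.
    def is_mersenne_prime(p: int) -> bool:
        """Return True if 2**p - 1 is prime, for a prime exponent p > 2."""
        m = (1 << p) - 1          # the Mersenne number 2^p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    print([p for p in (3, 5, 7, 11, 13, 17, 19) if is_mersenne_prime(p)])
    # -> [3, 5, 7, 13, 17, 19]; 11 is prime, but 2**11 - 1 = 2047 = 23 * 89 is not

No trial division by smaller primes is involved, which is why exponents in the tens of millions are within reach of volunteer hardware; it does not, however, pinpoint every prime, only Mersenne candidates.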
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer: ...in engineering?] Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, its got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just like as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology in organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. [Interviewer:] Well, they constrain the biology, sure. Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.