
TOK Friends: Group items tagged "physicist"


qkirkpatrick

Testing the Limits of Einstein's General Theory of Relativity - 0 views

  • A century ago this year, a young Swiss physicist, who had already revolutionized physics with discoveries about the relationship between space and time, developed a radical new understanding of gravity.
  • He came up with a set of equations that relate the curvature of space-time to the energy and momentum of the matter and radiation that are present in a particular region.
  • Today, 100 years later, Einstein's theory of gravitation remains a pillar of modern understanding, and has withstood all the tests that scientists could throw at it
  • General relativity describes gravity not as a force, as the physicist Isaac Newton thought of it, but rather as a curvature of space and time due to the mass of objects
  • The reason Earth orbits the sun is not because the sun attracts Earth, but instead because the sun warps space-time, he said
Javier E

Philosophy isn't dead yet | Raymond Tallis | Comment is free | The Guardian - 1 views

  • Fundamental physics is in a metaphysical mess and needs help. The attempt to reconcile its two big theories, general relativity and quantum mechanics, has stalled for nearly 40 years. Endeavours to unite them, such as string theory, are mathematically ingenious but incomprehensible even to many who work with them. This is well known.
  • A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent, a problem that can be remedied only by vastly multiplying worlds.
  • there is the failure of physics to accommodate conscious beings. The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).
  • then there is the mishandling of time. The physicist Lee Smolin's recent book, Time Reborn, links the crisis in physics with its failure to acknowledge the fundamental reality of time. Physics is predisposed to lose time because its mathematical gaze freezes change. Tensed time, the difference between a remembered or regretted past and an anticipated or feared future, is particularly elusive. This worried Einstein: in a famous conversation, he mourned the fact that the present tense, "now", lay "just outside of the realm of science".
  • Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication. They demonstrate the urgent need for a radical re-examination of the invisible frameworks within which scientific investigations are conducted.
  • we should reflect on how a scientific image of the world that relies on up to 10 dimensions of space and rests on ideas, such as fundamental particles, that have neither identity nor location, connects with our everyday experience. This should open up larger questions, such as the extent to which mathematical portraits capture the reality of our world – and what we mean by "reality".
knudsenlu

String Theory: The Best Explanation for Everything in the Universe - The Atlantic - 0 views

  • Somehow, space-time curvature emerges as the collective effect of quantized units of gravitational energy—particles known as gravitons. But naïve attempts to calculate how gravitons interact result in nonsensical infinities, indicating the need for a deeper understanding of gravity.
  • String theory (or, more technically, M-theory) is often described as the leading candidate for the theory of everything in our universe. But there’s no empirical evidence for it, or for any alternative ideas about how gravity might unify with the rest of the fundamental forces. Why, then, is string/M-theory given the edge over the others?
  • For such imaginary worlds, physicists can describe processes at all energies, including, in principle, black-hole formation and evaporation. The 16,000 papers that have cited Maldacena’s paper over the past 20 years mostly aim at carrying out these calculations in order to gain a better understanding of AdS/CFT and quantum gravity.
  • This basic sequence of events has led most experts to consider M-theory the leading TOE candidate, even as its exact definition in a universe like ours remains unknown.
  • One philosopher has even argued that string theory’s status as the only known consistent theory counts as evidence that the theory is correct.
sanderk

Book Review: Lee Smolin's 'Time Reborn' : 13.7: Cosmos And Culture : NPR - 0 views

  • Time, of course, seems real to us. We live in and through time. But to physicists, time's fundamental reality is an illusion.
  • Ever since Newton, physicists have been developing ever-more exact laws describing the behavior of the world. These laws live outside of time because they don't change. That means these laws are more real than time.
  • The idea of timeless laws works fine when it's applied to parts of the Universe, like jet planes and GPS satellites, but Smolin argues, "it falls apart when applied to the Universe as whole."
  • Making time so real that nothing can escape it leads Smolin to what we might call his greatest heresy. The laws of physics, he says, evolve just like species in an ecosystem.
  • The laws must live within time like everything else and that means they must change.
Javier E

Nobel Prize in Physics Is Awarded to 3 Scientists for Work Exploring Quantum Weirdness ... - 0 views

  • “We’re used to thinking that information about an object — say that a glass is half full — is somehow contained within the object.” Instead, he says, entanglement means objects “only exist in relation to other objects, and moreover these relationships are encoded in a wave function that stands outside the tangible physical universe.”
  • Einstein, though one of the founders of quantum theory, rejected it, famously saying that God does not play dice with the universe. In a 1935 paper written with Boris Podolsky and Nathan Rosen, he tried to show that quantum mechanics was an incomplete theory by pointing out that, by quantum rules, measuring one particle of an entangled pair could instantly affect measurements of the other particle, even if it was millions of miles away.
  • Dr. Clauser, who has a knack for electronics and experimentation and misgivings about quantum theory, was the first to perform Bell’s proposed experiment. He happened upon Dr. Bell’s paper while a graduate student at Columbia University and recognized it as something he could do.
  • In 1972, using duct tape and spare parts in a basement on the campus of the University of California, Berkeley, Dr. Clauser and a graduate student, Stuart Freedman, who died in 2012, endeavored to perform Bell’s experiment to measure quantum entanglement. In a series of experiments, they fired thousands of light particles, or photons, in opposite directions to measure a property known as polarization, which could have only two values — up or down. The result at each detector was always a series of seemingly random ups and downs. But when the two detectors’ results were compared, the ups and downs matched in ways that neither “classical physics” nor Einstein’s laws could explain. Something weird was afoot in the universe. Entanglement seemed to be real.
  • in 2002, Dr. Clauser admitted that he himself had expected quantum mechanics to be wrong and Einstein to be right. “Obviously, we got the ‘wrong’ result. I had no choice but to report what we saw, you know, ‘Here’s the result.’ But it contradicts what I believed in my gut has to be true.” He added, “I hoped we would overthrow quantum mechanics. Everyone else thought, ‘John, you’re totally nuts.’”
  • the correlations only showed up after the measurements of the individual particles, when the physicists compared their results after the fact. Entanglement seemed real, but it could not be used to communicate information faster than the speed of light.
  • In 1982, Dr. Aspect and his team at the University of Paris tried to outfox Dr. Clauser’s loophole by switching the direction along which the photons’ polarizations were measured every 10 nanoseconds, while the photons were in flight, too quickly for any signal to pass between the detectors. He, too, was expecting Einstein to be right.
  • Quantum predictions held true, but there were still more possible loopholes in the Bell experiment that Dr. Clauser had identified
  • For example, the polarization directions in Dr. Aspect’s experiment had been changed in a regular and thus theoretically predictable fashion that could be sensed by the photons or detectors.
  • Anton Zeilinger added even more randomness to the Bell experiment, using random number generators to change the direction of the polarization measurements while the entangled particles were in flight.
  • Once again, quantum mechanics beat Einstein by an overwhelming margin, closing the “locality” loophole.
  • as scientists have done more experiments with entangled particles, entanglement has come to be accepted as one of the main features of quantum mechanics and is being put to work in cryptology, quantum computing and an upcoming “quantum internet.”
  • One of its first successes in cryptology is the sending of messages using entangled pairs, which can transmit cryptographic keys in a secure manner — any eavesdropping will destroy the entanglement, alerting the receiver that something is wrong.
  • with quantum mechanics, just because we can use it doesn’t mean our ape brains understand it. The pioneering quantum physicist Niels Bohr once said that anyone who didn’t think quantum mechanics was outrageous hadn’t understood what was being said.
  • In his interview with A.I.P., Dr. Clauser said, “I confess even to this day that I still don’t understand quantum mechanics, and I’m not even sure I really know how to use it all that well. And a lot of this has to do with the fact that I still don’t understand it.”
Javier E

Quantum Computing Advance Begins New Era, IBM Says - The New York Times - 0 views

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously.
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation. “We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.” In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation (a toy sketch of this extrapolation appears after this list).
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate.”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that it could match or surpass the performance of the quantum calculations.
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
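
  A minimal sketch of the noise-extrapolation idea Kandala describes above, assuming invented numbers: the same circuit is "run" at several deliberately amplified noise levels, and a low-order fit through those results is evaluated at zero noise. The noise factors and measured values below are illustrative placeholders, not IBM's data, and real error mitigation uses far more sophisticated noise amplification and averaging.

      # Toy zero-noise extrapolation ("error mitigation") sketch.
      # Noise factors and measured values are invented for illustration;
      # IBM's procedure runs on hardware with more sophisticated methods.
      import numpy as np

      noise_factors = np.array([1.0, 1.5, 2.0, 2.5])  # 1.0 = the circuit's native noise
      measured = np.array([0.42, 0.36, 0.31, 0.27])   # hypothetical noisy results

      # Fit result vs. noise level, then evaluate the fit at zero noise.
      coeffs = np.polyfit(noise_factors, measured, deg=2)
      zero_noise_estimate = np.polyval(coeffs, 0.0)
      print(f"Extrapolated zero-noise result: {zero_noise_estimate:.3f}")

  In practice each measured point would itself be an average over many repeated runs, which is why the experiment described above executed its circuits hundreds of thousands of times.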
Emily Horwitz

Nature Has A Formula That Tells Us When It's Time To Die : Krulwich Wonders... : NPR - 1 views

  • Every living thing is a pulse. We quicken, then we fade. There is a deep beauty in this, but deeper down, inside every plant, every leaf, inside every living thing (us included) sits a secret.
  • Everything alive will eventually die, we know that, but now we can read the pattern and see death coming. We have recently learned its logic, which "You can put into mathematics," says physicist Geoffrey West. It shows up with "extraordinary regularity," not just in plants, but in all animals, from slugs to giraffes. Death, it seems, is intimately related to size.
  • Life is short for small creatures, longer in big ones.
  • A 2007 paper checked 700 different kinds of plants, and almost every time they applied the formula, it correctly predicted lifespan. "This is universal. It cuts across the design of organisms," West says. "It applies to me, all mammals, and the trees sitting out there, even though we're completely different designs."
  • The formula is a simple quarter-power exercise: you take the mass of a plant or an animal, and its metabolic rate is equal to its mass raised to the three-fourths power (a toy sketch of this scaling appears after this list).
  • It's hard to believe that creatures as different as jellyfish and cheetahs, daisies and bats, are governed by the same mathematical logic, but size seems to predict lifespan.
  • It tells animals, for example, that there is a universal limit to life: though they come in different sizes, they get roughly a billion and a half heartbeats; elephant hearts beat slowly, hummingbird hearts beat fast, but when your count is up, you are done.
  • In any big creature, animal or plant, there are so many more pathways, moving parts, so much more work to do, the big guys could wear out very quickly. So Geoffrey West and his colleagues found that nature gives larger creatures a gift: more efficient cells. Literally.
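
  A minimal sketch of the quarter-power scaling described above, under assumed prefactors (only the exponents carry the idea; the constants are placeholders, not West's fitted values). Metabolic rate scales as mass to the 3/4 power, heart rate roughly as mass to the -1/4 power, and lifespan roughly as mass to the +1/4 power, so the mass dependence cancels and every animal gets roughly the same lifetime count of heartbeats, near the billion and a half quoted above.

      # Quarter-power scaling sketch; prefactors are illustrative placeholders.
      def metabolic_rate(mass_kg, k=3.4):
          # Kleiber-style law: metabolic rate ~ mass^(3/4)
          return k * mass_kg ** 0.75

      def heart_rate_per_min(mass_kg, a=240.0):
          # Heart rate ~ mass^(-1/4): bigger animals, slower hearts
          return a * mass_kg ** -0.25

      def lifespan_years(mass_kg, b=12.0):
          # Lifespan ~ mass^(1/4): bigger animals live longer
          return b * mass_kg ** 0.25

      for name, mass in [("hummingbird", 0.004), ("human", 70.0), ("elephant", 5000.0)]:
          beats = heart_rate_per_min(mass) * 60 * 24 * 365 * lifespan_years(mass)
          print(f"{name:>11}: metabolic rate ~ {metabolic_rate(mass):.3g} (arb. units), "
                f"heart ~ {heart_rate_per_min(mass):.0f} bpm, "
                f"lifespan ~ {lifespan_years(mass):.1f} yr (toy), "
                f"lifetime beats ~ {beats:.2e}")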
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel (a toy simulation of this pattern appears after this list).
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies.”
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
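
  A minimal simulation of the funnel-plot logic Palmer relies on above, using assumed numbers (the true effect size, noise level and study counts are invented for illustration). Small studies scatter widely around the true effect while large studies cluster near it; if small studies are only "reported" when they happen to find a clearly positive result, the average of the reported small studies is inflated, which is the skew Palmer describes.

      # Toy funnel-plot / selective-reporting simulation (all numbers assumed).
      import numpy as np

      rng = np.random.default_rng(0)
      true_effect = 0.2          # the common value large studies should cluster around
      sigma = 1.0                # per-observation noise
      sample_sizes = rng.integers(10, 500, size=2000)

      # Each study's estimate has standard error sigma / sqrt(n): small studies
      # scatter widely, large studies cluster near the true effect (the funnel).
      estimates = true_effect + rng.normal(0.0, sigma / np.sqrt(sample_sizes))

      def summarize(label, est, n):
          print(f"{label}: small-study mean (n<50) = {est[n < 50].mean():+.3f}, "
                f"large-study mean (n>=200) = {est[n >= 200].mean():+.3f}")

      summarize("All studies     ", estimates, sample_sizes)

      # Selective reporting: small studies only get "reported" when clearly positive,
      # so the reported small-study mean is biased upward.
      reported = (sample_sizes >= 50) | (estimates > 0.3)
      summarize("Reported studies", estimates[reported], sample_sizes[reported])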
Javier E

Why Our Children Don't Think There Are Moral Facts - NYTimes.com - 1 views

  • I already knew that many college-aged students don’t believe in moral facts.
  • the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.
  • where is the view coming from?
  • the Common Core standards used by a majority of K-12 programs in the country require that students be able to “distinguish among fact, opinion, and reasoned judgment in a text.”
  • So what’s wrong with this distinction and how does it undermine the view that there are objective moral facts?
  • For example, many people once thought that the earth was flat. It’s a mistake to confuse truth (a feature of the world) with proof (a feature of our mental lives)
  • Furthermore, if proof is required for facts, then facts become person-relative. Something might be a fact for me if I can prove it but not a fact for you if you can’t. In that case, E=MC2 is a fact for a physicist but not for me.
  • worse, students are taught that claims are either facts or opinions. They are given quizzes in which they must sort claims into one camp or the other but not both. But if a fact is something that is true and an opinion is something that is believed, then many claims will obviously be both
  • How does the dichotomy between fact and opinion relate to morality
  • Kids are asked to sort facts from opinions and, without fail, every value claim is labeled as an opinion.
  • Here’s a little test devised from questions available on fact vs. opinion worksheets online: are the following facts or opinions? — Copying homework assignments is wrong. — Cursing in school is inappropriate behavior. — All men are created equal. — It is worth sacrificing some personal liberties to protect our country from terrorism. — It is wrong for people under the age of 21 to drink alcohol. — Vegetarians are healthier than people who eat meat. — Drug dealers belong in prison.
  • The answer? In each case, the worksheets categorize these claims as opinions. The explanation on offer is that each of these claims is a value claim and value claims are not facts. This is repeated ad nauseam: any claim with good, right, wrong, etc. is not a fact.
  • In summary, our public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths.
  • It should not be a surprise that there is rampant cheating on college campuses: If we’ve taught our students for 12 years that there is no fact of the matter as to whether cheating is wrong, we can’t very well blame them for doing so later on.
  • If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?
  • the curriculum sets our children up for doublethink. They are told that there are no moral facts in one breath even as the next tells them how they ought to behave.
  • Our children deserve a consistent intellectual foundation. Facts are things that are true. Opinions are things we believe. Some of our beliefs are true. Others are not. Some of our beliefs are backed by evidence. Others are not.
  • Value claims are like any other claims: either true or false, evidenced or not.
  • The hard work lies not in recognizing that at least some moral claims are true but in carefully thinking through our evidence for which of the many competing moral claims is correct.
  • Moral truths are not the same as scientific truths or mathematical truths. Yet they may still be used as a guiding principle for our individual lives as well as our laws. But there is equal danger in giving moral judgments the designation of truth as there is in not doing so. Many people believe that abortion is murder on the same level as shooting someone with a gun. But many others do not. So is it true that abortion is murder? Moral principles can become generally accepted and then form the basis for our laws. But many long accepted moral principles were later rejected as being faulty. "Separate but equal" is an example. Judging homosexual relationships as immoral is another example.
  • Whoa! That Einstein derived an equation is a fact. But the equation represents a theory that may have to be tweaked at some point in the future. It may be a fact that the equation foretold the violence of atomic explosions, but there are aspects of nature that elude the equation. Remember "the theory of everything?"
  • Here is a moral fact: this is a sermon masquerading as a philosophical debate on facts, opinions and truth. This professor of religion is asserting that the government, via the Common Core, is teaching atheism through the opinion-vs.-fact distinction. He is arguing, in a dishonest form, that public schools should be teaching moral facts. Of course, "moral facts" is code for the Ten Commandments.
  • As a fourth grade teacher, I try to teach students to read critically, including distinguishing between facts and opinions as they read (and have been doing this long before the Common Core arrived, by the way). It's not always easy for children to grasp the difference. I can only imagine the confusion that would ensue if I introduced a third category -- moral "facts" that can't be proven but are true nonetheless!
  • horrible acts occur not because of moral uncertainty, but because people are too sure that their views on morality are 100% true, and that anyone who fails to recognize and submit to them is a heathen who deserves death. I can't think of any case where a society has suffered because people are too thoughtful and open-minded about different perspectives on moral truth. In any case, it's not an elementary school's job to teach "moral truths."
  • The characterization of moral anti-realism as some sort of fringe view in philosophy is misleading. Claims that can be true or false are, it seems, 'made true' by features of the world. It's not clear to many in philosophy (like me) just what features of the world could make our moral claims true. We are more likely to see people's value claims as making claims about, and enforcing conformity to, our own (contingent) social norms. This is not to hold, as Mr. McBrayer seems to think follows, that there are no reasons to endorse or criticize these social norms.
  • This is nonsense. Giving kids the tools to distinguish between fact and opinion is hard enough in an age when Republicans actively deny reality on Fox News every night. The last thing we need is to muddy their thinking with the concept of "moral facts."A fact is a belief that everyone _should_ agree upon because it is observable and testable. Morals are not agreed upon by all. Consider the hot button issue of abortion.
  • Truthfully, I'm not terribly concerned that third graders will end up taking these lessons in the definition of fact versus opinion to the extremes considered here, or take them as a license to cheat. That will come much later, when they figure out, as people always have, what they can get away with. But Prof. McBrayer, with his blithe expectation that all the grownups know that there are moral "facts"? He scares the heck out of me.
  • I've long chafed at the language of "fact" v. "opinion", which is grounded in a very particular, limited view of human cognition. In my own ethics courses, I work actively to undermine the distinction, focusing instead on considered judgment . . . or even more narrowly, on consideration itself. (See http://wp.me/p5Ag0i-6M )
  • The real waffle here is the very concept of "moral facts." Our statements of values, even very important ones are, obviously, not facts. Trying to dress them up as if they are facts, to me, argues for a pretty serious moral weakness on the part of those advancing the idea.
  • Our core values are not important because they are facts. They are important because we collectively hold them and cherish them. To lean on the false crutch of "moral facts" is to admit the weakness of your own moral convictions.
  • I would like to believe that there is a core of moral facts/values upon which all humanity can agree, but it would be tough to identify exactly what those are.
  • For the ancient philosophers, reality comprised the Good, the True, and the Beautiful (what we might now call ethics, science and art), seeing these as complementary and inseparable, though distinct, realms. With the ascendancy of science in our culture as the only valid measure of reality to the detriment of ethics and art (that is, if it is not observable and provable, it is not real), we have turned the good and the beautiful into mere "social constructs" that have no validity on their own. While I am sympathetic in many ways with Dr. McBrayer's objections, I think he falls into the trap of discounting the Good and the Beautiful as valid in and of themselves, and tries, instead, to find ways to give them validity through the True. I think his argument would have been stronger had he used the language of validity rather than the language of truth. Goodness, Truth and Beauty each have their own validity, though interdependent and inseparable. When we artificially extract one of these and give it primacy, we distort reality and alienate ourselves from it.
  • Professor McBrayer seems to miss the major point of the Common Core concern: can students distinguish between premises based on (reasonably construed) fact and premises based on emotion when evaluating conclusions? I would prefer that students learn to reason rather than be taught moral 'truth' that follows Professor McBrayer's logic.
  • Moral issues cannot scientifically be treated on the level that Prof. McBrayer is attempting to use in this column: true or false, fact or opinion or both. Instead, they should be treated as important characteristics of the systematic working of a society or of a group of people in general. One can compare the working of two groups of people: one in which e.g. cheating and lying is acceptable, and one in which they are not. One can use historical or model examples to show the consequences and the working of specific systems of morals. I think that this method - suitably adjusted - can be used even in second grade.
  • Relativism has nothing to do with liberalism. The second point is that I'm not sure it does all that much harm, because I have yet to encounter a student who thought that he or she had to withhold judgment on those who hold opposing political views!
qkirkpatrick

Why Math Works - Scientific American - 1 views

  • Most of us take it for granted that math works—that scientists can devise formulas to describe subatomic events or that engineers can calculate paths for spacecraft.
  • As a working theoretical astrophysicist, I encounter the seemingly “unreasonable effectiveness of mathematics,” as Nobel laureate physicist Eugene Wigner called it in 1960, in every step of my job.
  • Is math invented or discovered?
Javier E

The Trouble With Neutrinos That Outpaced Einstein's Theory - NYTimes.com - 0 views

  • The British astrophysicist Arthur S. Eddington once wrote, “No experiment should be believed until it has been confirmed by theory.”
  • Adding to the sense of finality was the simple fact — as Eddington might have pointed out — that faster-than-light neutrinos had never been confirmed by theory. Or as John G. Learned, a neutrino physicist at the University of Hawaii, put it in an e-mail, “An interesting result of all this fracas is that no new model I have seen (or heard of from my friends) really is credible to explain the faster-than-light neutrinos.”
  • Eddington’s dictum is not as radical as it might sound. He made it after early measurements of the rate of expansion of the universe made it appear that our planet was older than the cosmos in which it resides — an untenable notion. “It means that science is not just a book of facts, it is understanding as well,”
  • If a “fact” cannot be understood, fitted into a conceptual framework that we have reason to believe in, or confirmed independently some other way, it risks becoming what journalists like to call a “permanent exclusive” — wrong.
Javier E

Which Is Bigger: A Human Brain Or The Universe? : Krulwich Wonders... : NPR - 0 views

  • If a brain can make crazy leaps across the cosmos and bring extra passengers along (like you when you listen to me), then in a metaphorical way, the brain is bigger than what's around it, wrote 19th century poet Emily Dickinson. "The brain is wider than the sky, / For, put them side by side, / The one the other will include / With ease, and you beside."
  • "The universe is not only queerer than we suppose," said the biologist J.B.S. Haldane, "but queerer than we can suppose." In Haldane's view, the universe is bigger than the brain. There are things we just can't know, or even conjure with the brains we've got.
  • There are philosophers and scientists who say we will never understand the universe, we can't fathom the endless details or make good sense of the whole. We can try, but the universe is too big. The writer John Updike once explained the argument this way to reporter Jim Holt: "It's beyond our intellectual limits as a species. Put yourself into the position of a dog. A dog is responsive, shows intuition, looks at us with eyes behind which there is intelligence of a sort, and yet a dog must not understand most of the things it sees people doing. It must have no idea how they invented, say, the internal combustion engine. So maybe what we need to do is imagine that we're dogs and that there are realms that go beyond our understanding."
  • So does the universe get the crown?
  • Carl Sagan thought that we humans are good at finding patterns in nature, and if we know the rules, we can skip the details and understand the outline, the essence. It's not necessary for us to know everything. The problem is we don't know how many rules the cosmos has.
  • Yet the brain has its champions. "Consider the human brain," says physicist Sir Roger Penrose. "If you look at the entire physical cosmos, our brains are a tiny, tiny part of it. But they're the most perfectly organized part. Compared to the complexity of a brain, a galaxy is just an inert lump."
  • my hunch is the universe will still outwit us, will still be "too wonderful" to be decoded, because we are, in the end, so much smaller than it is. And that's not a bad thing. To my mind, it's the search that matters, that sharpens us, gives us something noble to do.
  • Steven Weinberg famously said, "The effort to understand the universe is one of the very few things that lifts human life a little above the level of farce, and gives it some of the grace of tragedy."
Javier E

Big Data Troves Stay Forbidden to Social Scientists - NYTimes.com - 0 views

  • When scientists publish their research, they also make the underlying data available so the results can be verified by other scientists.
  • lately social scientists have come up against an exception that is, true to its name, huge. It is “big data,” the vast sets of information gathered by researchers at companies like Facebook, Google and Microsoft from patterns of cellphone calls, text messages and Internet clicks by millions of users around the world. Companies often refuse to make such information public, sometimes for competitive reasons and sometimes to protect customers’ privacy. But to many scientists, the practice is an invitation to bad science, secrecy and even potential fraud.
  • corporate control of data could give preferential access to an elite group of scientists at the largest corporations.
  • “In the Internet era,” said Andreas Weigend, a physicist and former chief scientist at Amazon, “research has moved out of the universities to the Googles, Amazons and Facebooks of the world.”
  • A recent review found that 44 of 50 leading scientific journals instructed their authors on sharing data but that fewer than 30 percent of the papers they published fully adhered to the instructions. A 2008 review of sharing requirements for genetics data found that 40 of 70 journals surveyed had policies, and that 17 of those were “weak.”
Javier E

The American Scholar: The Decline of the English Department - William M. Chace - 1 views

  • The number of young men and women majoring in English has dropped dramatically; the same is true of philosophy, foreign languages, art history, and kindred fields, including history. As someone who has taught in four university English departments over the last 40 years, I am dismayed by this shift, as are my colleagues here and there across the land. And because it is probably irreversible, it is important to attempt to sort out the reasons—the many reasons—for what has happened.
  • English: from 7.6 percent of the majors to 3.9 percent
  • In one generation, then, the numbers of those majoring in the humanities dropped from a total of 30 percent to a total of less than 16 percent; during that same generation, business majors climbed from 14 percent to 22 percent.
  • History: from 18.5 percent to 10.7 percent
  • But the deeper explanation resides not in something that has happened to it, but in what it has done to itself. English has become less and less coherent as a discipline and, worse, has come near exhaustion as a scholarly pursuit.
  • The twin focus, then, was on the philological nature of the enterprise and the canon of great works to be studied in their historical evolution.
  • Studying English taught us how to write and think better, and to make articulate many of the inchoate impulses and confusions of our post-adolescent minds. We began to see, as we had not before, how such books could shape and refine our thinking. We began to understand why generations of people coming before us had kept them in libraries and bookstores and in classes such as ours. There was, we got to know, a tradition, a historical culture, that had been assembled around these books. Shakespeare had indeed made a difference—to people before us, now to us, and forever to the language of English-speaking people.
  • today there are stunning changes in the student population: there are more and more gifted and enterprising students coming from immigrant backgrounds, students with only slender connections to Western culture and to the assumption that the “great books” of England and the United States should enjoy a fixed centrality in the world. What was once the heart of the matter now seems provincial. Why throw yourself into a study of something not emblematic of the world but representative of a special national interest? As the campus reflects the cultural, racial, and religious complexities of the world around it, reading British and American literature looks more and more marginal. From a global perspective, the books look smaller.
  • With the cost of a college degree surging upward during the last quarter century—tuition itself increasing far beyond any measure of inflation—and with consequent growth in loan debt after graduation, parents have become anxious about the relative earning power of a humanities degree. Their college-age children doubtless share such anxiety. When college costs were lower, anxiety could be kept at bay. (Berkeley in the early ’60s cost me about $100 a year, about $700 in today’s dollars.)
  • Economists, chemists, biologists, psychologists, computer scientists, and almost everyone in the medical sciences win sponsored research, grants, and federal dollars. By and large, humanists don’t, and so they find themselves as direct employees of the institution, consuming money in salaries, pensions, and operating needs—not external money but institutional money.
  • These, then, are some of the external causes of the decline of English: the rise of public education; the relative youth and instability (despite its apparent mature solidity) of English as a discipline; the impact of money; and the pressures upon departments within the modern university to attract financial resources rather than simply use them up.
  • several of my colleagues around the country have called for a return to the aesthetic wellsprings of literature, the rock-solid fact, often neglected, that it can indeed amuse, delight, and educate. They urge the teaching of English, or French, or Russian literature, and the like, in terms of the intrinsic value of the works themselves, in all their range and multiplicity, as well-crafted and appealing artifacts of human wisdom. Second, we should redefine our own standards for granting tenure, placing more emphasis on the classroom and less on published research, and we should prepare to contest our decisions with administrators whose science-based model is not an appropriate means of evaluation.
  • “It may be that what has happened to the profession is not the consequence of social or philosophical changes, but simply the consequence of a tank now empty.” His homely metaphor pointed to the absence of genuinely new frontiers of knowledge and understanding for English professors to explore.
  • In this country and in England, the study of English literature began in the latter part of the 19th century as an exercise in the scientific pursuit of philological research, and those who taught it subscribed to the notion that literature was best understood as a product of language.
  • no one has come forward in years to assert that the study of English (or comparative literature or similar undertakings in other languages) is coherent, does have self-limiting boundaries, and can be described as this but not that.
  • to teach English today is to do, intellectually, what one pleases. No sense of duty remains toward works of English or American literature; amateur sociology or anthropology or philosophy or comic books or studies of trauma among soldiers or survivors of the Holocaust will do. You need not even believe that works of literature have intelligible meaning; you can announce that they bear no relationship at all to the world beyond the text.
  • With everything on the table, and with foundational principles abandoned, everyone is free, in the classroom or in prose, to exercise intellectual laissez-faire in the largest possible way—I won’t interfere with what you do and am happy to see that you will return the favor
  • Consider the English department at Harvard University. It has now agreed to remove its survey of English literature for undergraduates, replacing it and much else with four new “affinity groups”
  • there would be no one book, or family of books, that every English major at Harvard would have read by the time he or she graduates. The direction to which Harvard would lead its students in this “clean slate” or “trickle down” experiment is to suspend literary history, thrusting into the hands of undergraduates the job of cobbling together intellectual coherence for themselves
  • Those who once strove to give order to the curriculum will have learned, from Harvard, that terms like core knowledge and foundational experience only trigger acrimony, turf protection, and faculty mutinies. No one has the stomach anymore to refight the Western culture wars. Let the students find their own way to knowledge.
  • In English, the average number of years spent earning a doctoral degree is almost 11. After passing that milestone, only half of new Ph.D.’s find teaching jobs, the number of new positions having declined over the last year by more than 20 percent; many of those jobs are part-time or come with no possibility of tenure. News like that, moving through student networks, can be matched against, at least until recently, the reputed earning power of recent graduates of business schools, law schools, and medical schools. The comparison is akin to what young people growing up in Rust Belt cities are forced to see: the work isn’t here anymore; our technology is obsolete.
  • unlike other members of the university community, they might well have been plying their trade without proper credentials: “Whereas economists or physicists, geologists or climatologists, physicians or lawyers must master a body of knowledge before they can even think of being licensed to practice,” she said, “we literary scholars, it is tacitly assumed, have no definable expertise.”
  • English departments need not refight the Western culture wars. But they need to fight their own book wars. They must agree on which texts to teach and argue out the choices and the principles of making them if they are to claim the respect due a department of study.
  • They can teach their students to write well, to use rhetoric. They should place their courses in composition and rhetoric at the forefront of their activities. They should announce that the teaching of composition is a skill their instructors have mastered and that students majoring in English will be certified, upon graduation, as possessing rigorously tested competence in prose expression.
  • The study of literature will then take on the profile now held, with moderate dignity, by the study of the classics, Greek and Latin.
  • But we can, we must, do better. At stake are the books themselves and what they can mean to the young. Yes, it is just a literary tradition. That’s all. But without such traditions, civil societies have no compass to guide them.
Javier E

Breathing In vs. Spacing Out - NYTimes.com - 0 views

  • Although pioneers like Jon Kabat-Zinn, now emeritus professor at the University of Massachusetts Medical Center, began teaching mindfulness meditation as a means of reducing stress as far back as the 1970s, all but a dozen or so of the nearly 100 randomized clinical trials have been published since 2005.
  • Michael Posner, of the University of Oregon, and Yi-Yuan Tang, of Texas Tech University, used functional M.R.I.’s before and after participants spent a combined 11 hours over two weeks practicing a form of mindfulness meditation developed by Tang. They found that it enhanced the integrity and efficiency of the brain’s white matter, the tissue that connects and protects neurons emanating from the anterior cingulate cortex, a region of particular importance for rational decision-making and effortful problem-solving.
  • Perhaps that is why mindfulness has proved beneficial to prospective graduate students. In May, the journal Psychological Science published the results of a randomized trial showing that undergraduates instructed to spend a mere 10 minutes a day for two weeks practicing mindfulness made significant improvement on the verbal portion of the Graduate Record Exam — a gain of 16 percentile points. They also significantly increased their working memory capacity, the ability to maintain and manipulate multiple items of attention.
  • ...7 more annotations...
  • By emphasizing a focus on the here and now, it trains the mind to stay on task and avoid distraction.
  • “Your ability to recognize what your mind is engaging with, and control that, is really a core strength,” said Peter Malinowski, a psychologist and neuroscientist at Liverpool John Moores University in England. “For some people who begin mindfulness training, it’s the first time in their life where they realize that a thought or emotion is not their only reality, that they have the ability to stay focused on something else, for instance their breathing, and let that emotion or thought just pass by.”
  • the higher adults scored on a measurement of mindfulness, the worse they performed on tests of implicit learning — the kind that underlies all sorts of acquired skills and habits but that occurs without conscious awareness.
  • he found that having participants spend a brief period of time on an undemanding task that maximizes mind wandering improved their subsequent performance on a test of creativity. In a follow-up study, he reported that physicists and writers alike came up with their most insightful ideas while spacing out.
  • The trick is knowing when mindfulness is called for and when it’s not.
  • one of the most surprising findings of recent mindfulness studies is that it could have unwanted side effects. Raising roadblocks to the mind’s peregrinations could, after all, prevent the very sort of mental vacations that lead to epiphanies.
  • “There’s so much our brain is doing when we’re not aware of it,” said the study’s leader, Chelsea Stillman, a doctoral candidate. “We know that being mindful is really good for a lot of explicit cognitive functions. But it might not be so useful when you want to form new habits.” Learning to ride a bicycle, speak grammatically or interpret the meaning of people’s facial expressions are three examples of knowledge we acquire through implicit learning
Sophia C

When Studies Are Wrong: A Coda - NYTimes.com - 0 views

  • All scientific results are, of course, subject to revision and refutation by later experiments. The problem comes when these replications don’t occur and the information keeps spreading unchecked.
  • Based on the number of papers in major journals, Dr. Ioannidis estimates that the field accounts for some 50 percent of published research.
  • Together that constitutes most of scientific research. The remaining slice is physical science — everything from geology and climatology to cosmology and particle physics. These fields have not received the same kind of scrutiny as the others. Is that because they are less prone to the problems Dr. Ioannidis describes?
  • ...5 more annotations...
  • “This certainly increases the transparency, reliability and cross-checking of proposed research findings,” he wrote.
  • “There seems to be a higher community standard for ‘shaming’ reputations if people step out and make claims that are subsequently refuted.” Cold fusion was a notorious example. He also saw less of an aversion to publishing negative experimental results — that is, failed replications.
  • Almost anything might be suspected of causing cancer, but physicists are unlikely to propose conjectures that violate quantum mechanics or general relativity. But I’m not sure the difference is always that stark. Here is how I put it in my blog post:
  • “I have no doubt that false positives occur in all of these fields,” he concluded, “and occasionally they may be a major problem.” I’ll be looking further into this matter for a future column and would welcome comments from scientists about the situation in their own domain.
Javier E

In the End, It All Adds Up to −1/12 - NYTimes.com - 2 views

  • You might think that if you simply started adding the natural numbers, 1 plus 2 plus 3 and so on all the way to infinity, you would get a pretty big number. At least I always did. So it came as a shock to a lot of people when, in a recent video, a pair of physicists purported to prove that this infinite series actually adds up to ... minus 1/12.
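A short note for context: the −1/12 does not come from ordinary summation, which diverges, but from analytic continuation of the Riemann zeta function. A minimal sketch of that standard argument (added here for clarity; it is not an excerpt from the bookmarked article):

```latex
% The zeta series converges only for Re(s) > 1:
\[
  \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1 .
\]
% Analytic continuation extends \zeta(s) to the whole complex plane except s = 1.
% At s = -1 the continued function takes the value
\[
  \zeta(-1) = -\frac{1}{12},
\]
% which is the precise sense in which one writes 1 + 2 + 3 + \cdots = -1/12:
% a regularized value assigned to a divergent series, not a sum in the usual sense.
```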
Javier E

But What Would the End of Humanity Mean for Me? - James Hamblin - The Atlantic - 0 views

  • Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from philosopher Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos
  • "I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,"
  • Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”
  • ...17 more annotations...
  • The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.
  • Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there’s about a 60 percent chance that I’m not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
  • "Longer term—and this might mean 10 years, it might mean 50 or 100 years, depending on who you ask—when computers can do everything we can do," Tegmark said, “after that they will probably very rapidly get vastly better than us at everything, and we’ll face this question we talked about in the Huffington Post article: whether there’s really a place for us after that, or not.”
  • "This is very near-term stuff. Anyone who’s thinking about what their kids should study in high school or college should care a lot about this.”
  • Tegmark and his op-ed co-author Frank Wilczek, the Nobel laureate, draw examples of cold-war automated systems that assessed threats and resulted in false alarms and near misses. “In those instances some human intervened at the last moment and saved us from horrible consequences,” Wilczek told me earlier that day. “That might not happen in the future.”
  • there are still enough nuclear weapons in existence to incinerate all of Earth’s dense population centers, but that wouldn't kill everyone immediately. The smoldering cities would send sun-blocking soot into the stratosphere that would trigger a crop-killing climate shift, and that’s what would kill us all
  • “We are very reckless with this planet, with civilization,” Tegmark said. “We basically play Russian roulette.” The key is to think more long term, “not just about the next election cycle or the next Justin Bieber album.”
  • “There are several issues that arise, ranging from climate change to artificial intelligence to biological warfare to asteroids that might collide with the earth,” Wilczek said of the group’s launch. “They are very serious risks that don’t get much attention.
  • a widely perceived issue is when intelligent entities start to take on a life of their own. They revolutionized the way we understand chess, for instance. That’s pretty harmless. But one can imagine if they revolutionized the way we think about warfare or finance, either those entities themselves or the people that control them. It could pose some disquieting perturbations on the rest of our lives.”
  • Wilczek’s particularly concerned about a subset of artificial intelligence: drone warriors. “Not necessarily robots,” Wilczek told me, “although robot warriors could be a big issue, too. It could just be superintelligence that’s in a cloud. It doesn’t have to be embodied in the usual sense.”
  • it’s important not to anthropomorphize artificial intelligence. It's best to think of it as a primordial force of nature—strong and indifferent. In the case of chess, an A.I. models chess moves, predicts outcomes, and moves accordingly. If winning at chess meant destroying humanity, it might do that.
  • Even if programmers tried to program an A.I. to be benevolent, it could destroy us inadvertently. Andersen’s example in Aeon is that an A.I. designed to try and maximize human happiness might think that flooding your bloodstream with heroin is the best way to do that.
  • “It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.”
  • Even within A.I. research, Tegmark admits, “There is absolutely not a consensus that we should be concerned about this.” But there is a lot of concern, and sense of lack of power. Because, concretely, what can you do? “The thing we should worry about is that we’re not worried.”
  • Tegmark brings it to Earth with a case-example about purchasing a stroller: If you could spend more for a good one or less for one that “sometimes collapses and crushes the baby, but nobody’s been able to prove that it is caused by any design flaw. But it’s 10 percent off! So which one are you going to buy?”
  • “There are seven billion of us on this little spinning ball in space. And we have so much opportunity," Tegmark said. "We have all the resources in this enormous cosmos. At the same time, we have the technology to wipe ourselves out.”
  • Ninety-nine percent of the species that have lived on Earth have gone extinct; why should we not? Seeing the biggest picture of humanity and the planet is the heart of this. It’s not meant to be about inspiring terror or doom. Sometimes that is what it takes to draw us out of the little things, where in the day-to-day we lose sight of enormous potentials.
Ellie McGinnis

The Dangers of Pseudoscience - NYTimes.com - 0 views

  • “demarcation problem,” the issue of what separates good science from bad science and pseudoscience
  • Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and of the nature of truth and discovery
  • our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard
  • ...10 more annotations...
  • pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare, sometimes fatally so
  • in the area of medical treatments that the science-pseudoscience divide is most critical, and where the role of philosophers in clarifying things may be most relevant
  • What makes the use of aspirin “scientific,” however, is that we have validated its effectiveness through properly controlled trials, isolated the active ingredient, and understood the biochemical pathways through which it has its effects
  • inaccessibility of the famous Higgs boson, a sub-atomic particle postulated by physicists to play a crucial role in literally holding the universe together (the associated Higgs field gives mass to other fundamental particles)
  • Philosophers of science have long recognized that there is nothing wrong with positing unobservable entities per se, it’s a question of what work such entities actually do within a given theoretical-empirical framework.
  • we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them.
  • no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other.
  • Popper’s basic insight: the bad habit of creative fudging and finagling with empirical data ultimately makes a theory impervious to refutation. And all pseudoscientists do it, from parapsychologists to creationists and 9/11 Truthers.
  • The open-ended nature of science means that there is nothing sacrosanct in either its results or its methods.
  • The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning
Javier E

A Billionaire Mathematician's Life of Ferocious Curiosity - The New York Times - 0 views

  • James H. Simons likes to play against type. He is a billionaire star of mathematics and private investment who often wins praise for his financial gifts to scientific research and programs to get children hooked on math. But in his Manhattan office, high atop a Fifth Avenue building in the Flatiron district, he’s quick to tell of his career failings. He was forgetful. He was demoted. He found out the hard way that he was terrible at programming computers. “I’d keep forgetting the notation,” Dr. Simons said. “I couldn’t write programs to save my life.” After that, he was fired. His message is clearly aimed at young people: If I can do it, so can you.
  • Down one floor from his office complex is Math for America, a foundation he set up to promote math teaching in public schools. Nearby, on Madison Square Park, is the National Museum of Mathematics, or MoMath, an educational center he helped finance. It opened in 2012 and has had a quarter million visitors.
  • Dr. Simons received his doctorate at 23; advanced code breaking for the National Security Agency at 26; led a university math department at 30; won geometry’s top prize at 37; founded Renaissance Technologies, one of the world’s most successful hedge funds, at 44; and began setting up charitable foundations at 56.
  • ...7 more annotations...
  • With a fortune estimated at $12.5 billion, Dr. Simons now runs a tidy universe of science endeavors, financing not only math teachers but hundreds of the world’s best investigators, even as Washington has reduced its support for scientific research. His favorite topics include gene puzzles, the origins of life, the roots of autism, math and computer frontiers, basic physics and the structure of the early cosmos.
  • In time, his novel approach helped change how the investment world looks at financial markets. The man who “couldn’t write programs” hired a lot of programmers, as well as physicists, cryptographers, computational linguists, and, oh yes, mathematicians. Wall Street experience was frowned on. A flair for science was prized. The techies gathered financial data and used complex formulas to make predictions and trade in global markets.
  • Working closely with his wife, Marilyn, the president of the Simons Foundation and an economist credited with philanthropic savvy, Dr. Simons has pumped more than $1 billion into esoteric projects as well as retail offerings like the World Science Festival and a scientific lecture series at his Fifth Avenue building. Characteristically, it is open to the public.
  • On a wall in Dr. Simons’s office is one of his prides: a framed picture of equations known as Chern-Simons, after a paper he wrote with Shiing-Shen Chern, a prominent geometer. Four decades later, the equations define many esoteric aspects of modern physics, including advanced theories of how invisible fields like those of gravity interact with matter to produce everything from superstrings to black holes.
  • “He’s an individual of enormous talent and accomplishment, yet he’s completely unpretentious,” said Marc Tessier-Lavigne, a neuroscientist who is the president of Rockefeller University. “He manages to blend all these admirable qualities.”
  • Forbes magazine ranks him as the world’s 93rd richest person — ahead of Eric Schmidt of Google and Elon Musk of Tesla Motors, among others — and in 2010, he and his wife were among the first billionaires to sign the Giving Pledge, promising to devote “the great majority” of their wealth to philanthropy.
  • For all his self-deprecations, Dr. Simons does credit himself with a contemplative quality that seems to lie behind many of his accomplishments. “I wasn’t the fastest guy in the world,” Dr. Simons said of his youthful math enthusiasms. “I wouldn’t have done well in an Olympiad or a math contest. But I like to ponder. And pondering things, just sort of thinking about it and thinking about it, turns out to be a pretty good approach.”