
TOK Friends: Group items tagged "intuition"


sandrine_h

Perspectives: Why humanity needs a God of creativity | New Scientist - 0 views

  • So the unfolding of the universe – biotic, and perhaps abiotic too – appears to be partially beyond natural law. In its place is a ceaseless creativity, with no supernatural creator. If, as a result of this creativity, we cannot know what will happen
  • then reason, the Enlightenment’s highest human virtue, is an insufficient guide to living our lives. We must use reason, emotion, intuition, all that our evolution has brought us. But that means understanding our full humanity: we need Einstein and Shakespeare in the same room.
  • Yet what is more awesome: to believe that God created everything in six days, or to believe that the biosphere came into being on its own, with no creator, and partially lawlessly? I find the latter proposition so stunning, so worthy of awe and respect, that I am happy to accept this natural creativity in the universe as a reinvention of “God”. From it, we can build a sense of the sacred that encompasses all life and the planet itself. From it, we can change our value system across the globe and try, together, to ease the fears of religious fundamentalists with a safe, sacred space we can share. And from it we can, if we are wise, find means to avert wars of civilisations, the ravages of global warming, and the potential disaster of peak oil.
dicindioha

Daniel Kahneman On Hiring Decisions - Business Insider - 0 views

  • Most hiring decisions come down to a gut decision. According to Nobel laureate Daniel Kahneman, however, this process is extremely flawed and there's a much better way.
    • dicindioha
       
      hiring comes down to 'gut feeling'
  • Kahneman asked interviewers to put aside personal judgments and limit interviews to a series of factual questions meant to generate a score on six separate personality traits. A few months later, it became clear that Kahneman's systematic approach was a vast improvement over gut decisions. It was so effective that the army would use his exact method for decades to come. Why you should care: this superior method can be copied by any organization — and really, by anyone facing a hard decision.
  • First, select a few traits that are prerequisites for success in this position (technical proficiency, engaging personality, reliability, and so on). Don't overdo it — six dimensions is a good number. The traits you choose should be as independent as possible from each other, and you should feel that you can assess them reliably by asking a few factual questions. Next, make a list of those questions for each trait and think about how you will score it, say on a 1-5 scale. You should have an idea of what you will call "very weak" or "very strong."
    • dicindioha
       
      WHAT YOU SHOULD DO IN AN INTERVIEW
  • Do not skip around. To evaluate each candidate add up the six scores ... Firmly resolve that you will hire the candidate whose final score is the highest, even if there is another one whom you like better — try to resist your wish to invent broken legs to change the ranking.
  • [If you follow this procedure, you will do better] than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as "I looked into his eyes and liked what I saw."
  • We cannot always rely simply on a 'gut feeling' from our so-called 'reasoning' and emotional responses when making big decisions like hiring, yet that is what happens much of the time. This is a really interesting way to do it systematically: you still use your own perspective, but the questions asked will hopefully lead you to a better outcome. (A minimal code sketch of the procedure follows.)
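As an illustration, here is a minimal Python sketch of the scoring procedure this entry describes. The trait names and all numbers are invented; only the method (a handful of independent traits, each scored 1-5 from factual questions, totals summed, highest total hired) comes from the excerpts above.

    # Hypothetical sketch of the structured-interview scoring above.
    # Trait names and every number are invented for illustration.
    TRAITS = ["technical proficiency", "engaging personality", "reliability",
              "diligence", "communication", "judgment"]

    def total_score(scores):
        """Sum the six 1-5 trait scores, refusing malformed input."""
        assert set(scores) == set(TRAITS), "score every trait; skip none"
        assert all(1 <= s <= 5 for s in scores.values()), "use the 1-5 scale"
        return sum(scores.values())

    candidates = {
        "A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4])),
        "B": dict(zip(TRAITS, [5, 5, 2, 3, 3, 3])),  # more likable, lower total
    }

    # Firmly hire the highest total, even if you "like" another candidate more.
    best = max(candidates, key=lambda name: total_score(candidates[name]))
    print(best, {name: total_score(s) for name, s in candidates.items()})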
oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
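A toy illustration (not Gelman's actual analysis) of why such a bug can slip through: a sign flip in how a variable is coded reverses the estimated effect, yet the output still looks "perfectly reasonable," so nothing prompts a search for the error. All numbers below are invented.

    # Toy example, not the actual study: flipping the sign of a coded
    # variable reverses the estimated effect while producing output
    # that still looks plausible.
    import random

    random.seed(0)
    positions = [random.choice([-1, 1]) for _ in range(1000)]     # coded predictor
    votes = [50 + 2 * x + random.gauss(0, 5) for x in positions]  # assumed truth

    def effect(xs, ys):
        """Difference in mean outcome between the x=+1 and x=-1 groups."""
        hi = [y for x, y in zip(xs, ys) if x == 1]
        lo = [y for x, y in zip(xs, ys) if x == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    correct = effect(positions, votes)                # about +4
    flipped = effect([-x for x in positions], votes)  # about -4: wrong, not ridiculous
    print(round(correct, 1), round(flipped, 1))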
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.”
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • [Lauren Rivera,] a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
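The article does not disclose Xerox's actual model; purely as a sketch under stated assumptions, a thresholded score over weighted application features could produce the color-coded rating it describes. The features, weights, and cutoffs below are invented.

    # Hypothetical red/yellow/green screening sketch. Only the three-colour
    # output, the "creative but not overly inquisitive" signal, the 1-4
    # social-networks detail, and the deliberate absence of prior
    # experience come from the article; everything else is invented.
    def screen(candidate):
        score = 0.0
        score += 0.4 * candidate["creativity"]
        score -= 0.3 * candidate["inquisitiveness"]
        score += 0.2 if 1 <= candidate["social_networks"] <= 4 else -0.2
        # Previous experience is not a feature: the article reports it
        # had no bearing on productivity or retention.
        if score >= 0.5:
            return "green"   # hire away
        if score >= 0.2:
            return "yellow"  # middling
        return "red"         # poor candidate

    print(screen({"creativity": 0.9, "inquisitiveness": 0.2, "social_networks": 2}))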
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
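As a sketch of the relationship reported above, where too many face-to-face exchanges is as much of a problem as too few, predicted team performance can be modelled as an inverted U. The quadratic form and every number are invented for illustration.

    # Hypothetical inverted-U sketch: team performance predicted from
    # face-to-face exchange counts, peaking at some optimum.
    def predicted_performance(exchanges_per_day, optimum=40, width=25):
        """Peaks at `optimum` exchanges and falls off symmetrically."""
        return max(0.0, 1.0 - ((exchanges_per_day - optimum) / width) ** 2)

    for n in (5, 40, 90):  # too few, about right, too many
        print(n, round(predicted_performance(n), 2))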
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
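Gild's actual algorithms are proprietary; as a sketch of the kind of multi-signal aggregation these passages describe, one might blend open-source, Q&A, and language signals with a weighted sum. The signal names and weights are invented, and each signal is assumed to be pre-normalized to 0-1.

    # Hypothetical blend of the kinds of signals the passages mention.
    def coder_score(signals):
        weights = {
            "code_simplicity": 0.25,   # from evaluating open-source code
            "code_adoption": 0.25,     # how often others reuse the code
            "qa_popularity": 0.20,     # e.g. reception of forum answers
            "qa_breadth": 0.10,        # how widely the advice ranges
            "language_markers": 0.20,  # phrases correlated with skill
        }
        return sum(w * signals.get(name, 0.0) for name, w in weights.items())

    print(round(coder_score({"code_simplicity": 0.8, "code_adoption": 0.6,
                             "qa_popularity": 0.7, "qa_breadth": 0.5,
                             "language_markers": 0.9}), 2))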
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
Javier E

In Defense of Facts - The Atlantic - 1 views

  • over 13 years, he has published a series of anthologies—of the contemporary American essay, of the world essay, and now of the historical American essay—that misrepresents what the essay is and does, that falsifies its history, and that contains, among its numerous selections, very little one would reasonably classify within the genre. And all of this to wide attention and substantial acclaim
  • D’Agata’s rationale for his “new history,” to the extent that one can piece it together from the headnotes that preface each selection, goes something like this. The conventional essay, nonfiction as it is, is nothing more than a delivery system for facts. The genre, as a consequence, has suffered from a chronic lack of critical esteem, and thus of popular attention. The true essay, however, deals not in knowing but in “unknowing”: in uncertainty, imagination, rumination; in wandering and wondering; in openness and inconclusion
  • Every piece of this is false in one way or another.
  • There are genres whose principal business is fact—journalism, history, popular science—but the essay has never been one of them. If the form possesses a defining characteristic, it is that the essay makes an argument
  • That argument can rest on fact, but it can also rest on anecdote, or introspection, or cultural interpretation, or some combination of all these and more
  • what makes a personal essay an essay and not just an autobiographical narrative is precisely that it uses personal material to develop, however speculatively or intuitively, a larger conclusion.
  • Nonfiction is the source of the narcissistic injury that seems to drive him. “Nonfiction,” he suggests, is like saying “not art,” and if D’Agata, who has himself published several volumes of what he refers to as essays, desires a single thing above all, it is to be known as a maker of art.
  • D’Agata tells us that the term has been in use since about 1950. In fact, it was coined in 1867 by the staff of the Boston Public Library and entered widespread circulation after the turn of the 20th century. The concept’s birth and growth, in other words, did coincide with the rise of the novel to literary preeminence, and nonfiction did long carry an odor of disesteem. But that began to change at least as long ago as the 1960s, with the New Journalism and the “nonfiction novel.”
  • What we really seem to get in D’Agata’s trilogy, in other words, is a compendium of writing that the man himself just happens to like, or that he wants to appropriate as a lineage for his own work.
  • What it’s like is abysmal: partial to trivial formal experimentation, hackneyed artistic rebellion, opaque expressions of private meaning, and modish political posturing
  • If I bought a bag of chickpeas and opened it to find that it contained some chickpeas, some green peas, some pebbles, and some bits of goat poop, I would take it back to the store. And if the shopkeeper said, “Well, they’re ‘lyric’ chickpeas,” I would be entitled to say, “You should’ve told me that before I bought them.”
  • when he isn’t cooking quotes or otherwise fudging the record, he is simply indifferent to issues of factual accuracy, content to rely on a mixture of guesswork, hearsay, and his own rather faulty memory.
  • His rejoinders are more commonly a lot more hostile—not to mention juvenile (“Wow, Jim, your penis must be so much bigger than mine”), defensive, and in their overarching logic, deeply specious. He’s not a journalist, he insists; he’s an essayist. He isn’t dealing in anything as mundane as the facts; he’s dealing in “art, dickhead,” in “poetry,” and there are no rules in art.
  • D’Agata replies that there is something between history and fiction. “We all believe in emotional truths that could never hold water, but we still cling to them and insist on their relevance.” The “emotional truths” here, of course, are D’Agata’s, not Presley’s. If it feels right to say that tae kwon do was invented in ancient India (not modern Korea, as Fingal discovers it was), then that is when it was invented. The term for this is truthiness.
  • D’Agata clearly wants to have it both ways. He wants the imaginative freedom of fiction without relinquishing the credibility (and for some readers, the significance) of nonfiction. He has his fingers crossed, and he’s holding them behind his back. “John’s a different kind of writer,” an editor explains to Fingal early in the book. Indeed he is. But the word for such a writer isn’t essayist. It’s liar.
  • The point of all this nonsense, and a great deal more just like it, is to advance an argument about the essay and its history. The form, D’Agata’s story seems to go, was neglected during the long ages that worshiped “information” but slowly emerged during the 19th and 20th centuries as artists learned to defy convention and untrammel their imaginations, coming fully into its own over the past several decades with the dawning recognition of the illusory nature of knowledge.
  • Most delectable is when he speaks about “the essay’s traditional ‘five-paragraph’ form.” I almost fell off my chair when I got to that one. The five-paragraph essay—introduction, three body paragraphs, conclusion; stultifying, formulaic, repetitive—is the province of high-school English teachers. I have never met one outside of a classroom, and like any decent college writing instructor, I never failed to try to wean my students away from them. The five-paragraph essay isn’t an essay; it’s a paper.
  • When he refers to his selections as essays, he does more than falsify the essay as a genre. He also effaces all the genres that they do belong to: not only poetry, fiction, journalism, and travel, but, among his older choices, history, parable, satire, the sermon, and more—genres that possess their own particular traditions, conventions, and expectations.
  • —by ignoring the actual contexts of his selections, and thus their actual intentions—D’Agata makes the familiar contemporary move of imposing his own conceits and concerns upon the past. That is how ethnography turns into “song,” Socrates into an essayist, and the whole of literary history into a single man’s “emotional truth.”
  • The history of the essay is indeed intertwined with “facts,” but in a very different way than D’Agata imagines. D’Agata’s mind is Manichaean. Facts bad, imagination good
  • What he fails to understand is that facts and the essay are not antagonists but siblings, offspring of the same historical moment
  • one needs to recognize that facts themselves have a history.
  • Facts are not just any sort of knowledge, such as also existed in the ancient and medieval worlds. A fact is a unit of information that has been established through uniquely modern methods
  • Fact, etymologically, means “something done”—that is, an act or deed
  • It was only in the 16th century—an age that saw the dawning of a new empirical spirit, one that would issue not only in modern science, but also in modern historiography, journalism, and scholarship—that the word began to signify our current sense of “real state of things.”
  • It was at this exact time, and in this exact spirit, that the essay was born. What distinguished Montaigne’s new form—his “essays” or attempts to discover and publish the truth about himself—was not that it was personal (precursors like Seneca also wrote personally), but that it was scrupulously investigative. Montaigne was conducting research into his soul, and he was determined to get it right.
  • His famous motto, Que sais-je?—“What do I know?”—was an expression not of radical doubt but of the kind of skepticism that fueled the modern revolution in knowledge.
  • It is no coincidence that the first English essayist, Galileo’s contemporary Francis Bacon, was also the first great theorist of science.
  • That knowledge is problematic—difficult to establish, labile once created, often imprecise and always subject to the limitations of the human mind—is not the discovery of postmodernism. It is a foundational insight of the age of science, of fact and information, itself.
  • The point is not that facts do not exist, but that they are unstable (and are becoming more so as the pace of science quickens). Knowledge is always an attempt. Every fact was established by an argument—by observation and interpretation—and is susceptible to being overturned by a different one
  • A fact, you might say, is nothing more than a frozen argument, the place where a given line of investigation has come temporarily to rest.
  • Sometimes those arguments are scientific papers. Sometimes they are news reports, which are arguments with everything except the conclusions left out (the legwork, the notes, the triangulation of sources—the research and the reasoning).
  • When it comes to essays, though, we don’t refer to those conclusions as facts. We refer to them as wisdom, or ideas
  • the essay draws its strength not from separating reason and imagination but from putting them in conversation. A good essay moves fluidly between thought and feeling. It subjects the personal to the rigors of the intellect and the discipline of external reality. The truths it finds are more than just emotional.
clairemann

Flights to Nowhere and Travel After the Pandemic | Time - 0 views

  • I’ve taken to staying in bed and flying to Morocco. It’s the place I’ve been that’s the least like Brooklyn, where I have spent most of this pandemic. Trying to remember the way the air feels on your skin in an unfamiliar climate is the smallest of escapes. Maybe it’s a necessary one, now that everything within reach feels so unrelentingly familiar.
  • In our travel-starved, pandemic-addled state, people will actually pay to go to the airport, get on a plane wearing their face masks, and fly over their own country or a neighboring one and come right back. A seven-hour Qantas sightseeing flight over Australian landmarks sold out in 10 minutes.
  • I don’t think we’ll need to book a SpaceX flight to feel like we’re somewhere startling and new. For many of us, seeing a new movie in a real theater will feel like a trip. Or better yet, dancing in the sticky aisles of a dark music venue humming with people and anticipation.
  • “The metaphor of the parental scaffold is visual, intuitive, and simple: Your child is the ‘building.’ You, the parent, are the scaffold that surrounds the building. The framework of all your decisions and efforts as parents is the three pillars of your scaffold: structure, support, and encouragement. Eventually, when the building is finished and ready to stand completely on its own, the parental scaffold can come down.”
katedriscoll

Tip of the iceberg - TOK RESOURCE.ORG - 0 views

  • Intuition allows us to make judgments in the blink of an eye without careful deliberation or systematic analysis of all the available facts. We trust our “gut” reactions and first impressions. They enable us to discern the sincerity of a conversation partner, read the prevailing ambience in a room or feel a sense of impending doom. These insights or early warning "survival" mechanisms are palpable and we ignore them at our peril. They are only irrational in the sense that the cognitive fragments (some of them non-linguistic) and experiential memories that support them remain largely hidden.
caelengrubb

Believing in Overcoming Cognitive Biases | Journal of Ethics | American Medical Associa... - 0 views

  • Cognitive biases contribute significantly to diagnostic and treatment errors
  • A 2016 review of their roles in decision making lists 4 domains of concern for physicians: gathering evidence, interpreting evidence, taking action, and evaluating decisions
  • Confirmation bias is the selective gathering and interpretation of evidence consistent with current beliefs and the neglect of evidence that contradicts them.
  • It can occur when a physician refuses to consider alternative diagnoses once an initial diagnosis has been established, despite contradicting data, such as lab results. This bias leads physicians to see what they want to see
  • Anchoring bias is closely related to confirmation bias and comes into play when interpreting evidence. It refers to physicians’ practices of prioritizing information and data that support their initial impressions, even when first impressions are wrong
  • When physicians move from deliberation to action, they are sometimes swayed by emotional reactions rather than rational deliberation about risks and benefits. This is called the affect heuristic, and, while heuristics can often serve as efficient approaches to problem solving, they can sometimes lead to bias
  • Further down the treatment pathway, outcomes bias can come into play. This bias refers to the practice of believing that good or bad results are always attributable to prior decisions, even when there is no valid reason to do so
  • The dual-process theory, a cognitive model of reasoning, can be particularly relevant in matters of clinical decision making
  • This theory is based on the argument that we use 2 different cognitive systems, intuitive and analytical, when reasoning. The former is quick and uses information that is readily available; the latter is slower and more deliberate.
  • Consideration should be given to the difficulty physicians face in employing analytical thinking exclusively. Beyond constraints of time, information, and resources, many physicians are also likely to be sleep deprived, work in an environment full of distractions, and be required to respond quickly while managing heavy cognitive loads
  • Simply increasing physicians’ familiarity with the many types of cognitive biases—and how to avoid them—may be one of the best strategies to decrease bias-related errors
  • The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes
  • Afterwards, the resident physicians were debriefed on both case-specific details and on cognitive forcing strategies, interviewed, and asked to complete a written survey. The results suggested that resident physicians further along in their training (ie, postgraduate year three) gained more awareness of cognitive strategies than resident physicians in earlier years of training, suggesting that this tool could be more useful after a certain level of training has been completed
  • A 2013 study examined the effect of a 3-part, 1-year curriculum on recognition and knowledge of cognitive biases and debiasing strategies in second-year residents
  • Cognitive biases in clinical practice have a significant impact on care, often in negative ways. They sometimes manifest as physicians seeing what they want to see rather than what is actually there. Or they come into play when physicians make snap decisions and then prioritize evidence that supports their conclusions, as opposed to drawing conclusions from evidence
  • Fortunately, cognitive psychology provides insight into how to prevent biases. Guided reflection and cognitive forcing strategies deflect bias through close examination of our own thinking processes.
  • During medical education and consistently thereafter, we must provide physicians with a full appreciation of the cost of biases and the potential benefits of combatting them.
caelengrubb

What Is A Paradigm? - 0 views

  • A scientific paradigm is a framework containing all the commonly accepted views about a subject, conventions about what direction research should take and how it should be performed.
  • Paradigms contain all the distinct, established patterns, theories, common methods and standards that allow us to recognize an experimental result as belonging to a field or not.
  • The vocabulary and concepts in Newton’s three laws or the central dogma in biology are examples of scientific “open resources” that scientists have adopted and which now form part of the scientific paradigm.
  • A paradigm dictates:
  • what is observed and measured
  • the questions we ask about those observations
  • how the questions are formulated
  • how the results are interpreted
  • how research is carried out
  • what equipment is appropriate
  • In fact, Kuhn strongly suggested that research in a deeply entrenched paradigm invariably ends up reinforcing that paradigm, since anything that contradicts it is ignored or else pressed through the preset methods until it conforms to already established dogma
  • The body of pre-existing evidence in a field conditions and shapes the collection and interpretation of all subsequent evidence. The certainty that the current paradigm is reality itself is precisely what makes it so difficult to accept alternatives.
  • It is very common for scientists to discard certain models or pick up emerging theories. But once in a while, enough anomalies accumulate within a field that the entire paradigm itself is required to change to accommodate them.
  • Many physicists in the 19th century were convinced that the Newtonian paradigm that had reigned for 200 years was the pinnacle of discovery and that scientific progress was more or less a question of refinement. When Einstein published his theories on General Relativity, it was not just another idea that could fit comfortably into the existing paradigm. Instead, Newtonian Physics itself was relegated to being a special subclass of the greater paradigm ushered in by General Relativity. Newton’s three laws are still faithfully taught in schools, however we now operate within a paradigm that puts those laws into a much broader context
  • The concept of paradigm is closely related to the Platonic and Aristotelian views of knowledge. Aristotle believed that knowledge could only be based upon what is already known, the basis of the scientific method. Plato believed that knowledge should be judged by what something could become, the end result, or final purpose. Plato's philosophy is more like the intuitive leaps that cause scientific revolution; Aristotle's the patient gathering of data.
kaylynfreeman

Believe what you like: How we fit the facts around our prejudices - TOK Topics - 0 views

  • This idea of a gullible, pliable populace is, of course, nothing new. Voltaire said, “those who can make you believe absurdities can make you commit atrocities”. But no, says Mercier, Voltaire had it backwards: “It is wanting to commit atrocities that makes you believe absurdities”…
  • If someone says Obama is a Muslim, their primary reason may be to indicate that they are a member of the group of people who co-ordinate around that statement. When a social belief and a true belief are in conflict, Klintman says, people will opt for the belief that best signals their social identity – even if it means lying to themselves…
  • Such a “belief” – being largely performative – rarely translates into action. It remains what Mercier calls a reflective belief, with no consequences on one’s behaviour, as opposed to an intuitive belief, which guides decisions and actions.
tonycheng6

Accurate machine learning in materials science facilitated by using diverse data sources - 0 views

  • Computational modelling is also used to estimate the properties of materials. However, there is usually a trade-off between the cost of the experiments (or simulations) and the accuracy of the measurements (or estimates), which has limited the number of materials that can be tested rigorously.
  • Materials scientists commonly supplement their own ‘chemical intuition’ with predictions from machine-learning models, to decide which experiments to conduct next
  • More importantly, almost all of these studies use models built on data obtained from a single, consistent source. Such models are referred to as single-fidelity models.
  • However, for most real-world applications, measurements of materials’ properties have varying levels of fidelity, depending on the resources available.
  • A comparison of prediction errors clearly demonstrates the benefit of the multi-fidelity approach
  • The authors’ system is not restricted to materials science, but is generalizable to any problem that can be described using graph structures, such as social networks and knowledge graphs (digital frameworks that represent knowledge as concepts connected by relationships)
  • More research is needed to understand the scenarios for which multi-fidelity learning is most beneficial, balancing prediction accuracy with the cost of acquiring data
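The authors' own system is a graph network, but the core multi-fidelity idea can be sketched with a much simpler, standard scheme: fit a trend to plentiful cheap, systematically biased measurements, then learn a correction from a few expensive accurate ones. The functions, sample sizes, and noise levels below are all invented for illustration.

    # Minimal multi-fidelity sketch (not the authors' graph-network model):
    # combine many cheap, biased low-fidelity measurements with a few
    # expensive high-fidelity ones via a learned offset.
    import random

    random.seed(1)
    true_property = lambda x: 1.5 * x + 2.0                         # hypothetical truth
    cheap_measure = lambda x: 1.5 * x + 0.5 + random.gauss(0, 0.1)  # biased, noisy

    xs_low = [random.uniform(0, 10) for _ in range(200)]  # 200 cheap points
    ys_low = [cheap_measure(x) for x in xs_low]
    xs_hi = [1.0, 3.0, 5.0, 7.0, 9.0]                     # 5 accurate points
    ys_hi = [true_property(x) for x in xs_hi]

    def fit_line(xs, ys):
        """Ordinary least-squares fit of y = a*x + b."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    a, b = fit_line(xs_low, ys_low)  # captures the trend, inherits the bias
    # Correct the bias with the mean residual at the high-fidelity points.
    delta = sum(y - (a * x + b) for x, y in zip(xs_hi, ys_hi)) / len(xs_hi)
    predict = lambda x: a * x + b + delta
    print(round(predict(4.0), 2), "vs true", true_property(4.0))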
adonahue011

How Joe Biden was Donald Trump's kryptonite - CNNPolitics - 0 views

    • adonahue011
       
      This is very interesting to me because I think much of Donald Trump's campaigning has been about manipulation.
  • Some of the lowest points for Trump over the last two years revolved around Biden.
    • adonahue011
       
      Offending an ego: how that connects to our brain and to our being sensitive beings.
  • Biden had merely announced he was running for president earlier that year.
  • What perhaps Trump didn't realize was that he was playing right into Biden's hands. His efforts seem to prove an important point for Biden
    • adonahue011
       
      Many times Trump did not even notice that he was, in many ways, intuitively helping Biden.
  • Democrats got the message Trump was sending and nominated Biden.
  • Trump was impeached a second time after an insurrection that he incited last week out of outrage over the 2020 election results.
  • During the transition period, Biden has been actively planning his presidency and not spending too much time publicly worrying about Trump's false claims of voter fraud.
  • Trump it seems, couldn't stand not to be the center of attention, which is unusual for an outgoing president.
    • adonahue011
       
      What does this say about his mental state?
  • Trump got about 70% of the news mentions.
  • The end result of all of this is that Biden goes into his administration this week with the vast majority of voters approving of Biden's handling of the transition.
  • Trump's political career seemed to be impermeable. That was until Trump ran into President-elect Joe Biden.
  • Biden proved to be Trump's kryptonite and helped himself tremendously by doing something very simple: allowing Trump to be Trump.
katedriscoll

Theory of Knowledge IB Guide | Part 5 | IB Blog - 0 views

  • All knowledge comes from somewhere. Even if we say it is innate (comes from within us) we still have to say how that knowledge appears. The Ways of Knowing are what they sound like, the methods through which knowledge becomes apparent to us. In the IB there are eight different ways of knowing: Language, Sense perception, Emotion, Reason, Imagination, Faith, Intuition and Memory. Although this might seem like a lot, the good news is that for the IB you're only really advised to study four of them in depth (although it's worth knowing how each of them works).
  • This quote from author Olivia Fox Cabane points out the power of the human imagination. What is being described here is what we traditionally call imagination: the ability to form a mental representation of a sense experience without the normal stimulus. There is another form of imagination, however. Propositional imagining is the idea of 'imagining that' things were different than they are, for example that the cold war had never ended.
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on t
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.” (A toy simulation of this swamping effect appears after this list of excerpts.)
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020
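
The “swamping” passage above makes a quantitative claim: consensus forms because good data drowns out bad data, not because the bad data is formally retracted. Below is a minimal sketch of that dynamic, assuming invented numbers throughout (the true value, the noise level, and the biased first measurement are all hypothetical; nothing here comes from Strevens or the review):

```python
# Toy illustration of "swamping": one skewed early measurement is never
# corrected or withdrawn; it simply stops mattering as honest, noisy,
# independent measurements accumulate.
import random

random.seed(42)

TRUE_VALUE = 1.75      # hypothetical quantity being measured
measurements = [2.40]  # one early, inconclusive, biased result

for n in (5, 50, 500):
    while len(measurements) < n:
        # later labs measure honestly, with ordinary noise
        measurements.append(random.gauss(TRUE_VALUE, 0.30))
    pooled = sum(measurements) / len(measurements)
    print(f"after {n:3d} measurements, pooled estimate = {pooled:.3f}")
```

The outlier is never refuted; the pooled estimate simply converges on the true value as the data pile up, which is the Eddington story in miniature.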
Javier E

Does Your Language Shape How You Think? - NYTimes.com - 1 views

  • it turns out that the colors that our language routinely obliges us to treat as distinct can refine our purely visual sensitivity to certain color differences in reality, so that our brains are trained to exaggerate the distance between shades of color if these have different names in our language.
  • some languages, like Matses in Peru, oblige their speakers, like the finickiest of lawyers, to specify exactly how they came to know about the facts they are reporting. You cannot simply say, as in English, “An animal passed here.” You have to specify, using a different verbal form, whether this was directly experienced (you saw the animal passing), inferred (you saw footprints), conjectured (animals generally pass there that time of day), hearsay or such. If a statement is reported with the incorrect “evidentiality,” it is considered a lie.
  • For many years, our mother tongue was claimed to be a “prison house” that constrained our capacity to reason. Once it turned out that there was no evidence for such claims, this was taken as proof that people of all cultures think in fundamentally the same way. But surely it is a mistake to overestimate the importance of abstract reasoning in our lives. After all, how many daily decisions do we make on the basis of deductive logic compared with those guided by gut feeling, intuition, emotions, impulse or practical skills? The habits of mind that our culture has instilled in us from infancy shape our orientation to the world and our emotional responses to the objects we encounter, and their consequences probably go far beyond what has been experimentally demonstrated so far; they may also have a marked impact on our beliefs, values and ideologies.
  •  
    Fascinating follow-up to Sapir-Whorf.
pier-paolo

The Importance of Intuition, Time-And Speaking Last - The New York Times - 0 views

  • What makes the difference between a good and a great leader? A. Time. I do believe there are people in the world that have a natural ability at leadership, but I also believe you can teach, coach and build great leaders,
  • you need to get things finished and done, follow through on your commitments to your employees, leaders, consumers. Clarity and consistency of communication are also important. I often think that although great leaders can overcommunicate, they keep it simple
  • I think it’s much more useful to me and to the team if I speak last. So I work very hard to create an environment to encourage everybody else to speak up before I weigh in.
  • ...2 more annotations...
  • It’s a long list. For me, if you’re not making mistakes, you’re not growing. It’s O.K. to try and not achieve, it’s not O.K. not to try
  • As your company has grown through acquisitions, you’ve had to integrate a lot of different teams. How do you tackle this integration of different cultures? A. Time. You can fundamentally approach acquisitions two ways. The company that acquires says, “This is the way we do it around here, so you will change and adopt our processes. Bang!”
anonymous

How Engaging With Art Affects the Human Brain | American Association for the Advancemen... - 0 views

  • Today, the neurological mechanisms underlying these responses are the subject of fascination to artists, curators and scientists alike.
  • "Once you circle these little things and come to the end of this little project, you'll be invited to compare where you came out against what the results of this experiment were and are," Vikan said. "What you'll find in this show is that there is an amazing convergence. The people that came to the museum liked and disliked the same categories of shapes as the people in the lab as the people in the fMRIs."
  • "Art accesses some of the most advanced processes of human intuitive analysis and expressivity and a key form of aesthetic appreciation is through embodied cognition, the ability to project oneself as an agent in the depicted scene,
  • ...13 more annotations...
  • Embodied cognition is "the sense of drawing you in and making you really feel the quality of the paintings,"
  • The Birth of Venus" because it makes them feel as though they are floating in with Venus on the seashell. Similarly, viewers can feel the flinging of the paint on the canvas when appreciating a drip painting by Jackson Pollock.
  • Mirror neurons, cells in the brain that respond similarly when observing and performing an action, are responsible for embodied cognition
  • Most research on the effects of music education has been done on populations that are privileged enough to afford private music instruction so Kraus is studying music instruction in group settings
  • "But observing the action requires the information to flow inward from the image you're seeing into the control centers. So that bidirectional flow is what's captured in this concept of mirror neurons and it gives the extra vividness to this aesthetics of art appreciation
  • Performing an action requires the information to flow out from the control centers to the limbs,
  • While congenitally blind people usually don't have activation in the visual area of the brain, in brain scans done after the subjects were taught to draw from memory,
  • Hearing speech in noise is one area in which musicians are uniquely skilled. In standardized tests, musicians across the lifespan were much better than the general public at listening to sentences and repeating them back as the level of background noise increased, Kraus said.
  • Artists are known to be better observers and exhibit better memory than non-artists. In an effort to see what happens in the brain when an individual is drawing and whether drawing can increase the brain's plasticity
  • Musicians are also known for their ability to keep rhythm, a skill that is correlated with reading ability and how precisely the brain responds to sound. After one year, students who participated in the group music instruction were faster and more accurate at keeping a beat than students in the control group, Kraus said.
  • "To sum things up, we are what we do and our past shapes our present," Kraus said. "Auditory biology is not frozen in time. It's a moving target. And music education really does seem to enhance communication by strengthening language skills."
  • "When you're doing art, your brain is running full speed,"
  • "It's hitting on all eight cylinders. So if you can figure out what's happening to the brain on art,
kaylynfreeman

COVID-19 and Compassion Fatigue | Psychology Today - 0 views

  • “Researchers say our brains aren’t wired to make sense of big numbers.” A story about a single tragic death evokes waves of sadness and emotion in us. We focus on the individual’s details, their life story, and the circumstances of their death. As the number of victims increases, our ability to muster empathy fades, something often called compassion fatigue.
  • If we talk instead about multiple people like Constance Johnson at once or just give the numbers involved—about 4,200 women die every year from cervical cancer—the information loses its impact. We don’t comprehend a number like 4,200 the way we do the story of a single individual.
  • As the results of a study by Paul Slovic and colleagues in 2014 showed, the tendency to be charitable and feel compassion diminishes rapidly as the number of people involved increases from one.
  • ...5 more annotations...
  • In support of compassion fatigue, both self-report and physiological measures of affect showed that positive affect declined substantially when the group size was two or more.”
  • After several puffs of neurotransmitter are released, however, that channel may undergo a process called “desensitization” in which it closes and stops responding to signals from the presynaptic neuron.
  • “Observing that the tendency to mentalize with one person more than many people is built into our brains does not mean we should accept it as an excuse for acting passively when facing large-scale crises. This observation implies, however, that we can no longer rely on our moral intuitions.”
  • People may cope with the enormity of the pandemic by trying to find ways to minimize or even dismiss it. Saying that there are other diseases that cause more deaths than COVID-19 could be one such emotional mechanism.
  • We need to tell more stories about real people who have had COVID-19 and experienced it as more than mild symptoms, including stories about people who have been killed by the disease. The stories need to be told one by one. That way, we will be harnessing what we know from cognitive neuroscience to bring the sad message home.
knudsenlu

The History of Dice Reflects Beliefs in Fate and Chance - The Atlantic - 0 views

  • Dice, in their standard six-sided form, seem like the simplest kind of device—almost a classic embodiment of chance. But a new study of more than 100 examples from the last 2,000 years or so unearthed in the Netherlands shows that they have not always looked exactly the way they do now. What’s more, the shifts in dice’s appearance may reflect people’s changing sense of what exactly is behind a roll—fate, or probability.
  • Did it matter to game players that these dice were not fair? “We don’t know for sure,” Eerkens says. The way Romans wrote about dice falls suggests they were regarded as signs of supernatural favor or of a player’s fortune, however. The archaeologist Ellen Swift, in her book Roman Artifacts and Society, writes that high rolls had associations of benevolence and felicity, and that rolling three sixes at the same time seems to have been called a Venus. “Dice potentially played an important role in conceptualizing divine action in the world,” she writes.
  • At the same time, dice tend to get smaller, perhaps to make them easier to hide, as gambling was not favored by the increasingly powerful religious authorities. Some of the study’s dice from this era, in fact, were found all together in a small hole in a garbage heap, including one cheater’s die with an extra three. Could it be someone saw the light and forswore dice?
  • ...3 more annotations...
  • In the 17th century, even Galileo writes about why, in a game with three dice, the number 10 should come up more than the number 9. It’s an observation that one would only be able to detect after thousands of rolls, but the reason behind it involves how many combinations of numbers can add up to each different option.
  • All these changes in dice come about, says Eerkens, “as different astronomers are coming up with new ideas about the world, and mathematicians are starting to understand numbers and probability.” Which came first: Did people begin to intuitively understand what true chance felt like, and adjusted dice accordingly, or did it trickle out from what would eventually become known as the scientific community?
  • And it does make you wonder: Would you know that fair dice should fall equally on each side, or that in a three-dice game, it’s better to bet on 10 than nine, if someone hadn’t told you?
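
Galileo’s 10-versus-9 observation quoted above would take thousands of rolls to notice at the table, but it can be checked by brute force. Here is a minimal sketch (my own verification, not code from the article) enumerating all 216 equally likely ordered rolls of three fair dice:

```python
# Brute-force check of Galileo's dice observation: with three fair
# six-sided dice, a sum of 10 is slightly more likely than a sum of 9.
from collections import Counter
from itertools import product

counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))
total = 6 ** 3  # 216 equally likely ordered outcomes

for target in (9, 10):
    print(f"P(sum = {target}) = {counts[target]}/{total} "
          f"= {counts[target] / total:.4f}")
```

The asymmetry comes from ordering: 9 and 10 can each be written as six unordered triples, but those triples expand to 25 and 27 ordered rolls respectively, so 10 wins by 27/216 to 25/216.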
mmckenziejr01

Does Photographic Memory Exist? - Scientific American - 0 views

  • The intuitive notion of a “photographic” memory is that it is just like a photograph: you can retrieve it from your memory at will and examine it in detail, zooming in on different parts. But a true photographic memory in this sense has never been proved to exist.
  • Most of us do have a kind of photographic memory, in that most people's memory for visual material is much better and more detailed than our recall of most other kinds of material.
  • Sorry to disappoint further, but even an amazing memory in one domain, such as vision, is not a guarantee of great memory across the board.
  • ...1 more annotation...
  • A winner of the memory Olympics, for instance, still had to keep sticky notes on the refrigerator to remember what she had to do during the day.
  •  
    I was researching photographic memory to go off of our discussion last class. Its more scientific name is eidetic memory (this is also the more useful term to Google if you're looking for information).