
Home/ TOK Friends/ Group items tagged algorithms


Javier E

The Failure of Rational Choice Philosophy - NYTimes.com - 1 views

  • According to Hegel, history is idea-driven.
  • Ideas for him are public, rather than in our heads, and serve to coordinate behavior. They are, in short, pragmatically meaningful words.  To say that history is “idea driven” is to say that, like all cooperation, nation building requires a common basic vocabulary.
  • One prominent component of America’s basic vocabulary is “individualism.”
  • individualism, the desire to control one’s own life, has many variants. Tocqueville viewed it as selfishness and suspected it, while Emerson and Whitman viewed it as the moment-by-moment expression of one’s unique self and loved it.
  • individualism as the making of choices so as to maximize one’s preferences. This differed from “selfish individualism” in that the preferences were not specified: they could be altruistic as well as selfish. It differed from “expressive individualism” in having general algorithms by which choices were made. These made it rational.
  • it was born in 1951 as “rational choice theory.” Rational choice theory’s mathematical account of individual choice, originally formulated in terms of voting behavior, made it a point-for-point antidote to the collectivist dialectics of Marxism
  • Functionaries at RAND quickly expanded the theory from a tool of social analysis into a set of universal doctrines that we may call “rational choice philosophy.” Governmental seminars and fellowships spread it to universities across the country, aided by the fact that any alternative to it would by definition be collectivist.
  • rational choice philosophy moved smoothly on the backs of their pupils into the “real world” of business and government
  • Today, governments and businesses across the globe simply assume that social reality is merely a set of individuals freely making rational choices.
  • At home, anti-regulation policies are crafted to appeal to the view that government must in no way interfere with Americans’ freedom of choice.
  • But the real significance of rational choice philosophy lay in ethics. Rational choice theory, being a branch of economics, does not question people’s preferences; it simply studies how they seek to maximize them. Rational choice philosophy seems to maintain this ethical neutrality (see Hans Reichenbach’s 1951 “The Rise of Scientific Philosophy,” an unwitting masterpiece of the genre); but it does not.
  • Whatever my preferences are, I have a better chance of realizing them if I possess wealth and power. Rational choice philosophy thus promulgates a clear and compelling moral imperative: increase your wealth and power!
  • Today, institutions which help individuals do that (corporations, lobbyists) are flourishing; the others (public hospitals, schools) are basically left to rot. Business and law schools prosper; philosophy departments are threatened with closure.
  • Hegel, for one, had denied all three of its central claims in his “Encyclopedia of the Philosophical Sciences” over a century before. In that work, as elsewhere in his writings, nature is not neatly causal, but shot through with randomness. Because of this chaos, we cannot know the significance of what we have done until our community tells us; and ethical life correspondingly consists, not in pursuing wealth and power, but in integrating ourselves into the right kinds of community.
  • By 1953, W. V. O. Quine was exposing the flaws in rational choice epistemology. John Rawls, somewhat later, took on its sham ethical neutrality, arguing that rationality in choice includes moral constraints. The neat causality of rational choice ontology, always at odds with quantum physics, was further jumbled by the environmental crisis, exposed by Rachel Carson’s 1962 book “Silent Spring,” which revealed that the causal effects of human actions were much more complex, and so less predictable, than previously thought.
Javier E

Facebook's Subtle Empire - The New York Times - 1 views

  • Mark Zuckerberg’s empire has become an immensely powerful media organization in its own right, albeit one that effectively subcontracts actual news gathering to other entities (this newspaper included). And its potential influence is amplified by the fact that this Cronkite-esque role is concealed by Facebook’s self-definition as “just” a social hub.
  • Beck is right that Facebook is different in kind from any news organization before it, and that traditional critiques of media bias — from the Chomskyite left as well as from the right — don’t apply neatly to what it’s doing.
  • the more plausible (and inevitable) exercise of Facebook’s power would be basically unconscious
  • Domenech is right that Zuckerberg’s empire still needs vigilant watchdogs and rigorous critiques. True, any Facebook bias is likely to be subtler-than-subtle. But because so many people effectively live inside its architecture while online, there’s a power in a social network’s subtlety that no newspaper or news broadcast could ever match.
  • Human nature being what it is, a social network managed and maintained by people who tend to share a particular worldview — left-libertarian and spiritual-but-not-religious, if I judge the biases of Silicon Valley right — will tend to gently catechize its users into that perspective.
  • The way even an “impersonal” algorithm is set up, the kind of stories it elevates and buries, is also a form of catechesis, a way of teaching human beings about how they should think about the world.
  • even what seem like offhand choices — like Google’s choice of its Doodle subject, to cite a different new media entity — point people toward particular icons, particular ideals.
Javier E

Big Data Is Great, but Don't Forget Intuition - NYTimes.com - 2 views

  • THE problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.
  • In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences. The peril of that approach, as in finance, was the subject of a recent book by Emanuel Derman, a former quant at Goldman Sachs and now a professor at Columbia University. Its title is “Models. Behaving. Badly.”
  • A report last year by the McKinsey Global Institute, the research arm of the consulting firm, projected that the United States needed 140,000 to 190,000 more workers with “deep analytical” expertise and 1.5 million more data-literate managers, whether retrained or hired.
  • A major part of managing Big Data projects, he says, is asking the right questions: How do you define the problem? What data do you need? Where does it come from? What are the assumptions behind the model that the data is fed into? How is the model different from reality?
  • Society might be well served if the model makers pondered the ethical dimensions of their work as well as studying the math, according to Rachel Schutt, a senior statistician at Google Research. “Models do not just predict, but they can make things happen,” says Ms. Schutt, who taught a data science course this year at Columbia. “That’s not discussed generally in our field.”
  • the increasing use of software that microscopically tracks and monitors online behavior has raised privacy worries. Will Big Data usher in a digital surveillance state, mainly serving corporate interests?
  • my bigger concern is that the algorithms that are shaping my digital world are too simple-minded, rather than too smart. That was a theme of a book by Eli Pariser, titled “The Filter Bubble: What the Internet Is Hiding From You.”
Javier E

How Social Media Silences Debate - NYTimes.com - 1 views

  • Social media, like Twitter and Facebook, has the effect of tamping down diversity of opinion and stifling debate about public affairs. It makes people less likely to voice opinions, particularly when they think their views differ from those of their friends, according to a report published Tuesday by researchers at Pew Research Center and Rutgers University.
  • The researchers also found that those who use social media regularly are more reluctant to express dissenting views in the offline world.
  • The Internet, it seems, is contributing to the polarization of America, as people surround themselves with people who think like them and hesitate to say anything different. Internet companies magnify the effect, by tweaking their algorithms to show us more content from people who are similar to us.
  • the Internet has deepened that divide. It makes it easy for people to read only news and opinions from people they agree with. In many cases, people don’t even make that choice for themselves. Last week, Twitter said it would begin showing people tweets even from people they don’t follow if enough other people they follow favorite them.
  • Humans are acutely attuned to the approval of others, constantly reading cues to judge whether people agree with them, the researchers said. Active social media users get many more of these cues — like status updates, news stories people choose to share and photos of how they spend their days — and so they become less likely to speak up.
  • The study also found that for all the discussion of social media becoming the place where people find and discuss news, most people said they got information about the N.S.A. revelations from TV and radio, while Facebook and Twitter were the least likely to be news sources.
Javier E

[Six Questions] | Astra Taylor on The People's Platform: Taking Back Power and Culture ... - 1 views

  • Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource
  • Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story.
  • why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.
  • How do we reconcile our enjoyment of social media even as we understand that the corporations who control them aren’t always acting in our best interests?
  • That was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous
  • the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors
  • I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently
  • these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.
  • I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse
  • there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content
  • That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.)
  • just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem.
  • What steps can we take to encourage better representation of independent and non-commercial media? We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.
  • 6. You advocate for greater government regulation of the Internet. Why is this important?
  • I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight.
  • I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real.
  • social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?
  • So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
Javier E

Dark social traffic in the mobile app era -- Fusion - 1 views

  • over the last two years, the Internet landscape has been changing. People use their phones differently from their computers, and that has made Facebook more dominant.
  • people spend about as much time in apps as they do on the desktop and mobile webs combined.
  • The takeaway is this: if you’re a media company, you are almost certainly underestimating your Facebook traffic. The only question is how much Facebook traffic you’re not counting.
  • it should be even more clear now: Facebook owns web media distribution.
  • The mobile web has exploded. This is due to the falling cost and rising quality of smartphones. Now, both Apple and Google have huge numbers of great apps, and people love them.
  • a good chunk of what we might have called dark social visits are actually Facebook mobile app visitors in disguise.
  • beginning last October, Facebook made changes in its algorithm that started pushing massive amounts of traffic to media publishers. In some cases, as at The Atlantic, where I last worked, our Facebook traffic went up triple-digit percentages. Facebook simultaneously also pushed users to like pages from media companies, which drove up the fan-counts at all kinds of media sites. If you see a page with a million followers, there is a 99 percent chance that it got a push from Facebook.
  • Chief among the non-gaming apps is Facebook. They’ve done a remarkable job building a mobile app that keeps people using it.
  • when people are going through their news feeds on the Facebook app and they click on a link, it’s as if someone cut and pasted that link into the browser, meaning that the Facebook app and the target website don’t do the normal handshaking that they do on the web. In the desktop scenario, the incoming visitor has a tout that runs ahead to the website and says, “Hey, I’m coming from Facebook.com.” In the mobile app scenario that communication, known as the referrer, does not happen.
  • Facebook—which every media publisher already knows owns them—actually has a much tighter grip on web traffic than anyone had thought. Which would make their big-footing among publishers that much more interesting. Because they certainly know how much traffic they’re sending to all your favorite websites, even if those websites themselves do not.
  • Whenever you go to a website, you take along a little profile called a “user agent.” It says what my operating system is and what kind of browser I use, along with some other information.
  • A story’s shareability is now largely determined by its shareability on Facebook, with all its attendant quirks and feedback loops. We’re all optimizing for Facebook now,
  • the social networks—by which I mostly mean Facebook—have begun to eat away at the roots of the old ways of sharing on non-commercial platforms.
  • what people like to do with their phones, en masse, is open up the Facebook app and thumb through their news feeds.
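The two mechanisms the annotations above describe — the missing referrer on clicks from the Facebook app, and the “user agent” profile every visitor carries — are what let an analytics script reclassify some “dark social” traffic. A minimal sketch of that reclassification follows; the `FBAN`/`FBAV` tokens are commonly reported markers of Facebook’s in-app browser, assumed here rather than taken from any official spec.

```python
# Why "dark social" undercounts Facebook: a click from the Facebook mobile app
# arrives with no Referer header, so analytics lump it in with direct traffic.
# The User-Agent string can still betray the in-app browser; the "FBAN"/"FBAV"
# tokens below are commonly observed markers, not a documented guarantee.
def classify_visit(referer, user_agent):
    if referer and "facebook.com" in referer:
        return "facebook-web"        # normal handshake: the referrer "tout" arrived
    if "FBAN" in user_agent or "FBAV" in user_agent:
        return "facebook-app"        # no referrer, but the app's UA gives it away
    if not referer:
        return "dark-social"         # genuinely unattributable
    return "other-referral"

# A Facebook in-app click: no referrer, identifiable only from the user agent.
app_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) "
          "AppleWebKit/605.1.15 [FBAN/FBIOS;FBAV/440.0.0]")
label = classify_visit(None, app_ua)  # "facebook-app"
```

As the article notes, only Facebook itself sees the full picture; a publisher running a classifier like this can recover part of it from the user agent alone.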
julia rhodes

Why Wasn't It 'Grapes of Glee'? Study of Books Finds Economic Link - NYTimes.com - 0 views

  • Could the emotional connotations of words in literature be a kind of lagging economic indicator? According to scientists who analyzed a century’s worth of writing, they might: After using big-data techniques to document the frequency of sad and happy words in millions of books, the researchers concluded that the emotional mood of literature reflects the mood of the economy over the previous 10 years.
  • They then matched that against a well-known indicator called the “economic misery index” — the sum of inflation rates and unemployment rates — and found that literary misery in a given year correlated with the average of the previous decade’s economic misery index numbers.
  • “To me it confirms that we do have a collective memory that conditions the way we write, and that economics is a very important driver of that.”
  • “We think we’re all unique, and we think every novel is individual, and it is. So when they reduce us all to a bunch of data about words, I guess you have to laugh.
  • A 1999 study found that when social and economic conditions were bad, movie actresses with “mature facial features” — small eyes, thin cheeks, large chins — were popular, but when conditions were good, the public liked actresses with childlike features.
  • In the new study, researchers also analyzed 650,000 German books and found the same misery correlation.
  • Still, the practice of applying data-sorting algorithms to art can only go so far. For one thing, the lists of emotion words, created by other researchers and used for years, include some surprising choices: words like “smug” and “wallow” were among 224 words on the “joy” list; potentially neutral adjectives like “dark” and “low” were among 115 words on the “sadness” list.
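The study’s method, as the annotations describe it, reduces to two computable quantities: a per-year “literary misery” score from word counts, and a trailing ten-year average of the economic misery index. A sketch on toy data follows; the word lists and figures are illustrative stand-ins, not the researchers’ actual 224-word joy and 115-word sadness lists.

```python
# Sketch of the study's method on toy data: "literary misery" for a year is
# (sad-word count - joy-word count) / total words, and it is compared against
# the average economic misery index (inflation + unemployment) over the
# previous decade. The word sets below are illustrative assumptions.
SAD_WORDS = {"gloom", "misery", "dark", "low"}
JOY_WORDS = {"delight", "glee", "smug", "wallow"}  # the real lists had surprising members too

def literary_misery(words):
    """(sad count - joy count) / total words, for one year's text."""
    sad = sum(w in SAD_WORDS for w in words)
    joy = sum(w in JOY_WORDS for w in words)
    return (sad - joy) / len(words)

def lagged_economic_misery(index_by_year, year, window=10):
    """Average of the economic misery index over the `window` years before `year`."""
    prior = [index_by_year[y] for y in range(year - window, year)]
    return sum(prior) / window

# Example: a decade of constant misery index 1.0 averages to 1.0.
index_by_year = {y: 1.0 for y in range(1990, 2000)}
avg = lagged_economic_misery(index_by_year, 2000)  # 1.0
```

The correlation the researchers report would then be computed between `literary_misery` per year and `lagged_economic_misery` for the same year, across the century of books.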
Javier E

The Benefits of 'Binocularity' - NYTimes.com - 0 views

  • Will advances in neuroscience move reasonable people to abandon the idea that criminals deserve to be punished?
  • if the idea of deserving punishment depends upon the idea that criminals freely choose their actions, and if neuroscience reveals that free choice is an illusion, then we can see that the idea of deserving punishment is nonsense
  • “new neuroscience will undermine people’s common sense, libertarian conception of free will and the retributivist thinking that depends on it, both of which have heretofore been shielded by the inaccessibility of sophisticated thinking about the mind and its neural basis.”
  • when university students learn about “the neural basis of behavior” — quite simply, the brain activity underlying human actions —they become less supportive of the idea that criminals deserve to be punished.
  • To see what is right — and wrong — with the notion that neuroscience will transform our idea of just deserts, and, more generally, our idea of what it means to be human, it can help to step back and consider
  • British philosopher Jonathan Glover. He said that if we want to understand what sorts of beings we are in depth, we need to achieve a sort of intellectual “binocularity.”
  • Glover was saying that, just as we need two eyes that integrate slightly different information about one scene to achieve visual depth perception, being able see ourselves though two fundamentally different lenses, and integrate those two sources of information, can give us a greater depth of understanding of ourselves.
  • Through one lens we see that we are “subjects” (we act) who have minds and can have the experience of making free choices. Through the other we see that we are “objects” or bodies (we are acted upon), and that our experiences or movements are determined by an infinitely long chain of natural and social forces.
  • intellectual binocularity itself is not easy to achieve. While visual binocularity comes naturally, intellectual binocularity requires effort. In fact — and this is one source of the trouble we so often have when we try to talk about the sorts of beings we are — we can’t actually achieve perfect binocular understanding.
  • We can’t actually see ourselves as subjects and as objects at the same time any more than we can see Wittgenstein’s famous duck-rabbit figure as a duck and as a rabbit at once. Rather, we have to accept the necessity of oscillating between the lenses or ways of seeing, fully aware that, not only are we unable to use both at once, but that there is no algorithm for knowing when to use which.
  • When I said in the beginning that there’s something right about the reasoning of those researchers who reject the idea that our choices are “spontaneous” and not determined by prior events, I was referring to their rejection of the idea that our choices are rooted in some God-given, extra-natural, bodyless stuff.
  • My complaint is that they slip from making the reasonable claim that such extra-natural stuff is an illusion to speaking in ways that suggest that free will is an illusion, full stop. To suggest that our experience of choosing is wholly an illusion is as unhelpful as to suggest that, to explain the emergence of that experience, we need to appeal to extra-natural phenomena.
  • Using either lens alone can lead to pernicious mistakes. When we use only the subject lens, we are prone to a sort of inhumanity where we ignore the reality of the natural and social forces that bear down on all of us to make our choices.
  • When we use only the object lens, however, we are prone to a different, but equally noxious sort of inhumanity, where we fail to appreciate the reality of the experience of making choices freely and of knowing that we can deserve punishment — or praise.
Javier E

How Google Dominates Us by James Gleick | The New York Review of Books - 1 views

  • Most of the time Google does not actually have the answers. When people say, “I looked it up on Google,” they are committing a solecism. When they try to erase their embarrassing personal histories “on Google,” they are barking up the wrong tree. It is seldom right to say that anything is true “according to Google.” Google is the oracle of redirection. Go there for “hamadryad,” and it points you to Wikipedia. Or the Free Online Dictionary. Or the Official Hamadryad Web Site (it’s a rock band, too, wouldn’t you know). Google defines its mission as “to organize the world’s information,” not to possess it or accumulate it.
  • Then again, a substantial portion of the world’s printed books have now been copied onto the company’s servers, where they share space with millions of hours of video and detailed multilevel imagery of the entire globe, from satellites and from its squadrons of roving street-level cameras. Not to mention the great and growing trove of information Google possesses regarding the interests and behavior of, approximately, everyone.
  • When I say Google “possesses” all this information, that’s not the same as owning it. What it means to own information is very much in flux.
  • The essence of the Web is the linking of individual “pages” on websites, one to another. Every link represents a recommendation—a vote of interest, if not quality. So the algorithm assigns every page a rank, depending on how many other pages link to it. Furthermore, all links are not valued equally. A recommendation is worth more when it comes from a page that has a high rank itself. The math isn’t trivial—PageRank is a probability distribution, and the calculation is recursive, each page’s rank depending on the ranks of pages that depend…and so on. Page and Brin patented PageRank and published the details even before starting the company they called Google.
  • As they saw it from the first, their mission encompassed not just the Internet but all the world’s books and images, too.
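The recursive rank calculation Gleick describes — every page’s rank is a probability, and a link from a high-rank page is worth more — can be sketched as a small power iteration. The damping factor of 0.85 and the tiny link graph below are illustrative assumptions, not Google’s production values.

```python
# Toy PageRank by power iteration: links act as votes, and a vote from a
# high-rank page is worth more. Assumes every page has at least one
# outgoing link (real implementations handle dangling pages separately).
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a baseline (1 - d)/n, then receives shares of the
        # ranks of the pages that link to it -- the recursion described above.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# "a" is linked by b, c, and d, so it ends up with the highest rank.
graph = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
ranks = pagerank(graph)
```

The ranks form a probability distribution (they sum to 1), which is exactly the property the annotation notes makes the math non-trivial: each page’s rank depends on the ranks of the pages linking to it, and so on recursively.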
maddieireland334

Can an Algorithm Write a Better News Story Than a Human Reporter? | WIRED - 0 views

  • Had Narrative Science - a company that trains computers to write news stories - created this piece, it probably would not mention that the company's Chicago headquarters lie only a long baseball toss from the Tribune newspaper building. Nor would it dwell on the fact that this potentially job-killing technology was incubated in part at Northwestern's Medill School of Journalism, Media, Integrated Marketing Communications.
dpittenger

Is Facebook keeping you in a political bubble? | Science/AAAS | News - 0 views

  • Researchers call it the filter bubble: the personalized view of the Internet created through tech company algorithms.
  • For example, liberals and conservatives may rarely learn about issues that concern the other side simply because those issues never make it into their news feeds. Over time, this could cause political polarization, because people are not exposed to topics and ideas from the opposite camp.
  • So on the question of whether Facebook is a force for good or ill for democracy, Aral says, "the jury is still out."
nolan_delaney

Five Practical Uses for "Spooky" Quantum Mechanics | Science | Smithsonian - 0 views

  • This can be fixed using potentially unbreakable quantum key distribution (QKD). In QKD, information about the key is sent via photons that have been randomly polarized. This restricts the photon so that it vibrates in only one plane—for example, up and down, or left to right. The recipient can use polarized filters to decipher the key and then use a chosen algorithm to securely encrypt a message. The secret data still gets sent over normal communication channels, but no one can decode the message unless they have the exact quantum key. That's tricky, because quantum rules dictate that "reading" the polarized photons will always change their states, and any attempt at eavesdropping will alert the communicators to a security breach.
  • Mind-blowing applications for Quantum Mechanics, including possible computer passwords that are impossible to crack because they are protected by the laws of physics
Javier E

How to Invent a Person Online - Curtis Wallen - The Atlantic - 2 views

  • Social networks and data brokers use algorithms and probabilities to reconstruct our identities, and then try to influence the way we think and feel and make decisions.
  • t’s not an exaggeration to say everything you do online is being followed. And the more precisely a company can tailor your online experience, the more money it can make from advertisers.
  • After Edward Snowden’s leaks about NSA surveillance, Tucker and Marthews found, the frequency of these sensitive search terms declined—suggesting that Internet users have become less likely to explore "search terms that they [believe] might get them in trouble with the U.S. government." The study also found that people have become less likely to search "embarrassing" topics
  • In other words, people are doing their best to blend in with the crowd.
  • The challenge of achieving true anonymity, though, is that evading surveillance makes your behavior anomalous—and anomalies stick out. As the Japanese proverb says, "A nail that sticks out gets hammered down." Glenn Greenwald explained recently that simply using encryption can make you a target. For me, this was all the more motivation to disappear.
  • The U.S. Department of Defense has also figured out how influential Facebook and Twitter can be. In 2011, it announced a new “Social Media in Strategic Communication” (SMISC) program to detect and counter information the U.S. government deemed dangerous. “Since everyone is potentially an influencer on social media and is capable of spreading information,” one researcher involved in a SMISC study told The Guardian, “our work aims to identify and engage the right people at the right time on social media to help propagate information when needed.”
  • For those of us who feel confident that we have nothing to hide, the future of Internet security might not seem like a major concern. But we underestimate the many ways in which our online identities can be manipulated.
  • Private companies are also using personal information in hidden ways. They don’t simply learn our tastes and habits, offering us more of what we want and less of what we don’t. As Michael Fertik wrote in a 2013 Scientific American article titled “The Rich See a Different Internet Than the Poor,” credit lenders have the ability to hide their offers from people who may need loans the most. And Google now has a patent to change its prices based on who’s buying.
  • It is essentially impossible to achieve anonymity online. It requires a complete operational posture that extends from the digital to the physical. Downloading a secure messaging app and using Tor won’t all of a sudden make you “NSA-proof.” And doing it right is really, really hard.
  • Weighing these trade-offs in my day-to-day life led to a few behavioral changes, but I have a mostly normal relationship with the Internet—I deleted my Facebook account, I encrypt my emails whenever I can, and I use a handful of privacy-minded browser extensions. But even those are steps many people are unwilling, or unable, to take.
Javier E

Is Algebra Necessary? - NYTimes.com - 1 views

  • My aim is not to spare students from a difficult subject, but to call attention to the real problems we are causing by misdirecting precious resources.
  • one in four ninth graders fail to finish high school. In South Carolina, 34 percent fell away in 2008-9, according to national data released last year; for Nevada, it was 45 percent. Most of the educators I’ve talked with cite algebra as the major academic reason.
  • Algebra is an onerous stumbling block for all kinds of students: disadvantaged and affluent, black and white. In New Mexico, 43 percent of white students fell below “proficient,” along with 39 percent in Tennessee
  • ...15 more annotations...
  • The depressing conclusion of a faculty report: “failing math at all levels affects retention more than any other academic factor.” A national sample of transcripts found mathematics had twice as many F’s and D’s as other subjects.
  • Of all who embark on higher education, only 58 percent end up with bachelor’s degrees. The main impediment to graduation: freshman math.
  • California’s two university systems, for instance, consider applications only from students who have taken three years of mathematics and in that way exclude many applicants who might excel in fields like art or history. Community college students face an equally prohibitive mathematics wall. A study of two-year schools found that fewer than a quarter of their entrants passed the algebra classes they were required to take.
  • Nor will just passing grades suffice. Many colleges seek to raise their status by setting a high mathematics bar. Hence, they look for 700 on the math section of the SAT, a height attained in 2009 by only 9 percent of men and 4 percent of women. And it’s not just Ivy League colleges that do this: at schools like Vanderbilt, Rice and Washington University in St. Louis, applicants had best be legacies or athletes if they have scored less than 700 on their math SATs.
  • “mathematical reasoning in workplaces differs markedly from the algorithms taught in school.” Even in jobs that rely on so-called STEM credentials — science, technology, engineering, math — considerable training occurs after hiring, including the kinds of computations that will be required.
  • I fully concur that high-tech knowledge is needed to sustain an advanced industrial economy. But we’re deluding ourselves if we believe the solution is largely academic.
  • a definitive analysis by the Georgetown Center on Education and the Workforce forecasts that in the decade ahead a mere 5 percent of entry-level workers will need to be proficient in algebra or above.
  • A January 2012 analysis from the Georgetown center found 7.5 percent unemployment for engineering graduates and 8.2 percent among computer scientists.
  • “Our civilization would collapse without mathematics.” He’s absolutely right.
  • Quantitative literacy clearly is useful in weighing all manner of public policies
  • Mathematics is used as a hoop, a badge, a totem to impress outsiders and elevate a profession’s status.
  • Instead of investing so much of our academic energy in a subject that blocks further attainment for much of our population, I propose that we start thinking about alternatives. Thus mathematics teachers at every level could create exciting courses in what I call “citizen statistics.” This would not be a backdoor version of algebra, as in the Advanced Placement syllabus. Nor would it focus on equations used by scholars when they write for one another. Instead, it would familiarize students with the kinds of numbers that describe and delineate our personal and public lives.
  • This need not involve dumbing down. Researching the reliability of numbers can be as demanding as geometry.
  • I hope that mathematics departments can also create courses in the history and philosophy of their discipline, as well as its applications in early cultures. Why not mathematics in art and music — even poetry — along with its role in assorted sciences? The aim would be to treat mathematics as a liberal art, making it as accessible and welcoming as sculpture or ballet
  • Yes, young people should learn to read and write and do long division, whether they want to or not. But there is no reason to force them to grasp vectorial angles and discontinuous functions. Think of math as a huge boulder we make everyone pull, without assessing what all this pain achieves. So why require it, without alternatives or exceptions? Thus far I haven’t found a compelling answer.
Javier E

Technology's Man Problem - NYTimes.com - 0 views

  • computer engineering, the most innovative sector of the economy, remains behind. Many women who want to be engineers encounter a field where they not only are significantly underrepresented but also feel pushed away.
  • Among the women who join the field, 56 percent leave by midcareer, a startling attrition rate that is double that for men, according to research from the Harvard Business School.
  • A culprit, many people in the field say, is a sexist, alpha-male culture that can make women and other people who don’t fit the mold feel unwelcome, demeaned or even endangered.
  • ...12 more annotations...
  • “I’ve been a programmer for 13 years, and I’ve always been one of the only women and queer people in the room. I’ve been harassed, I’ve had people make suggestive comments to me, I’ve had people basically dismiss my expertise. I’ve gotten rape and death threats just for speaking out about this stuff.”
  • “We see these stories, ‘Why aren’t there more women in computer science and engineering?’ and there’s all these complicated answers like, ‘School advisers don’t have them take math and physics,’ and it’s probably true,” said Lauren Weinstein, a man who has spent his four-decade career in tech working mostly with other men, and is currently a consultant for Google. “But I think there’s probably a simpler reason,” he said, “which is these guys are just jerks, and women know it.”
  • once programming gained prestige, women were pushed out. Over the decades, the share of women in computing has continued to decline. In 2012, just 18 percent of computer-science college graduates were women, down from 37 percent in 1985, according to the National Center for Women & Information Technology.
  • Some 1.2 million computing jobs will be available in 2022, yet United States universities are producing only 39 percent of the graduates needed to fill them, the N.C.W.I.T. estimates.
  • Twenty percent of software developers are women, according to the Labor Department, and fewer than 6 percent of engineers are black or Hispanic. Comparatively, 56 percent of people in business and financial-operations jobs are women, as are 36 percent of physicians and surgeons and one-third of lawyers.
  • an engineer at Pinterest has collected data from people at 133 start-ups and found that an average of 12 percent of the engineers are women.
  • “It makes a hostile environment for me,” she said. “But I don’t want to raise my hand and call negative attention toward myself, and become the woman who is the problem — ‘that woman.’ In start-up culture they protect their own tribe, so by putting my hand up, I’m saying I’m an ‘other,’ I shouldn’t be there, so for me that’s an economic threat.”
  • “Many women have come to me and said they basically have had to hide on the Net now,” said Mr. Weinstein, who works on issues of identity and anonymity online. “They use male names, they don’t put their real photos up, because they are immediately targeted and harassed.”
  • “It’s a boys’ club, and you have to try to get into it, and they’re trying as hard as they can to prove you can’t,” said Ephrat Bitton, the director of algorithms at FutureAdvisor, an online investment start-up that she says has a better culture because almost half the engineers are women.
  • Writing code is a high-pressure job with little room for error, as are many jobs. But coding can be stressful in a different way, women interviewed for this article said, because code reviews — peer reviews to spot mistakes in software — can quickly devolve.
  • “Code reviews are brutal — ‘Mine is better than yours, I see flaws in yours’ — and they should be, for the creation of good software,” said Ellen Ullman, a software engineer and author. “I think when you add a drop of women into it, it just exacerbates the problem, because here’s a kind of foreigner.”
  • But some women argue that these kinds of initiatives are unhelpful.“My general issue with the coverage of women in tech is that women in the technology press are talked about in the context of being women, and men are talked about in the context of being in technology,” said a technical woman who would speak only on condition of anonymity because she did not want to be part of an article about women in tech.
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ... - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • ...43 more annotations...
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it.
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
  •  
    Automation increases the efficiency and speed of tasks, but decreases the individual's knowledge of a task and a human's ability to learn.
grayton downing

Mapping Disease | The Scientist Magazine® - 0 views

  • Researchers and journalists have scrambled to map the spread of H7N9 bird flu through China to identify its source and highlight at-risk areas. Mapping is a common response to outbreaks, especially of new diseases, but some scientists believe it must become a more proactive part of disease control.
  • efforts to plot the locations of infectious diseases still tend to be reactive rather than proactive.
  • only 4 percent of important infectious diseases have been comprehensively mapped at a global scale. The rest are plagued by patchy data.
  • ...5 more annotations...
  • The researchers audited existing maps for 174 infectious diseases of clinical importance. Following a huge systematic review, they scored the maps for each disease according to how much of the known global range is covered and the quality of the data—whether they were up-to-date and whether they relied on accurate measures like molecular diagnostics or GPS coordinates, rather than unverified expert opinion.
  • even the highest-scoring diseases have room for improvement.
  • They argue that technology can help to plug the gaps in our maps in the future, and they point to several untapped sources of data. For example, both PubMed and GenBank, which collect biomedical literature and gene sequences respectively, contain geospatial information for the majority of diseases that the team reviewed. And social networks like Twitter can provide invaluable real-time clues about spreading symptoms and illnesses, often tagged with geographical information. During the 2009 outbreak of H1N1 swine flu, for example, Twitter predicted outbreaks 1 or 2 weeks ahead of traditional surveillance measures.
  • “I struggled because governments or researchers wouldn’t share their information,” he said. “But there was all this incredible knowledge on the web being discussed through professional networks or news media.”
  • He believes that the problem now is not a lack of data but a deluge of it. Sites like HealthMap and BioCaster are already using learning algorithms to filter online sources for information relevant to infections. They are also using crowdsourcing tools that ask online volunteers to check if flagged social media chatter actually relates to the disease of interest.
Javier E

Does the Digital Classroom Enfeeble the Mind? - NYTimes.com - 1 views

  • Some of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons. There are some vaguely cool ideas floating around, and we might know a lot more about these things any moment now, but at this moment, we don’t.
  • At school, standardized testing rules. Outside school, something similar happens. Students spend a lot of time acting as trivialized relays in giant schemes designed for the purposes of advertising and other revenue-minded manipulations. They are prompted to create databases about themselves and then trust algorithms to assemble streams of songs and movies and stories for their consumption.
  • What is really lost when this happens is the self-invention of a human brain. If students don’t learn to think, then no amount of access to information will do them any good.
Javier E

'Filter Bubble': Pariser on Web Personalization, Privacy - TIME - 0 views

  • the World Wide Web came along and blew the gatekeepers away. Suddenly anyone with a computer and an Internet connection could take part in the conversation. Countless viewpoints bloomed. There was no longer a mainstream; instead, there was an ocean of information, one in which Web users were free to swim.
  • Where once Google delivered search results based on an algorithm that was identical for everyone, now what we see when we enter a term in the big box depends on who we are, where we are and what we are. Facebook has long since done the same thing for its all-important News Feed: you'll see different status updates and stories float to the top based on the data Mark Zuckerberg and company have on you. The universal Web is a thing of the past. Instead, as Pariser writes, we've been left "isolated in a web of one" — and, given that we increasingly view the world through the lens of the Internet, that change has frightening consequences for the media, community and even democracy.
  • Google has begun personalizing search results — something it does even if you're not signed into your Google account. (A Google engineer told Pariser that the company uses 57 different signals to shape individual search results, including what kind of browser you're using and where you are.)
  • ...1 more annotation...
  • Yahoo! News — the biggest news site on the Web — is personalized, and even mainstream sites like those of the New York Times and the Washington Post are giving more space to personalized recommendations. As Google executive chairman Eric Schmidt has said, "It will be very hard for people to watch or consume something that is not tailored for them."
Javier E

Lead Gen Sites Pose Challenge to Google - the Haggler - NYTimes.com - 0 views

  • Mr. Strom, it turns out, has so little chance of outranking lead gen sites that he’s having a hard time finding a Web consultant to help him fight back. “I told him that it would just be a waste of his money,” says Craig Baerwaldt of Local Inbound Marketing, a search engine expert whom Mr. Strom tried to hire recently. “There are hundreds of these lead gen sites and they spend a ton of money gaming Google.”
  • because few people search beyond the first page online, snookering Google might be far more effective, especially because many people assume that the company’s algorithm does a bit of consumer-friendly vetting.
  • Yet if the example of locksmiths is any indication, the horde has the upper hand in certain service sectors, and it all but owns Google Places.
  • ...1 more annotation...
  • ‘A young man came yesterday, quoted me $49 to open my door, then he drilled my lock, charged me $400 and left — and now I need a new lock.’ I hear something like that almost every week.”