
TOK Friends / Group items tagged: machines


Javier E

Understanding the Social Networks | Talking Points Memo - 0 views

  • Even when people understand in some sense – and often even in detail – how the algorithms work they still tend to see these platforms as modern, digital versions of the town square. There have always been people saying nonsensical things, lying, unknowingly peddling inaccurate information. And our whole civic order is based on a deep skepticism about any authority’s ability to determine what’s true or accurate and what’s not. So really there’s nothing new under the sun, many people say.
  • But all of these points become moot when the networks – the virtual public square – are actually run by a series of computer programs designed to maximize ‘engagement’ and strong emotion for the purposes of selling advertising.
  • But really all these networks are running experiments that put us collectively into the role of Pavlov’s dogs.
  • ...6 more annotations...
  • The algorithms are showing you things to see what you react to and showing you more of the things that prompt an emotional response, that make it harder to leave Facebook or Instagram or any of the other social networks.
  • really if your goal is to maximize engagement that is of course what you’d do since anger is a far more compelling and powerful emotion than appreciation.
  • Facebook didn’t do that. That’s coded into our neurology. Facebook really is an extremism generating machine. It’s really an inevitable part of the core engine.
  • it’s not just Facebook. Or perhaps you could say it’s not even Facebook at all. It’s the mix of machine learning and the business models of all the social networks
  • They have real upsides. They connect us with people. Show us fun videos. But they are also inherently destructive. And somehow we have to take cognizance of that – and not just as a matter of the business decisions of one company.
  • the social networks – meaning the mix of machine learning and advertising/engagement based business models – are really something new under the sun. They’re addiction and extremism generating systems. It’s what they’re designed to do.
Javier E

Opinion | Tesla suffers from the boss's addiction to Twitter - The Washington Post - 0 views

  • For some perspective on what’s happening with Elon Musk and Twitter, I suggest spending a few minutes familiarizing yourself with one of Twitter’s sillier episodes from the past, a fight that erupted almost a year ago between the “shape rotators” of Silicon Valley and the “wordcels” (aspersion intended) of journalism and related professions. Many of the combatants were, at first, merely fighting over which group should have higher social status (theirs), but the episode also highlighted real divisions between West Coast and East — math and verbal, free-speech culture and safety culture, people who make things happen and people who talk about them afterward.
  • For years now, conflict between the two groups has been boiling over onto social media, into courtrooms and onto the pages of major news outlets. Team Shape Rotator believes Team Wordcel is parasitic and dangerous, ballyragging institutions into curbing both free speech and innovation in the name of safety. Team “Stop calling me a Wordcel” sees its opponents as self-centered and reckless, disrupting and mean-meming their way toward some vaguely imagined doom.
  • his audacity seems to be backfiring, as of course did Napoleon’s eventually.
  • ...5 more annotations...
  • You can think of Musk’s acquisition of Twitter as the latest sortie, a takeover of the ultimate wordcel site by the world’s most successful shape rotator.
  • more likely, he fell prey to a different delusion, one in which the shape rotators and the wordcels are united: thinking of Twitter in terms of words and arguments, as a “digital public square” where vital questions are hashed out. It is that, sometimes, but that’s not what it’s designed for. It’s designed to maximize engagement, which is to say, it’s an addiction machine for the highly verbal.
  • Both groups theoretically understand what the machine is doing — the wordcels write endless articles about bad algorithms, and the shape rotators build them. But both nonetheless talk as though they’re saving the world even as they compulsively follow the programming. The shape rotators bait the wordcels because that’s what makes the machine spit out more rewarding likes and retweets. We wordcels return the favor for the same reason.
  • Musk could theoretically rework Twitter’s architecture to downrank provocation and make it less addictive. But of course, that would make it a less profitable business
  • More to the point, the reason he bought it is that he, like his critics, is hooked on it the way it is now. Unfortunately for Tesla shareholders, Musk has now put himself in the position of a dealer who can spend all day getting high on his own supply.
Javier E

Machines of Laughter and Forgetting - NYTimes.com - 0 views

  • “Civilization,” wrote the philosopher and mathematician Alfred North Whitehead in 1911, “advances by extending the number of important operations which we can perform without thinking about them.”
  • On this account, technology can save us a lot of cognitive effort, for “thinking” needs to happen only once, at the design stage.
  • The hidden truth about many attempts to “bury” technology is that they embody an amoral and unsustainable vision. Pick any electrical appliance in your kitchen. The odds are that you have no idea how much electricity it consumes, let alone how it compares to other appliances and households. This ignorance is neither natural nor inevitable; it stems from a conscious decision by the designer of that kitchen appliance to free up your “cognitive resources” so that you can unleash your inner Oscar Wilde on “contemplating” other things. Multiply such ignorance by a few billion, and global warming no longer looks like a mystery.
  • ...6 more annotations...
  • on many important issues, civilization only destroys itself by extending the number of important operations that we can perform without thinking about them. On many issues, we want more thinking, not less.
  • Given that our online tools and platforms are built in a way to make our browsing experience as frictionless as possible, is it any surprise that so much of our personal information is disclosed without our ever realizing it?
  • Instead of having the designer think through all the moral and political implications of technology use before it reaches users — an impossible task — we must find a way to get users to do some of that thinking themselves.
  • most designers, following Wilde, think of technologies as nothing more than mechanical slaves that must maximize efficiency. But some are realizing that technologies don’t have to be just trivial problem-solvers: they can also be subversive troublemakers, making us question our habits and received ideas.
  • Recently, designers in Germany built devices — “transformational products,” they call them — that engage users in “conversations without words.” My favorite is a caterpillar-shaped extension cord. If any of the devices plugged into it are left in standby mode, the “caterpillar” starts twisting as if it were in pain. Does it do what normal extension cords do? Yes. But it also awakens users to the fact that the cord is simply the endpoint of a complex socio-technical system with its own politics and ethics. Before, designers have tried to conceal that system. In the future, designers will be obliged to make it visible.
  • Will such extra seconds of thought — nay, contemplation — slow down civilization? They well might. But who said that stopping to catch a breath on our way to the abyss is not a sensible strategy?
Javier E

Noam Chomsky on Where Artificial Intelligence Went Wrong - Yarden Katz - The Atlantic - 0 views

  • If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else.
  • If a molecule is too big, you give it to the chemists. The chemists, for them, if the molecule is too big or the system gets too big, you give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on.
  • neuroscience for the last couple hundred years has been on the wrong track. There's a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel and King, arguing -- in my view, plausibly -- that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they've been looking for things that have the properties of associationist psychology.
  • ...19 more annotations...
  • in general what he argues is that if you take a look at animal cognition, human too, it's computational systems. Therefore, you want to look for the units of computation. Think about a Turing machine, say, which is the simplest form of computation, you have to find units that have properties like "read", "write" and "address." That's the minimal computational unit, so you've got to look in the brain for those. You're never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You've got to start by looking for what's there and what's working and you see that from Marr's highest level.
  • it's basically in the spirit of Marr's analysis. So when you're studying vision, he argues, you first ask what kind of computational tasks is the visual system carrying out. And then you look for an algorithm that might carry out those computations and finally you search for mechanisms of the kind that would make the algorithm work. Otherwise, you may never find anything.
  • "Good Old Fashioned AI," as it's labeled now, made strong use of formalisms in the tradition of Gottlob Frege and Bertrand Russell, mathematical logic for example, or derivatives of it, like nonmonotonic reasoning and so on. It's interesting from a history of science perspective that even very recently, these approaches have been almost wiped out from the mainstream and have been largely replaced -- in the field that calls itself AI now -- by probabilistic and statistical models. My question is, what do you think explains that shift and is it a step in the right direction?
  • AI and robotics got to the point where you could actually do things that were useful, so it turned to the practical applications and somewhat, maybe not abandoned, but put to the side, the more fundamental scientific questions, just caught up in the success of the technology and achieving specific goals.
  • The approximating unanalyzed data kind is sort of a new approach, not totally, there's things like it in the past. It's basically a new approach that has been accelerated by the existence of massive memories, very rapid processing, which enables you to do things like this that you couldn't have done by hand. But I think, myself, that it is leading subjects like computational cognitive science into a direction of maybe some practical applicability... [Interviewer: ...in engineering?] Chomsky: ...But away from understanding.
  • I was very skeptical about the original work. I thought it was first of all way too optimistic, it was assuming you could achieve things that required real understanding of systems that were barely understood, and you just can't get to that understanding by throwing a complicated machine at it.
  • if success is defined as getting a fair approximation to a mass of chaotic unanalyzed data, then it's way better to do it this way than to do it the way the physicists do, you know, no thought experiments about frictionless planes and so on and so forth. But you won't get the kind of understanding that the sciences have always been aimed at -- what you'll get at is an approximation to what's happening.
  • Suppose you want to predict tomorrow's weather. One way to do it is okay I'll get my statistical priors, if you like, there's a high probability that tomorrow's weather here will be the same as it was yesterday in Cleveland, so I'll stick that in, and where the sun is will have some effect, so I'll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow's weather is going to be. That's not what meteorologists do -- they want to understand how it's working. And these are just two different concepts of what success means, of what achievement is.
  • if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives -- but you learn nothing about the language.
  • the right approach, is to try to see if you can understand what the fundamental principles are that deal with the core properties, and recognize that in the actual usage, there's going to be a thousand other variables intervening -- kind of like what's happening outside the window, and you'll sort of tack those on later on if you want better approximations, that's a different approach.
  • take a concrete example of a new field in neuroscience, called Connectomics, where the goal is to find the wiring diagram of very complex organisms, find the connectivity of all the neurons in say human cerebral cortex, or mouse cortex. This approach was criticized by Sidney Brenner, who in many ways is [historically] one of the originators of the approach. Advocates of this field don't stop to ask if the wiring diagram is the right level of abstraction -- maybe it's not
  • if you went to MIT in the 1960s, or now, it's completely different. No matter what engineering field you're in, you learn the same basic science and mathematics. And then maybe you learn a little bit about how to apply it. But that's a very different approach. And it resulted maybe from the fact that really for the first time in history, the basic sciences, like physics, had something really to tell engineers. And besides, technologies began to change very fast, so not very much point in learning the technologies of today if it's going to be different 10 years from now. So you have to learn the fundamental science that's going to be applicable to whatever comes along next. And the same thing pretty much happened in medicine.
  • that's the kind of transition from something like an art, that you learn how to practice -- an analog would be trying to match some data that you don't understand, in some fashion, maybe building something that will work -- to science, what happened in the modern period, roughly Galilean science.
  • it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language.
  • people like Shimon Ullman discovered some pretty remarkable things like the rigidity principle. You're not going to find that by statistical analysis of data. But he did find it by carefully designed experiments. Then you look for the neurophysiology, and see if you can find something there that carries out these computations. I think it's the same in language, the same in studying our arithmetical capacity, planning, almost anything you look at. Just trying to deal with the unanalyzed chaotic data is unlikely to get you anywhere, just as it wouldn't have gotten Galileo anywhere.
  • with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject
  • You can invent a world -- I don't think it's our world -- but you can invent a world in which nothing happens except random changes in objects and selection on the basis of external forces. I don't think that's the way our world works, I don't think it's the way any biologist thinks it is. There are all kind of ways in which natural law imposes channels within which selection can take place, and some things can happen and other things don't happen. Plenty of things that go on in the biology in organisms aren't like this. So take the first step, meiosis. Why do cells split into spheres and not cubes? It's not random mutation and natural selection; it's a law of physics. There's no reason to think that laws of physics stop there, they work all the way through. [Interviewer: Well, they constrain the biology, sure.] Chomsky: Okay, well then it's not just random mutation and selection. It's random mutation, selection, and everything that matters, like laws of physics.
  • What I think is valuable is the history of science. I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for any more than Galileo did, and there's a lot to learn from that.
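
The Gallistel point quoted above, that a computational system needs primitive operations like "read", "write" and "address", can be made concrete with a textbook Turing machine. The sketch below is purely illustrative: the states, the bit-flipping rule and the tape encoding are arbitrary choices, and it claims nothing about how brains implement computation.

```python
# Minimal Turing machine: a finite control plus a tape it can read,
# write, and move over -- the "read/write/address" primitives the
# excerpt refers to. The transition table below simply flips bits.

def run_turing_machine(tape_str, blank="_", max_steps=1000):
    # Transition table: (state, symbol) -> (write, move, next_state)
    rules = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", blank): (blank, +1, "halt"),
    }
    tape = dict(enumerate(tape_str))    # sparse tape: position -> symbol
    head, state = 0, "scan"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)  # read
        write, move, state = rules[(state, symbol)]
        tape[head] = write              # write
        head += move                    # address the next cell
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

print(run_turing_machine("1011"))  # -> "0100"
```
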
anonymous

Free will debate: What does free will mean and how did it evolve? - 0 views

  • Many scientists cannot imagine how the idea of free will could be reconciled with the laws of physics and chemistry. Brain researchers say that the brain is just a bunch of nerve cells that fire as a direct result of chemical and electrical events, with no room for free will
  • Scientists take delight in (and advance their careers by) claiming to have disproved conventional wisdom, and so bashing free will is appealing. But their statements against free will can be misleading
  • “Free will means freedom from causation.” Other scientists who argue against free will say that it means that a soul or other supernatural entity causes behavior, and not surprisingly they consider such explanations unscientific.
  • ...9 more annotations...
  • There is a genuine psychological reality behind the idea of free will. The debate is merely about whether this reality deserves to be called free will.
  • Our actions cannot break the laws of physics, but they can be influenced by things beyond gravity, friction, and electromagnetic charges. No number of facts about a carbon atom can explain life, let alone the meaning of your life. These causes operate at different levels of organization.
  • Free will cannot violate the laws of physics or even neuroscience, but it invokes causes that go beyond them
  • Self-control furnishes the possibility of acting from rational principles rather than acting on impulse.
  • If you think of freedom as being able to do whatever you want, with no rules, you might be surprised to hear that free will is for following rules. Doing whatever you want is fully within the capability of any animal in the forest. Free will is for a far more advanced way of acting
  • That, in a nutshell, is the inner deciding process that humans have evolved. That is the reality behind the idea of free will: these processes of rational choice and self-control
  • Self-control counts as a kind of freedom because it begins with not acting on every impulse. The simple brain acts whenever something triggers a response: A hungry creature sees food and eats it
  • Our ancestors evolved the ability to act in the ways necessary for culture to succeed. Free will likely will be found right there—it’s what enables humans to control their actions in precisely the ways required to build and operate complex social systems.
  • Understanding free will in this way allows us to reconcile the popular understanding of free will as making choices with our scientific understanding of the world.
Javier E

What to Read: Meditations on a World Divided - NYTimes.com - 0 views

  • What’s different about Murray’s analysis is that his villain — largely implicit in the book, but a central presence nonetheless — is the cultural revolution of the 1970s and the consequent relaxation of traditional social restraints, like the disapproval of child-bearing out of wedlock.
  • What’s missing from Murray’s book, as Paul Krugman pointed out in his column on Friday, is money. There is absolutely a cultural chasm between the 1 percent and the 99 percent (as I argued in The Atlantic last year) — but culture is a symptom and not a cause of the gap. What’s going on is what MIT economist David Autor has dubbed the polarization of the labor market and what Maarten Goos and Alan Manning at the Centre for Economic Performance at the LSE call the division of the world of work into “lousy” and “lovely” jobs.
  • Part of that shift is being driven by the technology revolution, whose latest wave is the rise of the machine-to-machine economy
Javier E

New Statesman - All machine and no ghost? - 0 views

  • More subtly, there are many who insist that consciousness just reduces to brain states - a pang of regret, say, is just a surge of chemicals across a synapse. They are collapsers rather than deniers. Though not avowedly eliminative, this kind of view is tacitly a rejection of the very existence of consciousness
  • it occurred to me that the problem might lie not in nature but in ourselves: we just don't have the faculties of comprehension that would enable us to remove the sense of mystery. Ontologically, matter and consciousness are woven intelligibly together but epistemologically we are precluded from seeing how. I used Noam Chomsky's notion of "mysteries of nature" to describe the situation as I saw it. Soon, I was being labelled (by Owen Flanagan) a "mysterian"
  • Dualism makes the mind too separate, thereby precluding intelligible interaction and dependence.
  • ...11 more annotations...
  • At this point the idealist swooshes in: ladies and gentlemen, there is nothing but mind! There is no problem of interaction with matter because matter is mere illusion
  • idealism has its charms but taking it seriously requires an antipathy to matter bordering on the maniacal. Are we to suppose that material reality is just a dream, a baseless fantasy, and that the Big Bang was nothing but the cosmic spirit having a mental sneezing fit?
  • panpsychism: even the lowliest of material things has a streak of sentience running through it, like veins in marble. Not just parcels of organic matter, such as lizards and worms, but also plants and bacteria and water molecules and even electrons. Everything has its primitive feelings and minute allotment of sensation.
  • The trouble with panpsychism is that there just isn't any evidence of the universal distribution of consciousness in the material world.
  • The dualist, by contrast, freely admits that consciousness exists, as well as matter, holding that reality falls into two giant spheres. There is the physical brain, on the one hand, and the conscious mind, on the other: the twain may meet at some point but they remain distinct entities.
  • The more we know of the brain, the less it looks like a device for creating consciousness: it's just a big collection of biological cells and a blur of electrical activity - all machine and no ghost.
  • mystery is quite pervasive, even in the hardest of sciences. Physics is a hotbed of mystery: space, time, matter and motion - none of it is free of mysterious elements. The puzzles of quantum theory are just a symptom of this widespread lack of understanding
  • The human intellect grasps the natural world obliquely and glancingly, using mathematics to construct abstract representations of concrete phenomena, but what the ultimate nature of things really is remains obscure and hidden. How everything fits together is particularly elusive, perhaps reflecting the disparate cognitive faculties we bring to bear on the world (the senses, introspection, mathematical description). We are far from obtaining a unified theory of all being and there is no guarantee that such a theory is accessible by finite human intelligence.
  • real naturalism begins with a proper perspective on our specifically human intelligence. Palaeoanthropologists have taught us that the human brain gradually evolved from ancestral brains, particularly in concert with practical toolmaking, centring on the anatomy of the human hand. This history shaped and constrained the form of intelligence now housed in our skulls (as the lifestyle of other species form their set of cognitive skills). What chance is there that an intelligence geared to making stone tools and grounded in the contingent peculiarities of the human hand can aspire to uncover all the mysteries of the universe? Can omniscience spring from an opposable thumb? It seems unlikely, so why presume that the mysteries of consciousness will be revealed to a thumb-shaped brain like ours?
  • The "mysterianism" I advocate is really nothing more than the acknowledgment that human intelligence is a local, contingent, temporal, practical and expendable feature of life on earth - an incremental adaptation based on earlier forms of intelligence that no one would reg
  • rd as faintly omniscient. The current state of the philosophy of mind, from my point of view, is just a reflection of one evolutionary time-slice of a particular bipedal species on a particular humid planet at this fleeting moment in cosmic history - as is everything else about the human animal. There is more ignorance in it than knowledge.
Javier E

Arianna Huffington's Improbable, Insatiable Content Machine - The New York Times - 0 views

  • Display advertising — wherein advertisers pay each time an ad is shown to a reader — still dominates the market. But native advertising, designed to match the look and feel of the editorial content it runs alongside, has been on the rise for years.
  • the ethical debate in the media world is over. Socintel360, a research firm, predicts that spending on native advertising in the United States will more than double in the next four years to $18.4 billion.
  • news start-ups today are like cable-television networks in the early ’80s: small, pioneering companies that will be handsomely rewarded for figuring out how to monetize your attention through a new medium. If this is so, the size of The Huffington Post’s audience could one day justify that $1 billion valuation.
Javier E

To Justify Every 'A,' Some Professors Hand Over Grading Power to Outsiders - Technology... - 0 views

  • The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don't worry that students will punish harsh grades with poor reviews. That's the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
  • These efforts raise the question: What if professors aren't that good at grading? What if the model of giving instructors full control over grades is fundamentally flawed? As more observers call for evidence of college value in an era of ever-rising tuition costs, game-changing models like these are getting serious consideration.
  • Professors do score poorly when it comes to fair grading, according to a study published in July in the journal Teachers College Record. After crunching the numbers on decades' worth of grade reports from about 135 colleges, the researchers found that average grades have risen for 30 years, and that A is now the most common grade given at most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue that a "consumer-based approach" to higher education has created subtle incentives for professors to give higher marks than deserved. "The standard practice of allowing professors free rein in grading has resulted in grades that bear little relation to actual performance," the two professors concluded.
  • ...13 more annotations...
  • Western Governors is entirely online, for one thing. Technically it doesn't offer courses; instead it provides mentors who help students prepare for a series of high-stakes homework assignments. Those assignments are designed by a team of professional test-makers to prove competence in various subject areas. The idea is that as long as students can leap all of those hurdles, they deserve degrees, whether or not they've ever entered a classroom, watched a lecture video, or participated in any other traditional teaching experience. The model is called "competency-based education."
  • Ms. Johnson explains that Western Governors essentially splits the role of the traditional professor into two jobs. Instructional duties fall to a group the university calls "course mentors," who help students master material. The graders, or evaluators, step in once the homework is filed, with the mind-set of, "OK, the teaching's done, now our job is to find out how much you know," says Ms. Johnson. They log on to a Web site called TaskStream and pluck the first assignment they see. The institution promises that every assignment will be graded within two days of submission.
  • Western Governors requires all evaluators to hold at least a master's degree in the subject they're grading.
  • Evaluators are required to write extensive comments on each task, explaining why the student passed or failed to prove competence in the requisite skill. No letter grades are given—students either pass or fail each task.
  • Another selling point is the software's fast response rate. It can grade a batch of 1,000 essay tests in minutes. Professors can set the software to return the grade immediately and can give students the option of making revisions and resubmitting their work on the spot.
  • All evaluators initially receive a month of training, conducted online, about how to follow each task's grading guidelines, which lay out characteristics of a passing score.
  • Other evaluators want to push talented students to do more than the university's requirements for a task, or to allow a struggling student to pass if he or she is just under the bar. "Some people just can't acclimate to a competency-based environment," says Ms. Johnson. "I tell them, If they don't buy this, they need to not be here.
  • She and some teaching assistants scored the tests by hand and compared their performance with the computer's.
  • The graduate students became fatigued and made mistakes after grading several tests in a row, she told me, "but the machine was right-on every time."
  • He argues that students like the idea that their tests are being evaluated in a consistent way.
  • The graders must regularly participate in "calibration exercises," in which they grade a simulated assignment to make sure they are all scoring consistently. As the phrase suggests, the process is designed to run like a well-oiled machine.
  • He said once students get essays back instantly, they start to view essay tests differently. "It's almost like a big math problem. You don't expect to get everything right the first time, but you work through it.
  • robot grading is the hottest trend in testing circles, says Jacqueline Leighton, a professor of educational psychology at the University of Alberta who edits the journal Educational Measurement: Issues and Practice. Companies building essay-grading robots include the Educational Testing Service, which sells e-rater, and Pearson Education, which makes Intelligent Essay Assessor. "The research is promising, but they're still very much in their infancy," Ms. Leighton says.
Javier E

What makes us human? Doing pointless things for fun - 2 views

  • Playfulness is what makes us human. Doing pointless, purposeless things, just for fun. Doing things for the sheer devilment of it. Being silly for the sake of being silly. Larking around. Taking pleasure in activities that do not advantage us and have nothing to do with our survival. These are the highest signs of intelligence. It is when a creature, having met and surmounted all the practical needs that face him, decides to dance that we know we are in the presence of a human. It is when a creature, having successfully performed all necessary functions, starts to play the fool, just for the hell of it, that we know he is not a robot.
  • All at once, it was clear. The bush people, lounging about after dark in their family shelter, perhaps around a fire – basically just hanging out – had been amusing themselves doing a bit of rock art. And perhaps with some leftover red paste, a few of the younger ones had had a competition to see who could jump highest and make their fingermarks highest up the overhang. This was not even art. It called for no particular skill. It was just mucking about. And yet, for all the careful beauty of their pictures, for all the recognition of their lives from the vantage point of my life that was sparked in me by the appreciation of their artwork, it was not what was skilful that brought me closest to them. It was what was playful. It was their jumping and daubing finger-blobs competition that brought them to me, suddenly, as fellow humans across all those thousands of years. It tingled my spine.
  • An age is coming when machines will be able to do everything. “Ah,” you say, “but they will not be conscious.” But how will we know a machine is not conscious – how do we know another human being is conscious? There is only one way. When it starts to play. In playfulness lies the highest expression of the human spirit.
Javier E

The Fall of Facebook - The Atlantic - 0 views

  • Social networking is not, it turns out, winner take all. In the past, one might have imagined that switching between Facebook and “some other network” would be difficult, but the smartphone interface makes it easy to be on a dozen networks. All messages come to the same place—the phone’s notifications screen—so what matters is what your friends are doing, not which apps they’re using.
  • if I were to put money on an area in which Facebook might be unable to dominate in the future, it would be apps that take advantage of physical proximity. Something radically new could arise on that front, whether it’s an evolution of Yik Yak
  • Judith Donath, author of The Social Machine, predicts that text will be a less and less important part of our asynchronous communications mix. Instead, she foresees a “very fluid interface” that would mix text with voice, video, sensor outputs (location, say, or vital signs), and who knows what else
  • ...5 more annotations...
  • the forthcoming Apple Watch seems like a step toward the future Donath envisions. Users will be able to send animated smiley faces, drawings, voice snippets, and even their live heartbeats, which will be tapped out on the receiver’s wrist.
  • A simple but rich messaging platform—perhaps with specialized hardware—could replace the omnibus social network for most purposes. “I think we’re shifting in a weird way to one-on-one conversations on social networks and in messaging apps,” says Shani Hilton, the executive editor for news at BuzzFeed, the viral-media site. “People don’t want to perform their lives publicly in the same way that they wanted to five years ago.”
  • Facebook is built around a trade-off that it has asked users to make: Give us all your personal information, post all your pictures, tag all your friends, and so on, forever. In return, we’ll optimize your social life. But this output is only as good as the input. And it turns out that, when scaled up, creating this input—making yourself legible enough to the Facebook machine that your posts are deemed “relevant” and worthy of being displayed to your mom and your friends—is exhausting labor.
  • These new apps, then, are arguments that we can still have an Internet that is weird, and private. That we can still have social networks without the social network. And that we can still have friends on the Internet without “friending” them.
  • A Brief History of Information Gatekeepers:
    1871: Western Union controls 90 percent of U.S. telegraph traffic.
    1947: 97 percent of the country’s radio stations are affiliated with one of four national networks.
    1969: Viewership for the three nightly network newscasts hits an all-time high, with 50 percent of all American homes tuning in.
    1997: About half of all American homes with Internet access get it through America Online.
    2002: Microsoft Internet Explorer captures 97 percent of the worldwide browser market.
    2014: Amazon sells 63 percent of all books bought online—and 40 percent of books overall.
Javier E

The Creepy New Wave of the Internet by Sue Halpern | The New York Review of Books - 0 views

  • as human behavior is tracked and merchandized on a massive scale, the Internet of Things creates the perfect conditions to bolster and expand the surveillance state.
  • In the world of the Internet of Things, your car, your heating system, your refrigerator, your fitness apps, your credit card, your television set, your window shades, your scale, your medications, your camera, your heart rate monitor, your electric toothbrush, and your washing machine—to say nothing of your phone—generate a continuous stream of data that resides largely out of reach of the individual but not of those willing to pay for it or in other ways commandeer it.
  • That is the point: the Internet of Things is about the “dataization” of our bodies, ourselves, and our environment. As a post on the tech website Gigaom put it, “The Internet of Things isn’t about things. It’s about cheap data.
  • ...3 more annotations...
  • the ubiquity of the Internet of Things is putting us squarely in the path of hackers, who will have almost unlimited portals into our digital lives.
  • Forbes reported that security researchers had come up with a $20 tool that was able to remotely control a car’s steering, brakes, acceleration, locks, and lights. It was an experiment that, again, showed how simple it is to manipulate and sabotage the smartest of machines, even though—but really because—a car is now, in the words of a Ford executive, a “cognitive device.”
  • a study of ten popular IoT devices by the computer company Hewlett-Packard uncovered a total of 250 security flaws among them. As Jerry Michalski, a former tech industry analyst and founder of the REX think tank, observed in a recent Pew study: “Most of the devices exposed on the internet will be vulnerable. They will also be prone to unintended consequences: they will do things nobody designed for beforehand, most of which will be undesirable.”
Javier E

The Yoda of Silicon Valley - The New York Times - 0 views

  • Of course, all the algorithmic rigmarole is also causing real-world problems. Algorithms written by humans — tackling harder and harder problems, but producing code embedded with bugs and biases — are troubling enough
  • More worrisome, perhaps, are the algorithms that are not written by humans, algorithms written by the machine, as it learns.
  • Programmers still train the machine, and, crucially, feed it data
  • ...6 more annotations...
  • However, as Kevin Slavin, a research affiliate at M.I.T.’s Media Lab said, “We are now writing algorithms we cannot read. That makes this a unique moment in history, in that we are subject to ideas and actions and efforts by a set of physics that have human origins without human comprehension.
  • As Slavin has often noted, “It’s a bright future, if you’re an algorithm.”
  • “Today, programmers use stuff that Knuth, and others, have done as components of their algorithms, and then they combine that together with all the other stuff they need,”
  • “With A.I., we have the same thing. It’s just that the combining-together part will be done automatically, based on the data, rather than based on a programmer’s work. You want A.I. to be able to combine components to get a good answer based on the data
  • But you have to decide what those components are. It could happen that each component is a page or chapter out of Knuth, because that’s the best possible way to do some task.”
  • “I am worried that algorithms are getting too prominent in the world,” he added. “It started out that computer scientists were worried nobody was listening to us. Now I’m worried that too many people are listening.”
Javier E

How Tech Can Turn Doctors Into Clerical Workers - The New York Times - 0 views

  • what I see in my colleague is disillusionment, and it has come too early, and I am seeing too much of it.
  • In America today, the patient in the hospital bed is just the icon, a place holder for the real patient who is not in the bed but in the computer. That virtual entity gets all our attention. Old-fashioned “bedside” rounds conducted by the attending physician too often take place nowhere near the bed but have become “card flip” rounds
  • My young colleague slumping in the chair in my office survived the student years, then three years of internship and residency and is now a full-time practitioner and teacher. The despair I hear comes from being the highest-paid clerical worker in the hospital: For every one hour we spend cumulatively with patients, studies have shown, we spend nearly two hours on our primitive Electronic Health Records, or “E.H.R.s,” and another hour or two during sacred personal time.
  • ...23 more annotations...
  • The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? What are they doing? Hello! It’s my body, you know
  • Our $3.4 trillion health care system is responsible for more than a quarter of a million deaths per year because of medical error, the rough equivalent of, say, a jumbo jet’s crashing every day.
  • I can get cash and account details all over America and beyond. Yet I can’t reliably get a patient record from across town, let alone from a hospital in the same state, even if both places use the same brand of E.H.R
  • the leading E.H.R.s were never built with any understanding of the rituals of care or the user experience of physicians or nurses. A clinician will make roughly 4,000 keyboard clicks during a busy 10-hour emergency-room shift
  • In the process, our daily progress notes have become bloated cut-and-paste monsters that are inaccurate and hard to wade through. A half-page, handwritten progress note of the paper era might in a few lines tell you what a physician really thought
  • so much of the E.H.R., but particularly the physical exam it encodes, is a marvel of fiction, because we humans don’t want to leave a check box empty or leave gaps in a template.
  • For a study, my colleagues and I at Stanford solicited anecdotes from physicians nationwide about patients for whom an oversight in the exam (a “miss”) had resulted in real consequences, like diagnostic delay, radiation exposure, therapeutic or surgical misadventure, even death. They were the sorts of things that would leave no trace in the E.H.R. because the recorded exam always seems complete — and yet the omission would be glaring and memorable to other physicians involved in the subsequent care. We got more than 200 such anecdotes.
  • The reason for these errors? Most of them resulted from exams that simply weren’t done as claimed. “Food poisoning” was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.
  • I worry that such mistakes come because we’ve gotten trapped in the bunker of machine medicine. It is a preventable kind of failure
  • How we salivated at the idea of searchable records, of being able to graph fever trends, or white blood counts, or share records at a keystroke with another institution — “interoperability”
  • The seriously ill patient has entered another kingdom, an alternate universe, a place and a process that is frightening, infantilizing; that patient’s greatest need is both scientific state-of-the-art knowledge and genuine caring from another human being. Caring is expressed in listening, in the time-honored ritual of the skilled bedside exam — reading the body — in touching and looking at where it hurts and ultimately in localizing the disease for patients not on a screen, not on an image, not on a biopsy report, but on their bodies.
  • What if the computer gave the nurse the big picture of who he was both medically and as a person?
  • a professor at M.I.T. whose current interest in biomedical engineering is “bedside informatics,” marvels at the fact that in an I.C.U., a blizzard of monitors from disparate manufacturers display EKG, heart rate, respiratory rate, oxygen saturation, blood pressure, temperature and more, and yet none of this is pulled together, summarized and synthesized anywhere for the clinical staff to use
  • What these monitors do exceedingly well is sound alarms, an average of one alarm every eight minutes, or more than 180 per patient per day. What is our most common response to an alarm? We look for the button to silence the nuisance because, unlike those in a Boeing cockpit, say, our alarms are rarely diagnosing genuine danger.
  • By some estimates, more than 50 percent of physicians in the United States have at least one symptom of burnout, defined as a syndrome of emotional exhaustion, cynicism and decreased efficacy at work
  • It is on the increase, up by 9 percent from 2011 to 2014 in one national study. This is clearly not an individual problem but a systemic one, a 4,000-key-clicks-a-day problem.
  • The E.H.R. is only part of the issue: Other factors include rapid patient turnover, decreased autonomy, merging hospital systems, an aging population, the increasing medical complexity of patients. Even if the E.H.R. is not the sole cause of what ails us, believe me, it has become the symbol of burnout.
  • burnout is one of the largest predictors of physician attrition from the work force. The total cost of recruiting a physician can be nearly $90,000, but the lost revenue per physician who leaves is between $500,000 and $1 million, even more in high-paying specialties.
  • I hold out hope that artificial intelligence and machine-learning algorithms will transform our experience, particularly if natural-language processing and video technology allow us to capture what is actually said and done in the exam room.
  • as with any lab test, what A.I. will provide is at best a recommendation that a physician using clinical judgment must decide how to apply.
  • True clinical judgment is more than addressing the avalanche of blood work, imaging and lab tests; it is about using human skills to understand where the patient is in the trajectory of a life and the disease, what the nature of the patient’s family and social circumstances is and how much they want done.
  • Much of that is a result of poorly coordinated care, poor communication, patients falling through the cracks, knowledge not being transferred and so on, but some part of it is surely from failing to listen to the story and diminishing skill in reading the body as a text.
  • As he was nearing death, Avedis Donabedian, a guru of health care metrics, was asked by an interviewer about the commercialization of health care. “The secret of quality,” he replied, “is love.”
sissij

Prejudice AI? Machine Learning Can Pick up Society's Biases | Big Think - 1 views

  • We think of computers as emotionless automatons and artificial intelligence as stoic, zen-like programs, mirroring Mr. Spock, devoid of prejudice and unable to be swayed by emotion.
  • They say that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results of this study were published in the journal Science.
  • After interacting with certain users, she began spouting racist remarks.
  • ...2 more annotations...
  • It just learns everything from us and as our echo, picks up the prejudices we’ve become deaf to.
  • AI will have to be programmed to embrace equality.
  •  
    I just feel like this is so ironic. As the parents of the AI, humans themselves can't even be equal, so how can we expect the robots we made to perform perfect humanity and embrace flawless equality? I think equality itself is flawed. How can we define equality? Just like we cannot define fairness, we cannot define equality. I think this robot picking up racist remarks just shows how children become racist. It also reflects how powerful the cultural context and social norms are. They can shape us subconsciously. --Sissi (4/20/2017)
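
One way to see how a system "learns everything from us and, as our echo, picks up the prejudices we've become deaf to" is to look at the geometry of learned word vectors. The sketch below uses made-up three-dimensional vectors purely to illustrate the arithmetic of measuring an association; the study reported in Science worked with real pretrained embeddings and a formal statistical association test.

```python
# Toy sketch of how a model trained on text can "pick up" associations:
# bias shows up as geometry in learned word vectors. The 3-d vectors
# below are invented purely to illustrate the arithmetic.
import numpy as np

vectors = {
    "programmer": np.array([0.9, 0.1, 0.3]),
    "nurse":      np.array([0.1, 0.9, 0.4]),
    "he":         np.array([1.0, 0.0, 0.2]),
    "she":        np.array([0.0, 1.0, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    # Positive -> closer to "he"; negative -> closer to "she".
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("programmer", "nurse"):
    print(w, round(gender_association(w), 3))
# A skewed training corpus yields skewed geometry, which downstream
# systems then reproduce -- no explicit rule about gender is ever written.
```
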
kushnerha

Ignore the GPS. That Ocean Is Not a Road. - The New York Times - 2 views

  • Faith is a concept that often enters the accounts of GPS-induced mishaps. “It kept saying it would navigate us a road,” said a Japanese tourist in Australia who, while attempting to reach North Stradbroke Island, drove into the Pacific Ocean. A man in West Yorkshire, England, who took his BMW off-road and nearly over a cliff, told authorities that his GPS “kept insisting the path was a road.” In perhaps the most infamous incident, a woman in Belgium asked GPS to take her to a destination less than two hours away. Two days later, she turned up in Croatia.
  • These episodes naturally inspire incredulity, if not outright mockery. After a couple of Swedes mistakenly followed their GPS to the city of Carpi (when they meant to visit Capri), an Italian tourism official dryly noted to the BBC that “Capri is an island. They did not even wonder why they didn’t cross any bridge or take any boat.” An Upper West Side blogger’s account of the man who interpreted “turn here” to mean onto a stairway in Riverside Park was headlined “GPS, Brain Fail Driver.”
  • several studies have demonstrated empirically what we already know instinctively. Cornell researchers who analyzed the behavior of drivers using GPS found drivers “detached” from the “environments that surround them.” Their conclusion: “GPS eliminated much of the need to pay attention.”
  • ...6 more annotations...
  • There is evidence that one’s cognitive map can deteriorate. A widely reported study published in 2006 demonstrated that the brains of London taxi drivers have larger than average amounts of gray matter in the area responsible for complex spatial relations. Brain scans of retired taxi drivers suggested that the volume of gray matter in those areas also decreases when that part of the brain is no longer being used as frequently. “I think it’s possible that if you went to someone doing a lot of active navigation, but just relying on GPS,” Hugo Spiers, one of the authors of the taxi study, hypothesized to me, “you’d actually get a reduction in that area.”
  • A consequence is a possible diminution of our “cognitive map,” a term introduced in 1948 by the psychologist Edward Tolman of the University of California, Berkeley. In a groundbreaking paper, Dr. Tolman analyzed several laboratory experiments involving rats and mazes. He argued that rats had the ability to develop not only cognitive “strip maps” — simple conceptions of the spatial relationship between two points — but also more comprehensive cognitive maps that encompassed the entire maze.
  • Could society’s embrace of GPS be eroding our cognitive maps? For Julia Frankenstein, a psychologist at the University of Freiburg’s Center for Cognitive Science, the danger of GPS is that “we are not forced to remember or process the information — as it is permanently ‘at hand,’ we need not think or decide for ourselves.” She has written that we “see the way from A to Z, but we don’t see the landmarks along the way.” In this sense, “developing a cognitive map from this reduced information is a bit like trying to get an entire musical piece from a few notes.” GPS abets a strip-map level of orientation with the world.
  • We seem driven (so to speak) to transform cars, conveyances that show us the world, into machines that also see the world for us.
  • For Dr. Tolman, the cognitive map was a fluid metaphor with myriad applications. He identified with his rats. Like them, a scientist runs the maze, turning strip maps into comprehensive maps — increasingly accurate models of the “great God-given maze which is our human world,” as he put it. The countless examples of “displaced aggression” he saw in that maze — “the poor Southern whites, who take it out on the Negros,” “we psychologists who criticize all other departments,” “Americans who criticize the Russians and the Russians who criticize us” — were all, to some degree, examples of strip-map comprehension, a blinkered view that failed to comprehend the big picture. “What in the name of Heaven and Psychology can we do about it?” he wrote. “My only answer is to preach again the virtues of reason — of, that is, broad cognitive maps.”
caelengrubb

The Economics of Bitcoin - Econlib - 0 views

  • Bitcoin is an ingenious peer-to-peer “virtual” or “digital currency” that challenges the way economists have traditionally thought about money.
  • My conclusion is that, in principle, nothing stands in the way of the whole world embracing Bitcoin or some other digital currency. Yet I predict that, even with the alternative of Bitcoin, people would resort to gold if only governments got out of the way.
  • According to its official website: “Bitcoin uses peer-to-peer technology to operate with no central authority; managing transactions and the issuing of bitcoins is carried out collectively by the network.”
  • ...15 more annotations...
  • To fully understand how Bitcoin operates, one would need to learn the subtleties of public-key cryptography.
  • In the real world, when people want to buy something using Bitcoin, they transfer their ownership of a certain number of bitcoins to other people, in exchange for goods and services.
  • This transfer is effected by the network of computers performing computations and thereby changing the “public key” to which the “sold” bitcoins are assigned.
  • The encryption involved in Bitcoin concerns the identification of the legitimate owner of a particular bitcoin.
  • Without delving into the mathematics, suffice it to say: There is a way that the legitimate owner of a bitcoin can publicly demonstrate to the computers in the network that he or she really is the owner of that bitcoin.
  • Only someone with the possession of the “private key” will be able to produce a valid “signature” that convinces the computers in the network to update the public ledger to reflect the transfer of the bitcoin to another party.
  • When Bitcoin was first implemented in early 2009, computers in the network—dubbed “miners”—received 50 new bitcoins when performing the computations necessary to add a “block” of transactions to the public ledger.
  • In principle, the developers of Bitcoin could have released all 21 million units of the currency immediately with the software.
  • With the current arrangement—where the “mining” operations needed to keep the system running simultaneously yield new bitcoins to the machines performing the calculations—there is an incentive for owners to devote their machines’ processing power to the network.
  • Here, the danger is that the issuing institution—once it had gotten the world to accept its notes or electronic deposits as money—would face an irresistible temptation to issue massive quantities.
  • Bitcoin has no such vulnerability. No external technological or physical event could cause Bitcoin inflation, and since no one is in charge of Bitcoin, there is no one tempted to inflate “from within.”
  • Some critics argue that Bitcoin’s fixed quantity would imply constant price deflation. Although this is true, everyone will have seen this coming with more than a century’s notice, and so long-term contracts would have been designed accordingly.
  • Whether to call Bitcoin a “fiat” currency depends on the definition. If “fiat” means a currency that is not legally redeemable in some other commodity, then yes, Bitcoin is a fiat currency. But if “fiat” means a currency relying on government fiat to define what will count as legal money, then Bitcoin is not.
  • Bitcoin is an ingenious concept that challenges the way economists have traditionally thought about money. Its inbuilt scarcity provides an assurance of purchasing power arguably safer than any other system yet conceived.
  • We need to let the decentralized market test tell us what is the best money, or monies.
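
The excerpts above turn on one mechanism: only the holder of a private key can produce a signature that convinces the rest of the network to record a transfer in the public ledger. The sketch below illustrates that signing-and-verifying step with textbook RSA and deliberately tiny numbers. It is a toy under stated assumptions, not Bitcoin's actual scheme, which uses ECDSA over the secp256k1 curve, SHA-256 hashing and a distributed ledger, none of which is modeled here.

```python
# Toy illustration of "prove you own the coin by signing the transfer".
# Textbook RSA with tiny primes (p=61, q=53) -- insecure by design and
# NOT Bitcoin's actual ECDSA/secp256k1 scheme.

N, E = 3233, 17        # public key (n = 61*53, public exponent)
D = 2753               # private key (17 * 2753 = 1 mod 3120)

def toy_hash(message: str) -> int:
    # Stand-in for a real hash such as SHA-256: any digest reduced mod N.
    return sum(message.encode()) % N

def sign(message: str, private_d: int) -> int:
    return pow(toy_hash(message), private_d, N)

def verify(message: str, signature: int, public_e: int) -> bool:
    return pow(signature, public_e, N) == toy_hash(message)

# "Transfer" a coin: the owner signs the transaction statement, and any
# node holding only the public key can check the signature.
tx = "coin #42: alice -> bob"
sig = sign(tx, D)
print(verify(tx, sig, E))                        # True: nodes accept the transfer
print(verify("coin #42: alice -> eve", sig, E))  # False: forged transfer rejected
```
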
ilanaprincilus06

The Idea of the Brain by Matthew Cobb review - lighting up the grey matter | Science an... - 0 views

  • But the brain doesn’t contain any digital switches and was not designed for the convenience or edification of any external user. The idea that it is a computer is just the latest in a series of metaphors, and one that is looking increasingly threadbare.
  • how previous ages thought of the brain. It was a collection of cavities through which animal spirits flowed; then it became a machine, which was a breakthrough idea: perhaps you could investigate it as you might any machine, by breaking it down into its constituent parts and seeing what they do.
  • A century later, electricity was the fashionable thing, so natural philosophers began to theorise that perhaps the animal spirits sloshing around in the brain were in fact a kind of “electric fluid”.
  • ...4 more annotations...
  • By the mid-19th century, nerves were inevitably compared to telegraph wires and the brain to a completely electrical system.
  • We understand many more things today about the brain’s neurons and how they operate together, but we still lack the faintest beginning of a clue as to why and how they produce your awareness that you are reading this sentence.
  • What will be the next grand metaphor about the brain? Impossible to say, because we need to wait for the next world-changing technology.
  • “Metaphors shape our ideas in ways that are not always helpful.”
tonycheng6

Accurate machine learning in materials science facilitated by using diverse data sources - 0 views

  • Computational modelling is also used to estimate the properties of materials. However, there is usually a trade-off between the cost of the experiments (or simulations) and the accuracy of the measurements (or estimates), which has limited the number of materials that can be tested rigorously.
  • Materials scientists commonly supplement their own ‘chemical intuition’ with predictions from machine-learning models, to decide which experiments to conduct next
  • More importantly, almost all of these studies use models built on data obtained from a single, consistent source. Such models are referred to as single-fidelity models.
  • ...4 more annotations...
  • However, for most real-world applications, measurements of materials’ properties have varying levels of fidelity, depending on the resources available.
  • A comparison of prediction errors clearly demonstrates the benefit of the multi-fidelity approach
  • The authors’ system is not restricted to materials science, but is generalizable to any problem that can be described using graph structures, such as social networks and knowledge graphs (digital frameworks that represent knowledge as concepts connected by relationships)
  • More research is needed to understand the scenarios for which multi-fidelity learning is most beneficial, balancing prediction accuracy with the cost of acquiring data
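
One simple way to realize the multi-fidelity idea described above is "delta learning": fit a trend to the plentiful low-fidelity measurements, then learn a small correction from the few high-fidelity ones. The sketch below uses synthetic data and ordinary polynomial fits purely as an illustration; it is not the graph-network model the article discusses.

```python
# Toy delta-learning sketch of multi-fidelity modeling: many cheap,
# systematically biased low-fidelity values plus a few costly,
# accurate high-fidelity values. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_property = lambda x: 1.5 * x + 0.5 * np.sin(3 * x)   # unknown ground truth

# Low fidelity: abundant but biased and noisy (e.g. a crude simulation).
x_lo = rng.uniform(0, 3, 200)
y_lo = 1.2 * x_lo + 0.3 + rng.normal(0, 0.05, x_lo.size)

# High fidelity: scarce but accurate (e.g. careful experiments).
x_hi = np.linspace(0, 3, 8)
y_hi = true_property(x_hi) + rng.normal(0, 0.02, x_hi.size)

# Step 1: fit the low-fidelity trend (here, a simple cubic polynomial).
lo_model = np.poly1d(np.polyfit(x_lo, y_lo, 3))

# Step 2: fit a correction from low- to high-fidelity on the few HF points.
delta_model = np.poly1d(np.polyfit(x_hi, y_hi - lo_model(x_hi), 2))

def predict(x):
    return lo_model(x) + delta_model(x)

x_test = np.array([0.5, 1.5, 2.5])
print(np.round(predict(x_test), 3))
print(np.round(true_property(x_test), 3))   # compare against ground truth
```
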