
TOK Friends: Group items tagged Ai


Javier E

Software Is Smart Enough for SAT, but Still Far From Intelligent - The New York Times - 0 views

  • An artificial intelligence software program capable of seeing and reading has for the first time answered geometry questions from the SAT at the level of an average 11th grader.
  • The software had to combine machine vision to understand diagrams with the ability to read and understand complete sentences; its success represents a breakthrough in artificial intelligence.
  • Despite the advance, however, the researchers acknowledge that the program’s abilities underscore how far scientists have to go to create software capable of mimicking human intelligence.
  • designer of the test-taking program, noted that even a simple task for children, like understanding the meaning of an arrow in the context of a test diagram, was not yet something the most advanced A.I. programs could do reliably.
  • scientific workshops intended to develop more accurate methods than the Turing test for measuring the capabilities of artificial intelligence programs.
  • Researchers in the field are now developing a wide range of gauges to measure intelligence — including the Allen Institute’s standardized-test approach and a task that Dr. Marcus proposed, which he called the “Ikea construction challenge.” That test would provide an A.I. program with a bag of parts and an instruction sheet and require it to assemble a piece of furniture.
  • First proposed in 2011 by Hector Levesque, a University of Toronto computer scientist, the Winograd Schema Challenge would pose questions that require real-world logic to A.I. programs. A question might be: “The trophy would not fit in the brown suitcase because it was too big. What was too big, A: the trophy or B: the suitcase?” Answering this question would require a program to reason spatially and have specific knowledge about the size of objects. (A toy encoding of this example appears in a sketch after this list.)
  • Within the A.I. community, discussions about software programs that can reason in a humanlike way are significant because recent progress in the field has been more focused on improving perception, not reasoning.
  • GeoSolver, or GeoS, was described at the Conference on Empirical Methods in Natural Language Processing in Lisbon this weekend. It operates by separately generating a series of logical equations, which serve as components of possible answers, from the text and the diagram in the question. It then weighs the accuracy of the equations and tries to discern whether its interpretation of the diagram and text is strong enough to select one of the multiple-choice answers. (A toy version of this select-if-confident step appears in a sketch after this list.)
  • Ultimately, Dr. Marcus said, he believed that progress in artificial intelligence would require multiple tests, just as multiple tests are used to assess human performance.
  • “There is no one measure of human intelligence,” he said. “Why should there be just one A.I. test?”
  • In the 1960s, Hubert Dreyfus, a philosophy professor at the University of California, Berkeley, expressed this skepticism most clearly when he wrote, “Believing that writing these types of programs will bring us closer to real artificial intelligence is like believing that someone climbing a tree is making progress toward reaching the moon.”
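
The Winograd-style question quoted in this list (the trophy/suitcase example) is easy to represent in code but hard to answer without world knowledge. Below is a minimal Python sketch; the schema encoding and the tiny association table are invented for illustration. It shows how a purely associative guesser, with no notion of objects fitting inside containers, gets one of the two variants wrong — flipping a single adjective flips the correct referent, which is exactly what defeats pattern-matching shortcuts.

```python
# A Winograd-style schema as data (illustrative encoding, not an official format).
# Swapping one adjective ("big" -> "small") flips which noun the pronoun refers to.
SCHEMA = {
    "sentence": "The trophy would not fit in the brown suitcase because it was too {adj}.",
    "pronoun": "it",
    "candidates": ["trophy", "suitcase"],
    "variants": {"big": "trophy", "small": "suitcase"},  # adjective -> correct referent
}

# A naive resolver that only consults word-association counts (made-up numbers),
# standing in for any system that matches surface statistics instead of reasoning.
TOY_ASSOCIATIONS = {
    ("big", "trophy"): 11, ("big", "suitcase"): 12,
    ("small", "trophy"): 9, ("small", "suitcase"): 10,
}

def naive_resolve(adjective):
    """Pick the candidate most associated with the adjective -- no spatial reasoning."""
    return max(SCHEMA["candidates"],
               key=lambda c: TOY_ASSOCIATIONS.get((adjective, c), 0))

for adj, correct in SCHEMA["variants"].items():
    guess = naive_resolve(adj)
    verdict = "ok" if guess == correct else "WRONG"
    print(f"too {adj}: guess={guess}, correct={correct} ({verdict})")
```
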
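The GeoS description in this list ends with a select-if-confident step: commit to a multiple-choice answer only when the best-scoring interpretation of the text and diagram is strong enough. The sketch below caricatures just that control flow; the scores, the threshold and the answer letters are assumed placeholders, not values from the real system.

```python
# Toy stand-in for GeoS-style answer selection: each candidate interpretation of a
# question carries a confidence score and the multiple-choice answer it implies.
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; below this the solver abstains

def choose_answer(interpretations, threshold=CONFIDENCE_THRESHOLD):
    """Return the answer implied by the strongest interpretation, or None to abstain."""
    score, answer = max(interpretations)  # highest-confidence (score, answer) pair
    return answer if score >= threshold else None

# Hypothetical interpretations derived from parsing the text and the diagram.
print(choose_answer([(0.34, "A"), (0.81, "C"), (0.22, "D")]))  # -> C
print(choose_answer([(0.30, "A"), (0.40, "B")]))               # -> None (abstain)
```
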
Javier E

Our Machine Masters - NYTimes.com - 0 views

  • the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.
  • This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection and better algorithms. The upshot is clear, “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I.”
  • Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.
  • in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them.
  • The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head.
  • Advances in artificial intelligence will accelerate this centralizing trend. That’s because A.I. companies will be able to reap the rewards of network effects. The bigger their network and the more data they collect, the more effective and attractive they become.
  • As a result, our A.I. future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.”
  • engineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.
  • The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do
  • On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.
  • For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math.
  • In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions.
  • I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.
  • In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.
  • In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it
  • In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much.
  • In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.
dpittenger

Elon Musk, Stephen Hawking warn of artificial intelligence dangers - 0 views

  • Call it preemptive extinction panic, smart people buying into Sci-Fi hype or simply a prudent stance on a possible future issue, but the fear around artificial intelligence is increasingly gaining traction among those with credentials to back up the distress.
  • However, history doesn't always neatly fit into our forecasts. If things continue as they have with brain-to-machine interfaces becoming ever more common, we're just as likely to have to confront the issue of enhanced humans (digitally, mechanically and/or chemically) long before AI comes close to sentience.
  • Still, whether or not you believe computers will one day be powerful enough to go off and find their own paths, which may conflict with humanity's, the very fact that so many intelligent people feel the issue is worth a public stance should be enough to grab your attention.
  • Stephen Hawking and Elon Musk fear that artificial intelligence could become dangerous. We talked about this a bit in class before, but it is starting to become a new fear. Artificial intelligence could possibly become smarter than us, and that wouldn't be good.
Javier E

Silicon Valley Sharknado - NYTimes.com - 0 views

  • algorithms and machines will replace 80 percent of doctors in years to come, making medicine more data driven and less like “witchcraft.”
  • Page predicted a “time of abundance,” when human needs could be more easily met and people would “have more time with their family or to pursue their own interests.”
  • people could be thrown out of work. As Page said, “90 percent of people used to be farmers,” so “it’s not surprising.”
  • “You should presume that someday,” Brin said, “we will be able to make machines that can reason, think and do things better than we can.”
  • Of course, when we get more free time, we’ll simply spend it staring at our iPads
  • “In a way, it’s not being honest,” he said. “We’re still pretending that we’re inventing a brain when all we’ve come up with is a giant mash-up of real brains. We don’t yet understand how brains work, so we can’t build one.”
  • “People are unwittingly feeding information into the Cloud for automated services, which they’re not being paid for,” Lanier said. “I don’t like pretending that humans are becoming buggy whips. You have this fantasy that it’s machines doing it without people helping.”
Javier E

It's True: False News Spreads Faster and Wider. And Humans Are to Blame. - The New York... - 0 views

  • What if the scourge of false news on the internet is not the result of Russian operatives or partisan zealots or computer-controlled bots? What if the main problem is us?
  • People are the principal culprits
  • people, the study’s authors also say, prefer false news.
  • As a result, false news travels faster, farther and deeper through the social network than true news.
  • those patterns applied to every subject they studied, not only politics and urban legends, but also business, science and technology.
  • The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org
  • with or without the bots, the results were essentially the same.
  • “It’s not really the robots that are to blame.”
  • “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
  • False claims were 70 percent more likely than the truth to be shared on Twitter. True stories were rarely retweeted by more than 1,000 people, but the top 1 percent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.
  • the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter.
  • “The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
  • The M.I.T. researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
  • The goal, said Soroush Vosoughi, a postdoctoral researcher at the M.I.T. Media Lab and the lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
  • The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions. (A toy version of this counting approach appears in the sketch after this list.)
  • False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
  • The M.I.T. researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon, and mention possible interventions, like better labeling, to alter behavior.
  • For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of American adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed.
  • In fall 2016, Mr. Roy, an associate professor at the M.I.T. Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes like shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
  • Mr. Roy acknowledged the challenge of trying not only to alter individual behavior but also to enlist the support of big internet platforms like Facebook, Google, YouTube and Twitter, and media companies.
  • “Polarization,” he said, “has turned out to be a great business model.”
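
The reply-sentiment measurement mentioned in this list (scoring reply text against a word-to-emotion lexicon) reduces to simple counting. The sketch below uses an invented few-word stand-in for the NRC lexicon and made-up replies; only the count-and-normalize approach follows the study's description.

```python
from collections import Counter
import re

# Tiny stand-in for an NRC-style word -> emotions lexicon (illustrative entries only).
EMOTION_LEXICON = {
    "shocking": {"surprise"}, "unbelievable": {"surprise"},
    "gross": {"disgust"}, "awful": {"disgust", "sadness"},
    "finally": {"anticipation", "joy"}, "hope": {"anticipation"},
    "wonderful": {"joy"}, "sad": {"sadness"},
}

def emotion_profile(replies):
    """Share of each emotion among lexicon hits across a list of reply texts."""
    counts = Counter()
    for text in replies:
        for word in re.findall(r"[a-z']+", text.lower()):
            for emotion in EMOTION_LEXICON.get(word, ()):
                counts[emotion] += 1
    total = sum(counts.values()) or 1
    return {emotion: n / total for emotion, n in counts.items()}

# Hypothetical replies to a false claim versus a true story.
print("false:", emotion_profile(["Unbelievable, this is shocking", "gross, awful if true"]))
print("true: ", emotion_profile(["Finally some good news, I hope it lasts", "wonderful but a little sad"]))
```
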
Javier E

While You Were Sleeping - The New York Times - 0 views

  • look at where we are today thanks to artificial intelligence from digital computers — and the amount of middle-skill and even high-skill work they’re supplanting — and then factor in how all of this could be supercharged in a decade by quantum computing.
  • In December 2016, Amazon announced plans for the Amazon Go automated grocery store, in which a combination of computer vision and deep-learning technologies track items and only charge customers when they remove the items from the store. In February 2017, Bank of America began testing three ‘employee-less’ branch locations that offer full-service banking automatically, with access to a human, when necessary, via video teleconference.”
  • This will be a challenge for developed countries, but even more so for countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work that’s now being automated.
  • “Some jobs will be displaced, but 100 percent of jobs will be augmented by A.I.,” added Rometty. Technology companies “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.”
  • Each time work gets outsourced or tasks get handed off to a machine, “we must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential,” McGowan said.
  • Therefore, education needs to shift “from education as a content transfer to learning as a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill.”
  • “Instructors/teachers move from guiding and assessing that transfer process to providing social and emotional support to the individual as they move into the role of driving their own continuous learning.”
Javier E

Opinion | A.I. Is Harder Than You Think - The New York Times - 1 views

  • The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.
  • The crux of the problem is that the field of artificial intelligence has not come to grips with the infinite complexity of language. Just as you can make infinitely many arithmetic equations by combining a few mathematical symbols and following a small set of rules, you can make infinitely many sentences by combining a modest set of words and a modest set of rules. (A toy illustration of this combinatorial growth appears in the sketch after this list.)
  • A genuine, human-level A.I. will need to be able to cope with all of those possible sentences, not just a small fragment of them.
  • No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
  • Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs.
  • That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
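
The combinatorial claim in this list — a modest vocabulary plus a few rules yields an unbounded space of sentences — can be made concrete with a toy grammar. The grammar below is an assumption made up for illustration; it only shows how fast the number of distinct sentences multiplies as one more level of clause embedding is allowed.

```python
from itertools import product

# A toy grammar: a handful of words, but one recursive rule ("X said that S")
# makes the space of sentences unbounded. We count sentences at bounded depth.
NOUNS = ["the dog", "the child", "the reporter"]
SIMPLE_VERBS = ["saw"]
EMBEDDING_VERBS = ["said that", "believed that"]  # each embeds a whole sentence

def sentences(depth):
    """All sentences with at most `depth` levels of clause embedding."""
    if depth == 0:
        return [f"{n} {v} {m}" for n, v, m in product(NOUNS, SIMPLE_VERBS, NOUNS)]
    smaller = sentences(depth - 1)
    out = list(smaller)
    for n, v in product(NOUNS, EMBEDDING_VERBS):
        out.extend(f"{n} {v} {s}" for s in smaller)
    return out

for d in range(4):
    print(f"embedding depth {d}: {len(set(sentences(d)))} distinct sentences")
# Roughly sevenfold growth per extra level of embedding, with only six nouns/verbs.
```
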
Javier E

The Navy's USS Gabrielle Giffords and the Future of Work - The Atlantic - 0 views

  • Minimal manning—and with it, the replacement of specialized workers with problem-solving generalists—isn’t a particularly nautical concept. Indeed, it will sound familiar to anyone in an organization who’s been asked to “do more with less”—which, these days, seems to be just about everyone.
  • Ten years from now, the Deloitte consultant Erica Volini projects, 70 to 90 percent of workers will be in so-called hybrid jobs or superjobs—that is, positions combining tasks once performed by people in two or more traditional roles.
  • If you ask Laszlo Bock, Google’s former culture chief and now the head of the HR start-up Humu, what he looks for in a new hire, he’ll tell you “mental agility.”
  • “What companies are looking for,” says Mary Jo King, the president of the National Résumé Writers’ Association, “is someone who can be all, do all, and pivot on a dime to solve any problem.”
  • The phenomenon is sped by automation, which usurps routine tasks, leaving employees to handle the nonroutine and unanticipated—and the continued advance of which throws the skills employers value into flux
  • Or, for that matter, on the relevance of the question “What do you want to be when you grow up?”
  • By 2020, a 2016 World Economic Forum report predicted, “more than one-third of the desired core skill sets of most occupations” will not have been seen as crucial to the job when the report was published
  • I asked John Sullivan, a prominent Silicon Valley talent adviser, why should anyone take the time to master anything at all? “You shouldn’t!” he replied.
  • Minimal manning—and the evolution of the economy more generally—requires a different kind of worker, with not only different acquired skills but different inherent abilities
  • It has implications for the nature and utility of a college education, for the path of careers, for inequality and employability—even for the generational divide.
  • Then, in 2001, Donald Rumsfeld arrived at the Pentagon. The new secretary of defense carried with him a briefcase full of ideas from the corporate world: downsizing, reengineering, “transformational” technologies. Almost immediately, what had been an experimental concept became an article of faith
  • But once cadets got into actual command environments, which tend to be fluid and full of surprises, a different picture emerged. “Psychological hardiness”—a construct that includes, among other things, a willingness to explore “multiple possible response alternatives,” a tendency to “see all experience as interesting and meaningful,” and a strong sense of self-confidence—was a better predictor of leadership ability in officers after three years in the field.
  • Because there really is no such thing as multitasking—just a rapid switching of attention—I began to feel overstrained, put upon, and finally irked by the impossible set of concurrent demands. Shouldn’t someone be giving me a hand here? This, Hambrick explained, meant I was hitting the limits of working memory—basically, raw processing power—which is an important aspect of “fluid intelligence” and peaks in your early 20s. This is distinct from “crystallized intelligence”—the accumulated facts and know-how on your hard drive—which peaks in your 50s.
  • Others noticed the change but continued to devote equal attention to all four tasks. Their scores fell. This group, Hambrick found, was high in “conscientiousness”—a trait that’s normally an overwhelming predictor of positive job performance. We like conscientious people because they can be trusted to show up early, double-check the math, fill the gap in the presentation, and return your car gassed up even though the tank was nowhere near empty to begin with. What struck Hambrick as counterintuitive and interesting was that conscientiousness here seemed to correlate with poor performance.
  • he discovered another correlation in his test: The people who did best tended to score high on “openness to new experience”—a personality trait that is normally not a major job-performance predictor and that, in certain contexts, roughly translates to “distractibility.”
  • To borrow the management expert Peter Drucker’s formulation, people with this trait are less focused on doing things right, and more likely to wonder whether they’re doing the right things.
  • High in fluid intelligence, low in experience, not terribly conscientious, open to potential distraction—this is not the classic profile of a winning job candidate. But what if it is the profile of the winning job candidate of the future?
  • One concerns “grit”—a mind-set, much vaunted these days in educational and professional circles, that allows people to commit tenaciously to doing one thing well
  • These ideas are inherently appealing; they suggest that dedication can be more important than raw talent, that the dogged and conscientious will be rewarded in the end.
  • he studied West Point students and graduates.
  • Traditional measures such as SAT scores and high-school class rank “predicted leader performance in the stable, highly regulated environment of West Point” itself.
  • It would be supremely ironic if the advance of the knowledge economy had the effect of devaluing knowledge. But that’s what I heard, recurrently.
  • “Fluid, learning-intensive environments are going to require different traits than classical business environments,” I was told by Frida Polli, a co-founder of an AI-powered hiring platform called Pymetrics. “And they’re going to be things like ability to learn quickly from mistakes, use of trial and error, and comfort with ambiguity.”
  • “We’re starting to see a big shift,” says Guy Halfteck, a people-analytics expert. “Employers are looking less at what you know and more and more at your hidden potential” to learn new things
  • advice to employers? Stop hiring people based on their work experience. Because in these environments, expertise can become an obstacle.
  • “The Curse of Expertise.” The more we invest in building and embellishing a system of knowledge, they found, the more averse we become to unbuilding it.
  • All too often experts, like the mechanic in LePine’s garage, fail to inspect their knowledge structure for signs of decay. “It just didn’t occur to him,” LePine said, “that he was repeating the same mistake over and over.”
  • The devaluation of expertise opens up ample room for different sorts of mistakes—and sometimes creates a kind of helplessness.
  • Aboard littoral combat ships, the crew lacks the expertise to carry out some important tasks, and instead has to rely on civilian help
  • Meanwhile, the modular “plug and fight” configuration was not panning out as hoped. Converting a ship from sub-hunter to minesweeper or minesweeper to surface combatant, it turned out, was a logistical nightmare
  • So in 2016 the concept of interchangeability was scuttled for a “one ship, one mission” approach, in which the extra 20-plus sailors became permanent crew members
  • “As equipment breaks, [sailors] are required to fix it without any training,” a Defense Department Test and Evaluation employee told Congress. “Those are not my words. Those are the words of the sailors who were doing the best they could to try to accomplish the missions we gave them in testing.”
  • These results were, perhaps, predictable given the Navy’s initial, full-throttle approach to minimal manning—and are an object lesson on the dangers of embracing any radical concept without thinking hard enough about the downsides
  • a world in which mental agility and raw cognitive speed eclipse hard-won expertise is a world of greater exclusion: of older workers, slower learners, and the less socially adept.
  • if you keep going down this road, you end up with one really expensive ship with just a few people on it who are geniuses … That’s not a future we want to see, because you need a large enough crew to conduct multiple tasks in combat.
  • What does all this mean for those of us in the workforce, and those of us planning to enter it? It would be wrong to say that the 10,000-hours-of-deliberate-practice idea doesn’t hold up at all. In some situations, it clearly does.
  • A spinal surgery will not be performed by a brilliant dermatologist. A criminal-defense team will not be headed by a tax attorney. And in tech, the demand for specialized skills will continue to reward expertise handsomely.
  • But in many fields, the path to success isn’t so clear. The rules keep changing, which means that highly focused practice has a much lower return
  • In uncertain environments, Hambrick told me, “specialization is no longer the coin of the realm.”
  • It leaves us with lifelong learning.
  • I found myself the target of career suggestions. “You need to be a video guy, an audio guy!” the Silicon Valley talent adviser John Sullivan told me, alluding to the demise of print media
  • I found the prospect of starting over just plain exhausting. Building a professional identity takes a lot of resources—money, time, energy. After it’s built, we expect to reap gains from our investment, and—let’s be honest—even do a bit of coasting. Are we equipped to continually return to apprentice mode? Will this burn us out?
  • Everybody I met on the Giffords seemed to share that mentality. They regarded every minute on board—even during a routine transit back to port in San Diego Harbor—as a chance to learn something new.
Javier E

Opinion | The Apps on My Phone Are Stalking Me - The New York Times - 0 views

  • There is much about the future that keeps me up at night — A.I. weaponry, undetectable viral deepfakes
  • but in the last few years, one technological threat has blipped my fear radar much faster than others. That fear? Ubiquitous surveillance.
  • I am no longer sure that human civilization can undo or evade living under constant, extravagantly detailed physical and even psychic surveillance
  • as a species, we are not doing nearly enough to avoid always being watched or otherwise digitally recorded.
  • your location, your purchases, video and audio from within your home and office, your online searches and every digital wandering, biometric tracking of your face and other body parts, your heart rate and other vital signs, your every communication, recording, and perhaps your deepest thoughts or idlest dreams
  • in the future, if not already, much of this data and more will be collected and analyzed by some combination of governments and corporations, among them a handful of megacompanies whose powers nearly match those of governments
  • Over the last year, as part of Times Opinion’s Privacy Project, I’ve participated in experiments in which my devices were closely monitored in order to determine the kind of data that was being collected about me.
  • I’ve realized how blind we are to the kinds of insights tech companies are gaining about us through our gadgets. Our blindness not only keeps us glued to privacy-invading tech
  • it also means that we’ve failed to create a political culture that is in any way up to the task of limiting surveillance.
  • few of our cultural or political institutions are even much trying to tamp down the surveillance state.
  • Yet the United States and other supposedly liberty-loving Western democracies have not ruled out such a future
  • like Barack Obama before him, Trump and the Justice Department are pushing Apple to create a backdoor into the data on encrypted iPhones — they want the untrustworthy F.B.I. and any local cop to be able to see everything inside anyone’s phone.
  • the fact that both Obama and Trump agreed on the need for breaking iPhone encryption suggests how thoroughly political leaders across a wide spectrum have neglected privacy as a fundamental value worthy of protection.
  • Americans are sleepwalking into a future nearly as frightening as the one the Chinese are constructing. I choose the word “sleepwalking” deliberately, because when it comes to digital privacy, a lot of us prefer the comfortable bliss of ignorance.
  • Among other revelations: Advertising companies and data brokers are keeping insanely close tabs on smartphones’ location data, tracking users so precisely that their databases could arguably compromise national security or political liberty.
  • Tracking technologies have become cheap and widely available — for less than $100, my colleagues were able to identify people walking by surveillance cameras in Bryant Park in Manhattan.
  • The Clearview AI story suggests another reason to worry that our march into surveillance has become inexorable: Each new privacy-invading technology builds on a previous one, allowing for scary outcomes from new integrations and collections of data that few users might have anticipated.
  • The upshot: As the location-tracking apps followed me, I was able to capture the pings they sent to online servers — essentially recording their spying
  • On the map, you can see the apps are essentially stalking me. They see me drive out one morning to the gas station, then to the produce store, then to Safeway; later on I passed by a music school, stopped at a restaurant, then Whole Foods.
  • But location was only one part of the data the companies had about me; because geographic data is often combined with other personal information — including a mobile advertising ID that can help merge what you see and do online with where you go in the real world — the story these companies can tell about me is actually far more detailed than I can tell about myself. (A toy illustration of this merging by advertising ID appears in the sketch after this list.)
  • I can no longer pretend I’ve got nothing to worry about. Sure, I’m not a criminal — but do I want anyone to learn everything about me?
  • more to the point: Is it wise for us to let any entity learn everything about everyone?
  • The remaining uncertainty about the surveillance state is not whether we will submit to it — only how readily and completely, and how thoroughly it will warp our society.
  • Will we allow the government and corporations unrestricted access to every bit of data we ever generate, or will we decide that some kinds of collections, like the encrypted data on your phone, should be forever off limits, even when a judge has issued a warrant for it?
  • In the future, will there be room for any true secret — will society allow any unrecorded thought or communication to evade detection and commercial analysis?
  • How completely will living under surveillance numb creativity and silence radical thought?
  • Can human agency survive the possibility that some companies will know more about all of us than any of us can ever know about ourselves?
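
The captured “pings” described in this list amount to rows of app, advertising ID, time and location. The sketch below assumes a made-up log format — none of the field names come from the article — and shows how trivially rows from different apps, keyed on a shared mobile advertising ID, merge into a single per-person movement trail.

```python
from collections import defaultdict

# Hypothetical captured pings: different apps report the same mobile advertising ID,
# which is what lets separate data sets about one person be joined together.
pings = [
    {"app": "weather",    "ad_id": "A-1", "t": "08:02", "lat": 37.774, "lon": -122.419},
    {"app": "flashlight", "ad_id": "A-1", "t": "08:41", "lat": 37.771, "lon": -122.447},
    {"app": "weather",    "ad_id": "A-1", "t": "12:15", "lat": 37.765, "lon": -122.452},
    {"app": "game",       "ad_id": "B-7", "t": "09:30", "lat": 40.713, "lon": -74.006},
]

# Group by advertising ID, then order by time: a reconstructed trail per person.
trails = defaultdict(list)
for p in pings:
    trails[p["ad_id"]].append(p)

for ad_id, rows in trails.items():
    rows.sort(key=lambda r: r["t"])
    path = " -> ".join(f"{r['t']} ({r['lat']:.3f}, {r['lon']:.3f}) via {r['app']}" for r in rows)
    print(f"{ad_id}: {path}")
```
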
Javier E

I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets | Techn... - 0 views

  • I emailed Tinder requesting my personal data and got back way more than I bargained for. Some 800 pages came back containing information such as my Facebook “likes”, my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened … the list goes on.
  • “You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”
  • What will happen if this treasure trove of data gets hacked, is made public or simply bought by another company? I can almost feel the shame I would experience. The thought that, before sending me these 800 pages, someone at Tinder might have read them already makes me cringe.
  • In May, an algorithm was used to scrape 40,000 profile images from the platform in order to build an AI to “genderise” faces. A few months earlier, 70,000 profiles from OkCupid (owned by Tinder’s parent company Match Group) were made public by a Danish researcher some commentators have labelled a “white supremacist”, who used the data to try to establish a link between intelligence and religious beliefs. The data is still out there.
  • The trouble is these 800 pages of my most intimate data are actually just the tip of the iceberg. “Your personal data affects who you see first on Tinder, yes,” says Dehaye. “But also what job offers you have access to on LinkedIn, how much you will pay for insuring your car, which ad you will see in the tube and if you can subscribe to a loan. “We are leaning towards a more and more opaque society, towards an even more intangible world where data collected about you will decide even larger facets of your life. Eventually, your whole existence will be affected.”
  • As a typical millennial constantly glued to my phone, my virtual life has fully merged with my real life. There is no difference any more. Tinder is how I meet people, so this is my reality. It is a reality that is constantly being shaped by others – but good luck trying to find out how.
katedriscoll

Confirmation Bias - an overview | ScienceDirect Topics - 0 views

  • Confirmation bias is a ubiquitous phenomenon, the effects of which have been traced as far back as Pythagoras’ studies of harmonic relationships in the 6th century B.C. (Nickerson, 1998), and is referenced in the writings of William Shakespeare and Francis Bacon (Risinger, Saks, Thompson, & Rosenthal, 2002). It is also a problematic phenomenon, having been implicated in “a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations” throughout human history, including the witch trials of Western Europe and New England, and the perpetuation of inaccurate medical diagnoses, ineffective medical treatments, and erroneous scientific theories (Nickerson, 1998, p. 175).
  • For over a century, psychologists have observed that people naturally favor information that is consistent with their beliefs or desires, and ignore or discount evidence to the contrary. In an article titled “The Mind’s Eye,” Jastrow (1899) was among the first to explain how the mind plays an active role in information processing, such that two individuals with different mindsets might interpret the same information in entirely different ways (see also Boring, 1930). Since then, a wealth of empirical research has demonstrated that confirmation bias affects how we perceive visual stimuli (e.g., Bruner & Potter, 1964; Leeper, 1935), how we gather and evaluate evidence (e.g., Lord, Ross, & Lepper, 1979; Wason, 1960), and how we judge—and behave toward—other people (e.g., Asch, 1946; Rosenthal & Jacobson, 1966; Snyder & Swann, 1978).
Javier E

Accelerationism: how a fringe philosophy predicted the future we live in | World news |... - 1 views

  • Roger Zelazny published his third novel. In many ways, Lord of Light was of its time, shaggy with imported Hindu mythology and cosmic dialogue. Yet there were also glints of something more forward-looking and political.
  • accelerationism has gradually solidified from a fictional device into an actual intellectual movement: a new way of thinking about the contemporary world and its potential.
  • Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.
  • Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled.
  • Accelerationism, therefore, goes against conservatism, traditional socialism, social democracy, environmentalism, protectionism, populism, nationalism, localism and all the other ideologies that have sought to moderate or reverse the already hugely disruptive, seemingly runaway pace of change in the modern world
  • Robin Mackay and Armen Avanessian in their introduction to #Accelerate: The Accelerationist Reader, a sometimes baffling, sometimes exhilarating book, published in 2014, which remains the only proper guide to the movement in existence.
  • “We all live in an operating system set up by the accelerating triad of war, capitalism and emergent AI,” says Steve Goodman, a British accelerationist
  • A century ago, the writers and artists of the Italian futurist movement fell in love with the machines of the industrial era and their apparent ability to invigorate society. Many futurists followed this fascination into war-mongering and fascism.
  • One of the central figures of accelerationism is the British philosopher Nick Land, who taught at Warwick University in the 1990s
  • Land has published prolifically on the internet, not always under his own name, about the supposed obsolescence of western democracy; he has also written approvingly about “human biodiversity” and “capitalistic human sorting” – the pseudoscientific idea, currently popular on the far right, that different races “naturally” fare differently in the modern world; and about the supposedly inevitable “disintegration of the human species” when artificial intelligence improves sufficiently.
  • In our politically febrile times, the impatient, intemperate, possibly revolutionary ideas of accelerationism feel relevant, or at least intriguing, as never before. Noys says: “Accelerationists always seem to have an answer. If capitalism is going fast, they say it needs to go faster. If capitalism hits a bump in the road, and slows down” – as it has since the 2008 financial crisis – “they say it needs to be kickstarted.”
  • On alt-right blogs, Land in particular has become a name to conjure with. Commenters have excitedly noted the connections between some of his ideas and the thinking of both the libertarian Silicon Valley billionaire Peter Thiel and Trump’s iconoclastic strategist Steve Bannon.
  • “In Silicon Valley,” says Fred Turner, a leading historian of America’s digital industries, “accelerationism is part of a whole movement which is saying, we don’t need [conventional] politics any more, we can get rid of ‘left’ and ‘right’, if we just get technology right. Accelerationism also fits with how electronic devices are marketed – the promise that, finally, they will help us leave the material world, all the mess of the physical, far behind.”
  • In 1972, the philosopher Gilles Deleuze and the psychoanalyst Félix Guattari published Anti-Oedipus. It was a restless, sprawling, appealingly ambiguous book, which suggested that, rather than simply oppose capitalism, the left should acknowledge its ability to liberate as well as oppress people, and should seek to strengthen these anarchic tendencies, “to go still further … in the movement of the market … to ‘accelerate the process’”.
  • By the early 90s Land had distilled his reading, which included Deleuze and Guattari and Lyotard, into a set of ideas and a writing style that, to his students at least, were visionary and thrillingly dangerous. Land wrote in 1992 that capitalism had never been properly unleashed, but instead had always been held back by politics, “the last great sentimental indulgence of mankind”. He dismissed Europe as a sclerotic, increasingly marginal place, “the racial trash-can of Asia”. And he saw civilisation everywhere accelerating towards an apocalypse: “Disorder must increase... Any [human] organisation is ... a mere ... detour in the inexorable death-flow.”
  • With the internet becoming part of everyday life for the first time, and capitalism seemingly triumphant after the collapse of communism in 1989, a belief that the future would be almost entirely shaped by computers and globalisation – the accelerated “movement of the market” that Deleuze and Guattari had called for two decades earlier – spread across British and American academia and politics during the 90s. The Warwick accelerationists were in the vanguard.
  • In the US, confident, rainbow-coloured magazines such as Wired promoted what became known as “the Californian ideology”: the optimistic claim that human potential would be unlocked everywhere by digital technology. In Britain, this optimism influenced New Labour
  • The Warwick accelerationists saw themselves as participants, not traditional academic observers
  • The CCRU gang formed reading groups and set up conferences and journals. They squeezed into the narrow CCRU room in the philosophy department and gave each other impromptu seminars.
  • The main result of the CCRU’s frantic, promiscuous research was a conveyor belt of cryptic articles, crammed with invented terms, sometimes speculative to the point of being fiction.
  • At Warwick, however, the prophecies were darker. “One of our motives,” says Plant, “was precisely to undermine the cheery utopianism of the 90s, much of which seemed very conservative” – an old-fashioned male desire for salvation through gadgets, in her view.
  • K-punk was written by Mark Fisher, formerly of the CCRU. The blog retained some Warwick traits, such as quoting reverently from Deleuze and Guattari, but it gradually shed the CCRU’s aggressive rhetoric and pro-capitalist politics for a more forgiving, more left-leaning take on modernity. Fisher increasingly felt that capitalism was a disappointment to accelerationists, with its cautious, entrenched corporations and endless cycles of essentially the same products. But he was also impatient with the left, which he thought was ignoring new technology
  • Nick Srnicek and Alex Williams co-wrote a Manifesto for an Accelerationist Politics. “Capitalism has begun to constrain the productive forces of technology,” they wrote. “[Our version of] accelerationism is the basic belief that these capacities can and should be let loose … repurposed towards common ends … towards an alternative modernity.”
  • What that “alternative modernity” might be was barely, but seductively, sketched out, with fleeting references to reduced working hours, to technology being used to reduce social conflict rather than exacerbate it, and to humanity moving “beyond the limitations of the earth and our own immediate bodily forms”. On politics and philosophy blogs from Britain to the US and Italy, the notion spread that Srnicek and Williams had founded a new political philosophy: “left accelerationism”.
  • Two years later, in 2015, they expanded the manifesto into a slightly more concrete book, Inventing the Future. It argued for an economy based as far as possible on automation, with the jobs, working hours and wages lost replaced by a universal basic income. The book attracted more attention than a speculative leftwing work had for years, with interest and praise from intellectually curious leftists
  • Even the thinking of the arch-accelerationist Nick Land, who is 55 now, may be slowing down. Since 2013, he has become a guru for the US-based far-right movement neoreaction, or NRx as it often calls itself. Neoreactionaries believe in the replacement of modern nation-states, democracy and government bureaucracies by authoritarian city states, which on neoreaction blogs sound as much like idealised medieval kingdoms as they do modern enclaves such as Singapore.
  • Land argues now that neoreaction, like Trump and Brexit, is something that accelerationists should support, in order to hasten the end of the status quo.
  • In 1970, the American writer Alvin Toffler, an exponent of accelerationism’s more playful intellectual cousin, futurology, published Future Shock, a book about the possibilities and dangers of new technology. Toffler predicted the imminent arrival of artificial intelligence, cryonics, cloning and robots working behind airline check-in desks
  • Land left Britain. He moved to Taiwan “early in the new millennium”, he told me, then to Shanghai “a couple of years later”. He still lives there now.
  • In a 2004 article for the Shanghai Star, an English-language paper, he described the modern Chinese fusion of Marxism and capitalism as “the greatest political engine of social and economic development the world has ever known”
  • Once he lived there, Land told me, he realised that “to a massive degree” China was already an accelerationist society: fixated by the future and changing at speed. Presented with the sweeping projects of the Chinese state, his previous, libertarian contempt for the capabilities of governments fell away
  • Without a dynamic capitalism to feed off, as Deleuze and Guattari had in the early 70s, and the Warwick philosophers had in the 90s, it may be that accelerationism just races up blind alleys. In his 2014 book about the movement, Malign Velocities, Benjamin Noys accuses it of offering “false” solutions to current technological and economic dilemmas. With accelerationism, he writes, a breakthrough to a better future is “always promised and always just out of reach”.
  • “The pace of change accelerates,” concluded a documentary version of the book, with a slightly hammy voiceover by Orson Welles. “We are living through one of the greatest revolutions in history – the birth of a new civilisation.”
  • Shortly afterwards, the 1973 oil crisis struck. World capitalism did not accelerate again for almost a decade. For much of the “new civilisation” Toffler promised, we are still waiting
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’”
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
margogramiak

How To Fight Deforestation In The Amazon From Your Couch | HuffPost - 0 views

  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
  • Some 15% of the Amazon, the world’s largest rainforest and a crucial carbon repository, has been cut or burned down. Around two-thirds of the Amazon lie within Brazil’s borders, where almost 157 square miles of forest were cleared in April alone. In addition to storing billions of tons of carbon, the Amazon is home to tens of millions of people and some 10% of the Earth’s biodiversity.
    • margogramiak: all horrifying stats.
  • you just have to be a citizen that is concerned about the issue of deforestation,
    • margogramiak: that's me!
  • If you’ve got as little as 30 seconds and a decent internet connection, you can help combat the deforestation of the Amazon. 
    • margogramiak: great!
  • to build an artificial intelligence model that can recognize signs of deforestation. That data can be used to alert governments and conservation organizations where intervention is needed and to inform policies that protect vital ecosystems. It may even one day predict where deforestation is likely to happen next.
    • margogramiak: That sounds super cool, and definitely useful.
  • To monitor deforestation, conservation organizations need an eye in the sky.
    • margogramiak: bird's eye view pictures of deforestation are always super impactful.
  • WRI’s Global Forest Watch online tracking system receives images of the world’s forests taken every few days by NASA satellites. A simple computer algorithm scans the images, flagging instances where before there were trees and now there are not. But slight disturbances, such as clouds, can trip up the computer, so experts are increasingly interested in using artificial intelligence. (A toy version of this before/after flagging appears in the sketch after this list.)
    • margogramiak: that's so cool.
  • Inman was surprised how willing people have been to spend their time clicking on abstract-looking pictures of the Amazon.
    • margogramiak: I'm glad so many people want to help.
  • “Look at these nine blocks and make a judgment about each one. Does that satellite image look like a situation where human beings have transformed the landscape in some way?” Inman explained.
    • margogramiak: seems simple enough
  • It’s not always easy; that’s the point. For example, a brown patch in the trees could be the result of burning to clear land for agriculture (earning a check mark for human impact), or it could be the result of a natural forest fire (no check mark). Keen users might be able to spot subtle signs of intervention the computer would miss, like the thin yellow line of a dirt road running through the clearing. 
    • margogramiak
       
      I was thinking about this issue... that's a hard problem to solve.
  • SAS’s website offers a handful of examples comparing natural forest features and manmade changes. 
    • margogramiak
       
      I guess that would be helpful. What happens if someone messes up though?
  • users have analyzed almost 41,000 images, covering an area of rainforest nearly the size of the state of Montana. Deforestation caused by human activity is evident in almost 2 in 5 photos.
    • margogramiak
       
      wow.
  • The researchers hope to use historical images of these new geographies to create a predictive model that could identify areas most at risk of future deforestation. If they can show that their AI model is successful, it could be useful for NGOs, governments and forest monitoring bodies, enabling them to carefully track forest changes and respond by sending park rangers and conservation teams to threatened areas. In the meantime, it’s a great educational tool for the citizen scientists who use the app
    • margogramiak
       
      But then what do they do with this data? How do they use it to make a difference?
  • Users simply select the squares in which they’ve spotted some indication of human impact: the tell-tale quilt of farm plots, a highway, a suspiciously straight edge of tree line. 
    • margogramiak
       
      I could do that!
  • we have still had people from 80 different countries come onto the app and make literally hundreds of judgments that enabled us to resolve 40,000 images,
    • margogramiak
       
      I like how in a sense it makes all the users one big community because of their common goal of wanting to help the earth.
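A rough illustration of the tree-loss flagging rule described in the highlights above: compare two satellite passes, flag pixels that were forested before and are not now, and ignore pixels that look like cloud, since clouds are the disturbance the article says trips up a naive scan. This is a minimal sketch under assumed inputs and made-up thresholds, not Global Forest Watch's or SAS's actual pipeline.

```python
# Toy "trees before, no trees now" change detection on two-band reflectance
# arrays. Band names, thresholds, and the brightness-based cloud test are
# illustrative assumptions, not values from WRI or SAS.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: high over healthy canopy."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def flag_tree_loss(before_red, before_nir, after_red, after_nir,
                   veg_thresh=0.6, loss_drop=0.3, cloud_brightness=0.8):
    """Boolean mask of pixels that looked forested before and bare after."""
    was_forest = ndvi(before_red, before_nir) > veg_thresh
    now_bare = ndvi(after_red, after_nir) < (veg_thresh - loss_drop)
    # Very bright pixels in either pass are treated as likely cloud and skipped.
    cloudy = (before_red > cloud_brightness) | (after_red > cloud_brightness)
    return was_forest & now_bare & ~cloudy

# Example use: flagged = flag_tree_loss(r0, n0, r1, n1) on reflectance in [0, 1].
```

The human judgments the app collects would, in this framing, supply the labels an AI model needs to do better than such a fixed-threshold rule.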
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • People choose computers as intermediaries for the sensual delight of using computers
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • Doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • The present status of intelligent machines is more powerful than any future robot apocalypse.
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment. Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
runlai_jiang

Elon Musk: Mars ship test flights 'next year' - BBC News - 0 views

  • Elon Musk, a man prone to ludicrous deadlines, has birthed another: test flights of his Mars spaceship next year. "I think we’ll be able to do short flights, up and down flights, some time in the first half of next year," he told an audience at the South by South West (SXSW) festival in Austin, Texas.
  • Elon Musk is unquestionably the most interesting businessman in Silicon Valley - arguably the world - thanks to his almost single-handed reignition of the space race. After a string of failed rockets - and near bankruptcy - SpaceX wowed the world with its latest flight, Falcon Heavy, in February.
  • "This is a situation where you have a very serious danger to the public. There needs to be a public body that has insight and oversight so that everyone is delivering AI safely. This is extremely important.
knudsenlu

How badly do you want something? Babies can tell | MIT News - 0 views

  • Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.
  • This ability requires integrating information about both the costs of obtaining a goal and the benefit gained by the person seeking it, suggesting that babies acquire very early an intuition about how people make decisions.
  • “This paper is not the first to suggest that idea, but its novelty is that it shows this is true in much younger babies than anyone has seen. These are preverbal babies, who themselves are not actively doing very much, yet they appear to understand other people’s actions in this sophisticated, quantitative way,”
  • ...4 more annotations...
  • “This study is an important step in trying to understand the roots of common-sense understanding of other people’s actions. It shows quite strikingly that in some sense, the basic math that is at the heart of how economists think about rational choice is very intuitive to babies who don’t know math, don’t speak, and can barely understand a few words
  • “Abstract, interrelated concepts like cost and value — concepts at the center both of our intuitive psychology and of utility theory in philosophy and economics — may originate in an early-emerging system by which infants understand other people's actions,” she says. 
  • In other words, they apply the well-known logic that all of us rely on when we try to assess someone’s preferences: The harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds.”
  • “We have to recognize that we’re very far from building AI systems that have anything like the common sense even of a 10-month-old,” Tenenbaum says. “But if we can understand in engineering terms the intuitive theories that even these young infants seem to have, that hopefully would be the basis for building machines that have more human-like intelligence.
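The cost-benefit intuition quoted above ("the harder she tries to achieve something, the more valuable is the expected reward to her when she succeeds") can be written down as a toy rational-actor model. The effort costs and the threshold decision rule below are illustrative assumptions, not the study's actual formalism.

```python
# Toy rational-actor sketch: an agent attempts a goal only if its value to her
# is at least the cost of the effort required, so the costliest effort an
# observer sees her accept puts a lower bound on that value.
# Effort labels, costs, and the threshold rule are illustrative assumptions.

EFFORT_COST = {"ramp": 1.0, "low_wall": 2.0, "high_wall": 4.0}

def will_attempt(goal_value: float, effort: str) -> bool:
    """Agent acts when the goal's value covers the effort's cost."""
    return goal_value >= EFFORT_COST[effort]

def inferred_minimum_value(observed: dict) -> float:
    """Lower-bound the goal's value from which efforts the agent accepted."""
    accepted = [EFFORT_COST[e] for e, attempted in observed.items() if attempted]
    return max(accepted, default=0.0)

# Watching the agent scale even the high wall implies the goal is worth
# at least 4.0 effort units to her.
print(inferred_minimum_value({"ramp": True, "low_wall": True, "high_wall": True}))
```

The lower-bound inference in the last line is the same logic the paper attributes to preverbal infants: value is read off the effort someone is willing to accept.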
runlai_jiang

You Asked About CES 2018. We Answered. - The New York Times - 0 views

  • At the International Consumer Electronics Show this week in Las Vegas, thousands of tech companies showcased some of the hottest new innovations: artificial intelligence, self-driving car tech, the smart home, voice-controlled accessories, fifth-generation cellular connectivity and more.
  • Curious about the new products and how they will affect your personal technology? Readers asked Brian X. Chen, our lead consumer technology writer who attended the trade show, their questions about wireless, TV and the Internet of Things. (In addition,
johnsonel7

Entrepreneurial Singularity: Marrying Technology and Human Virtues - 0 views

  • Mona Hamdy believes that technology married with pragmatic optimism can save the world. That’s what the entrepreneur and Harvard University Applied Ethics teaching fellow told me while we sat overlooking the Potomac River at her restaurant in Georgetown. We discussed impossible problems like plastics in the ocean, hostile AI, hypersonic missiles, the perils of cashless economies for the world’s poorest, socioeconomic challenges for women in the Middle East and North Africa, and cultural misunderstandings between the U.S. and Arab nations. 
  • “The kind of technology we have created should give us pause. It means we are aware of its potential in our hands. We can regulate it and use it to help relieve human despair like no other time on earth. Conflict, famine, poverty, and ecological destruction can be mapped on top of each other. Let’s learn as much as we can, and create economies that address these problems as solvable opportunities.” 
  • “The nature of our company combined this traditional wisdom with futuristic technology like cinematic worldbuilding, mixed reality and AR for education,  digital twinning, and 3d printing as effective modes of information transfer. These things were not considered part of the poverty-eradication toolkit a decade ago, but the world is coming around to it. Tech and heritage-- it’s the 21st century version of what our ancestors would have done. ”
  • ...1 more annotation...
  • “I design projects that prove companies can be profitable when the end result is better stewardship of our planet. I think it’s the most ethical thing we can do for those who will come after us. Sometimes, like in ecological projects, that end result just happens to be a hundred years from now. Which isn’t that long when you consider how trees grow or lakes fill.”