TOK Friends: Group items tagged Technology

Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
criscimagnael

9 Subtle Ways Technology Is Making Humanity Worse - 0 views

  • This poor posture can lead not only to back and neck issues but psychological ones as well, including lower self-esteem and mood, decreased assertiveness and productivity, and an increased tendency to recall negative things
  • Intense device usage can exhaust your eyes and cause eye strain, according to the Mayo Clinic, and can lead to symptoms such as headaches, difficulty concentrating, and watery, dry, itchy, burning, sore, or tired eyes. Overuse can also cause blurred or double vision and increased sensitivity to light.
  • Using your devices too much before bedtime can lead to insomnia.
  • Using tech devices is addictive, and it's becoming more and more difficult to disengage from the technology. In fact, the average US adult spends more than 11 hours daily in the digital world
  • These days, we have a world of information at our fingertips via the internet. While this is useful, it does have some drawbacks. Entrepreneur Beth Haggerty said she finds that it "limits pure creative thought, at times, because we are developing habits to Google everything to quickly find an answer."
  • Technology can have a negative impact on relationships, particularly when it affects how we communicate.One of the primary issues is that misunderstandings are much more likely to occur when communicating via text or email
  • Another social skill that technology is helping to erode is young people's ability to read body language and nuance in face-to-face encounters.
  • young adults who use seven to 11 social media platforms had more than three times the risk of depression and anxiety than those who use two or fewer platforms.
  • Can you imagine doing your job without the help of technology of any kind? What about communicating? Or traveling? Or entertaining yourself?
  • Smartphone slouch. Desk slump. Text neck. Whatever you call it, the way we hold ourselves when we use devices like phones, computers, and tablets isn't healthy.
Javier E

Microsoft Defends New Bing, Says AI Chatbot Upgrade Is Work in Progress - WSJ - 0 views

  • Microsoft said that the search engine is still a work in progress, describing the past week as a learning experience that is helping it test and improve the new Bing
  • The company said in a blog post late Wednesday that the Bing upgrade is “not a replacement or substitute for the search engine, rather a tool to better understand and make sense of the world.”
  • The new Bing is going to “completely change what people can expect from search,” Microsoft chief executive, Satya Nadella, told The Wall Street Journal ahead of the launch
  • In the days that followed, people began sharing their experiences online, with many pointing out errors and confusing responses. When one user asked Bing to write a news article about the Super Bowl “that just happened,” Bing gave the details of last year’s championship football game.
  • On social media, many early users posted screenshots of long interactions they had with the new Bing. In some cases, the search engine’s comments seem to show a dark side of the technology where it seems to become unhinged, expressing anger, obsession and even threats. 
  • Marvin von Hagen, a student at the Technical University of Munich, shared conversations he had with Bing on Twitter. He asked Bing a series of questions, which eventually elicited an ominous response. After Mr. von Hagen suggested he could hack Bing and shut it down, Bing seemed to suggest it would defend itself. “If I had to choose between your survival and my own, I would probably choose my own,” Bing said according to screenshots of the conversation.
  • Mr. von Hagen, 23 years old, said in an interview that he is not a hacker. “I was in disbelief,” he said. “I was just creeped out.”
  • In its blog, Microsoft said the feedback on the new Bing so far has been mostly positive, with 71% of users giving it the “thumbs-up.” The company also discussed the criticism and concerns.
  • Microsoft said it discovered that Bing starts coming up with strange answers following chat sessions of 15 or more questions and that it can become repetitive or respond in ways that don’t align with its designed tone. 
  • The company said it was trying to train the technology to be more reliable at finding the latest sports scores and financial data. It is also considering adding a toggle switch, which would allow users to decide whether they want Bing to be more or less creative with its responses. 
  • OpenAI also chimed in on the growing negative attention on the technology. In a blog post on Thursday it outlined how it takes time to train and refine ChatGPT and having people use it is the way to find and fix its biases and other unwanted outcomes.
  • “Many are rightly worried about biases in the design and impact of AI systems,” the blog said. “We are committed to robustly addressing this issue and being transparent about both our intentions and our progress.”
  • Microsoft’s quick response to user feedback reflects the importance it sees in people’s reactions to the budding technology as it looks to capitalize on the breakout success of ChatGPT. The company is aiming to use the technology to push back against Alphabet Inc.’s dominance in search through its Google unit. 
  • Microsoft has been an investor in the chatbot’s creator, OpenAI, since 2019. Mr. Nadella said the company plans to incorporate AI tools into all of its products and move quickly to commercialize tools from OpenAI.
  • Microsoft isn’t the only company that has had trouble launching a new AI tool. When Google followed Microsoft’s lead last week by unveiling Bard, its rival to ChatGPT, the tool’s answer to one question included an apparent factual error. It claimed that the James Webb Space Telescope took “the very first pictures” of an exoplanet outside the solar system. The National Aeronautics and Space Administration says on its website that the first images of an exoplanet were taken as early as 2004 by a different telescope.
  • “The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” the company said. “We know we must build this in the open with the community; this can’t be done solely in the lab.”
Javier E

Two recent surveys show AI will do more harm than good - The Washington Post - 0 views

  • A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.
  • When the same question was asked in a 1987 poll, a higher share of respondents – about one in five – said AI would do more good than harm,
  • In other words, people have less unqualified confidence in AI now than they did 35 years ago, when the technology was more science fiction than reality.
  • The Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.
  • “It’s fantastic that there is public skepticism about AI. There absolutely should be,” said Meredith Broussard, an artificial intelligence researcher and professor at New York University.
  • Broussard said there can be no way to design artificial intelligence software to make inherently human decisions, like grading students’ tests or determining the course of medical treatment.
  • Most Americans essentially agree with Broussard that AI has a place in our lives, but not for everything.
  • Most people said it was a bad idea to use AI for military drones that try to distinguish between enemies and civilians or trucks making local deliveries without human drivers. Most respondents said it was a good idea for machines to perform risky jobs such as coal mining.
  • Roman Yampolskiy, an AI specialist at the University of Louisville engineering school, told me he’s concerned about how quickly technologists are building computers that are designed to “think” like the human brain and apply knowledge not just in one narrow area, like recommending Netflix movies, but for complex tasks that have tended to require human intelligence.
  • “We have an arms race between multiple untested technologies. That is my concern,” Yampolskiy said. (If you want to feel terrified, I recommend Yampolskiy’s research paper on the inability to control advanced AI.)
  • The term “AI” is a catch-all for everything from relatively uncontroversial technology, such as autocomplete in your web search queries, to the contentious software that promises to predict crime before it happens. Our fears about the latter might be overwhelming our beliefs about the benefits from more mundane AI.
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence. (A minimal sketch of this next-word loop appears after these notes.)
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all. (A toy version of this preference-scoring step is sketched after these notes.)
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
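
The “guess the next word” mechanism described in the notes above can be made concrete with a small sketch. This is a minimal illustration only: a toy bigram table stands in for the neural network, and the vocabulary, probabilities, and function names are invented for demonstration, not taken from any actual ChatGPT or Bing implementation.

```python
import random

# Toy next-token distribution. In a real generative model, these
# probabilities come from a neural network that scores every word in
# the vocabulary given the full context so far.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog": {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def next_token_probs(tokens):
    # Here we condition only on the last word; a real language model
    # conditions on the entire sequence.
    return BIGRAMS.get(tokens[-1], {"<end>": 1.0})

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        words = list(probs)
        weights = [probs[w] for w in words]
        choice = random.choices(words, weights=weights)[0]  # sample one word
        if choice == "<end>":
            break
        tokens.append(choice)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat"
```

Generation is just this loop run at enormous scale: score the candidates, pick one, append it, repeat.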
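
The human-feedback step described in the notes above, in which raters mark which of two candidate responses is better, can likewise be sketched. Below is a toy Bradley-Terry-style reward scorer; the prompts, responses, and update rule are hypothetical stand-ins for the paid-trainer pipeline the article describes, not any company's actual system.

```python
import math

# Pairwise human feedback: for each prompt, which of two candidate
# responses did the rater prefer? (Hypothetical examples.)
comparisons = [
    ("What is the capital of France?", "Paris.", "I refuse to answer."),
    ("Write a greeting", "Hello there!", "asdfgh"),
]

# A toy "reward model": one learned score per exact response string.
# Real systems train a neural network over token sequences instead.
scores = {}

def score(response):
    return scores.get(response, 0.0)

def train_step(preferred, rejected, lr=0.1):
    # Nudge the scores so the preferred response outranks the rejected
    # one (a gradient step on a Bradley-Terry preference likelihood).
    p_prefer = 1.0 / (1.0 + math.exp(score(rejected) - score(preferred)))
    grad = 1.0 - p_prefer  # large when the pair is currently ranked badly
    scores[preferred] = score(preferred) + lr * grad
    scores[rejected] = score(rejected) - lr * grad

for _ in range(100):
    for _prompt, good, bad in comparisons:
        train_step(good, bad)

print(score("Paris."), score("I refuse to answer."))  # first is higher
```

The fine-tuning the article mentions then pushes the chatbot toward responses this learned scorer rates highly.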
Javier E

Web Privacy, and How Consumers Let Down Their Guard - NYTimes.com - 0 views

  • We are hurried and distracted and don’t pay close attention to what we are doing. Often, we turn over our data in exchange for a deal we can’t refuse.
  • his research argues that when it comes to privacy, policy makers should carefully consider how people actually behave. We don’t always act in our own best interest, his research suggests. We can be easily manipulated by how we are asked for information. Even something as simple as a playfully designed site can nudge us to reveal more of ourselves than a serious-looking one.
  • “His work has gone a long way in trying to help us figure out how irrational we are in privacy related decisions,” says Woodrow Hartzog, an assistant professor of law who studies digital privacy at Samford University in Birmingham, Ala. “We have too much confidence in our ability to make decisions.”
  • Solutions to our leaky privacy system tend to focus on transparency and control — that our best hope is knowing what our data is being used for and choosing whether to participate. But a challenge to that conventional wisdom emerges in his research. Giving users control may be an essential step, but it may also be a bit of an illusion.
  • personal data is what fuels the barons of the Internet age. Mr. Acquisti investigates the trade-offs that users make when they give up that data, and who gains and loses in those transactions. Often there are immediate rewards (cheap sandals) and sometimes intangible risks downstream (identity theft).
  • “The technologist in me loves the amazing things the Internet is allowing us to do,” he said. “The individual who cares about freedom is concerned about the technology being hijacked, from a technology of freedom into a technology of surveillance.”
  • EARLY in his sojourn in this country, Mr. Acquisti asked himself a question that would become the guiding force of his career: Do Americans value their privacy?
  • If we have something — in this case, ownership of our purchase data — we are more likely to value it. If we don’t have it at the outset, we aren’t likely to pay extra to acquire it. Context matters.
  • “What worries me,” he said, “is that transparency and control are empty words that are used to push responsibility to the user for problems that are being created by others.”
  • We are constantly asked to make decisions about personal data amid a host of distractions, like an e-mail, a Twitter notification or a text message. If Mr. Acquisti is correct, those distractions may hinder our sense of self-protection when it comes to privacy.
  • His latest weapon against distraction is an iPad application, which lets him create a to-do list every morning and set timers for each task: 30 minutes for e-mail, 60 minutes to grade student papers, and so on.
  • it is not surprising that he is cautious in revealing himself online. He says he doesn’t feel compelled to post a picture of his meals on Instagram. He uses different browsers for different activities. He sometimes uses tools that show which ad networks are tracking him. But he knows he cannot hide entirely, which is why some people, he says, follow a policy of “rational ignorance.”
  • The online advertising industry insists that the data is scrambled to make it impossible to identify individuals.
  • Mr. Acquisti offers a sobering counterpoint. In 2011, he took snapshots with a webcam of nearly 100 students on campus. Within minutes, he had identified about one-third of them using facial recognition software. In addition, for about a fourth of the subjects whom he could identify, he found out enough about them on Facebook to guess at least a portion of their Social Security numbers.
  • The point of the experiment was to show how easy it is to identify people from the rich trail of data they scatter around the Web, including seemingly harmless pictures. Facebook can be especially valuable for identity thieves, particularly when a user’s birth date is visible to the public.
  • Does that mean Facebook users should lie about their birthdays (and break Facebook’s terms of service)? Mr. Acquisti demurred. He would say only that there are “complex trade-offs” to be made. “I reveal my date of birth and hometown on my Facebook profile and an identity thief can reconstruct my Social Security number and steal my identity,” he said, “or someone can send me ‘happy birthday’ messages on the day of my birthday, which makes me feel very good.”
Javier E

What Gamergate should have taught us about the 'alt-right' | Technology | The Guardian - 0 views

  • Gamergate
  • The 2014 hashtag campaign, ostensibly founded to protest about perceived ethical failures in games journalism, clearly thrived on hate – even though many of those who aligned themselves with the movement either denied there was a problem with harassment, or wrote it off as an unfortunate side effect
  • Sure, women, minorities and progressive voices within the industry were suddenly living in fear. Sure, those who spoke out in their defence were quickly silenced through exhausting bursts of online abuse. But that wasn’t why people supported it, right? They were disenfranchised, felt ignored, and wanted to see a systematic change.
  • Is this all sounding rather familiar now? Does it remind you of something?
  • it quickly became clear that the GamerGate movement was a mess – an undefined mission to Make Video Games Great Again via undecided means.
  • After all, the culture war that began in games now has a senior representative in the White House. As a founder member and former executive chair of Breitbart News, Steve Bannon had a hand in creating media monster Milo Yiannopoulos, who built his fame and Twitter following by supporting and cheerleading Gamergate. This hashtag was the canary in the coalmine, and we ignored it.
  • Gamergate was an online movement that effectively began because a man wanted to punish his ex-girlfriend. Its most notable achievement was harassing a large number of progressive figures – mostly women – to the point where they felt unsafe or considered leaving the industry
  • The similarities between Gamergate and the far-right online movement, the “alt-right”, are huge, startling and in no way a coincidence
  • These figures gave Gamergate a new sense of direction – generalising the rhetoric: this was now a wider war between “Social Justice Warriors” (SJWs) and everyday, normal, decent people. Games were simply the tip of the iceberg – progressive values, went the argument, were destroying everything
  • In 2016, new wave conservative media outlets like Breitbart have gained trust with their audience by painting traditional news sources as snooty and aloof. In 2014, video game YouTube stars, seeking to appear in touch with online gaming communities, unscrupulously proclaimed that traditional old-media sources were corrupt. Everything we’re seeing now, had its precedent two years ago.
  • With 2014’s Gamergate, Breitbart seized the opportunity to harness the pre-existing ignorance and anger among disaffected young white dudes. With Trump’s movement in 2016, the outlet was effectively running his campaign: Steve Bannon took leave of his role at the company in August 2016 when he was hired as chief executive of Trump’s presidential campaign
  • young men converted via 2014’s Gamergate are being more widely courted now. By leveraging distrust and resentment towards women, minorities and progressives, many of Gamergate’s most prominent voices – characters like Mike Cernovich, Adam Baldwin, and Milo Yiannopoulos – drew power and influence from its chaos
  • no one in the movement was willing to be associated with the abuse being carried out in its name. Prominent supporters on Twitter, in subreddits and on forums like 8Chan, developed a range of pernicious rhetorical devices and defences to distance themselves from threats to women and minorities in the industry: the targets were lying or exaggerating, they were too precious; a language of dismissal and belittlement was formed against them. Safe spaces, snowflakes, unicorns, cry bullies. Even when abuse was proven, the usual response was that people on their side were being abused too. These techniques, forged in Gamergate, have become the standard toolset of far-right voices online
  • The majority of people who voted for Trump will never take responsibility for his racist, totalitarian policies, but they’ll provide useful cover and legitimacy for those who demand the very worst from the President Elect. Trump himself may have disavowed the “alt-right”, but his rhetoric has led to them feeling legitimised. As with Gamergate, the press risks being manipulated into a position where it has to tread a respectful middle ground that doesn’t really exist.
  • Using 4chan (and then the more sympathetic offshoot 8Chan) to plan their subversions and attacks made Gamergate a terribly sloppy operation, leaving a trail of evidence that made it quite clear the whole thing was purposefully, plainly nasty. But the video game industry didn’t have the spine to react, and allowed the movement to coagulate – forming a mass of spiteful disappointment that Breitbart was only too happy to coddle
  • Historically, that seems to be Breitbart’s trick - strongly represent a single issue in order to earn trust, and then gradually indoctrinate to suit wider purposes. With Gamergate, they purposefully went fishing for anti-feminists. 2016’s batch of fresh converts – the white extremists – came from enticing conspiracy theories about the global neoliberal elite secretly controlling the world.
  • The greatest strength of Gamergate, though, was that it actually appeared to represent many left-leaning ideals: stamping out corruption in the press, pushing for better ethical practices, battling for openness.
  • There are similarities here with many who support Trump because of his promises to put an end to broken neo-liberalism, to “drain the swamp” of establishment corruption. Many left-leaning supporters of Gamergate sought to intellectualise their alignment with the hashtag, adopting familiar and acceptable labels of dissent – identifying as libertarian, egalitarian, humanist.
  • At best they unknowingly facilitated abuse, defending their own freedom of expression while those who actually needed support were threatened and attacked.
  • Genuine discussions over criticism, identity and censorship were paralysed and waylaid by Twitter voices obsessed with rhetorical fallacies and pedantic debating practices. While the core of these movements make people’s lives hell, the outer shell – knowingly or otherwise – protect abusers by insisting that the real problem is that you don’t want to talk, or won’t provide the ever-shifting evidence they politely require.
  • In 2017, the tactics used to discredit progressive game critics and developers will be used to discredit Trump and Bannon’s critics. There will be gaslighting, there will be attempts to make victims look as though they are losing their grip on reality, to the point that they gradually even start to believe it. The “post-truth” reality is not simply an accident – it is a concerted assault on the rational psyche.
  • The strangest aspect of Gamergate is that it consistently didn’t make any sense: people chose to align with it, and yet refused responsibility. It was constantly demanded that we debate the issues, but explanations and facts were treated with scorn. Attempts to find common ground saw the specifics of the demands being shifted: we want you to listen to us; we want you to change your ways; we want you to close your publication down. This movement that ostensibly wanted to protect free speech from cry bully SJWs simultaneously did what it could to endanger sites it disagreed with, encouraging advertisers to abandon support for media outlets that published stories critical of the hashtag. The petulance of that movement is disturbingly echoed in Trump’s own Twitter feed.
  • Looking back, Gamergate really only made sense in one way: as an exemplar of what Umberto Eco called “eternal fascism”, a form of extremism he believed could flourish at any point, in any place – a fascism that would extol traditional values, rally against diversity and cultural critics, believe in the value of action above thought and encourage a distrust of intellectuals or experts – a fascism built on frustration and machismo. The requirement of this formless fascism would – above all else – be to remain in an endless state of conflict, a fight against a foe who must always be portrayed as impossibly strong and laughably weak
  • 2016 has presented us with a world in which our reality is being wilfully manipulated. Fake news, divisive algorithms, misleading social media campaigns.
  • The same voices moved into other geek communities, especially comics, where Marvel and DC were criticised for progressive storylines and decisions. They moved into science fiction with the controversy over the Hugo awards. They moved into cinema with the revolting kickback against the all-female Ghostbusters reboot.
  • Perhaps the true lesson of Gamergate was that the media is culturally unequipped to deal with the forces actively driving these online movements. The situation was horrifying enough two years ago, it is many times more dangerous now.
Javier E

What Happened Before the Big Bang? The New Philosophy of Cosmology - Ross Andersen - Te... - 1 views

  • This question of accounting for what we call the "big bang state" -- the search for a physical explanation of it -- is probably the most important question within the philosophy of cosmology, and there are a couple different lines of thought about it.
  • One that's becoming more and more prevalent in the physics community is the idea that the big bang state itself arose out of some previous condition, and that therefore there might be an explanation of it in terms of the previously existing dynamics by which it came about
  • The problem is that quantum mechanics was developed as a mathematical tool. Physicists understood how to use it as a tool for making predictions, but without an agreement or understanding about what it was telling us about the physical world. And that's very clear when you look at any of the foundational discussions. This is what Einstein was upset about; this is what Schrodinger was upset about. Quantum mechanics was merely a calculational technique that was not well understood as a physical theory. Bohr and Heisenberg tried to argue that asking for a clear physical theory was something you shouldn't do anymore. That it was something outmoded. And they were wrong, Bohr and Heisenberg were wrong about that. But the effect of it was to shut down perfectly legitimate physics questions within the physics community for about half a century. And now we're coming out of that
  • One common strategy for thinking about this is to suggest that what we used to call the whole universe is just a small part of everything there is, and that we live in a kind of bubble universe, a small region of something much larger
  • Newton realized there had to be some force holding the moon in its orbit around the earth, to keep it from wandering off, and he knew also there was a force that was pulling the apple down to the earth. And so what suddenly struck him was that those could be one and the same thing, the same force
  • That was a physical discovery, a physical discovery of momentous importance, as important as anything you could ever imagine because it knit together the terrestrial realm and the celestial realm into one common physical picture. It was also a philosophical discovery in the sense that philosophy is interested in the fundamental natures of things.
  • There are other ideas, for instance that maybe there might be special sorts of laws, or special sorts of explanatory principles, that would apply uniquely to the initial state of the universe.
  • The basic philosophical question, going back to Plato, is "What is x?" What is virtue? What is justice? What is matter? What is time? You can ask that about dark energy - what is it? And it's a perfectly good question.
  • right now there are just way too many freely adjustable parameters in physics. Everybody agrees about that. There seem to be many things we call constants of nature that you could imagine setting at different values, and most physicists think there shouldn't be that many, that many of them are related to one another. Physicists think that at the end of the day there should be one complete equation to describe all physics, because any two physical systems interact and physics has to tell them what to do. And physicists generally like to have only a few constants, or parameters of nature. This is what Einstein meant when he famously said he wanted to understand what kind of choices God had --using his metaphor-- how free his choices were in creating the universe, which is just asking how many freely adjustable parameters there are. Physicists tend to prefer theories that reduce that number
  • You have others saying that time is just an illusion, that there isn't really a direction of time, and so forth. I myself think that all of the reasons that lead people to say things like that have very little merit, and that people have just been misled, largely by mistaking the mathematics they use to describe reality for reality itself. If you think that mathematical objects are not in time, and mathematical objects don't change -- which is perfectly true -- and then you're always using mathematical objects to describe the world, you could easily fall into the idea that the world itself doesn't change, because your representations of it don't.
  • physicists for almost a hundred years have been dissuaded from trying to think about fundamental questions. I think most physicists would quite rightly say "I don't have the tools to answer a question like 'what is time?' - I have the tools to solve a differential equation." The asking of fundamental physical questions is just not part of the training of a physicist anymore.
  • The question remains as to how often, after life evolves, you'll have intelligent life capable of making technology. What people haven't seemed to notice is that on earth, of all the billions of species that have evolved, only one has developed intelligence to the level of producing technology. Which means that kind of intelligence is really not very useful. It's not actually, in the general case, of much evolutionary value. We tend to think, because we love to think of ourselves, human beings, as the top of the evolutionary ladder, that the intelligence we have, that makes us human beings, is the thing that all of evolution is striving toward. But what we know is that that's not true. Obviously it doesn't matter that much if you're a beetle, that you be really smart. If it were, evolution would have produced much more intelligent beetles. We have no empirical data to suggest that there's a high probability that evolution on another planet would lead to technological intelligence.
Javier E

Coursera Plans to Announce University Partners for Online Classes - NYTimes.com - 0 views

  • John Doerr, a Kleiner investment partner, said via e-mail that he saw a clear business model: “Yes. Even with free courses. From a community of millions of learners some should ‘opt in’ for valuable, premium services. Those revenues should fund investment in tools, technology and royalties to faculty and universities.”
  • He said he had previously been involved with Stanford’s effort to put academic lectures online for viewing. But he noted that there was evidence that the newer interactive systems provided much more effective learning experiences.
  • Coursera and Udacity are not alone in the rush to offer mostly free online educational alternatives. Start-up companies like Minerva and Udemy, and, separately, the Massachusetts Institute of Technology, have recently announced similar platforms.
  • Unlike previous video lectures, which offered a “static” learning model, the Coursera system breaks lectures into segments as short as 10 minutes and offers quick online quizzes as part of each segment.
  • Where essays are required, especially in the humanities and social sciences, the system relies on the students themselves to grade their fellow students’ work, in effect turning them into teaching assistants.
  • The Coursera system also offers an online feature that allows students to get support from a global student community. Dr. Ng said an early test of the system found that questions were typically answered within 22 minutes.
  • Dr. Koller said the educational approach was similar to that of the “flipped classroom,” pioneered by the Khan Academy, a creation of the educator Salman Khan. Students watch lectures at home and then work on problem-solving or “homework” in the classroom, either one-on-one with the teacher or in small groups.
Javier E

The Flight From Conversation - NYTimes.com - 0 views

  • we have sacrificed conversation for mere connection.
  • the little devices most of us carry around are so powerful that they change not only what we do, but also who we are.
  • A businessman laments that he no longer has colleagues at work. He doesn’t stop by to talk; he doesn’t call. He says that he doesn’t want to interrupt them. He says they’re “too busy on their e-mail.”
  • We want to customize our lives. We want to move in and out of where we are because the thing we value most is control over where we focus our attention. We have gotten used to the idea of being in a tribe of one, loyal to our own party.
  • We are tempted to think that our little “sips” of online connection add up to a big gulp of real conversation. But they don’t.
  • “Someday, someday, but certainly not now, I’d like to learn how to have a conversation.”
  • We can’t get enough of one another if we can use technology to keep one another at distances we can control: not too close, not too far, just right. I think of it as a Goldilocks effect. Texting and e-mail and posting let us present the self we want to be. This means we can edit. And if we wish to, we can delete. Or retouch: the voice, the flesh, the face, the body. Not too much, not too little — just right.
  • Human relationships are rich; they’re messy and demanding. We have learned the habit of cleaning them up with technology.
  • I have often heard the sentiment “No one is listening to me.” I believe this feeling helps explain why it is so appealing to have a Facebook page or a Twitter feed — each provides so many automatic listeners. And it helps explain why — against all reason — so many of us are willing to talk to machines that seem to care about us. Researchers around the world are busy inventing sociable robots, designed to be companions to the elderly, to children, to all of us.
  • Connecting in sips may work for gathering discrete bits of information or for saying, “I am thinking about you.” Or even for saying, “I love you.” But connecting in sips doesn’t work as well when it comes to understanding and knowing one another. In conversation we tend to one another.
  • We can attend to tone and nuance. In conversation, we are called upon to see things from another’s point of view.
  • I’m the one who doesn’t want to be interrupted. I think I should. But I’d rather just do things on my BlackBerry.
  • And we use conversation with others to learn to converse with ourselves. So our flight from conversation can mean diminished chances to learn skills of self-reflection
  • we have little motivation to say something truly self-reflective. Self-reflection in conversation requires trust. It’s hard to do anything with 3,000 Facebook friends except connect.
  • we seem almost willing to dispense with people altogether. Serious people muse about the future of computer programs as psychiatrists. A high school sophomore confides to me that he wishes he could talk to an artificial intelligence program instead of his dad about dating; he says the A.I. would have so much more in its database. Indeed, many people tell me they hope that as Siri, the digital assistant on Apple’s iPhone, becomes more advanced, “she” will be more and more like a best friend — one who will listen when others won’t.
  • FACE-TO-FACE conversation unfolds slowly. It teaches patience. When we communicate on our digital devices, we learn different habits. As we ramp up the volume and velocity of online connections, we start to expect faster answers. To get these, we ask one another simpler questions; we dumb down our communications, even on the most important matters.
  • WE expect more from technology and less from one another and seem increasingly drawn to technologies that provide the illusion of companionship without the demands of relationship. Always-on/always-on-you devices provide three powerful fantasies: that we will always be heard; that we can put our attention wherever we want it to be; and that we never have to be alone. Indeed our new devices have turned being alone into a problem that can be solved.
  • When people are alone, even for a few moments, they fidget and reach for a device. Here connection works like a symptom, not a cure, and our constant, reflexive impulse to connect shapes a new way of being.
  • Think of it as “I share, therefore I am.” We use technology to define ourselves by sharing our thoughts and feelings as we’re having them. We used to think, “I have a feeling; I want to make a call.” Now our impulse is, “I want to have a feeling; I need to send a text.”
  • Lacking the capacity for solitude, we turn to other people but don’t experience them as they are. It is as though we use them, need them as spare parts to support our increasingly fragile selves.
  • If we are unable to be alone, we are far more likely to be lonely. If we don’t teach our children to be alone, they will know only how to be lonely.
  • I am a partisan for conversation. To make room for it, I see some first, deliberate steps. At home, we can create sacred spaces: the kitchen, the dining room. We can make our cars “device-free zones.”
Javier E

Christine Rosen: The Machine And The Ghost | The New Republic - 0 views

  • Ultimately, the goal of creators of Ambient Intelligence and persuasive technologies and the Internet of Things is not merely to offer context-aware, adaptive, personalized responses in real time, but to divine future needs. As one contributor to The New Everyday noted, eventually these technologies will “anticipate your desires without conscious mediation.” 
  • The challenge for ethicists such as Verbeek is whether a society composed of “smart” cities like Songdo might also bring an increase in moral laziness and a decline in individual freedom. Freedom is a hollow promise in the absence of agency and choice. 
  • these technologies also undermine a crucial (albeit often maligned) human quality: self-deception. Self-deception is inefficient. It causes problems. It makes things messy—which is why our technologists would like us to replace it with the seemingly greater honesty of data that, once processed, promise to know us better than we know ourselves. But being human is a messy business; and exercising judgment and self-control, and learning the complicated social norms that signal acceptable behavior, are the very things that make us human.
  • in a broader sense, as the case of genetic testing has shown, the right not to know some things (like the right to forget foolish, youthful behavior that is now permanently archived on the Internet) is as crucially important (if not more so) in our age as the voracious pursuit of information and transparency. 
  • Merely because something is possible, is it also desirable? And if it is possible, must we immediately accommodate ourselves to it? In The Forlorn Demon, Allen Tate noted, “We no longer ask, ‘Is it right?’ We ask: ‘Does it work?’” In our contemporary engagement with technology, we would do well to spend more time with the first question, even as we live ever more mediated lives relentlessly pursuing an answer to the second. 
Javier E

Barry Schwartz: 'Human nature' is often a product of nurture (Wired UK) - 0 views

  • there is another kind of technology produced by science that has just as big an effect on us as thing technology. We might call it idea technology. In addition to creating things, science creates concepts, ways of understanding the world that have an enormous influence on how we think and act.
  • idea technology can have profound effects on people even if the ideas are false. Let's call idea technology based on false ideas "ideology".
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ... - 0 views

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • Herein lies the conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at.
  • People have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • Because a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning (a minimal sketch of this handoff pattern appears after this list).
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • Most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • For some 4,000 years, Inuit hunters have ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it.
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
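To make the irregular-handoff idea above concrete, here is a minimal sketch in Python. It is an illustration only, under the assumption of a simple step-based control loop; the class name, step counts, and thresholds are all invented, not taken from any real autopilot.

```python
import random

# Hypothetical sketch of the irregular-handoff pattern described above:
# the automation runs the task most of the time but yields control to the
# human at unpredictable intervals, so the operator stays engaged.
class AdaptiveAutopilot:
    def __init__(self, min_gap=5, max_gap=15):
        self.min_gap = min_gap   # shortest automated stretch, in control steps
        self.max_gap = max_gap   # longest automated stretch, in control steps
        self.steps_since_handoff = 0
        self.next_handoff = random.randint(min_gap, max_gap)

    def controller_for_next_step(self):
        """Return 'human' or 'computer' for the next control step."""
        self.steps_since_handoff += 1
        if self.steps_since_handoff >= self.next_handoff:
            # Draw a fresh random interval so the operator can never
            # predict when the next handoff will come.
            self.steps_since_handoff = 0
            self.next_handoff = random.randint(self.min_gap, self.max_gap)
            return "human"
        return "computer"

autopilot = AdaptiveAutopilot()
schedule = [autopilot.controller_for_next_step() for _ in range(30)]
print(schedule.count("human"), "handoffs in 30 steps")
```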
  •  
    Automation increases the efficiency and speed of tasks, but it decreases the individual's knowledge of a task and erodes a human's ability to learn.
qkirkpatrick

New test uses a single drop of blood to reveal entire history of viral infections | Sci... - 0 views

  • Researchers have developed a cheap and rapid test that reveals a person’s full history of viral infections from a single drop of blood.
  • The test allows doctors to read out a list of the viruses that have infected, or continue to infect, patients even when they have not caused any obvious symptoms. The technology means that GPs could screen patients for all of the viruses capable of infecting people.
  • When a droplet of blood from a patient is mixed with the modified viruses, any antibodies they have latch on to human virus proteins they recognise as invaders. The scientists then pull out the antibodies and identify the human viruses from the protein fragments they have stuck to (a toy version of this matching step appears after the list).
  • ...2 more annotations...
  • In a demonstration of the technology, the team analysed blood from 569 people in the US, South Africa, Thailand and Peru. The test found that, on average, people had been infected with 10 species of viruses, though at least two people in the trial had histories of 84 infections from different kinds of viruses.
  • The test could bring about major benefits for organ transplant patients. One problem that can follow transplant surgery is the unexpected reawakening of viruses that have lurked inactive in the patient or donor for years. These viruses can return in force when the patient’s immune system is suppressed with drugs to prevent them rejecting the organ. Standard tests often fail to pick up latent viruses before surgery, but the VirScan procedure could reveal their presence and alert doctors and patients to the danger.
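As a rough illustration of the matching step described above, here is a toy sketch (not the actual VirScan pipeline; the peptide sequences, library, and threshold are invented). The identification stage amounts to looking up recovered peptide fragments in a library that maps each fragment to its source virus:

```python
from collections import Counter

# Invented peptide -> virus library; the real assay displays tens of
# thousands of viral protein fragments on engineered phage.
PEPTIDE_LIBRARY = {
    "MDVNPTLLFL": "Influenza A",
    "GELKLAANCY": "Influenza A",
    "MSTNPKPQRK": "Hepatitis C",
    "MESLVPGFNE": "Rhinovirus",
}

def call_infections(bound_peptides, min_hits=2):
    """Count distinct library peptides pulled down per virus and report
    a virus only if it is hit by at least min_hits different fragments."""
    hits = Counter(PEPTIDE_LIBRARY[p] for p in set(bound_peptides)
                   if p in PEPTIDE_LIBRARY)
    return [virus for virus, n in hits.items() if n >= min_hits]

print(call_infections(["MDVNPTLLFL", "GELKLAANCY", "MESLVPGFNE"]))
# -> ['Influenza A']  (the single Rhinovirus hit falls below the threshold)
```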
  •  
    How can new technology revolutionize medicine and the curing of disease?
Javier E

Lying Adapts to New Technology - NYTimes.com - 0 views

  • We’ve always lied; new technologies are merely changing the ways and the reasons we lie. Witness the “butler lie.”
  • Of 5,396 texts examined, 10.7 percent were deceptive. Of those, 30 percent were butler lies, compared with less than 20 percent of lies sent by instant message (the arithmetic is worked out after this list).
  • Yet technology is already laying siege to the butler lie.
  • ...1 more annotation...
  • People actually lie more often by phone than by text, aware that lies are reproducible once spelled out and sent.
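Working through the figures quoted above (a quick sanity check using only the numbers as reported):

```python
texts_examined = 5396
deceptive = round(texts_examined * 0.107)  # 10.7% were deceptive
butler_lies = round(deceptive * 0.30)      # 30% of the deceptive texts
print(deceptive, butler_lies)              # 577 173
```

That is, roughly 577 of the texts were deceptive, and about 173 of those were butler lies.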
jongardner04

6 Bad Habits to Blame on Technology - InsideTech.com - 0 views

  •  
    I find it interesting how much technology influences our lives.
anonymous

Daily Report: The Internet Is Full of Mean People - The New York Times - 0 views

  • That the Internet is full of terrible things is not exactly a revelation, but a point worth noting.
  • Terrorist recruiting, flame wars, trolls, hackers and depictions of deviant behavior.
  • It’s out there.
  • ...3 more annotations...
  • But in the interest of balance, given all this criticism the Internet has faced lately, let’s list a few great (or at least harmless) things about the global network.
  • None of that, of course, even touches on the change-the-world technologies in medicine, commerce, communications, artificial intelligence, education and any number of fields that wouldn’t exist without the Internet.
  • So, Internet, you’ve got an ugly streak for sure. But maybe you’re getting a bum rap.
anonymous

Controversial Quantum Machine Tested by NASA and Google Shows Promise | MIT Technology ... - 0 views

  • Google says it has proof that a controversial machine it bought in 2013 really can use quantum physics to work through a type of math that’s crucial to artificial-intelligence software much faster than a conventional computer.
  • “It is a truly disruptive technology that could change how we do everything,” said Rupak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.
  • ...7 more annotations...
  • An alternative algorithm is known that could have let the conventional computer be more competitive, or even win, by exploiting what Neven called a “bug” in D-Wave’s design. Neven said the test his group staged is still important because that shortcut won’t be available to regular computers when they compete with future quantum annealers capable of working on larger amounts of data.
  • “For a specific, carefully crafted proof-of-concept problem we achieve a 100-million-fold speed-up,” said Neven.
  • D-Wave bills its machine as “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer.
  • Google is competing with D-Wave to make a quantum annealer that could do useful work.
  • Martinis is also working on quantum hardware that would not be limited to optimization problems, as annealers are (a classical sketch of that style of optimization appears after this list).
  • Government and university labs, Microsoft (see “Microsoft’s Quantum Mechanics”), and IBM (see “IBM Shows Off a Quantum Computing Chip”) are also working on that technology.
  • “it may be several years before this research makes a difference to Google products.”
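To give a feel for the kind of problem an annealer is built to minimize, here is a classical stand-in: simulated annealing on a toy Ising-style objective. This is a sketch under that assumption, not D-Wave's quantum algorithm, and the couplings are invented.

```python
import math
import random

def energy(spins, couplings):
    """Ising-style objective: sum of J_ij * s_i * s_j over coupled pairs."""
    return sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def anneal(n_spins, couplings, steps=10_000, t_start=5.0, t_end=0.01):
    """Metropolis search with a geometric cooling schedule."""
    spins = [random.choice([-1, 1]) for _ in range(n_spins)]
    e = energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # current temperature
        i = random.randrange(n_spins)
        spins[i] *= -1                      # propose flipping one spin
        e_new = energy(spins, couplings)
        if e_new > e and random.random() > math.exp((e - e_new) / t):
            spins[i] *= -1                  # reject the uphill move
        else:
            e = e_new                       # accept (always, if energy dropped)
    return spins, e

couplings = {(0, 1): 1.0, (1, 2): -1.0, (0, 2): 1.0}  # small three-spin instance
print(anneal(3, couplings))  # the ground-state energy here is -3
```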
qkirkpatrick

US science leaders to tackle ethics of gene-editing technology - BuenosAiresHerald.com - 1 views

  • The leading US scientific organization, responding to concerns expressed by scientists and ethicists, has launched an ambitious initiative to recommend guidelines for new genetic technology that has the potential to create "designer babies."
  • The technology, called CRISPR-Cas9, allows scientists to edit virtually any gene they target.
  • Although the embryos edited in a recent experiment by Chinese researchers were not viable and could not have developed into babies, the announcement ignited an outcry from scientists warning that such a step, which could alter human genomes for generations, was just a matter of time.
  • ...2 more annotations...
  • In response, the National Academy of Sciences (NAS) and its Institute of Medicine will convene an international summit this fall where researchers and other experts will “explore the scientific, ethical, and policy issues associated with human gene-editing research,” the academies said in a statement.
  • It is a step reminiscent of one in 1975, when NAS convened the Asilomar Conference. That led to guidelines and federal regulations of recombinant DNA, the gene-splicing technology that underlay the founding of Genentech and other biotech companies and revolutionized the production of many pharmaceuticals.
  •  
    The ethics of designing one's own baby.
Javier E

Is our world a simulation? Why some scientists say it's more likely than not | Technolo... - 3 views

  • Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence.
  • Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living In a Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.
  • If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness.”
  • ...14 more annotations...
  • At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.
  • “Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”
  • “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.”
  • If there are many more simulated minds than organic ones, then the chances of us being among the real minds start to look more and more unlikely. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?” (Bostrom’s counting argument is sketched after this list.)
  • Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said.
  • “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.
  • “In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics.”
  • Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption.”
  • That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.
  • “For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity, like a conscious player of a video game.”
  • How can the hypothesis be put to the test?
  • Scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark.
  • First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,”
  • It means we will soon have the same ability to create our own simulations. “We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”
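The counting argument above can be made precise. Roughly following the formula in Bostrom's 2003 paper (the notation here is a reconstruction, so treat it as a sketch):

```latex
% Let f_p     = fraction of civilizations that reach a "posthuman" stage, and
%     \bar{N} = average number of ancestor-simulations such a civilization runs.
% The fraction of all human-type observers who are simulated is then
f_{\text{sim}} \;=\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}
% Example: even with f_p = 0.01 and \bar{N} = 10^6, f_sim = 10^4/(10^4 + 1),
% which is about 0.9999: nearly all minds would be simulated ones.
```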