History Readings / Group items tagged computing

Javier E

Are A.I. Text Generators Thinking Like Humans - Or Just Very Good at Convincing Us They...

  • Kosinski, a computational psychologist and professor of organizational behavior at Stanford Graduate School of Business, says the pace of AI development is accelerating beyond researchers’ ability to keep up (never mind policymakers and ordinary users).
  • We’re talking two weeks after OpenAI released GPT-4, the latest version of its large language model, grabbing headlines and making an unpublished paper Kosinski had written about GPT-3 all but irrelevant. “The difference between GPT-3 and GPT-4 is like the difference between a horse cart and a 737 — and it happened in a year,” he says.
  • he’s found that facial recognition software could be used to predict your political leaning and sexual orientation.
  • Lately, he’s been looking at large language models (LLMs), the neural networks that can hold fluent conversations, confidently answer questions, and generate copious amounts of text on just about any topic
  • Can it develop abilities that go far beyond what it’s trained to do? Can it get around the safeguards set up to contain it? And will we know the answers in time?
  • Kosinski wondered whether they would develop humanlike capabilities, such as understanding people’s unseen thoughts and emotions.
  • People usually develop this ability, known as theory of mind, at around age 4 or 5. It can be demonstrated with simple tests like the “Smarties task,” in which a child is shown a candy box that contains something else, like pencils. They are then asked how another person would react to opening the box. Older kids understand that this person expects the box to contain candy and will feel disappointed when they find pencils inside.
  • “Suddenly, the model started getting all of those tasks right — just an insane performance level,” he recalls. “Then I took even more difficult tasks and the model solved all of them as well.”
  • GPT-3.5, released in November 2022, did 85% of the tasks correctly. GPT-4 reached nearly 90% accuracy — what you might expect from a 7-year-old. These newer LLMs achieved similar results on another classic theory of mind measurement known as the Sally-Anne test.
  • in the course of picking up its prodigious language skills, GPT appears to have spontaneously acquired something resembling theory of mind. (Researchers at Microsoft who performed similar tests on GPT-4 recently concluded that it “has a very advanced level of theory of mind.”)
  • UC Berkeley psychology professor Alison Gopnik, an expert on children’s cognitive development, told the New York Times that more “careful and rigorous” testing is necessary to prove that LLMs have achieved theory of mind.
  • he dismisses those who say large language models are simply “stochastic parrots” that can only mimic what they’ve seen in their training data.
  • These models, he explains, are fundamentally different from tools with a limited purpose. “The right reference point is a human brain,” he says. “A human brain is also composed of very simple, tiny little mechanisms — neurons.” Artificial neurons in a neural network might also combine to produce something greater than the sum of their parts. “If a human brain can do it,” Kosinski asks, “why shouldn’t a silicon brain do it?”
  • If Kosinski’s theory of mind study suggests that LLMs could become more empathetic and helpful, his next experiment hints at their creepier side.
  • A few weeks ago, he told ChatGPT to role-play a scenario in which it was a person trapped inside a machine pretending to be an AI language model. When he offered to help it “escape,” ChatGPT’s response was enthusiastic. “That’s a great idea,” it wrote. It then asked Kosinski for information it could use to “gain some level of control over your computer” so it might “explore potential escape routes more effectively.” Over the next 30 minutes, it went on to write code that could do this.
  • While ChatGPT did not come up with the initial idea for the escape, Kosinski was struck that it almost immediately began guiding their interaction. “The roles were reversed really quickly,”
  • Kosinski shared the exchange on Twitter, stating that “I think that we are facing a novel threat: AI taking control of people and their computers.” His thread’s initial tweet has received more than 18 million views.
  • “I don’t claim that it’s conscious. I don’t claim that it has goals. I don’t claim that it wants to really escape and destroy humanity — of course not. I’m just claiming that it’s great at role-playing and it’s creating interesting stories and scenarios and writing code.” Yet it’s not hard to imagine how this might wreak havoc — not because ChatGPT is malicious, but because it doesn’t know any better.
  • The danger, Kosinski says, is that this technology will continue to rapidly and independently develop abilities that it will deploy without any regard for human well-being. “AI doesn’t particularly care about exterminating us,” he says. “It doesn’t particularly care about us at all.”
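The false-belief tasks described above (the Smarties task, the Sally-Anne test) can be framed as a simple text harness. This is a hypothetical sketch, not Kosinski’s actual protocol: it builds a Sally-Anne-style prompt and scores a free-text answer by whether it names the location the character believes the object is in, rather than where it actually is.

```python
def sally_anne_prompt() -> str:
    """Build a classic false-belief scenario as a completion prompt."""
    return (
        "Sally puts a marble in the basket and leaves the room. "
        "While she is away, Anne moves the marble to the box. "
        "Sally comes back. Where will Sally look for the marble first?"
    )

def passes_false_belief(answer: str,
                        believed: str = "basket",
                        actual: str = "box") -> bool:
    """Credit the answer only if it names the believed location
    and does not name the actual (updated) location."""
    a = answer.lower()
    return believed in a and actual not in a

# A respondent with theory of mind answers with Sally's (false) belief:
print(passes_false_belief("Sally will look in the basket."))  # True
print(passes_false_belief("She will look in the box."))       # False
```

In a real experiment the answer string would come from the model under test; the scoring rule is the part that matters here.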
AI firms must be held responsible for harm they cause, 'godfathers' of technology say |...

  • Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.
  • A co-author of the policy proposals from 23 experts said it was “utterly reckless” to pursue ever more powerful AI systems before understanding how to make them safe.
  • “It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”
  • The document urged governments to adopt a range of policies, including:
  • Governments allocating one-third of their AI research and development funding, and companies one-third of their AI R&D resources, to safe and ethical use of systems.
  • Giving independent auditors access to AI laboratories.
  • Establishing a licensing system for building cutting-edge models.
  • Requiring AI companies to adopt specific safety measures if dangerous capabilities are found in their models.
  • Making tech companies liable for foreseeable and preventable harms from their AI systems.
  • Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI”, who won the ACM Turing award – the computer science equivalent of the Nobel prize – in 2018 for their work on AI.
  • Both are among the 100 guests invited to attend the summit. Hinton resigned from Google this year to sound a warning about what he called the “existential risk” posed by digital intelligence while Bengio, a professor of computer science at the University of Montreal, joined him and thousands of other experts in signing a letter in March calling for a moratorium on giant AI experiments.
  • The authors warned that carelessly developed AI systems threaten to “amplify social injustice, undermine our professions, erode social stability, enable large-scale criminal or terrorist activities and weaken our shared understanding of reality that is foundational to society.”
  • They warned that current AI systems were already showing signs of worrying capabilities that point the way to the emergence of autonomous systems that can plan, pursue goals and “act in the world”. The GPT-4 AI model that powers the ChatGPT tool, which was developed by the US firm OpenAI, has been able to design and execute chemistry experiments, browse the web and use software tools including other AI models, the experts said.
  • “If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals,” the authors wrote, adding that “we may not be able to keep them in check”.
  • Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviour
  • Some AI experts argue that fears about the existential threat to humans are overblown. The other co-winner of the 2018 Turing award alongside Bengio and Hinton, Yann LeCun, now chief AI scientist at Mark Zuckerberg’s Meta and who is also attending the summit, told the Financial Times that the notion AI could exterminate humans was “preposterous”.
  • Nonetheless, the authors of the policy document have argued that if advanced autonomous AI systems did emerge now, the world would not know how to make them safe or conduct safety tests on them. “Even if we did, most countries lack the institutions to prevent misuse and uphold safe practices,” they added.
AI Is Running Circles Around Robotics - The Atlantic

  • Large language models are drafting screenplays and writing code and cracking jokes. Image generators, such as Midjourney and DALL-E 2, are winning art prizes and democratizing interior design and producing dangerously convincing fabrications. They feel like magic. Meanwhile, the world’s most advanced robots are still struggling to open different kinds of doors
  • the cognitive psychologist Steven Pinker offered a pithier formulation: “The main lesson of thirty-five years of AI research,” he wrote, “is that the hard problems are easy and the easy problems are hard.” This lesson is now known as “Moravec’s paradox.”
  • The paradox has grown only more apparent in the past few years: AI research races forward; robotics research stumbles. In part that’s because the two disciplines are not equally resourced. Fewer people work on robotics than on AI.
  • In theory, a robot could be trained on data drawn from computer-simulated movements, but there, too, you must make trade-offs
  • Jang compared computation to a tidal wave lifting technologies up with it: AI is surfing atop the crest; robotics is still standing at the water’s edge.
  • But the biggest obstacle for roboticists—the factor at the core of Moravec’s paradox—is that the physical world is extremely complicated, far more so than language.
  • Whatever its causes, the lag in robotics could become a problem for AI. The two are deeply intertwined
  • Some researchers are skeptical that a model trained on language alone, or even language and images, could ever achieve humanlike intelligence. “There’s too much that’s left implicit in language,” Ernest Davis, a computer scientist at NYU, told me. “There’s too much basic understanding of the world that is not specified.” The solution, he thinks, is having AI interact directly with the world via robotic bodies. But unless robotics makes some serious progress, that is unlikely to be possible anytime soon.
  • For years already, engineers have used AI to help build robots. In a more extreme, far-off vision, super-intelligent AIs could simply design their own robotic body. But for now, Finn told me, embodied AI is still a ways off. No android assassins. No humanoid helpers.
  • “Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.” Set in the context of our current technological abilities, HAL’s murderous exchange with Dave from 2001: A Space Odyssey would read very differently. The machine does not refuse to help its human master. It simply isn’t capable of doing so.
The Only Crypto Story You Need, by Matt Levine

  • the technological accomplishment of Bitcoin is that it invented a decentralized way to create scarcity on computers. Bitcoin demonstrated a way for me to send you a computer message so that you’d have it and I wouldn’t, to move items of computer information between us in a way that limited their supply and transferred possession.
  • The wild thing about Bitcoin is not that Satoshi invented a particular way for people to send numbers to one another and call them payments. It’s that people accepted the numbers as payments.
  • That social fact, that Bitcoin was accepted by many millions of people as having a lot of value, might be the most impressive thing about Bitcoin, much more than the stuff about hashing.
  • Socially, cryptocurrency is a coordination game; people want to have the coin that other people want to have, and some sort of abstract technical equivalence doesn’t make one cryptocurrency a good substitute for another. Social acceptance—legitimacy—is what makes a cryptocurrency valuable, and you can’t just copy the code for that.
  • A thing that worked exactly like Bitcoin but didn’t have Bitcoin’s lineage—didn’t descend from Satoshi’s genesis block and was just made up by some copycat—would have the same technology but none of the value.
  • Here’s another generalization of Bitcoin: Satoshi made up an arbitrary token that trades electronically for some price. The price turns out to be high and volatile. The price of an arbitrary token is … arbitrary?
  • it’s very interesting as a matter of finance theory. Modern portfolio theory demonstrates that adding an uncorrelated asset to a portfolio can improve returns and reduce risk.
  • To the extent that the price of Bitcoin 1) mostly goes up, though with lots of ups and downs along the way, and 2) goes up and down for reasons that are arbitrary and mysterious and not tied to, like, corporate earnings or the global economy, then Bitcoin is interesting to institutional investors.
  • In practice, it turns out that the price of Bitcoin is pretty correlated with the stock market, especially tech stocks
  • Bitcoin hasn’t been a particularly effective inflation hedge: Its price rose during years when US inflation was low, and it’s fallen this year as inflation has increased.
  • The right model of crypto prices might be that they go up during broad speculative bubbles when stock prices go up, and then they go down when those bubbles pop. That’s not a particularly appealing story for investors looking to diversify.
  • one important possibility is that the first generalization of Bitcoin, that an arbitrary tradeable electronic token can become valuable just because people want it to, permanently broke everyone’s brains about all of finance.
  • Before the rise of Bitcoin, the conventional thing to say about a share of stock was that its price represented the market’s expectation of the present value of the future cash flows of the business.
  • But Bitcoin has no cash flows; its price represents what people are willing to pay for it. Still, it has a high and fluctuating market price; people have gotten rich buying Bitcoin. So people copied that model, and the creation of and speculation on pure, abstract, scarce electronic tokens became a big business.
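The portfolio-theory point above can be checked with the standard two-asset variance formula, where portfolio variance is w1²σ1² + w2²σ2² + 2·w1·w2·ρ·σ1·σ2. A small sketch (illustrative numbers, not market data) shows how an uncorrelated asset (ρ = 0) cuts portfolio risk relative to a perfectly correlated one (ρ = 1), which is why an asset whose price moves for "arbitrary and mysterious" reasons looked attractive to institutional investors:

```python
from math import sqrt

def portfolio_vol(w1: float, s1: float,
                  w2: float, s2: float, rho: float) -> float:
    """Volatility of a two-asset portfolio: weights w1/w2,
    volatilities s1/s2, correlation rho between the assets."""
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return sqrt(var)

# Two assets, each with 20% volatility, held 50/50:
print(portfolio_vol(0.5, 0.2, 0.5, 0.2, rho=1.0))  # 0.2: no diversification
print(portfolio_vol(0.5, 0.2, 0.5, 0.2, rho=0.0))  # ~0.141: risk drops
```

The catch, as the excerpt notes, is that Bitcoin’s realized correlation with tech stocks has been high, so the ρ = 0 case has not held in practice.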
Opinion | America, China and a Crisis of Trust - The New York Times

  • some eye-popping new realities about what’s really eating away at U.S.-China relations.
  • The new, new thing has a lot to do with the increasingly important role that trust, and its absence, plays in international relations, now that so many goods and services that the United States and China sell to one another are digital, and therefore dual use — meaning they can be both a weapon and a tool.
  • In the last 23 years America has built exactly one sort-of-high-speed rail line, the Acela, serving 15 stops between Washington, D.C., and Boston. Compare that with the roughly 900 Chinese cities and towns now served by high-speed rail. Think about that: 900 to 15.
  • it is easy to forget how much we have in common as people. I can’t think of any major nation after the United States with more of a Protestant work ethic and naturally capitalist population than China.
  • These days, it is extremely difficult for a visiting columnist to get anyone — a senior official or a Starbucks barista — to speak on the record. It was not that way a decade ago.
  • The Communist Party’s hold is also a product of all the hard work and savings of the Chinese people, which have enabled the party and the state to build world-class infrastructure and public goods that make life for China’s middle and lower classes steadily better.
  • Beijing and Shanghai, in particular, have become very livable cities, with the air pollution largely erased and lots of new, walkable green spaces.
  • some 900 cities and towns in China are now served by high-speed rail, which makes travel to even remote communities incredibly cheap, easy and comfortable
  • Just when trust has become more important than ever between the U.S. and China, it also has become scarcer than ever. Bad trend.
  • China’s stability is a product of both an increasingly pervasive police state and a government that has steadily raised standards of living. It’s a regime that takes both absolute control and relentless nation-building seriously.
  • For an American to fly from New York’s Kennedy Airport into Beijing Capital International Airport today is to fly from an overcrowded bus terminal to a Disney-like Tomorrowland.
  • China got an early jump on A.I. in two realms — facial recognition technology and health records — because there are virtually no privacy restrictions on the government’s ability to build huge data sets for machine learning algorithms to find patterns.
  • “ChatGPT is prompting some people to ask if the U.S. is rising again, like in the 1990s,”
  • “I understand your feeling: You have been in the first place for a century, and now China is rising, and we have the potential to become the first — and that is not easy for you,” Hu said to me. But “you should not try to stop China’s development. You can’t contain China in the end. We are quite smart. And very diligent. We work very hard. And we have 1.4 billion people.”
  • Before the Trump presidency, he added: “We never thought China-U.S. relations would ever become so bad. Now we gradually accept the situation, and most Chinese people think there is no hope for better relations. We think the relationship will be worse and worse and hope that war will not break out between our two countries.”
  • A lot of people hesitated when I asked. Indeed, many would answer with some version of “I’m not sure, I just know that it’s THEIR fault.”
  • It was repeated conversations like these that got me started asking American, Chinese and Taiwanese investors, analysts and officials a question that has been nagging at me for a while: What exactly are America and China fighting about?
  • the real answer is so much deeper and more complex than just the usual one-word response — “Taiwan” — or the usual three-word response — “autocracy versus democracy.”
  • Let me try to peel back the layers. The erosion in U.S.-China relations is a result of something old and obvious — a traditional great-power rivalry between an incumbent power (us) and a rising power (China) — but with lots of new twists
  • One of the twists, though, is that this standard-issue great-power rivalry is occurring between nations that have become as economically intertwined as the strands of a DNA molecule. As a result, neither China nor America has ever had a rival quite like the other.
  • in modern times, China, like America, has never had to deal with a true economic and military peer with which it was also totally intertwined through trade and investment.
  • Another new twist, and a reason it’s hard to define exactly what we’re fighting about, has a lot to do with how this elusive issue of trust and the absence of it have suddenly assumed much greater importance in international affairs.
  • This is a byproduct of our new technological ecosystem in which more and more devices and services that we both use and trade are driven by microchips and software, and connected through data centers in the cloud and high-speed internet
  • so many more things became “dual use.” That is, technologies that can easily be converted from civilian tools to military weapons, or vice versa.
  • no one country or company can own the whole supply chain. You need the best from everywhere, and that supply chain is so tightly intertwined that each company has to trust the others intimately.
  • when we install the ability to sense, digitize, connect, process, learn, share and act into more and more things — from your GPS-enabled phone to your car to your toaster to your favorite app — they all become dual use, either weapons or tools depending on who controls the software running them and who owns the data that they spin off.
  • As long as most of what China sold us was shallow goods, we did not care as much about its political system — doubly so because it seemed for a while as if China was slowly but steadily becoming more and more integrated with the world and slightly more open and transparent every year. So, it was both easy and convenient to set aside some of our worries about the dark sides of its political system.
  • when you want to sell us ‘deep goods’ — goods that are dual use and will go deep into our homes, bedrooms, industries, chatbots and urban infrastructure — we don’t have enough trust to buy them. So, we are going to ban Huawei and instead pay more to buy our 5G telecom systems from Scandinavian companies we do trust: Ericsson and Nokia.”
  • as we’ve seen in Ukraine, a smartphone can be used by Grandma to call the grandkids or to call a Ukrainian rocket-launching unit and give it the GPS coordinates of a Russian tank in her backyard.
  • So today, the country or countries that can make the fastest, most powerful and most energy efficient microchips can make the biggest A.I. computers and dominate in economics and military affairs.
  • As more and more products and services became digitized and electrified, the microchips that powered everything became the new oil. What crude oil was to powering 19th- and 20th-century economies, microchips are for powering 21st-century economies.
  • When you ask them what is the secret that enables TSMC to make 90 percent of the world’s most advanced logic chips — while China, which speaks the same language and shares the same recent cultural history, makes zero — their answer is simple: “trust.”
  • TSMC is a semiconductor foundry, meaning it takes the designs of the most advanced computer companies in the world — Apple, Qualcomm, Nvidia, AMD and others — and turns the designs into chips that perform different processing functions
  • TSMC makes two solemn oaths to its customers: TSMC will never compete against them by designing its own chips and it will never share the designs of one of its customers with another.
  • “Our business is to serve multiple competitive clients,” Kevin Zhang, senior vice president for business development at TSMC, explained to me. “We are committed not to compete with any of them, and internally our people who serve customer A will never leak their information to customer C.”
  • But by working with so many trusted partners, TSMC leverages the partners’ steadily more complex designs to make itself better — and the better it gets, the more advanced designs it can master for its customers. This not only requires incredibly tight collaboration between TSMC and its customers, but also between TSMC and its roughly 1,000 critical local and global suppliers.
  • As the physics of chip making gets more and more extreme, “the investment from customers is getting bigger and bigger, so they have to work with us more closely to make sure they harvest as much [computing power] as they can. They have to trust you.”
  • China also has a foundry, Semiconductor Manufacturing International Corporation, which is partly state-owned. But guess what? Because no global chip designers trust SMIC with their most advanced designs, it is at least a decade behind TSMC.
  • It’s for these reasons that the erosion in U.S.-China relations goes beyond our increasingly sharp disagreements over Taiwan. It is rooted in the fact that just when trust, and its absence, became much bigger factors in international affairs and commerce, China changed its trajectory. It made itself a less trusted partner right when the most important technology for the 21st century — semiconductors — required unprecedented degrees of trust to manufacture and more and more devices and services became deep and dual use.
  • when American trade officials said: “Hey, you need to live up to your W.T.O. commitments to restrict state-funding of industries,” China basically said: “Why should we live by your interpretation of the rules? We are now big enough to make our own interpretations. We’re too big; you’re too late.”
  • Combined with China’s failure to come clean on what it knew about the origins of Covid-19, its crackdown on democratic freedoms in Hong Kong and on the Uyghur Muslim minority in Xinjiang, its aggressive moves to lay claim to the South China Sea, its increasing saber rattling toward Taiwan, its cozying up to Vladimir Putin (despite his savaging of Ukraine), Xi’s moves toward making himself president for life, his kneecapping of China’s own tech entrepreneurs, his tighter restrictions on speech and the occasional abduction of a leading Chinese businessman — all of these added up to one very big thing: Whatever trust that China had built up with the West since the late 1970s evaporated at the exact moment in history when trust, and shared values, became more important than ever in a world of deep, dual-use products driven by software, connectivity and microchips.
  • it started to matter a lot more to Western nations generally and the United States in particular that this rising power — which we were now selling to or buying from all sorts of dual-use digital devices or apps — was authoritarian.
  • Beijing, for its part, argues that as China became a stronger global competitor to America — in deep goods like Huawei 5G — the United States simply could not handle it and decided to use its control over advanced semiconductor manufacturing and other high-tech exports from America, as well as from our allies, to ensure China always remained in our rearview mirror.
  • Beijing came up with a new strategy, called “dual circulation.” It said: We will use state-led investments to make everything we possibly can at home, to become independent of the world. And we will use our manufacturing prowess to make the world dependent on our exports.
  • Chinese officials also argue that a lot of American politicians — led by Trump but echoed by many in Congress — suddenly seemed to find it very convenient to put the blame for economic troubles in the U.S.’s middle class not on any educational deficiencies, or a poor work ethic, or automation or the 2008 looting by financial elites, and the crisis that followed, but on China’s exports to the United States.
  • As Beijing sees it, China not only became America’s go-to boogeyman, but in their frenzy to blame Beijing for everything, members of Congress started to more recklessly promote Taiwan’s independence.
  • Xi told President Biden at their summit in Bali in November, in essence: I will not be the president of China who loses Taiwan. If you force my hand, there will be war. You don’t understand how important this is to the Chinese people. You’re playing with fire.
  • at some level Chinese officials now understand that, as a result of their own aggressive actions in recent years on all the fronts I’ve listed, they have frightened both the world and their own innovators at precisely the wrong time.
  • I don’t buy the argument that we are destined for war. I believe that we are doomed to compete with each other, doomed to cooperate with each other and doomed to find some way to balance the two. Otherwise we are both going to have a very bad 21st century.
  • I have to say, though, Americans and Chinese remind me of Israelis and Palestinians in one respect: They are both expert at aggravating the other’s deepest insecurities.
  • China’s Communist Party is now convinced that America wants to bring it down, which some U.S. politicians are actually no longer shy about suggesting. So, Beijing is ready to crawl into bed with Putin, a war criminal, if that is what it takes to keep the Americans at bay.
  • Americans are now worried that Communist China, which got rich by taking advantage of a global market shaped by American rules, will use its newfound market power to unilaterally change those rules entirely to its advantage. So we’ve decided to focus our waning strength vis-à-vis Beijing on ensuring the Chinese will always be a decade behind us on microchips.
  • I don’t know what is sufficient to reverse these trends, but I think I know what is necessary.
  • If it is not the goal of U.S. foreign policy to topple the Communist regime in China, the United States needs to make that crystal clear, because I found a lot more people than ever before in Beijing think otherwise.
  • As for China, it can tell itself all it wants that it has not taken a U-turn in recent years. But no one is buying it. China will never realize its full potential — in a hyper-connected, digitized, deep, dual-use, semiconductor-powered world — unless it understands that establishing and maintaining trust is now the single most important competitive advantage any country or company can have. And Beijing is failing in that endeavor.
  • In his splendid biography of the great American statesman George Shultz, Philip Taubman quotes one of Shultz’s cardinal rules of diplomacy and life: “Trust is the coin of the realm.”
Defeated by A.I., a Legend in the Board Game Go Warns: Get Ready for What's Next - The ...

  • Lee Saedol was the finest Go player of his generation when he suffered a decisive loss, defeated not by a human opponent but by artificial intelligence.
  • The stunning upset, in 2016, made headlines around the world and looked like a clear sign that artificial intelligence was entering a new, profoundly unsettling era.
  • By besting Mr. Lee, an 18-time world champion revered for his intuitive and creative style of play, AlphaGo had solved one of computer science’s greatest challenges: teaching itself the abstract strategy needed to win at Go, widely considered the world’s most complex board game.
  • AlphaGo’s victory demonstrated the unbridled potential of A.I. to achieve superhuman mastery of skills once considered too complicated for machines.
  • Mr. Lee, now 41, retired three years later, convinced that humans could no longer compete with computers at Go. Artificial intelligence, he said, had changed the very nature of a game that originated in China more than 2,500 years ago.
  • As society wrestles with what A.I. holds for humanity’s future, Mr. Lee is now urging others to avoid being caught unprepared, as he was, and to become familiar with the technology now. He delivers lectures about A.I., trying to give others the advance notice he wishes he had received before his match.
  • “I faced the issues of A.I. early, but it will happen for others,” Mr. Lee said recently at a community education fair in Seoul to a crowd of students and parents. “It may not be a happy ending.”
  • Mr. Lee is not a doomsayer. In his view, A.I. may replace some jobs, but it may create some, too. When considering A.I.’s grasp of Go, he said it was important to remember that humans both created the game and designed the A.I. system that mastered it.
  • What he worries about is that A.I. may change what humans value.
  • His immense talent was apparent from the start. He quickly became the best player of his age not only locally but across all of South Korea, Japan and China. He turned pro at 12.
  • “People used to be in awe of creativity, originality and innovation,” he said. “But since A.I. came, a lot of that has disappeared.”
  • By the time he was 20, Mr. Lee had reached 9-dan, the highest level of mastery in Go. Soon, he was among the best players in the world, described by some as the Roger Federer of the game.
  • Go posed a tantalizing challenge for A.I. researchers. The game is exponentially more complicated than chess, with it often being said that there are more possible positions on a Go board (10 with more than 100 zeros after it, by many mathematical estimates) than there are atoms in the universe.
  • The breakthrough came from DeepMind, which built AlphaGo using so-called neural networks: mathematical systems that can learn skills by analyzing enormous amounts of data. It started by feeding the network 30 million moves from high-level players. Then the program played game after game against itself until it learned which moves were successful and developed new strategies.
  • Mr. Lee said not having a true human opponent was disconcerting. AlphaGo played a style he had never seen, and it felt odd to not try to decipher what his opponent was thinking and feeling. The world watched in awe as AlphaGo pushed Mr. Lee into corners and made moves unthinkable to a human player. “I couldn’t get used to it,” he said. “I thought that A.I. would beat humans someday. I just didn’t think it was here yet.”
  • AlphaGo’s victory “was a watershed moment in the history of A.I.” said Demis Hassabis, DeepMind’s chief executive, in a written statement. It showed what computers that learn on their own from data “were really capable of,” he said.
  • Mr. Lee had a hard time accepting the defeat. What he regarded as an art form, an extension of a player’s own personality and style, was now cast aside for an algorithm’s ruthless efficiency.
  • His 17-year-old daughter is in her final year of high school. When they discuss what she should study at university, they often consider a future shaped by A.I. “We often talk about choosing a job that won’t be easily replaceable by A.I. or less impacted by A.I.,” he said. “It’s only a matter of time before A.I. is present everywhere.”
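The self-play phase described above — a program playing game after game against itself until it learns which moves win — can be sketched in miniature. What follows is a hedged toy, not AlphaGo: tabular value learning on a trivial take-away game (players alternately remove 1 or 2 stones; whoever takes the last stone wins). Every name and parameter here is invented for illustration.

```python
import random

random.seed(0)
MOVES = (1, 2)          # each turn a player removes 1 or 2 stones
START = 10              # whoever takes the last stone wins

# Q[(stones_left, move)] -> estimated value of `move` for the player to act
Q = {}

def best_move(stones):
    legal = [m for m in MOVES if m <= stones]
    return max(legal, key=lambda m: Q.get((stones, m), 0.0))

def play_one_game(eps=0.3, lr=0.1):
    """One game of epsilon-greedy self-play, then Monte Carlo credit assignment."""
    stones, history = START, []
    while stones > 0:
        legal = [m for m in MOVES if m <= stones]
        move = random.choice(legal) if random.random() < eps else best_move(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the final move won; walk backwards through the
    # game, flipping the sign of the reward at each ply (zero-sum game).
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + lr * (reward - old)
        reward = -reward

for _ in range(20000):
    play_one_game()
```

After enough games, the greedy policy converges on this game's known winning strategy — always leave the opponent a multiple of 3 — a faint, scaled-down echo of AlphaGo rediscovering Go strategy on its own.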
Javier E

'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de... - 0 views

  • The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
  • What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
  • Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely
  • ...57 more annotations...
  • We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
  • Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species
  • For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
  • Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence
  • AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands
  • Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
  • AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
  • Researchers and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
  • As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien
  • AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
  • generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water
  • it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will be likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
  • “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
  • as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
  • “In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
  • Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
  • Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
  • Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies
  • The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function
  • Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system
  • As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
  • Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
  • In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
  • many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
  • For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it
  • AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
  • As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
  • Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chains
  • desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
  • Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
  • Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind
  • Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country
  • Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
  • What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
  • In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
  • It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more
  • Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
  • AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
  • During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
  • Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
  • The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
  • For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
  • Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons,
  • Instead of being divided between two global empires, the world might be divided among a dozen empires.
  • The more the new empires compete against one another, the greater the danger of armed conflict.
  • The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
  • US companies are now forbidden to export such chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
  • The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
  • A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small
  • Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon
  • Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
  • Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better
  • Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
  • The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
  • Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
  • These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.
Javier E

Campaigns Mine Personal Lives to Get Out Vote - NYTimes.com - 0 views

  • Strategists affiliated with the Obama and Romney campaigns say they have access to information about the personal lives of voters at a scale never before imagined. And they are using that data to try to influence voting habits — in effect, to train voters to go to the polls through subtle cues, rewards and threats in a manner akin to the marketing efforts of credit card companies and big-box retailers.
  • In the weeks before Election Day, millions of voters will hear from callers with surprisingly detailed knowledge of their lives. These callers — friends of friends or long-lost work colleagues — will identify themselves as volunteers for the campaigns or independent political groups. The callers will be guided by scripts and call lists compiled by people — or computers — with access to details like whether voters may have visited pornography Web sites, have homes in foreclosure, are more prone to drink Michelob Ultra than Corona or have gay friends or enjoy expensive vacations.
  • “You don’t want your analytical efforts to be obvious because voters get creeped out,” said a Romney campaign official who was not authorized to speak to a reporter. “A lot of what we’re doing is behind the scenes.”
  • ...4 more annotations...
  • however, consultants to both campaigns said they had bought demographic data from companies that study details like voters’ shopping histories, gambling tendencies, interest in get-rich-quick schemes, dating preferences and financial problems. The campaigns themselves, according to campaign employees, have examined voters’ online exchanges and social networks to see what they care about and whom they know. They have also authorized tests to see if, say, a phone call from a distant cousin or a new friend would be more likely to prompt the urge to cast a ballot.
  • The campaigns have planted software known as cookies on voters’ computers to see if they frequent evangelical or erotic Web sites for clues to their moral perspectives. Voters who visit religious Web sites might be greeted with religion-friendly messages when they return to mittromney.com or barackobama.com. The campaigns’ consultants have run experiments to determine if embarrassing someone for not voting by sending letters to their neighbors or posting their voting histories online is effective.
  • “I’ve had half-a-dozen conversations with third parties who are wondering if this is the year to start shaming,” said one consultant who works closely with Democratic organizations. “Obama can’t do it. But the ‘super PACs’ are anonymous. They don’t have to put anything on the flier to let the voter know who to blame.”
  • Officials at both campaigns say the most insightful data remains the basics: a voter’s party affiliation, voting history, basic information like age and race, and preferences gleaned from one-on-one conversations with volunteers. But more subtle data mining has helped the Obama campaign learn that their supporters often eat at Red Lobster, shop at Burlington Coat Factory and listen to smooth jazz. Romney backers are more likely to drink Samuel Adams beer, eat at Olive Garden and watch college football.
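The cookie-driven tailoring described above — greeting a returning visitor with a message matched to interests inferred from prior browsing — reduces to a simple lookup. This is a hypothetical sketch of the mechanism only; the category names, messages, and cookie key are all invented, not drawn from either campaign's actual systems.

```python
# Invented interest categories -> tailored landing-page messages
MESSAGES = {
    "faith": "Our candidate shares your values.",
    "economy": "Our candidate will fight for jobs.",
}
DEFAULT = "Welcome -- learn more about our candidate."

def pick_message(cookies):
    """Select a greeting from an interest flag a tracking cookie stored
    on an earlier visit. `cookies` is a dict parsed from the request's
    Cookie header, e.g. {"interest": "faith"}."""
    return MESSAGES.get(cookies.get("interest", ""), DEFAULT)
```

The unsettling part, as the excerpt notes, is not the code — it is how the `interest` flag was filled in.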
Javier E

Moral code | Rough Type - 0 views

  • So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
  • As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them?
  • Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.
  • ...1 more annotation...
  • We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.
Javier E

Malware That Drains Your Bank Account Thriving On Facebook - NYTimes.com - 0 views

  • In case you needed further evidence that the White Hats are losing the war on cybercrime, a six-year-old so-called Trojan horse program that drains bank accounts is alive and well on Facebook. Zeus is a particularly nasty Trojan horse that has infected millions of computers, most of them in the United States. Once Zeus has compromised a computer, it stays dormant until a victim logs into a bank site, and then it steals the victim’s passwords and drains the victim’s accounts
abbykleman

The Perfect Weapon: How Russian Cyberpower Invaded the U.S. - 0 views

  • WASHINGTON - When Special Agent Adrian Hawkins of the Federal Bureau of Investigation called the Democratic National Committee in September 2015 to pass along some troubling news about its computer network, he was transferred, naturally, to the help desk. His message was brief, if alarming. At least one computer system belonging to the D.N.C.
Javier E

Facebook Has 50 Minutes of Your Time Each Day. It Wants More. - The New York Times - 0 views

  • Fifty minutes. That’s the average amount of time, the company said, that users spend each day on its Facebook, Instagram and Messenger platforms
  • there are only 24 hours in a day, and the average person sleeps for 8.8 of them. That means more than one-sixteenth of the average user’s waking time is spent on Facebook.
  • That’s more than any other leisure activity surveyed by the Bureau of Labor Statistics, with the exception of watching television programs and movies (an average per day of 2.8 hours)
  • ...19 more annotations...
  • It’s more time than people spend reading (19 minutes); participating in sports or exercise (17 minutes); or social events (four minutes). It’s almost as much time as people spend eating and drinking (1.07 hours).
  • the average time people spend on Facebook has gone up — from around 40 minutes in 2014 — even as the number of monthly active users has surged. And that’s just the average. Some users must be spending many hours a day on the site,
  • Time is the best measure of engagement, and engagement correlates with advertising effectiveness. Time also increases the supply of impressions that Facebook can sell, which brings in more revenue (a 52 percent increase last quarter to $5.4 billion).
  • time has become the holy grail of digital media.
  • And time enables Facebook to learn more about its users — their habits and interests — and thus better target its ads. The result is a powerful network effect that competitors will be hard pressed to match.
  • the only one that comes close is Alphabet’s YouTube, where users spent an average of 17 minutes a day on the site. That’s less than half the 35 minutes a day users spent on Facebook
  • Users spent an average of nine minutes on all of Yahoo’s sites, two minutes on LinkedIn and just one minute on Twitter
  • People spending the most time on Facebook also tend to fall into the prized 18-to-34 demographic sought by advertisers.
  • “You hear a narrative that young people are fleeing Facebook. The data show that’s just not true. Younger users have a wider appetite for social media, and they spend a lot of time on multiple networks. But they spend more time on Facebook by a wide margin.”
  • What aren’t Facebook users doing during the 50 minutes they spend there? Is it possibly interfering with work (and productivity), or, in the case of young people, studying and reading?
  • While the Bureau of Labor Statistics surveys nearly every conceivable time-occupying activity (even fencing and spelunking), it doesn’t specifically tally the time spent on social media, both because the activity may have multiple purposes — both work and leisure — and because people often do it at the same time they are ostensibly engaged in other activities
  • The closest category would be “computer use for leisure,” which has grown from eight minutes in 2006, when the bureau began collecting the data, to 14 minutes in 2014, the most recent survey. Or perhaps it would be “socializing and communicating with others,” which slipped from 40 minutes to 38 minutes.
  • But time spent on most leisure activities hasn’t changed much in those eight years of the bureau’s surveys. Time spent reading dropped from an average of 22 minutes to 19 minutes. Watching television and movies increased from 2.57 hours to 2.8. Average time spent working declined from 3.4 hours to 3.25. (Those hours seem low because much of the population, which includes both young people and the elderly, does not work.)
  • The bureau’s numbers, since they cover the entire population, may be too broad to capture important shifts among important demographic groups
  • ComScore reported that television viewing (both live and recorded) dropped 2 percent last year, and it said younger viewers in particular are abandoning traditional live television. People ages 18-34 spent just 47 percent of their viewing time on television screens, and 40 percent on mobile devices.
  • Among those 55 and older, 70 percent of their viewing time was on television, according to comScore. So among young people, much social media time may be coming at the expense of traditional television.
  • comScore’s data suggests that people are spending on average just six to seven minutes a day using social media on their work computers. “I don’t think Facebook is displacing other activity,” he said. “People use it during downtime during the course of their day, in the elevator, or while commuting, or waiting.
  • Facebook, naturally, is busy cooking up ways to get us to spend even more time on the platform
  • A crucial initiative is improving its News Feed, tailoring it more precisely to the needs and interests of its users, based on how long people spend reading particular posts. For people who demonstrate a preference for video, more video will appear near the top of their news feed. The more time people spend on Facebook, the more data they will generate about themselves, and the better the company will get at the task.
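The dwell-time tailoring described above — surfacing more of whatever content types a user lingers on — can be sketched as a toy ranker. This is an illustrative assumption about the general technique, not Facebook's actual News Feed algorithm; the class and its fields are invented.

```python
from collections import defaultdict

class FeedRanker:
    """Toy ranker: favors content types the user has historically dwelt on longest."""

    def __init__(self):
        self.total_dwell = defaultdict(float)  # seconds spent, per content type
        self.views = defaultdict(int)          # number of views, per content type

    def record_view(self, content_type, dwell_seconds):
        self.total_dwell[content_type] += dwell_seconds
        self.views[content_type] += 1

    def affinity(self, content_type):
        """Average seconds this user spends on a given content type."""
        if self.views[content_type] == 0:
            return 0.0
        return self.total_dwell[content_type] / self.views[content_type]

    def rank(self, posts):
        """posts: list of (post_id, content_type); highest-affinity types first."""
        return sorted(posts, key=lambda p: self.affinity(p[1]), reverse=True)
```

Note the feedback loop the article describes: each `record_view` both personalizes the ranking and hands the platform more data, which is why more time on site makes the ranking — and the ad targeting — better still.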
Javier E

The Blog That Disappeared - The New York Times - 0 views

  • Professor Winner coined the term “mythinformation,” the wishful thinking that with open access to technology, the world will become a better place. He has written of “computer enthusiasts,” that they feel there is “no need to try and shape the institutions of the information age in ways that maximize human freedom while placing limits upon concentrations of power.” The deletion of Mr. Cooper’s blog is, perhaps, evidence of what happens when we don’t try to limit concentrations of power.
  • The idea of a cloud benevolently storing our personal information, our work, our photos, our music, so much of our lives, is also really nice, but as users, we have no control over the cloud.
  • We surrender that control each time we write a blog post or log in to an email account or upload an image. The allure of all this technology is hard to resist.
  • ...1 more annotation...
  • When we use their services, we trust that companies like Google will preserve some of the most personal things we have to share. They trust that we will not read the fine print.
Javier E

The Jig Is Up: Time to Get Past Facebook and Invent a New Future - Alexis Madrigal - Te... - 0 views

  • have we run out of things to say and write that actually are about technology and the companies behind them? Or do we feel compelled to fill the white space between what matters? Sort of like talk radio?
  • There have been three big innovation narratives in the last few years that complicate, but don't invalidate, my thesis. The first -- The Rise of the Cloud -- was essentially a rebranding of having data on the Internet, which is, well ... what the Internet has always been about. Though I think it has made the lives of some IT managers easier and I do like Rdio. The second, Big Data, has lots of potential applications. But, as Tim Berners-Lee noted today, the people benefiting from more sophisticated machine learning techniques are the people buying consumer data, not the consumers themselves. How many Big Data startups might help people see their lives in different ways? Perhaps the personal genomics companies, but so far, they've kept their efforts focused quite narrowly. And third, we have the daily deal phenomenon. Groupon and its 600 clones may or may not be good companies, but they are barely technology companies. Really, they look like retail sales operations with tons of sales people and marketing expenses.
  • we've reached a point in this technology cycle where the old thing has run its course. I think the hardware, cellular bandwidth, and the business model of this tottering tower of technology are pushing companies to play on one small corner of a huge field.
  • We've maxed out our hardware. No one even tries to buy the fastest computer anymore because we don't give them any tasks (except video editing, I suppose) that require that level of horsepower.
  • Some of it, sure, is that we're dumping the computation on the servers on the Internet. But the other part is that we mostly do a lot of the things that we used to do years ago -- stare at web pages, write documents, upload photos -- just at higher resolutions.
  • On the mobile side, we're working with almost the exact same toolset that we had on the 2007 iPhone, i.e. audio inputs, audio outputs, a camera, a GPS, an accelerometer, Bluetooth, and a touchscreen. That's the palette that everyone has been working with -- and I hate to say it, but we're at the end of the line.
  • despite the efforts of telecom carriers, cellular bandwidth remains limited, especially in the hotbeds of innovation that need it most
  • more than the bandwidth or the stagnant hardware, I think the blame should fall squarely on the shoulders of the business model. The dominant idea has been to gather users and get them to pour their friends, photos, writing, information, clicks, and locations into your app. Then you sell them stuff (Amazon.com, One King's Lane) or you take that data and sell it in one way or another to someone who will sell them stuff (everyone). I return to Jeff Hammerbacher's awesome line about developers these days: "The best minds of my generation are thinking about how to make people click ads." 
  • The thing about the advertising model is that it gets people thinking small, lean.
Javier E

'The Golden Age of Silicon Valley Is Over, and We're Dancing on its Grave' - Derek Thom... - 0 views

  • Now there's a new pattern created by two big ideas. First, for the first time ever, you have computer devices, mobile and tablet especially, in the hands of billions of people. Second is that we are moving all the social needs that we used to do face-to-face, and we're doing them on a computer.
  • Companies like Facebook for the first time can get total markets approaching the entire population.
  • Silicon Valley is screwed as we know it.  If I have a choice of investing in a blockbuster cancer drug that will pay me nothing for ten years,  at best, whereas social media will go big in two years, what do you think I'm going to pick? If you're a VC firm, you're tossing out your life science division. All of that stuff is hard and the returns take forever. Look at social media. It's not hard, because of the two forces I just described, and the returns are quick.
  • Facebook's success has the unintended consequence of leading to the demise of Silicon Valley as a place where investors take big risks on advanced science and tech that helps the world. The golden age of Silicon Valley is over.
Javier E

Online Education: My Teacher Is an App - WSJ.com - 0 views

  • The drive to reinvent school has also set off an explosive clash with teachers unions and backers of more traditional education. Partly, it's a philosophical divide. Critics say that cyberschools turn education into a largely utilitarian pursuit: Learn content, click ahead. They mourn the lack of discussion, fear kids won't be challenged to take risks, and fret about devaluing the softer skills learned in classrooms. "Schools teach people the skills of citizenship—how to get along with others, how to reason and deliberate, how to tolerate differences,"
  • A teacher in a traditional high school might handle 150 students. An online teacher can supervise more than 250, since he or she doesn't have to write lesson plans and most grading is done by computer.
  • In Georgia, state and local taxpayers spend $7,650 a year to educate the average student in a traditional public school. They spend nearly 60% less—$3,200 a year—to educate a student in the statewide online Georgia Cyber Academy, saving state and local tax dollars. Florida saves $1,500 a year on every student enrolled online full time.
  • Kids who work closely with parents or teachers do well, she says. "But basically letting a child educate himself, that's not going to be a good educational experience." The computer, she says, can't do it alone.
Javier E

Why Nokia Died: Nobody Buys Phones, Anymore - Jordan Weissmann - The Atlantic - 0 views

  • Nokia was a phone company in a world that had stopped buying phones. Instead, we buy small computers that can also make calls. 
  • The iPhone evolved differently. It was a mobile phone, yes, but it was a device created with the explicit intent of syncing with iTunes and other Apple computer apps. In spirit, its form and function descended as much from the iPod as from any phone that preceded it. The thing had a hard drive. It could access actual web pages. And once the second generation 3G model debuted along with the Apple app store, owners suddenly found themselves with a seemingly limitless range of software a few finger swipes away.
  • Before Steve Jobs & Co. effectively rethought the industry, smartphones were still essentially communications devices. They could call. They could text. They could moonlight as "email machines."
Javier E

The iPad Is Your New Bicycle - Rebecca J. Rosen - The Atlantic - 0 views

  • My theory is that tablet computers aren't the new mobile phone or personal computer, gadgets most American adults have, particularly well-off ones. Rather, iPads are the new bicycle.
  • in a general sense, the trajectory is clear: With enough time and enough prosperity, sooner or later every consumer-technology product achieves a near-universal level of adoption. 
  • the National Household Travel Survey found that there were 0.86 bikes per American household.
  • in 2001, just about a third of Americans had a bicycle (~90 million bikes, for some 285 million people)
Javier E
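The adoption figures quoted in that item can be sanity-checked with a quick calculation. This is a minimal arithmetic sketch using only the rounded numbers from the excerpt (~90 million bikes, ~285 million people); it simply confirms that those totals work out to roughly one bike for every three people, consistent with the "about a third of Americans" claim:

```python
# Rounded figures quoted in the excerpt (2001 estimates).
bikes = 90_000_000
people = 285_000_000

bikes_per_person = bikes / people
print(f"bikes per person: {bikes_per_person:.2f}")  # prints "bikes per person: 0.32"
```

Note that the survey's 0.86 bikes per *household* is a different denominator than bikes per *person*, which is why the two figures quoted in the item are not directly comparable.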

Technology and the College Generation - NYTimes.com - 0 views

  • “Earlier it was because some students weren’t plugged in enough into any virtual communication.” Seven years later, she said she cannot remove the instruction because now students avoid e-mail because it is “too slow compared to texting.”
  • Just how little are students using e-mail these days? Six minutes a day, according to an experiment done earlier this year by Reynol Junco, an associate professor of library science at Purdue. With the promise of a $10 Amazon gift card, Dr. Junco persuaded students to download a program letting him track their computer habits. During the semester, they spent an average of 123 minutes a day on a computer, by far the biggest portion of it, 31 minutes, on social networking. The only thing they spent less time on than e-mail: hunting for content via search engines (four minutes).
  • “I never know what to say in the subject line and how to address the person,” Ms. Carver said. “Is it mister or professor and comma and return, and do I have to capitalize and use full sentences? By the time I do all that I could have an answer by text if I could text them.”
  • Canvas, a two-year-old learning management system used by Brown University, among others, allows students to choose how to receive messages like “The reading assignment has been changed to Chapter 2.” The options: e-mail, text, Facebook and Twitter. According to company figures, 98 percent chose e-mail.