
Javier E

The Coming Software Apocalypse - The Atlantic

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
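The bit-flip scenario Barr's team demonstrated can be made concrete with a small sketch. This is hypothetical illustration, not Toyota's actual code: it shows how a single flipped bit can transform a benign stored value, and one classic mitigation (storing a value alongside its complement) that detects such corruption.

```python
def flip_bit(value: int, bit: int) -> int:
    """Simulate a single-event upset: XOR one bit of a stored value."""
    return value ^ (1 << bit)

THROTTLE_CLOSED = 0  # hypothetical encoding: 0 means "throttle closed"
throttle_command = THROTTLE_CLOSED

# A single upset in bit 7 turns "closed" into a large open command.
corrupted = flip_bit(throttle_command, 7)
print(corrupted)  # 128 — one flipped bit yields a very different command

def checked_read(value: int, complement: int) -> int:
    """Mitigation sketch: store each byte with its bitwise complement and verify
    they still agree before trusting the value."""
    if value != (~complement & 0xFF):
        raise RuntimeError("memory corruption detected; enter fail-safe")
    return value
```

The point of the mitigation is that a flip in either copy breaks the agreement, so the fault is caught instead of silently acted on, which is the kind of redundancy the article suggests Toyota's fail-safe code lacked.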
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated at the top of his class in electrical engineering at the California Institute of Technology.
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
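The elevator diagram described above amounts to an explicit state-transition table. A minimal sketch (the state and event names are illustrative, not from any real model-based tool) shows why the rules become obvious: every legal move is enumerated, and anything not in the table is rejected.

```python
# The whiteboard diagram as data: (current state, event) -> next state.
# Just by reading the table you can see the only way to reach "moving"
# is from "door_closed" — the property the diagram makes visible.
TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "move"): "moving",
    ("moving", "stop"): "door_closed",
}

def step(state: str, event: str) -> str:
    """Advance the machine; undefined transitions are rejected outright."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")
```

In a real model-based tool the table would be drawn graphically and the stepping code generated automatically; the sketch only shows the underlying idea.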
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
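What "exhaustively" means here can be illustrated with a toy model checker. Real TLA+ has its own mathematical syntax and the TLC checker; this Python sketch only mimics the core idea, which is visiting every reachable state of a model and checking an invariant in each one.

```python
from collections import deque

def check_invariant(initial, next_states, invariant):
    """Breadth-first search over every reachable state of a model.
    Returns a counterexample state if the invariant is ever violated,
    or None if it holds everywhere — no state is left untested."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample found
        for s in next_states(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return None

# Toy model: a counter that wraps at 5 must never reach 7.
bad = check_invariant(0, lambda n: [(n + 1) % 5], lambda n: n != 7)
print(bad)  # None — the invariant holds in every reachable state
```

Unlike testing, which samples behaviors, this exploration either proves the invariant over the whole (finite) state space or hands back a concrete failing state — the property that made TLA+ attractive to Newcombe.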
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

Essay-Grading Software, as Teacher's Aide - Digital Domain - NYTimes.com

  • AS a professor and a parent, I have long dreamed of finding a software program that helps every student learn to write well. It would serve as a kind of tireless instructor, flagging grammatical, punctuation or word-use problems, but also showing the way to greater concision and clarity.
  • The standardized tests administered by the states at the end of the school year typically have an essay-writing component, requiring the hiring of humans to grade them one by one.
  • the Hewlett Foundation sponsored a study of automated essay-scoring engines now offered by commercial vendors. The researchers found that these produced scores effectively identical to those of human graders.
  • humans are not necessarily ideal graders: they provide an average of only three minutes of attention per essay
  • We are talking here about providing a very rough kind of measurement, the assignment of a single summary score on, say, a seventh grader’s essay
  • “A few years back, almost all states evaluated writing at multiple grade levels, requiring students to actually write,” says Mark D. Shermis, dean of the college of education at the University of Akron in Ohio. “But a few, citing cost considerations, have either switched back to multiple-choice format to evaluate or have dropped writing evaluation altogether.”
  • As statistical models for automated essay scoring are refined, Professor Shermis says, the current $2 or $3 cost of grading each one with humans could be virtually eliminated, at least theoretically.
  • As essay-scoring software becomes more sophisticated, it could be put to classroom use for any type of writing assignment throughout the school year, not just in an end-of-year assessment. Instead of the teacher filling the essay with the markings that flag problems, the software could do so. The software could also effortlessly supply full explanations and practice exercises that address the problems — and grade those, too.
  • the cost of commercial essay-grading software is now $10 to $20 a student per year. But as the technology improves and the costs drop, he expects that it will be incorporated into the word processing software that all students use
  • “Providing students with instant feedback about grammar, punctuation, word choice and sentence structure will lead to more writing assignments,” Mr. Vander Ark says, “and allow teachers to focus on higher-order skills.”
  • When sophisticated essay-evaluation software is built into word processing software, Mr. Vander Ark predicts “an order-of-magnitude increase in the amount of writing across the curriculum.”
  • the essay-scoring software that he and his teammates developed uses relatively small data sets and ordinary PCs — so the additional infrastructure cost for schools could be nil.
  • the William and Flora Hewlett Foundation sponsored a competition to see how well algorithms submitted by professional data scientists and amateur statistics wizards could predict the scores assigned by human graders. The winners were announced last month — and the predictive algorithms were eerily accurate.
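The kind of statistical model the competition entrants built can be sketched in miniature. Everything here is hypothetical and far simpler than the real entries: a single surface feature (word count) fitted to human-assigned scores by least squares, which is enough to show how a purely statistical predictor can track human graders at all.

```python
def fit_slope(xs, ys):
    """One-feature least squares through the origin (illustrative only)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical training data: essay word counts and human-assigned scores.
word_counts = [120, 250, 400, 90]
human_scores = [2.0, 4.0, 6.0, 1.5]
slope = fit_slope(word_counts, human_scores)

def predict(essay: str) -> float:
    """Predict a score for a new essay from its length alone."""
    return slope * len(essay.split())
```

Real scoring engines combine many such features (vocabulary, syntax, discourse structure) in richer models, but the principle is the same: fit the features to a corpus of human-graded essays, then apply the fit to unseen ones.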
  • wanted to create a neutral and fair platform to assess the various claims of the vendors. It turns out the claims are not hype.”
Javier E

How Elon Musk spoiled the dream of 'Full Self-Driving' - The Washington Post

  • They said Musk’s erratic leadership style also played a role, forcing them to work at a breakneck pace to develop the technology and to push it out to the public before it was ready. Some said they are worried that, even today, the software is not safe to be used on public roads. Most spoke on the condition of anonymity for fear of retribution.
  • “The system was only progressing very slowly internally” but “the public wanted a product in their hands,” said John Bernal, a former Tesla test operator who worked in its Autopilot department. He was fired in February 2022 when the company alleged improper use of the technology after he had posted videos of Full Self-Driving in action
  • “Elon keeps tweeting, ‘Oh we’re almost there, we’re almost there,’” Bernal said. But “internally, we’re nowhere close, so now we have to work harder and harder and harder.” The team has also bled members in recent months, including senior executives.
  • “No one believed me that working for Elon was the way it was until they saw how he operated Twitter,” Bernal said, calling Twitter “just the tip of the iceberg on how he operates Tesla.”
  • In April 2019, at a showcase dubbed “Autonomy Investor Day,” Musk made perhaps his boldest prediction as Tesla’s chief executive. “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware,” Musk told a roomful of investors. The software updates automatically over the air, and Full Self-Driving would be so reliable, he said, the driver “could go to sleep.”
  • Investors were sold. The following year, Tesla’s stock price soared, making it the most valuable automaker and helping Musk become the world’s richest person
  • To deliver on his promise, Musk assembled a star team of engineers willing to work long hours and problem solve deep into the night. Musk would test the latest software on his own car, then he and other executives would compile “fix-it” requests for their engineers.
  • Those patchwork fixes gave the illusion of relentless progress but masked the lack of a coherent development strategy, former employees said. While competitors such as Alphabet-owned Waymo adopted strict testing protocols that limited where self-driving software could operate, Tesla eventually pushed Full Self-Driving out to 360,000 owners — who paid up to $15,000 to be eligible for the features — and let them activate it at their own discretion.
  • Tesla’s philosophy is simple: The more data (in this case driving) the artificial intelligence guiding the car is exposed to, the faster it learns. But that crude model also means there is a lighter safety net. Tesla has chosen to effectively allow the software to learn on its own, developing sensibilities akin to a brain via technology dubbed “neural nets” with fewer rules, the former employees said. While this has the potential to speed the process, it boils down to essentially a trial and error method of training.
  • Radar originally played a major role in the design of the Tesla vehicles and software, supplementing the cameras by offering a reality check of what was around, particularly if vision might be obscured. Tesla also used ultrasonic sensors, shorter-range devices that detect obstructions within inches of the car. (The company announced last year it was eliminating those as well.)
  • Musk, as the chief tester, also asked for frequent bug fixes to the software, requiring engineers to go in and adjust code. “Nobody comes up with a good idea while being chased by a tiger,” a former senior executive recalled an engineer on the project telling him
  • Toward the end of 2020, Autopilot employees turned on their computers to find in-house workplace monitoring software installed, former employees said. It monitored keystrokes and mouse clicks, and kept track of their image labeling. If the mouse did not move for a period of time, a timer started — and employees could be reprimanded, up to being fired, for periods of inactivity, the former employees said.
  • Some of the people who spoke with The Post said that approach has introduced risks. “I just knew that putting that software out in the streets would not be safe,” said a former Tesla Autopilot engineer who spoke on the condition of anonymity for fear of retaliation. “You can’t predict what the car’s going to do.”
  • Some of the people who spoke with The Post attributed Tesla’s sudden uptick in “phantom braking” reports — where the cars aggressively slow down from high speeds — to the lack of radar. The Post analyzed data from the National Highway Traffic Safety Administration to show incidents surged last year, prompting a federal regulatory investigation.
  • The data showed reports of “phantom braking” rose to 107 complaints over three months, compared to only 34 in the preceding 22 months. After The Post highlighted the problem in a news report, NHTSA received about 250 complaints of the issue in a two-week period. The agency opened an investigation after, it said, it received 354 complaints of the problem spanning a period of nine months.
  • “It’s not the sole reason they’re having [trouble] but it’s a big part of it,” said Missy Cummings, a former senior safety adviser for NHTSA, who has criticized the company’s approach and recused herself on matters related to Tesla. “The radar helped detect objects in the forward field. [For] computer vision which is rife with errors, it serves as a sensor fusion way to check if there is a problem.”
  • Even with radar, Teslas were less sophisticated than the lidar and radar-equipped cars of competitors. “One of the key advantages of lidar is that it will never fail to see a train or truck, even if it doesn’t know what it is,” said Brad Templeton, a longtime self-driving car developer and consultant who worked on Google’s self-driving car. “It knows there is an object in front and the vehicle can stop without knowing more than that.”
  • Musk’s resistance to suggestions led to a culture of deference, former employees said. Tesla fired employees who pushed back on his approach. The company was also pushing out so many updates to its software that in late 2021, NHTSA publicly admonished Tesla for issuing fixes without a formal recall notice.
  • Tesla engineers have been burning out, quitting and looking for opportunities elsewhere. Andrej Karpathy, Tesla’s director of artificial intelligence, took a months-long sabbatical last year before leaving Tesla and taking a position this year at OpenAI, the company behind language-modeling software ChatGPT.
  • One of the former employees said that he left for Waymo. “They weren’t really wondering if their car’s going to run the stop sign,” the engineer said. “They’re just focusing on making the whole thing achievable in the long term, as opposed to hurrying it up.”
Javier E

Videos of Tesla's Full Self-Driving beta software reveal flaws in system - The Washingt...

  • Each of these moments — captured on video by a Tesla owner and posted online — reveals a fundamental weakness in Tesla’s “Full Self-Driving” technology, according to a panel of experts assembled by The Washington Post and asked to examine the videos. These are problems with no easy fix, the experts said, where patching one issue might introduce new complications, or where the nearly infinite array of possible real-life scenarios is simply too much for Tesla’s algorithms to master.
  • The Post selected six videos from a large array posted on YouTube and contacted the people who shot them to confirm their authenticity. The Post then recruited a half-dozen experts to conduct a frame-by-frame analysis.
  • The experts include academics who study self-driving vehicles; industry executives and technical staff who work in autonomous-vehicle safety analysis; and self-driving vehicle developers. None work in capacities that put them in competition with Tesla, and several said they did not fault Tesla for its approach. Two spoke on condition of anonymity to avoid angering Tesla, its fans or future clients.
  • Their analysis suggests that, as currently designed, “Full Self-Driving” (FSD) could be dangerous on public roadways, according to several of the experts.
  • That the Tesla keeps going after seeing a pedestrian near a crosswalk offers insight into the type of software Tesla uses, known as “machine learning.” This type of software is capable of deciphering large sets of data and forming correlations that allow it, in essence, to learn on its own.
  • Tesla’s software uses a combination of machine-learning software and simpler software “rules,” such as “always stop at stop signs and red lights.” But as one researcher pointed out, machine-learning algorithms invariably learn lessons they shouldn’t. It’s possible that if the software were told to “never hit pedestrians,” it could take away the wrong lesson: that pedestrians will move out of the way if they are about to be hit, one expert said
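The hybrid of a learned policy plus hand-written rules can be sketched as a rule layer that runs after the model and can veto its output. This is an illustrative architecture with invented names, not Tesla's actual software; it shows both why the rule layer is attractive and where the article's caveat bites.

```python
def learned_policy(scene: dict) -> str:
    """Stand-in for a machine-learned model's suggested action."""
    return scene.get("model_suggestion", "maintain_speed")

def apply_rules(scene: dict, action: str) -> str:
    """Hand-written safety rules run last, so they override whatever the
    model learned. The hard part is the predicates: deciding when
    'pedestrian_in_path' is actually true is itself a perception problem."""
    if scene.get("red_light") or scene.get("stop_sign"):
        return "stop"
    if scene.get("pedestrian_in_path"):
        return "brake"
    return action

scene = {"model_suggestion": "maintain_speed", "pedestrian_in_path": True}
print(apply_rules(scene, learned_policy(scene)))  # brake
```

As the next annotation notes, broadening that pedestrian predicate to "anyone near the road" would paralyze the car in cities — the override mechanism is trivial; choosing when it fires is not.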
  • Software developers could create a “rule” that the car must slow down or stop for pedestrians. But that fix could paralyze the software in urban environments, where pedestrians are everywhere.
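The tension these excerpts describe — a learned policy layered under a hard-coded safety rule — can be sketched as a toy example. Everything below is a hypothetical illustration: the function names, thresholds, and logic are assumptions for exposition, not Tesla’s actual software.

```python
# Hypothetical sketch: a hard-coded safety rule overriding a learned policy.
# All names, thresholds, and behaviors here are illustrative assumptions.

def learned_policy(distance_to_pedestrian_m: float) -> float:
    """Stand-in for a machine-learned speed suggestion (m/s).
    A model trained on real traffic might 'learn' that pedestrians
    usually yield, and keep suggesting a nonzero speed near them."""
    return 10.0 if distance_to_pedestrian_m > 5.0 else 4.0

def apply_rules(suggested_speed: float, distance_to_pedestrian_m: float) -> float:
    """Hard rule layered on top: stop near pedestrians.
    As the excerpt notes, such a rule can over-trigger in dense
    urban settings where pedestrians are everywhere."""
    if distance_to_pedestrian_m < 10.0:
        return 0.0  # the rule wins over the learned suggestion
    return suggested_speed

speed = apply_rules(learned_policy(3.0), 3.0)
print(speed)  # 0.0: the rule overrides the learned 4.0 m/s
```

The sketch also shows why patching is hard: shrink the 10-meter threshold and the rule misses real risks; keep it and the car stalls constantly in a crowded city.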
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks that it was impossible to know for sure.
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the Information Assurance division, the cyberdefense wing of the National Security Agency.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
lilyrashkind

Self-driving car companies' first step to making money isn't robotaxis - 0 views

  • BEIJING — While governments may be wary of driverless cars, people want to buy the technology, and companies want to cash in. It’s a market for a limited version of self-driving tech that assists drivers with tasks like parking and switching lanes on a highway. And McKinsey predicts the market for a basic form of self-driving tech — known as “Level 2” in a classification system for autonomous driving — is worth 40 billion yuan ($6 billion) in China alone.
  • But when it comes to revenue, robotaxi apps show the companies are still heavily subsidizing rides. For now, the money for self-driving tech is in software sales.
  • “As a collaborator, we of course want this sold [in] as many car OEMs in China so we can maximize our [revenue and] profit,” he said, referring to auto manufacturers. “We truly believe L2 and L3 systems can make people drive cars [more] safely.” In a separate release, Bosch called the deal a “strategic partnership” and said its China business would provide sensors, computing platforms, algorithm applications and cloud services, while WeRide provides the software. Neither company shared how much capital was invested.
  • ...4 more annotations...
  • WeRide has a valuation of $4.4 billion, according to CB Insights, with backers such as Nissan and Qiming Venture Partners. WeRide operates robotaxis and robobuses in parts of the southern city of Guangzhou, where it’s also testing self-driving street sweepers.
  • “Because Bosch is in charge of integration, we have to really spend 120% of our time to help Bosch with the integration and adaptation work,” Han said. WeRide has yet to go public.
  • picks for autonomous driving include ArcSoft and Desay SV. An outsourcing business model in China gives independent software vendors more opportunities than in the United States, where software is developed in-house at companies like Tesla, the analysts said. Beijing also plans to have L3 vehicles in mass production by 2025. “Auto OEMs are investing significantly in car software/digitalization to 2025, targeting US$20bn+ of obtainable software revenue by decade-end,” the Goldman analysts wrote in mid-March.
  • They estimate that the value of software in each car will rise from $202 for L0 cars to $4,957 for L4 cars in 2030. For comparison, the battery component costs at least $5,000 today. By that calculation, the market for advanced driver assistance systems and autonomous driving software is set to surge from $2.4 billion in 2021 to $70 billion in 2030 — with China accounting for about a third, the analysts predict.
Javier E

Models Will Run the World - WSJ - 0 views

  • There is no shortage of hype about artificial intelligence and big data, but models are the source of the real power behind these tools. A model is a decision framework in which the logic is derived by algorithm from data, rather than explicitly programmed by a developer or implicitly conveyed via a person’s intuition. The output is a prediction on which a decision can be made.
  • Once created, a model can learn from its successes and failures with speed and sophistication that humans usually cannot match
  • Building this system requires a mechanism (often software-based) to collect data, processes to create models from the data, the models themselves, and a mechanism (also often software based) to deliver or act on the suggestions from those models.
  • ...11 more annotations...
  • A model-driven business is something beyond a data-driven business. A data-driven business collects and analyzes data to help humans make better business decisions. A model-driven business creates a system built around continuously improving models that define the business. In a data-driven business, the data helps the business; in a model-driven business, the models are the business.
  • Netflix beat Blockbuster with software; it is winning against the cable companies and content providers with its models. Its recommendation model is famous and estimated to be worth more than $1 billion a year in revenue, driving 80% of content consumption
  • Amazon used software to separate itself from physical competitors like Borders and Toys “R” Us, but its models helped it pull away from other e-commerce companies like Overstock.com. By 2013 an estimated 35% of revenue came from Amazon’s product recommendations. Those models have never stopped improving.
  • Third, incumbents will be more potent competitors in this battle relative to their role in the battles of the software era. They have a meaningful advantage this time around, because they often have troves of data and startups usually don’t.
  • Looking to produce more-resilient crops, Monsanto’s models predict optimal places for farmers to plant based on historical yields, weather data, tractors equipped with GPS and other sensors, and field data collected from satellite imagery, which estimates where rainfall will pool and subtle variations in soil chemistry.
  • Lilt, a San Francisco-based startup, is building software that aims to make that translator five times as productive by inserting a model in the middle of the process. Instead of working from only the original text, translators using Lilt’s software are presented with a set of suggestions from the model, and they refine those as needed. The model is always learning from the changes the translator makes, simultaneously making all the other translators more productive in future projects.
  • First, businesses will increasingly be valued based on the completeness, not just the quantity, of data they create
  • Second, the goal is a flywheel, or virtuous circle. Tencent, Amazon and Netflix all demonstrate this characteristic: Models improve products, products get used more, this new data improves the product even more
  • inVia Robotics builds robots that can autonomously navigate a warehouse and pull totes from shelves to deliver them to a stationary human picker. The approach is model-driven; inVia uses models that consider item popularity and probability of association (putting sunglasses near sunscreen, for example) to adjust warehouse layout automatically and minimize the miles robots must travel. Every order provides feedback to a universe of prior predictions and improves productivity across the system.
  • Fourth, just as companies have built deep organizational capabilities to manage technology, people, and capital, the same will now happen for models
  • Fifth, companies will face new ethical and compliance challenges.
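The loop the article describes — collect data, build a model, act on its predictions, and feed outcomes back in — can be sketched as a minimal feedback cycle. This is a toy illustration under assumed names and synthetic data, not any company’s actual recommendation system.

```python
# Toy sketch of a model-driven feedback loop: predict, act, observe, update.
# The "model" is just a running click-rate estimate per item -- purely
# illustrative, not a production recommender.
from collections import defaultdict
import random

random.seed(0)  # deterministic toy data

clicks = defaultdict(int)  # observed positive outcomes per item
shows = defaultdict(int)   # times each item was recommended

def predict(item: str) -> float:
    """Model output: estimated click rate, with a weak prior of 0.5."""
    return (clicks[item] + 1) / (shows[item] + 2)

def recommend(items) -> str:
    """Act on the model: pick the item with the best prediction."""
    return max(items, key=predict)

def observe(item: str, clicked: bool) -> None:
    """Feed the outcome back so the next prediction improves."""
    shows[item] += 1
    clicks[item] += int(clicked)

items = ["a", "b"]
true_rates = {"a": 0.9, "b": 0.2}  # hidden ground truth for the simulation
for _ in range(200):
    choice = recommend(items)
    observe(choice, random.random() < true_rates[choice])

# After enough feedback, the model should favor the better item.
print(recommend(items))
```

Each pass through the loop is the “flywheel” the piece describes: the model’s choices generate the very data that makes its next choices better.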
rerobinson03

Opinion | I Was the Homeland Security Adviser to Trump. We're Being Hacked. - The New Y... - 0 views

  • At the worst possible time, when the United States is at its most vulnerable — during a presidential transition and a devastating public health crisis — the networks of the federal government and much of corporate America are compromised by a foreign nation.
  • Last week, the cybersecurity firm FireEye said it had been hacked and that its clients, which include the United States government, had been placed at risk
  • The attackers gained access to SolarWinds software before updates of that software were made available to its customers. Unsuspecting customers then downloaded a corrupted version of the software, which included a hidden back door that gave hackers access to the victim’s network.
  • ...14 more annotations...
  • supply-chain attack
  • According to SolarWinds S.E.C. filings, the malware was on the software from March to June. The number of organizations that downloaded the corrupted update could be as many as 18,000, which includes most federal government unclassified networks and more than 425 Fortune 500 companies.
  • The magnitude of this ongoing attack is hard to overstate.
  • The Russians have had access to a considerable number of important and sensitive networks for six to nine months.
  • While the Russians did not have the time to gain complete control over every network they hacked, they most certainly did gain it over hundreds of them.
  • The National Defense Authorization Act, which each year provides the Defense Department and other agencies the authority to perform its work, is caught up in partisan wrangling. Among other important provisions, the act would authorize the Department of Homeland Security to perform network hunting in federal networks.
  • The actual and perceived control of so many important networks could easily be used to undermine public and consumer trust in data, written communications and services.
  • What should be done? On Dec. 13, the Cybersecurity and Infrastructure Security Agency, a division of the Department of Homeland Security — itself a victim — issued an emergency directive ordering federal civilian agencies to remove SolarWinds software from their networks.
  • It also is impractical. In 2017, the federal government was ordered to remove from its networks software from a Russian company, Kaspersky Lab, that was deemed too risky. It took over a year to get it off the networks.
  • The remediation effort alone will be staggering
  • Cyber threat hunters that are stealthier than the Russians must be unleashed on these networks to look for the hidden, persistent access controls.
  • The logical conclusion is that we must act as if the Russian government has control of all the networks it has penetrated
  • The response must be broader than patching networks. While all indicators point to the Russian government, the United States, and ideally its allies, must publicly and formally attribute responsibility for these hacks. If it is Russia, President Trump must make it clear to Vladimir Putin that these actions are unacceptable. The U.S. military and intelligence community must be placed on increased alert; all elements of national power must be placed on the table.
  • President Trump is on the verge of leaving behind a federal government, and perhaps a large number of major industries, compromised by the Russian government. He must use whatever leverage he can muster to protect the United States and severely punish the Russians. President-elect Joe Biden must begin his planning to take charge of this crisis. He has to assume that communications about this matter are being read by Russia, and assume that any government data or email could be falsified.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • I don’t think any single company will dominate the agents business--there will be many different AI engines available.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • Entertainment and shopping
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
katherineharron

Fact checking the outlandish claim that millions of Trump votes were deleted - CNNPolitics - 0 views

  • A human error that briefly led to incorrect election results in a Michigan county has spiraled into a sprawling, baseless conspiracy theory suggesting that glitches in widely used voting software led to millions of miscast ballots.
  • Conservative media figures, social media users, and President Donald Trump have spread rumors about problems with Dominion Voting Systems
  • They've claimed that isolated reports about Election Night glitches raise concerns about election results in states around the country.
  • "DOMINION DELETED 2.7 MILLION TRUMP VOTES NATIONWIDE," Trump tweeted on Thursday, citing a report from the right-wing One America News Network. Without showing any evidence, he claimed that states using the company's technology had "SWITCHED 435,000 VOTES FROM TRUMP TO BIDEN."
  • Trump's tweet is completely without evidence.
  • "No credible reports or evidence of any software issues exist," the company wrote. "While no election is without isolated issues, Dominion Voting Systems are reliably and accurately counting ballots. State and local election authorities have publicly confirmed the integrity of the process."
  • the network claims to have proof of widespread voter fraud but is choosing to sit on that proof for more than a week.
  • Giuliani has claimed to have received an affidavit from someone "inside" Dominion who alleges that batches of "phony" pro-Biden ballots were counted. Giuliani hasn't released any evidence.
  • Dominion, a Canadian company founded in 2002 with US headquarters in Denver, is the second-largest provider of voting technology in the US, according to a 2017 report from the University of Pennsylvania's Wharton Public Policy Initiative. In 2016, Dominion's technology was used in 1,635 jurisdictions in more than two dozen states, the report found
  • Further undermining Trump's claims was the fact that hours after his tweet, federal government agencies released a statement declaring that "the November 3rd election was the most secure in American history."
  • The county's initial results showed Joe Biden leading, and election officials quickly realized on election night that something was wrong with their results, the local clerk told the Detroit Free Press last week.
  • The county uses Dominion's technology to tally ballots, but the mistake was due to human error and not the company's systems, according to the Michigan Secretary of State.
  • "This was an isolated error, there is no evidence this user error occurred elsewhere in the state, and if it did it would be caught during county canvasses, which are conducted by bipartisan boards of county canvassers," the secretary of state's office said in its statement.
  • Other rumors pointed to Oakland County, Michigan, a suburb of Detroit, where initial results mistakenly double-counted votes from the city of Rochester Hills, according to the secretary of state's office. But that was due to human error, not a software issue, the local clerk said.
  • "As a Republican, I am disturbed that this is intentionally being mischaracterized to undermine the election process," Tina Barton, the clerk of Rochester Hills, Michigan, said
  • But online, right-wing voices -- including Giuliani, Eric Trump, conservative Arizona Rep. Paul Gosar and Breitbart News -- have seized on those isolated issues as purported evidence of wider "glitches" with Dominion's software.
  • "It really does feel like people believe what they want to believe," he said. "I don't think I've ever seen it quite like this before."
  • Social media posts have also baselessly alleged ties between Dominion and Democratic leaders, misinformation that was first noted and debunked by The Associated Press.
  • Several Twitter posts that have been retweeted thousands of times have claimed that the company is involved with the Clinton Foundation. But while Dominion did agree to donate its technology to "emerging democracies" as part of a program run by the Clinton Foundation in 2014, according to the foundation's website, Dominion said in its statement that it has "no company ownership relationships" with the foundation.
Javier E

Antitrust Enforcers: "The Rent Is Too Damn High!" - 0 views

  • The story was explosive, explaining that, in fact, there was no mystery behind the inflation that Americans were experiencing, inflation in everyday items paired with skyrocketing corporate profits. There was a conspiracy, orchestrated by some of the richest men in the country.
  • Median asking rents had spiked by as much as 18% in the spring of 2022, and that was outrageous. Moreover, rents are just out of control more broadly. As the Antitrust Division notes, "the percentage of income spent on rent for Americans without a college degree increased from 30% in 2000 to 42% in 2017."
  • Policymakers also responded. Seventeen members of Congress, and multiple Democratic Senators, such as Antitrust Subcommittee Chair Amy Klobuchar, asked government enforcers to look into the allegations. Senator Ron Wyden introduced Federal legislation to ban the use of RealPage to set rents, which the Kamala Harris Presidential campaign recently endorsed. At a local level, San Francisco just prohibited collusive algorithmic rent-setting, and similar legislation is being considered in a bunch of states and cities.
  • As the architect of RealPage once explained, “[i]f you have idiots undervaluing, it costs the whole system.” The complaints showed that it’s more than just information sharing; RealPage has “pricing advisors” that monitor landlords and encourage them to accept suggested pricing, it works to get employees at landlord companies fired who try to move rents lower, and it even threatens to drop clients who don’t accept its high price recommendations. The suits have passed important legal hurdles and are going to trial.
  • Private antitrust lawyers filed multiple lawsuits, which were consolidated in Tennessee by 2023. Their argument “is that RealPage has been working with at least 21 large landlords and institutional investors, encompassing 70% of multi-family apartment buildings and 16 million units nationwide, to systematically push up rents.”
  • Arizona Attorney General Kris Mayes sued RealPage and corporate landlords, alleging that rent increases of 30% in just two years are a result of the conspiracy. Seven out of ten multifamily apartment units in Phoenix are run by landlords who use the software. D.C. Attorney General Brian Schwalb sued as well, noting that “in the Washington-Arlington-Alexandria Metropolitan Area, over 90% of units in large buildings are priced using RealPage’s software.”
  • The FBI conducted a dawn raid of corporate landlord Cortland, a giant that rents out 85,000 units across thirteen states. Today, the Antitrust Division and eight states sued RealPage, alleging not just a price-fixing conspiracy to raise rents, but also monopolization in the market for commercial real estate management software
  • The gist of the complaint is that large landlords and RealPage work together to (1) share sensitive information and (2) raise rents and hold units off the market. This activity hits at least 4.8 million housing units under the direct control of landlords using RealPage software, and according to the corporation itself, its products cause rents to increase by between 2-7% more than they otherwise would, year over year. "Our tool,” said RealPage, “ensures that [landlords] are driving every possible opportunity to increase price even in the most downward trending or unexpected conditions.”
Javier E

Armies of Expensive Lawyers, Replaced by Cheaper Software - NYTimes.com - 0 views

  • Mike Lynch, the founder of Autonomy, is convinced that “legal is a sector that will likely employ fewer, not more, people in the U.S. in the future.” He estimated that the shift from manual document discovery to e-discovery would lead to a manpower reduction in which one lawyer would suffice for work that once required 500 and that the newest generation of software, which can detect duplicates and find clusters of important documents on a particular topic, could cut the head count by another 50 percent.
  • Mr. Herr, the former chemical company lawyer, used e-discovery software to reanalyze work his company’s lawyers did in the 1980s and ’90s. His human colleagues had been only 60 percent accurate, he found. “Think about how much money had been spent to be slightly better than a coin toss,
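The duplicate-detection step Lynch describes is the easiest part of e-discovery to sketch. A minimal illustration (hypothetical documents; not Autonomy's actual system, which also clusters documents by topic) hashes normalized text so trivially reformatted copies of the same email collapse into one:

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and case so reformatted copies hash alike.
    return " ".join(text.lower().split())

def deduplicate(documents: list[str]) -> list[str]:
    """Keep the first copy of each distinct document; drop exact duplicates."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "Re: Q3 forecast -- see attached.",
    "re: q3 forecast --  see attached.",  # same email, different formatting
    "Settlement draft, privileged.",
]
print(len(deduplicate(docs)))  # 2 of the 3 documents survive review
```

Real e-discovery tools go further (near-duplicate shingling, topic clustering), but even this exact-match pass is the kind of mechanical filtering that once consumed billable lawyer hours.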
katyshannon

Apple Fights Order to Unlock San Bernardino Gunman's iPhone - The New York Times - 0 views

  • Last month, some of President Obama’s top intelligence advisers met in Silicon Valley with Apple’s chief, Timothy D. Cook, and other technology leaders in what seemed to be a public rapprochement in their long-running dispute over the encryption safeguards built into their devices.
  • But behind the scenes, relations were tense, as lawyers for the Obama administration and Apple held closely guarded discussions for over two months about one particularly urgent case: The F.B.I. wanted Apple to help “unlock” an iPhone used by one of the two attackers who killed 14 people in San Bernardino, Calif., in December, but Apple was resisting.
  • When the talks collapsed, a federal magistrate judge, at the Justice Department’s request, ordered Apple to bypass security functions on the phone.
  • The order set off a furious public battle on Wednesday between the Obama administration and one of the world’s most valuable companies in a dispute with far-reaching legal implications.
  • This is not the first time a technology company has been ordered to effectively decrypt its own product. But industry experts say it is the most significant because of Apple’s global profile, the invasive steps it says are being demanded and the brutality of the San Bernardino attacks.
  • Law enforcement officials who support the F.B.I.’s position said that the impasse with Apple provided an ideal test case to move from an abstract debate over the balance between national security and privacy to a concrete one
  • The F.B.I. has been unable to get into the phone used by Syed Rizwan Farook, who was killed by the police along with his wife after they attacked Mr. Farook’s co-workers at a holiday gathering.
  • Magistrate Judge Sheri Pym of the Federal District Court for the District of Central California issued her order Tuesday afternoon, after the F.B.I. said it had been unable to get access to the data on its own and needed Apple’s technical assistance.
  • Mr. Cook, the chief executive at Apple, responded Wednesday morning with a blistering, 1,100-word letter to Apple customers, warning of the “chilling” breach of privacy posed by the government’s demands. He maintained that the order would effectively require it to create a “backdoor” to get around its own safeguards, and Apple vowed to appeal the ruling by next week.
  • Apple argues that the software the F.B.I. wants it to create does not exist. But technologists say the company can do it.
  • Apple executives had hoped to resolve the impasse without having to rewrite their own encryption software. They were frustrated that the Justice Department had aired its demand in public, according to an industry executive with knowledge of the case, who spoke on the condition of anonymity about internal discussions.
  • The Justice Department and the F.B.I. have the White House’s “full support,” the spokesman, Josh Earnest, said on Wednesday.
  • His vote of confidence was significant because James Comey, the F.B.I. director, has at times been at odds with the White House over his aggressive advocacy of tougher decryption requirements on technology companies. While Mr. Obama’s national security team was sympathetic to Mr. Comey’s position, others at the White House viewed legislation as potentially perilous. Late last year, Mr. Obama refused to back any legislation requiring decryption, leaving a court fight likely.
  • The dispute could initiate legislation in Congress, with Republicans and Democrats alike criticizing Apple’s stance on Wednesday and calling for tougher decryption requirements.
  • Donald J. Trump, the Republican presidential contender, also attacked Apple on Fox News, asking, “Who do they think they are?”
  • But Apple had many defenders of its own among privacy and consumer advocates, who praised Mr. Cook for standing up to what they saw as government overreach.
  • Many of the company’s defenders argued that the types of government surveillance operations exposed in 2013 by Edward J. Snowden, the former National Security Agency contractor, have prompted technology companies to build tougher encryption safeguards in their products because of the privacy demands of their customers.
  • Privacy advocates and others said they worried that if the F.B.I. succeeded in getting access to the software overriding Apple’s encryption, it would create easy access for the government in many future investigations.
  • The Apple order is a flash point in a dispute that has been building for more than a decade.
  • The F.B.I. began sounding alarms years ago about technology that allowed people to exchange private messages protected by encryption so strong that government agents could not break it. In fall 2010, at the behest of Robert S. Mueller III, the F.B.I. director, the Obama administration began work on a law that required technology companies to provide unencrypted data to the government.
  • Lawyers at the F.B.I., Justice Department and Commerce Department drafted bills around the idea that technology companies in the Internet age should be bound by the same rules as phone companies, which were forced during the Clinton administration to build digital networks that government agents could tap.
  • The draft legislation would have covered app developers like WhatsApp and large companies like Google and Apple, according to current and former officials involved in the process.
  • There is no debate that, when armed with a court order, the government can get text messages and other data stored in plain text. Far less certain was whether the government could use a court order to force a company to write software or redesign its system to decode encrypted data. A federal law would make that authority clear, they said.
  • But the disclosures of government surveillance by Mr. Snowden changed the privacy debate, and the Obama administration decided not to move on the proposed legislation. It has not been revived.
  • The legal issues raised by the judge’s order are complicated. They involve statutory interpretation, rather than constitutional rights, and they could end up before the Supreme Court.
  • As Apple noted, the F.B.I., instead of asking Congress to pass legislation resolving the encryption fight, has proposed what appears to be a novel reading of the All Writs Act of 1789.
  • The law lets judges “issue all writs necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law.”
Javier E

Obama tried to give Zuckerberg a wake-up call over fake news on Facebook - The Washingt... - 0 views

  • There has been a rising bipartisan clamor, meanwhile, for new regulation of a tech industry that, amid a historic surge in wealth and power over the past decade, has largely had its way in Washington despite concerns raised by critics about its behavior.
  • In particular, momentum is building in Congress and elsewhere in the federal government for a law requiring tech companies — like newspapers, television stations and other traditional carriers of campaign messages — to disclose who buys political ads and how much they spend on them.
  • “There is no question that the idea that Silicon Valley is the darling of our markets and of our society — that sentiment is definitely turning,” said Tim O’Reilly, an adviser to tech executives and chief executive of the influential Silicon Valley-based publisher O’Reilly Media.
  • the Russian disinformation effort has proven far harder to track and combat because Russian operatives were taking advantage of Facebook’s core functions, connecting users with shared content and with targeted native ads to shape the political environment in an unusually contentious political season, say people familiar with Facebook’s response.
  • Unlike the Islamic State, what Russian operatives posted on Facebook was, for the most part, indistinguishable from legitimate political speech. The difference was the accounts that were set up to spread the misinformation and hate were illegitimate.
  • Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.
  • The sophistication of the Russian tactics caught Facebook off-guard. Its highly regarded security team had erected formidable defenses against traditional cyber attacks but failed to anticipate that Facebook users — deploying easily available automated tools such as ad micro-targeting — pumped skillfully crafted propaganda through the social network without setting off any alarm bells.
  • One of the theories to emerge from their post-mortem was that Russian operatives who were directed by the Kremlin to support Trump may have taken advantage of Facebook and other social media platforms to direct their messages to American voters in key demographic areas in order to increase enthusiasm for Trump and suppress support for Clinton.
  • the intelligence agencies had little data on Russia’s use of Facebook and other U.S.-based social media platforms, in part because of rules designed to protect the privacy of communications between Americans.
  • “It is our responsibility,” he wrote, “to amplify the good effects [of the Facebook platform] and mitigate the bad — to continue increasing diversity while strengthening our common understanding so our community can create the greatest positive impact on the world.”
  • The extent of Facebook’s internal self-examination became clear in April, when Facebook Chief Security Officer Alex Stamos co-authored a 13-page white paper detailing the results of a sprawling research effort that included input from experts from across the company, who in some cases also worked to build new software aimed specifically at detecting foreign propaganda.
  • “Facebook sits at a critical juncture,” Stamos wrote in the paper, adding that the effort focused on “actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.” He described how the company had used a technique known as machine learning to build specialized data-mining software that can detect patterns of behavior — for example, the repeated posting of the same content — that malevolent actors might use.  
  • The software tool was given a secret designation, and Facebook is now deploying it and others in the run-up to elections around the world. It was used in the French election in May, where it helped disable 30,000 fake accounts, the company said. It was put to the test again on Sunday when Germans went to the polls. Facebook declined to share the software tool’s code name. 
  • Officials said Stamos underlined to Warner the magnitude of the challenge Facebook faced policing political content that looked legitimate. Stamos told Warner that Facebook had found no accounts that used advertising but agreed with the senator that some probably existed. The difficulty for Facebook was finding them.
  • Technicians then searched for “indicators” that would link those ads to Russia. To narrow down the search further, Facebook zeroed in on a Russian entity known as the Internet Research Agency, which had been publicly identified as a troll farm.
  • By early August, Facebook had identified more than 3,000 ads addressing social and political issues that ran in the United States between 2015 and 2017 and that appear to have come from accounts associated with the Internet Research Agency.
  • Congressional investigators say the disclosure only scratches the surface. One called Facebook’s discoveries thus far “the tip of the iceberg.” Nobody really knows how many accounts are out there and how to prevent more of them from being created to shape the next election — and turn American society against itself.
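The behavioral signal Stamos describes — "the repeated posting of the same content" — can be illustrated with far simpler machinery than Facebook's classified tool. A toy sketch (invented posts and account names; not Facebook's actual detector) flags any message posted verbatim by several distinct accounts:

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3):
    """Flag messages posted verbatim by many distinct accounts --
    a crude proxy for the 'repeated posting' signal described above."""
    accounts_by_message = defaultdict(set)
    for account, message in posts:
        # Normalize whitespace and case so near-identical copies match.
        accounts_by_message[" ".join(message.lower().split())].add(account)
    return {msg for msg, accts in accounts_by_message.items()
            if len(accts) >= min_accounts}

posts = [
    ("acct_1", "Candidate X hates your state!"),
    ("acct_2", "Candidate X hates your state!"),
    ("acct_3", "candidate x hates  your state!"),
    ("acct_4", "Nice weather today."),
]
print(flag_coordinated_posts(posts))  # {'candidate x hates your state!'}
```

A production system would need machine learning precisely because real operatives vary their wording; the point of the sketch is only that the raw signal is a pattern over accounts and content, not anything visible in a single post.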
lilyrashkind

Start-up investors issue warnings as boom times 'unambiguously over' - 0 views

  • Y Combinator said companies have to “understand that the poor public market performance of tech companies significantly impacts VC investing.”
  • Slow your hiring! Cut back on marketing! Extend your runway! The venture capital missives are back, and they’re coming in hot. With tech stocks cratering through the first five months of 2022 and the Nasdaq on pace for its second-worst quarter since the 2008 financial crisis, start-up investors are telling their portfolio companies they won’t be spared in the fallout, and that conditions could be worsening.
  • It’s a stark contrast to 2021, when investors were rushing into pre-IPO companies at sky-high valuations, deal-making was happening at a frenzied pace and buzzy software start-ups were commanding multiples of 100 times revenue. That era reflected an extended bull market in tech, with the Nasdaq Composite notching gains in 11 of the past 13 years, and venture funding in the U.S. reaching $332.8 billion last year, up sevenfold from a decade earlier, according to the National Venture Capital Association.
  • As it turns out, technology demand only increased and the Nasdaq had its best year since 2009, spurred on by low interest rates and a surge in spending on products for remote work.
  • “Companies that recently raised at very high prices at the height of valuation inflation may be grappling with high burn rates and near-term challenges growing into those valuations,” Shakir told CNBC in an email. “Others that were more dilution-sensitive and chose to raise less may now need to consider avenues for extending runway that would have seemed unpalatable to them just months ago.”
  • “Our companies heeded that advice and most companies are now prepared for winter,” Lux wrote.
  • “This time, many of those tools have been exhausted,” Sequoia wrote. “We do not believe that this is going to be another steep correction followed by an equally swift V-shaped recovery like we saw at the outset of the pandemic.” Sequoia told its companies to look at projects, research and development, marketing and elsewhere for opportunities to cut costs. Companies don’t have to immediately pull the trigger, the firm added, but they should be ready to do it in the next 30 days if needed.
  • And among companies that are still private, staff reductions are underway at Klarna and Cameo, while Instacart is reportedly slowing hiring ahead of an expected initial public offering. Cloud software vendor Lacework announced staffing cuts on Friday, six months after the company was valued at $8.3 billion by venture investors.“We have adjusted our plan to increase our cash runway through to profitability and significantly strengthened our balance sheet so we can be more opportunistic around investment opportunities and weather uncertainty in the macro environment,” Lacework said in a blog post.
  • Shakir agreed with that assessment. “Like many, we at Lux have been advising our companies to think long term, extend runway to 2+ years if possible, take a very close look at reducing burn and improving gross margins, and start to set expectations that near-term future financings are unlikely to look like what they may have expected six or 12 months ago,” she wrote.
  • Lux highlighted one of the painful decisions it expects to see. For several companies, the firm said, “sacrificing people will come before sacrificing valuation.”But venture firms are keen to remind founders that great companies emerge from the darkest of times. Those that prove they can survive and even thrive when capital is in short supply, the thinking goes, are positioned to flourish when the economy bounces back.
  • CORRECTION: This story was updated to reflect that cloud software vendor Lacework raised $1.3 billion in growth funding at a valuation of $8.3 billion.
ecfruchtman

EPA: Fiat Chrysler used software to cheat on emissions tests - 0 views

  • The Environmental Protection Agency accused Fiat Chrysler Thursday of using software that enabled some of its diesel trucks to cheat on emissions tests. The news caused Fiat Chrysler's stock price to drop more than 13 percent in trading Thursday morning. Fiat Chrysler did not immediately respond to requests for comment.
Javier E

What Jobs Will the Robots Take? - Derek Thompson - The Atlantic - 0 views

  • Nearly half of American jobs today could be automated in "a decade or two," according to a new paper
  • The question is: Which half?
  • Where do machines work better than people?
  • in the past 30 years, software and robots have thrived at replacing a particular kind of occupation: the average-wage, middle-skill, routine-heavy worker, especially in manufacturing and office admin. 
  • the next wave of computer progress will continue to shred human work where it already has: manufacturing, administrative support, retail, and transportation. Most remaining factory jobs are "likely to diminish over the next decades," they write. Cashiers, counter clerks, and telemarketers are similarly endangered
  • here's a chart of the ten jobs with a 99-percent likelihood of being replaced by machines and software. They are mostly routine-based jobs (telemarketing, sewing) and work that can be solved by smart algorithms (tax preparation, data entry keyers, and insurance underwriters)
  • I've also listed the dozen jobs they consider least likely to be automated. Health care workers, people entrusted with our safety, and management positions dominate the list.
  • If you wanted to use this graph as a guide to the future of automation, your upshot would be: Machines are better at rules and routines; people are better at directing and diagnosing. But it doesn't have to stay that way.
  • Although the past 30 years have hollowed out the middle, high- and low-skill jobs have actually increased, as if protected from the invading armies of robots by their own moats
  • Higher-skill workers have been protected by a kind of social-intelligence moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what managers are paid to do
  • lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology mimicked a savant infant: Machines could do long math equations instantly and beat anybody in chess, but they can't answer a simple question or walk up a flight of stairs. As a result, menial work done by people without much education (like home health care workers, or fast-food attendants) have been spared, too.
  • robots are finally crossing these moats by moving and thinking like people. Amazon has bought robots to work its warehouses. Narrative Science can write earnings summaries that are indistinguishable from wire reports. We can say to our phones "I'm lost, help" and our phones can tell us how to get home.
  • In a decade, the idea of computers driving cars went from impossible to boring.
  • The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing things. Now data analytics and self-driving cars suggest they might be better at pattern-recognition and driving. So what are we better at?
  • One conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for, other humans. In this light, automation doesn't make the world worse. Far from it: It creates new opportunities for human ingenuity.  
  • But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.
  • It would be anxious enough if we knew exactly which jobs are next in line for automation. The truth is scarier. We don't really have a clue.
Javier E

So Wrong for So Long | Foreign Policy - 0 views

  • Getting Iraq wrong wasn’t just an unfortunate miscalculation, it happened because their theories of world politics were dubious and their understanding of how the world works was goofy. When your strategic software is riddled with bugs, you should expect a lot of error messages.
  • For starters, neoconservatives think balance-of-power politics doesn’t really work in international affairs and that states are strongly inclined to “bandwagon” instead. In other words, they think weaker states are easy to bully and never stand up to powerful adversaries. Their faulty logic follows that other states will do whatever Washington dictates provided we demonstrate how strong and tough we are.
  • What happened, alas, was that the various states we were threatening didn’t jump on our bandwagon. Instead, they balanced and then took steps to make sure we faced significant and growing resistance. In particular, Syria and Iran (the next two states on the neocons’ target list), cooperated even further with each other and helped aid the anti-American insurgency in Iraq itself.
  • ...15 more annotations...
  • Today, of course, opposition to the Iran deal reflects a similar belief that forceful resolve would enable Washington to dictate whatever terms it wants. As I’ve written before, this idea is the myth of a “better deal.” Because neocons assume states are attracted to strength and easy to intimidate, they think rejecting the deal, ratcheting up sanctions, and threatening war will cause Iran’s government to finally cave in and dismantle its entire enrichment program.
  • On the contrary, walking away from the deal will stiffen Iran’s resolve, strengthen its hard-liners, increase its interest in perhaps actually acquiring a nuclear weapon someday, and cause the other members of the P5+1 to part company with the United States.
  • The neoconservative worldview also exaggerates the efficacy of military force and downplays the value of diplomacy.
  • In reality, military force is a crude instrument whose effects are hard to foresee and one which almost always produces unintended consequences (see under: Libya, Yemen, Somalia, Pakistan, etc.)
  • Moreover, neocons believe military force is a supple tool that can be turned on and off like a spigot.
  • Once forces are committed, the military brass will demand the chance to win a clear victory, and politicians will worry about the nation’s prestige and their own political fortunes. The conflicts in Afghanistan, Iraq, Yemen, and Somalia should remind us that it’s a lot easier to get into wars than it is to get out of them
  • Third, the neoconservatives have a simplistic and ahistorical view of democracy itself. They claim their main goal is spreading freedom and democracy (except for Palestinians, of course), but they have no theory to explain how this will happen or how toppling a foreign government with military force will magically cause democracy to emerge
  • In fact, the development of liberal democracy was a long, contentious, imperfect, and often violent process in Western Europe and North America
  • Fourth, as befits a group of armchair ideologues whose primary goal has been winning power inside the Beltway, neoconservatives are often surprisingly ignorant about the actual conditions of the countries whose politics and society they want to transform.
  • In addition to flawed theories, in short, the neoconservative worldview also depends on an inaccurate reading of the facts on the ground.
  • Last but not least, the neoconservatives’ prescriptions for U.S. foreign policy are perennially distorted by a strong attachment to Israel,
  • But no two states have identical interests all the time, and when the interests of two countries conflict, people who feel strongly about both are forced to decide which of these feelings is going to take priority.
  • some proponents of the deal have pointed out — correctly — that some opponents don’t like the deal because they think it is bad for Israel and because the Netanyahu government is dead set against it. As one might expect, pointing out these obvious facts has led some opponents of the deal to accuse proponents (including President Obama) of anti-Semitism
  • Instead of being a serious criticism, this familiar smear is really just a way to change the subject and to put proponents of the deal on the defensive for pointing out the obvious
  • The fact that the neoconservatives, AIPAC, the Conference of Presidents, and other groups in the Israel lobby were wrong about the Iraq War does not by itself mean that they are necessarily wrong about the Iran deal. But when you examine their basic views on world politics and their consistent approach to U.S. Middle East policy, it becomes clear this is not a coincidence at all
Javier E

App Quietly Creates a Personal Journal on Your Phone - NYTimes.com - 0 views

  • Imagine if you could keep a log of everything that you do on your mobile phone. The phone calls that you make (or receive), your emails and text messages, the various places that you visit, and even the music tracks that you listen to on your phone.
  • At first glance, I suspect many readers will be taken aback by how intrusive the software can be as it captures all smartphone activities in the background. It’s a valid concern. But the Internet, combined with smartphones and mobile broadband devices, is pushing us slowly in this direction. The way I see it, we can fight the change unsuccessfully or we can cautiously embrace it. You might not ever subscribe to providing a greater amount of information to the cloud, but within reason, I’m willing to bet your kids will. It’s just a matter of time before more of your personal data is more online than offline. It may take years or decades yet, but it will happen for most.
  • The Android app is clever, not only because it captures your smartphone and app usage profile, but makes it searchable and ties together events with the context of both location and time.
  • ...2 more annotations...
  • Want to see all of the conversations you had with a particular contact? No problem. Curious what you did and where you were on a certain day in the past? Friday has you covered. Planning a trip and want to associate all of the events to the excursion? Friday supports automatic tagging, which you could enable for a “Family Vacation 2011″ tag before leaving and disable upon your return home. The software also includes analytics to gain insights on how many calls you take or make at various times of the day.
  • Intelligent software such as Friday, my6Sense and others like reQall Rover can help cut through the data clutter by indexing or surfacing important information without raising our stress levels. Yes, that could mean enabling devices to capture our every move, but that’s a price I’m personally willing to pay for easier access to the data I’m looking for.
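The indexing model the annotations above describe — events keyed by time, location, and tags, filterable by contact or by day, with automatic tagging that can be switched on and off — reduces to a small data structure. The class and field names below are illustrative assumptions, not Friday's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, date

@dataclass
class Event:
    """One captured phone activity: a call, text, song play, or visit."""
    kind: str                    # e.g. "call", "sms", "music", "location"
    timestamp: datetime
    place: str = ""              # coarse location label, if known
    contact: str = ""            # the other party, if any
    tags: set[str] = field(default_factory=set)

class ActivityLog:
    def __init__(self):
        self.events: list[Event] = []
        self.active_tags: set[str] = set()   # e.g. {"Family Vacation 2011"}

    def record(self, event: Event) -> None:
        # Automatic tagging: stamp each new event with whatever tags are active.
        event.tags |= self.active_tags
        self.events.append(event)

    def with_contact(self, name: str) -> list[Event]:
        return [e for e in self.events if e.contact == name]

    def on_day(self, day: date) -> list[Event]:
        return [e for e in self.events if e.timestamp.date() == day]

log = ActivityLog()
log.active_tags.add("Family Vacation 2011")   # enable before leaving
log.record(Event("call", datetime(2011, 4, 2, 10, 15), place="Airport", contact="Alice"))
log.active_tags.clear()                       # disable upon return
log.record(Event("sms", datetime(2011, 4, 9, 8, 0), contact="Alice"))

print(len(log.with_contact("Alice")))     # 2
print(len(log.on_day(date(2011, 4, 2))))  # 1
```

Because every event carries a timestamp, place, and tag set, the "what did I do on that day" and "all conversations with this contact" queries are just filters over one list.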
Javier E

Managers Turn to Computer Games, Aiming for More Efficient Employees - NYTimes.com - 0 views

  • Silicon Valley companies are known for casual work clothes and generous employee perks like free lunches and laundry, but they share corporate America’s affinity for dogmatic processes and mind-numbing acronyms. The Valley’s tech companies excel at turning those dreary processes into something useful.
  • Mr. Doerr has long been a proselytizer of a Silicon Valley-style management system called “O.K.R.,” which stands for “objectives and key results.” The idea, which was created at Intel, where Mr. Doerr began his career, is to have workers create specific, measurable goals and to track their progress in an open system that anyone in the company can see.
  • Mr. Duggan founded Badgeville, whose software turns work tasks into badges and a leader board in an effort to add elements of games to work. His new company blends that game-playing sensibility with hard-core metrics.
  • ...5 more annotations...
  • Using BetterWorks software, workers set goals, like “Sign 10 new customers by May,” and enter them into an internal system that can be viewed by other employees — it looks almost identical to the dashboard function used by Fitbit fitness trackers. Co-workers can give each other encouragement (“cheers”) or shaming (“nudges”). A worker’s profile shows a digital tree that grows with accomplishments and shrivels with poor productivity
  • One of the main ways people become more productive on the job is by using their supposed downtime to do even more work. Many drivers did things like loading, unloading and inspecting their trucks during federally required breaks, Ms. Levy said
  • “If you distract workers with the idea that they are playing the game, they don’t challenge the rules of the game,
  • Companies like BetterWorks — Workday, Workboard or SuccessFactors also make goal-setting software — are importing similar concepts to office jobs where performance has historically been more subjective.
  • Culture Amp’s product is essentially a set of continuous, anonymous surveys that lets companies know how their workers are feeling and rates them against other companies in the same industry.
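The O.K.R. pattern described above — a measurable objective broken into key results like "Sign 10 new customers by May," with progress visible to anyone in the company — also reduces to a small data model. This is a minimal sketch; the names and the averaging rule are illustrative assumptions, not any vendor's actual scoring scheme:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A measurable sub-goal, e.g. 'Sign 10 new customers by May'."""
    description: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        # Capped at 1.0 so overshooting one result can't hide a missed one.
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    owner: str
    title: str
    key_results: list[KeyResult] = field(default_factory=list)

    def progress(self) -> float:
        # Overall score: the average of key-result progress.
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

okr = Objective("pat", "Grow the customer base", [
    KeyResult("Sign 10 new customers by May", target=10, current=7),
    KeyResult("Hold 40 discovery calls", target=40, current=40),
])
print(round(okr.progress(), 2))  # 0.85
```

Making each `Objective` readable by every employee is then a display decision, not a data one, which is why dashboards like the one compared to Fitbit's can sit directly on top of a structure this simple.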