
Dystopias: Group items tagged "hacking"


Ed Webb

Was Stuxnet Built to Attack Iran's Nuclear Program? - PCWorld Business Center - 0 views

  • A highly sophisticated computer worm that has spread through Iran, Indonesia and India was built to destroy operations at one target: possibly Iran's Bushehr nuclear reactor.
  • Langner thinks that it's possible that Bushehr may have been infected through the Russian contractor that is now building the facility, JSC AtomStroyExport. Recently AtomStroyExport had its Web site hacked, and some of its Web pages are still blocked by security vendors because they are known to host malware. This is not an auspicious sign for a company contracted with handling nuclear secrets.
  • By messing with Operational Block 35, Stuxnet could easily cause a refinery's centrifuge to malfunction, but it could be used to hit other targets too, Byres said. "The only thing I can say is that it is something designed to go bang," he said.
  • Many security researchers think that it would take the resources of a nation state to accomplish.
  • Bushehr is a plausible target, but there could easily be other facilities -- refineries, chemical plants or factories that could also make valuable targets, said Scott Borg, CEO of the U.S. Cyber Consequences Unit, a security advisory group. "It's not obvious that it has to be the nuclear program," he said. "Iran has other control systems that could be targeted."
  • Iran has been hit hard by the worm. When it was first discovered, 60 percent of the infected Stuxnet computers were located in Iran, according to Symantec.
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right- and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

The stories of Ray Bradbury. - By Nathaniel Rich - Slate Magazine - 0 views

  • Thanks to Fahrenheit 451, now required reading for every American middle-schooler, Bradbury is generally thought of as a writer of novels, but his talents—particularly his mastery of the diabolical premise and the brain-exploding revelation—are best suited to the short form.
  • The best stories have a strange familiarity about them. They're like long-forgotten acquaintances—you know you've met them somewhere before. There is, for instance, the tale of the time traveler who goes back in time and accidentally steps on a butterfly, thereby changing irrevocably the course of history ("A Sound of Thunder"). There's the one about the man who buys a robotic husband to live with his wife so that he can be free to travel and pursue adventure—that's "Marionettes, Inc." (Not to be confused with "I Sing the Body Electric!" about the man who buys a robotic grandmother to comfort his children after his wife dies.) Or "The Playground," about the father who changes places with his son so that he can spare his boy the cruelty of childhood—forgetting exactly how cruel childhood can be. The stories are familiar because they've been adapted, and plundered from, by countless other writers—in books, television shows, and films. To the extent that there is a mythology of our age, Bradbury is one of its creators.
  • "But Bradbury's skill is in evoking exactly how soul-annihilating that world is."    Of course, this also displays one of the key facts of Bradbury's work -- and a trend in science fiction that is often ignored. He's a reactionary of the first order, deeply distrustful of technology and even the notion of progress. Many science fiction writers had begun to rewrite the rules of women in space by the time Bradbury had women in long skirts hauling pots and pans over the Martian landscape. And even he wouldn't disagree. In his famous Playboy interview he responded to a question about predicting the future with, "It's 'prevent the future', that's the way I put it. Not predict it, prevent it."
  • And for the record, I've never understood why a writer who recognizes technology is labeled a "sci-fi writer", as if being a "sci-fi writer" were equal to being some sort of substandard, second-rate hack. The great Kurt Vonnegut managed to get stuck in that drawer after he recognized technology in his 1st novel "Player Piano". No matter that he turned out to be (imo) one of the greatest authors of the 20th century, period.
  • it's chilling how prescient he was about modern media culture in Fahrenheit 451. It's not a Luddite screed against TV. It's a speculative piece on what happens when we become divorced from the past and more attuned to images on the screen than we are to each other.
  • A favorite author of mine since I was in elementary school way back when mammoths roamed the earth. To me, he was an ardent enthusiast of technology, but also recognized its potential for separating us from one another while at the same time seemingly making us more "connected" in a superficial and transitory way
  • Bradbury is undeniably skeptical of technology and the risks it brings, particularly the risk that what we'd now call "virtualization" will replace actual emotional, intellectual or physical experience. On the other hand, however, I don't think there's anybody who rhapsodizes about the imaginative possibilities of rocketships and robots the way Bradbury does, and he's built entire setpieces around the idea of technological wonders creating new experiences. I'm not saying he doesn't have a Luddite streak, more that he has feet in both camps and is harder to pin down than a single label allows. And I'll also add that in his public pronouncements of late, the Luddite streak has come out more strongly--but I tend to put much of that down to the curmudgeonliness of a ninety-year-old man.
  • I don't think he is a luddite so much as he is the little voice that whispers "be careful what you wish for." We have been sold the beautiful myth that technology will buy us free time, but we are busier than ever. TV was supposed to enlighten the masses, instead we have "reality TV" and a news network that does not let facts get in the way of its ideological agenda. We romanticize childhood, ignoring children's aggressive impulses, then feed them on a steady diet of violent video games.  
Ed Webb

Calif. man used Facebook to hack women's e-mails - Yahoo! News - 1 views

  • Be careful out there, people
Ed Webb

Interoperability And Privacy: Squaring The Circle | Techdirt - 0 views

  • if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook.
  • Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden
  • rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work with some small changes in configurations. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem... without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.
  • the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge
  • Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory that allowed it to shut down Power Ventures, a competitor that allowed Facebook's users to use multiple social networks from a single interface. Facebook argued that by allowing users to log in and display Facebook with a different interface, even after receipt of a cease and desist letter telling Power Ventures to stop, the company had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, upsetting Facebook's investors made their conduct illegal.
  • Today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and even supply you with the tools you need to target those people.
  • Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook vegans get out of the Facebook pool while still leaving a toe in its waters.
  • In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers
  • the audience for a legitimate adversarial interoperability product is the customers of the existing service that it connects to.
  • anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.
  • If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they can avoid much (but not all) of this surveillance.
  • Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet). Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they're not on Facebook, by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service.
  • As Facebook's market power dwindled, so would the pressure that web publishers feel to embed Facebook trackers on their sites, so that non-Facebook users would not be as likely to be tracked as they use the Web.
  • Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.
  • For users who have privacy needs -- and other needs -- beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individuals) who are willing to fill those needs.
  • helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet
Ed Webb

TSA is adding face recognition at big airports. Here's how to opt out. - The Washington... - 0 views

  • Any time data gets collected somewhere, it could also be stolen — and you only get one face. The TSA says all its databases are encrypted to reduce hacking risk. But in 2019, the Department of Homeland Security disclosed that photos of travelers were taken in a data breach, accessed through the network of one of its subcontractors.
  • “What we often see with these biometric programs is they are only optional in the introductory phases — and over time we see them becoming standardized and nationalized and eventually compulsory,” said Cahn. “There is no place more coercive to ask people for their consent than an airport.”
  • Those who have the privilege of not having to worry their face will be misread can zip right through — whereas people who don’t consent to it pay a tax with their time. At that point, how voluntary is it, really?