Future of the Web: Group items tagged Internet-of-things

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges. In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform. The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
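    To make the extensibility point concrete, here is a minimal sketch of class-based semantic tagging in the microformat style; the class names "epigraph" and "summary" are illustrative choices for this sketch, not part of any XHTML or microformats standard:

        <div class="chapter">
          <p class="epigraph">An opening quotation, carried by an ordinary paragraph.</p>
          <p>The chapter text proper, tagged as plain XHTML.</p>
          <p class="summary">A closing summary, distinguished only by its class value.</p>
        </div>

    Any XHTML-aware tool still processes this as ordinary markup, while a publisher's own stylesheets and transforms can key on the class values to recover the semantics.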
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
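    As a rough sketch of that anatomy, unzipping a typical ePub yields something like the listing below; the top-level layout follows the standard OCF container convention, while the file names under OEBPS/ are common practice rather than mandated:

        mimetype                  the literal string "application/epub+zip"
        META-INF/container.xml    points to the package file below
        OEBPS/content.opf         metadata, manifest, and spine (reading order)
        OEBPS/toc.ncx             table of contents for reader navigation
        OEBPS/chapter01.xhtml     the book content itself, as XHTML
        OEBPS/styles.css          presentation rules for the XHTML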
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
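    For orientation, an unpacked IDML archive looks roughly like this; this is a sketch of the typical layout, not an exhaustive or authoritative listing:

        mimetype
        designmap.xml             the top-level map tying the parts together
        MasterSpreads/            master page definitions
        Spreads/                  layout spreads (page geometry)
        Stories/                  the text content, one XML file per story
        Resources/Styles.xml      paragraph and character style definitions
        META-INF/container.xml    standard zip-container bookkeeping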
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
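    A sketch of the kind of mechanical rewrite involved; the class name "list-item" here stands in for whatever the Digital Editions export actually emits:

        <!-- Before: presentation-oriented export -->
        <p class="list-item">First point</p>
        <p class="list-item">Second point</p>

        <!-- After: structural XHTML -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>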
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (XSL Transformations) to do the work. XSLT is part of the XML family of W3C specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also include it as a standard tool), though any XSLT processor would work.
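    As a minimal sketch of what such a transformation can look like, the stylesheet below maps XHTML paragraphs into ICML-style paragraph ranges. It assumes the content uses the standard http://www.w3.org/1999/xhtml namespace; the ICML element and style names are shown to convey the general shape of InCopy markup, not as a verified rendition of the schema; and inline markup inside paragraphs is flattened for brevity:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>

          <!-- Wrap the whole document in a single story. -->
          <xsl:template match="/">
            <Story>
              <xsl:apply-templates select="//xhtml:p"/>
            </Story>
          </xsl:template>

          <!-- Each XHTML paragraph becomes a styled paragraph range. -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/[No character style]">
                <Content><xsl:value-of select="normalize-space(.)"/></Content>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>

    Run with any XSLT 1.0 processor, e.g. xsltproc transform.xsl chapter.xhtml > chapter.icml, the applied style names can then be matched against the definitions in the InDesign template.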
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files. Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is the XML formulation of HTML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is likewise an SGML application.) The multiplatform StarOffice productivity suite became "OpenOffice" after Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to

Internet users raise funds to buy lawmakers' browsing histories in protest | TheHill

  • “Great news! The House just voted to pass SJR34. We will finally be able to buy the browser history of all the Congresspeople who voted to sell our data and privacy without our consent!” he wrote on the fundraising page. Another activist from Tennessee has raised more than $152,000 from more than 9,800 people. A bill on its way to President Trump’s desk would allow internet service providers (ISPs) to sell users’ data and Web browsing history. It has not taken effect, which means there is no browsing-history data yet to purchase. A Washington Post reporter also wrote it would be possible to buy the data “in theory, but probably not in reality.” A former enforcement bureau chief at the Federal Communications Commission told the newspaper that most internet service providers would keep this information private under their privacy policies. If they did sell any individual's personal data in violation of those policies, a state attorney general could take the ISPs to court.

The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters | Motherboard

  • Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them. Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
  • So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumptively—China in 2015. On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location. With the advent of the Internet of Things and cyber-physical systems in general, we've given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete. Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
  •  
    Bruce Schneier on the insecurity of the Internet of Things, and possible consequences.

The coming merge of human and machine intelligence

  • Now, as the Internet revolution unfolds, we are seeing not merely an extension of mind but a unity of mind and machine, two networks coming together as one. Our smaller brains are in a quest to bypass nature's intent and grow larger by proxy. It is not a stretch of the imagination to believe we will one day have all of the world's information embedded in our minds via the Internet.
  • BCI stands for brain-computer interface, and Jan is one of only a few people on earth using this technology, through two implanted chips attached directly to the neurons in her brain. The first human brain implant was conceived of by John Donoghue, a neuroscientist at Brown University, and implanted in a paralyzed man in 2004. These dime-sized computer chips use a technology called BrainGate that directly connects the mind to computers and the Internet. Having served as chairman of the BrainGate company, I have personally witnessed just how profound this innovation is. BrainGate is an invention that allows people to control electrical devices with nothing but their thoughts. The BrainGate chip is implanted in the brain and attached to connectors outside of the skull, which are hooked up to computers that, in Jan Scheuermann's case, are linked to a robotic arm. As a result, Scheuermann can feed herself chocolate by controlling the robotic arm with nothing but her thoughts.
  • Mind meld But imagine the ways in which the world will change when any of us, disabled or not, can connect our minds to computers.
  • Back in 2004, Google's founders told Playboy magazine that one day we'd have direct access to the Internet through brain implants, with "the entirety of the world's information as just one of our thoughts." A decade later, the road map is taking shape. While it may be years before implants like BrainGate are safe enough to be commonplace—they require brain surgery, after all—there are a host of brainwave sensors in development for use outside of the skull that will be transformational for all of us: caps for measuring driver alertness, headbands for monitoring sleep, helmets for controlling video games. This could lead to wearable EEGs, implantable nanochips or even technology that can listen to our brain signals using the electromagnetic waves that pervade the air we breathe. Just as human intelligence is expanding in the direction of the Internet, the Internet itself promises to get smarter and smarter. In fact, it could prove to be the basis of the machine intelligence that scientists have been racing toward since the 1950s.
  • Neurons may be good analogs for transistors and maybe even computer chips, but they're not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn't allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network. It is for this reason that the Internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like neurons as we once envisioned them. But the Internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes. Even though the Internet is at an early stage in its evolution, it can leverage the brain that nature has given us. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines. It took millions of years for humans to gain intelligence, but with the human mind as a guide, it may only take a century to create Internet intelligence.
  •  
    Of course once the human brain is interfaced with the internet, then we will be able to do the Vulcan mind-meld thing. And NSA will be busily crawling the Internet for fresh brain dumps to their data center, which then encompasses the entire former state of Utah. Conventional warfare is a thing of the past as the cyberwar commands of great powers battle for control of the billions of minds making up BrainNet, the internet's successor. Meanwhile, a hackers' Reaper malware trawls BrainNet for bank account numbers and passwords that it forwards for automated harvesting of personal funds. "Ah, Houston ... we have a problem ..."

He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen.

  • The message arrived at night and consisted of three words: “Good evening sir!” The sender was a hacker who had written a series of provocative memos at the National Security Agency. His secret memos had explained — with an earthy use of slang and emojis that was unusual for an operative of the largest eavesdropping organization in the world — how the NSA breaks into the digital accounts of people who manage computer networks, and how it tries to unmask people who use Tor to browse the web anonymously. Outlining some of the NSA’s most sensitive activities, the memos were leaked by Edward Snowden, and I had written about a few of them for The Intercept. There is no Miss Manners for exchanging pleasantries with a man the government has trained to be the digital equivalent of a Navy SEAL. Though I had initiated the contact, I was wary of how he might respond. The hacker had publicly expressed a visceral dislike for Snowden and had accused The Intercept of jeopardizing lives by publishing classified information. One of his memos outlined the ways the NSA reroutes (or “shapes”) the internet traffic of entire countries, and another memo was titled “I Hunt Sysadmins.” I felt sure he could hack anyone’s computer, including mine.
  • I got lucky with the hacker, because he recently left the agency for the cybersecurity industry; it would be his choice to talk, not the NSA’s. Fortunately, speaking out is his second nature.
  • He agreed to a video chat that turned into a three-hour discussion sprawling from the ethics of surveillance to the downsides of home improvements and the difficulty of securing your laptop.
  • In recent years, two developments have helped make hacking for the government a lot more attractive than hacking for yourself. First, the Department of Justice has cracked down on freelance hacking, whether it be altruistic or malignant. If the DOJ doesn’t like the way you hack, you are going to jail. Meanwhile, hackers have been warmly invited to deploy their transgressive impulses in service to the homeland, because the NSA and other federal agencies have turned themselves into licensed hives of breaking into other people’s computers. For many, it’s a techno sandbox of irresistible delights, according to Gabriella Coleman, a professor at McGill University who studies hackers. “The NSA is a very exciting place for hackers because you have unlimited resources, you have some of the best talent in the world, whether it’s cryptographers or mathematicians or hackers,” she said. “It is just too intellectually exciting not to go there.”
  • The Lamb’s memos on cool ways to hunt sysadmins triggered a strong reaction when I wrote about them in 2014 with my colleague Ryan Gallagher. The memos explained how the NSA tracks down the email and Facebook accounts of systems administrators who oversee computer networks. After plundering their accounts, the NSA can impersonate the admins to get into their computer networks and pilfer the data flowing through them. As the Lamb wrote, “sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network … who better to target than the person that already has the ‘keys to the kingdom’?” Another of his NSA memos, “Network Shaping 101,” used Yemen as a theoretical case study for secretly redirecting the entirety of a country’s internet traffic to NSA servers.
  • “If I turn the tables on you,” I asked the Lamb, “and say, OK, you’re a target for all kinds of people for all kinds of reasons. How do you feel about being a target and that kind of justification being used to justify getting all of your credentials and the keys to your kingdom?” The Lamb smiled. “There is no real safe, sacred ground on the internet,” he replied. “Whatever you do on the internet is an attack surface of some sort and is just something that you live with. Any time that I do something on the internet, yeah, that is on the back of my mind. Anyone from a script kiddie to some random hacker to some other foreign intelligence service, each with their different capabilities — what could they be doing to me?”
  • “You know, the situation is what it is,” he said. “There are protocols that were designed years ago before anybody had any care about security, because when they were developed, nobody was foreseeing that they would be taken advantage of. … A lot of people on the internet seem to approach the problem [with the attitude of] ‘I’m just going to walk naked outside of my house and hope that nobody looks at me.’ From a security perspective, is that a good way to go about thinking? No, horrible … There are good ways to be more secure on the internet. But do most people use Tor? No. Do most people use Signal? No. Do most people use insecure things that most people can hack? Yes. Is that a bash against the intelligence community that people use stuff that’s easily exploitable? That’s a hard argument for me to make.”
  • I mentioned that lots of people, including Snowden, are now working on the problem of how to make the internet more secure, yet he seemed to do the opposite at the NSA by trying to find ways to track and identify people who use Tor and other anonymizers. Would he consider working on the other side of things? He wouldn’t rule it out, he said, but dismally suggested the game was over as far as having a liberating and safe internet, because our laptops and smartphones will betray us no matter what we do with them. “There’s the old adage that the only secure computer is one that is turned off, buried in a box ten feet underground, and never turned on,” he said. “From a user perspective, someone trying to find holes by day and then just live on the internet by night, there’s the expectation [that] if somebody wants to have access to your computer bad enough, they’re going to get it. Whether that’s an intelligence agency or a cybercrimes syndicate, whoever that is, it’s probably going to happen.”
  • There are precautions one can take, and I did that with the Lamb. When we had our video chat, I used a computer that had been wiped clean of everything except its operating system and essential applications. Afterward, it was wiped clean again. My concern was that the Lamb might use the session to obtain data from or about the computer I was using; there are a lot of things he might have tried, if he was in a scheming mood. At the end of our three hours together, I mentioned to him that I had taken these precautions—and he approved. “That’s fair,” he said. “I’m glad you have that appreciation. … From a perspective of a journalist who has access to classified information, it would be remiss to think you’re not a target of foreign intelligence services.” He was telling me the U.S. government should be the least of my worries. He was trying to help me. Documents published with this article: Tracking Targets Through Proxies & Anonymizers Network Shaping 101 Shaping Diagram I Hunt Sys Admins (first published in 2014)

Microsoft's Next Big Thing; Rich MS Client / MS Cloud of Servers

  •  
    CIO Magazine has an extensive interview with Craig Mundie, the man responsible for nailing down the next generation of monopolist profits: "You talk about technology waves. What will be the next big wave? What happens in waves is the shift from one generation of computing platform to the next. That platform gets established by a small number of killer apps. We've been through a number of these major platform shifts, from the mainframe to the minicomputer to the personal computer to adding the Internet as an adjunct platform. We're now trending to the next big platform, which I call "the client plus the cloud."

    That's one thing, not two things. Today, we've got a broadening out of what people call the client. My 16 years here was in large measure about that. And then we introduced the network. The Internet was a place where you had Web content and Web publishing, but other than being delivered on some of those clients, the two things were somewhat divorced.

    The next thing that will emerge is an architecture that allows the application developer to think of the cloud plus the client architecturally as a single thing. In a sense, it is like client/server computing in the enterprise. It was the homogeneity that existed between some of the facilities at the server and the client end that allowed people to build those applications. We've never had that kind of architectural homogeneity in this cloud-plus-client or Internet-plus-smart-devices world, and I'm predicting that will be the next big thing.

We Need to Save the Internet from the Internet of Things | Motherboard

  • Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack. In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other internet-connected systems to crash by overloading them with traffic. The "distributed" part means that other insecure computers on the internet—sometimes in the millions—are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire. Basically, it's a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender's capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win. What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the internet as part of the Internet of Things. Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can't get fixed on its own.
  •  
    Bruce Schneier pointing to a massive security hole in the Internet of Things ("IoT").

'Nice Internet You've Got There... You Wouldn't Want Something To Happen To It...' | Te...

  • Last month, we wrote about Bruce Schneier's warning that certain unknown parties were carefully testing ways to take down the internet. They were doing carefully configured DDoS attacks, testing core internet infrastructure, focusing on key DNS servers. And, of course, we've also been talking about the rise of truly massive DDoS attacks, thanks to poorly secured Internet of Things (IoT) devices, and ancient, unpatched bugs. That all came to a head this morning when large chunks of the internet went down for about two hours, thanks to a massive DDoS attack targeting managed DNS provider Dyn. Most of the down sites are back (I'm still having trouble reaching Twitter), but it was pretty widespread, and lots of big name sites all went down. Just check out this screenshot from Downdetector showing the outages on a bunch of sites:
  • You'll see not all of them have downtime (and the big ISPs, as always, show lots of complaints about downtimes), but a ton of those sites show a giant spike in downtime for a few hours. So, once again, we'd like to point out that this is a problem that the internet community needs to start solving now. There's been a theoretical threat for a while, but it's no longer so theoretical. Yes, some people point out that this is a difficult thing to deal with. If you're pointing people to websites, even if we were to move to a more distributed system, there are almost always some kinds of chokepoints, and those with malicious intent will always, eventually, target those chokepoints. But there has to be a better way -- because if there isn't, this kind of thing is going to become a lot worse.

Canada Casts Global Surveillance Dragnet Over File Downloads - The Intercept

  • Canada’s leading surveillance agency is monitoring millions of Internet users’ file downloads in a dragnet search to identify extremists, according to top-secret documents. The covert operation, revealed Wednesday by CBC News in collaboration with The Intercept, taps into Internet cables and analyzes records of up to 15 million downloads daily from popular websites commonly used to share videos, photographs, music, and other files. The revelations about the spying initiative, codenamed LEVITATION, are the first from the trove of files provided by National Security Agency whistleblower Edward Snowden to show that the Canadian government has launched its own globe-spanning Internet mass surveillance system. According to the documents, the LEVITATION program can monitor downloads in several countries across Europe, the Middle East, North Africa, and North America. It is led by the Communications Security Establishment, or CSE, Canada’s equivalent of the NSA. (The Canadian agency was formerly known as “CSEC” until a recent name change.)
  • The latest disclosure sheds light on Canada’s broad existing surveillance capabilities at a time when the country’s government is pushing for a further expansion of security powers following attacks in Ottawa and Quebec last year. Ron Deibert, director of University of Toronto-based Internet security think tank Citizen Lab, said LEVITATION illustrates the “giant X-ray machine over all our digital lives.” “Every single thing that you do – in this case uploading/downloading files to these sites – that act is being archived, collected and analyzed,” Deibert said, after reviewing documents about the online spying operation for CBC News. David Christopher, a spokesman for Vancouver-based open Internet advocacy group OpenMedia.ca, said the surveillance showed “robust action” was needed to rein in the Canadian agency’s operations.
  • In a top-secret PowerPoint presentation, dated from mid-2012, an analyst from the agency jokes about how, while hunting for extremists, the LEVITATION system gets clogged with information on innocuous downloads of the musical TV series Glee. CSE finds some 350 “interesting” downloads each month, the presentation notes, a number that amounts to less than 0.0001 per cent of the total collected data. The agency stores details about downloads and uploads to and from 102 different popular file-sharing websites, according to the 2012 document, which describes the collected records as “free file upload,” or FFU, “events.” Only three of the websites are named: RapidShare, SendSpace, and the now defunct MegaUpload.
  • “The specific uses that they talk about in this [counter-terrorism] context may not be the problem, but it’s what else they can do,” said Tamir Israel, a lawyer with the University of Ottawa’s Canadian Internet Policy and Public Interest Clinic. Picking which downloads to monitor is essentially “completely at the discretion of CSE,” Israel added. The file-sharing surveillance also raises questions about the number of Canadians whose downloading habits could have been swept up as part of LEVITATION’s dragnet. By law, CSE isn’t allowed to target Canadians. In the LEVITATION presentation, however, two Canadian IP addresses that trace back to a web server in Montreal appear on a list of suspicious downloads found across the world. The same list includes downloads that CSE monitored in closely allied countries, including the United Kingdom, United States, Spain, Brazil, Germany and Portugal. It is unclear from the document whether LEVITATION has ever prevented any terrorist attacks. The agency cites only two successes of the program in the 2012 presentation: the discovery of a hostage video through a previously unknown target, and an uploaded document that contained the hostage strategy of a terrorist organization. The hostage in the discovered video was ultimately killed, according to public reports.
  • LEVITATION does not rely on cooperation from any of the file-sharing companies. A separate secret CSE operation codenamed ATOMIC BANJO obtains the data directly from internet cables that it has tapped into, and the agency then sifts out the unique IP address of each computer that downloaded files from the targeted websites. The IP addresses are valuable pieces of information to CSE’s analysts, helping to identify people whose downloads have been flagged as suspicious. The analysts use the IP addresses as a kind of search term, entering them into other surveillance databases that they have access to, such as the vast repositories of intercepted Internet data shared with the Canadian agency by the NSA and its British counterpart Government Communications Headquarters. If successful, the searches will return a list of results showing other websites visited by the people downloading the files – in some cases revealing associations with Facebook or Google accounts. In turn, these accounts may reveal the names and the locations of individual downloaders, opening the door for further surveillance of their activities.

Spies and internet giants are in the same business: surveillance. But we can stop them ...

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.

Apple and Facebook Flash Forward to Computer Memory of the Future | Enterprise | WIRED - 1 views

  •  
    Great story that is at the center of a new cloud computing platform. I met David Flynn back when he was first demonstrating the Realmsys flash card. Extraordinary stuff. He was using the technology to open a secure Linux computing window on an operating Windows XP system. The card opened up a secure data socket, connecting to any Internet Server or Data Server, and running applications on that data - while running Windows and Windows apps in the background. Incredible mesh of Linux, streaming data, and legacy Windows apps. Every time I find these tech pieces explaining Fusion-io though, I can't help but think that David Flynn is one of the most decent, kind and truly deserving of success people that I have ever met. excerpt: "Apple is spending mountains of money on a new breed of hardware device from a company called Fusion-io. As a public company, Fusion-io is required to disclose information about customers that account for an unusually large portion of its revenue, and with its latest annual report, the Salt Lake City outfit reveals that in 2012, at least 25 percent of its revenue - $89.8 million - came from Apple. That's just one figure, from just one company. But it serves as a signpost, showing you where the modern data center is headed. 'There's now a blurring between the storage world and the memory world. People have been enlightened by Fusion-io.' - Gary Gentry Inside a data center like the one Apple operates in Maiden, North Carolina, you'll find thousands of computer servers. Fusion-io makes a slim card that slots inside these machines, and it's packed with hundreds of gigabytes of flash memory, the same stuff that holds all the software and the data on your smartphone. You can think of this card as a much-needed replacement for the good old-fashioned hard disk that typically sits inside a server. Much like a hard disk, it stores information. But it doesn't have any moving parts, which means it's generally more reliable. It c

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study, where the company spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
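A minimal sketch may help make that escrow risk concrete. This is an illustration only, not any vendor's design; the master secret, the device IDs, and the HMAC-based derivation are all assumptions invented for the example:

    import hashlib
    import hmac
    import secrets

    # Hypothetical vendor vault: one master secret from which every
    # device's unlock key is derived.
    MASTER_SECRET = secrets.token_bytes(32)

    def device_unlock_key(device_id: str) -> bytes:
        # A distinct key per phone, but every one of them is recoverable
        # from the master secret alone.
        return hmac.new(MASTER_SECRET, device_id.encode(), hashlib.sha256).digest()

    # The "separate access key for each phone" kept on file for law enforcement.
    escrow_db = {dev: device_unlock_key(dev) for dev in ("phone-001", "phone-002")}

    # Whoever exfiltrates escrow_db (or MASTER_SECRET) can unlock any
    # handset they seize -- the breach scenario described above.
    stolen_key = escrow_db["phone-002"]
    assert stolen_key == device_unlock_key("phone-002")

Per-device keys narrow the blast radius of any single leaked key, but the database (or the master secret behind it) remains one target whose compromise defeats the encryption on every phone at once.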
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations that we once had through disposable letters and in-person conversations now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology, concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.

    Unaddressed Concerns: Preventing Encrypted Devices from Entering the US and the Slippery Slope

    In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately obvious the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like mere science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.

In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data C... - 0 views

  • The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans. What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
  • The lack of information makes rigorous oversight of the programs all but impossible. As Senator Franken put it in the hearing today, “When the public lacks even a rough sense of the scope of the government’s surveillance program, they have no way of knowing if the government is striking the right balance, whether we are safeguarding our national security without trampling on our citizens’ fundamental privacy rights. But the public can’t know if we succeed in striking that balance if they don’t even have the most basic information about our major surveillance programs."  Senator Patrick Leahy also questioned the panel about the “minimization procedures” associated with this type of surveillance, the privacy safeguard that is intended to ensure that irrelevant data and data on American citizens is swiftly deleted. Senator Leahy asked the panel: “Do you believe the current minimization procedures ensure that data about innocent Americans is deleted? Is that enough?”  David Medine, who recently announced his pending retirement from the Privacy and Civil Liberties Oversight Board, answered unequivocally:
  • Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most if not all of the government’s surveillance programs. None of the other panel participants—which included David Medine and Rachel Brand of the Privacy and Civil Liberties Oversight Board as well as Matthew Olsen of IronNet Cybersecurity and attorney Kenneth Wainstein—offered an estimate. Today’s hearing reaffirmed that it is not only the American people who are left in the dark about how many people or accounts are impacted by the NSA’s dragnet surveillance of the Internet. Even vital oversight committees in Congress like the Senate Judiciary Committee are left to speculate about just how far-reaching this surveillance is. It's part of the reason why we urged the House Judiciary Committee to demand that the Intelligence Community provide the public with a number. 
  • Senator Leahy, they don’t. The minimization procedures call for the deletion of innocent Americans’ information upon discovery to determine whether it has any foreign intelligence value. But what the board’s report found is that in fact information is never deleted. It sits in the databases for 5 years, or sometimes longer. And so the minimization doesn’t really address the privacy concerns of incidentally collected communications—again, where there’s been no warrant at all in the process… In the United States, we simply can’t read people’s emails and listen to their phone calls without court approval, and the same should be true when the government shifts its attention to Americans under this program. One of the most startling exchanges from the hearing today came toward the end of the session, when Senator Dianne Feinstein—who also sits on the Intelligence Committee—seemed taken aback by Ms. Goitein’s mention of “backdoor searches.” 
  • Feinstein: Wow, wow. What do you call it? What’s a backdoor search?
    Goitein: Backdoor search is when the FBI or any other agency targets a U.S. person for a search of data that was collected under Section 702, which is supposed to be targeted against foreigners overseas.
    Feinstein: Regardless of the minimization that was properly carried out.
    Goitein: Well the data is searched in its unminimized form. So the FBI gets raw data, the NSA, the CIA get raw data. And they search that raw data using U.S. person identifiers. That’s what I’m referring to as backdoor searches.
    It’s deeply concerning that any member of Congress, much less a member of the Senate Judiciary Committee and the Senate Intelligence Committee, might not be aware of the problem surrounding backdoor searches. In April 2014, the Director of National Intelligence acknowledged the searches of this data, which Senators Ron Wyden and Mark Udall termed “the ‘back-door search’ loophole in section 702.” The public was so incensed that the House of Representatives passed an amendment to that year's defense appropriations bill effectively banning the warrantless backdoor searches. Nonetheless, in the hearing today it seemed like Senator Feinstein might not recognize or appreciate the serious implications of allowing U.S. law enforcement agencies to query the raw data collected through these Internet surveillance programs. Hopefully today’s testimony helped convince the Senator that there is more to this topic than what she’s hearing in jargon-filled classified security briefings.
  •  
    The 4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and *particularly describing the place to be searched, and the* persons or *things to be seized."* So much for the particularized description of the place to be searched and the things to be seized. Fah! Who needs a Constitution, anyway ....

» 21 Facts About NSA Snooping That Every American Should Know Alex Jones' Inf... - 0 views

  •  
    NSA-PRISM-Echelon in a nutshell. The list below is a short sample. Each fact is documented, and well worth the time reading. "The following are 21 facts about NSA snooping that every American should know…"
    #1 According to CNET, the NSA told Congress during a recent classified briefing that it does not need court authorization to listen to domestic phone calls…
    #2 According to U.S. Representative Loretta Sanchez, members of Congress learned "significantly more than what is out in the media today" about NSA snooping during that classified briefing.
    #3 The content of all of our phone calls is being recorded and stored. The following is from a transcript of an exchange between Erin Burnett of CNN and former FBI counterterrorism agent Tim Clemente which took place just last month…
    #4 The chief technology officer at the CIA, Gus Hunt, made the following statement back in March… "We fundamentally try to collect everything and hang onto it forever."
    #5 During a Senate Judiciary Oversight Committee hearing in March 2011, FBI Director Robert Mueller admitted that the intelligence community has the ability to access emails "as they come in"…
    #6 Back in 2007, Director of National Intelligence Michael McConnell told Congress that the president has the "constitutional authority" to authorize domestic spying without warrants no matter what the law says.
    #7 The Director of National Intelligence James Clapper recently told Congress that the NSA was not collecting any information about American citizens. When the media confronted him about his lie, he explained that he "responded in what I thought was the most truthful, or least untruthful manner".
    #8 The Washington Post is reporting that the NSA has four primary data collection systems… MAINWAY, MARINA, METADATA, PRISM
    #9 The NSA knows pretty much everything that you are doing on the Internet. The following is a short excerpt from a recent Yahoo article…
    #10 The NSA is suppose

What's Scarier: Terrorism, or Governments Blocking Websites in its Name? - The Intercept - 0 views

  • Forcibly taking down websites deemed to be supportive of terrorism, or criminalizing speech deemed to “advocate” terrorism, is a major trend in both Europe and the West generally. Last month in Brussels, the European Union’s counter-terrorism coordinator issued a memo proclaiming that “Europe is facing an unprecedented, diverse and serious terrorist threat,” and argued that increased state control over the Internet is crucial to combating it. The memo noted that “the EU and its Member States have developed several initiatives related to countering radicalisation and terrorism on the Internet,” yet argued that more must be done. It argued that the focus should be on “working with the main players in the Internet industry [a]s the best way to limit the circulation of terrorist material online.” It specifically hailed the tactics of the U.K. Counter-Terrorism Internet Referral Unit (CTIRU), which has succeeded in causing the removal of large amounts of material it deems “extremist”:
  • In addition to recommending the dissemination of “counter-narratives” by governments, the memo also urged EU member states to “examine the legal and technical possibilities to remove illegal content.” Exploiting terrorism fears to control speech has been a common practice in the West since 9/11, but it is becoming increasingly popular even in countries that have experienced exceedingly few attacks. A new extremist bill advocated by the right-wing Harper government in Canada (also supported by Liberal Party leader Justin Trudeau even as he recognizes its dangers) would create new crimes for “advocating terrorism”; specifically: “every person who, by communicating statements, knowingly advocates or promotes the commission of terrorism offences in general” would be guilty and can be sent to prison for five years for each offense. In justifying the new proposal, the Canadian government admits that “under the current criminal law, it is [already] a crime to counsel or actively encourage others to commit a specific terrorism offence.” This new proposal is about criminalizing ideas and opinions. In the government’s words, it “prohibits the intentional advocacy or promotion of terrorism, knowing or reckless as to whether it would result in terrorism.”
  • If someone argues that continuous Western violence and interference in the Muslim world for decades justifies violence being returned to the West, or even advocates that governments arm various insurgents considered by some to be “terrorists,” such speech could easily be viewed as constituting a crime. To calm concerns, Canadian authorities point out that “the proposed new offence is similar to one recently enacted by Australia, that prohibits advocating a terrorist act or the commission of a terrorism offence, all while being reckless as to whether another person will engage in this kind of activity.” Indeed, Australia enacted a new law late last year that indisputably targets political speech and ideas, as well as criminalizing journalism considered threatening by the government. Punishing people for their speech deemed extremist or dangerous has been a vibrant practice in both the U.K. and U.S. for some time now, as I detailed (coincidentally) just a couple days before free speech marches broke out in the West after the Charlie Hebdo attacks. Those criminalization-of-speech attacks overwhelmingly target Muslims, and have resulted in the punishment of such classic free speech activities as posting anti-war commentary on Facebook, tweeting links to “extremist” videos, translating and posting “radicalizing” videos to the Internet, writing scholarly articles in defense of Palestinian groups and expressing harsh criticism of Israel, and even including a Hezbollah channel in a cable package.
  • ...2 more annotations...
  • Beyond the technical issues, trying to legislate ideas out of existence is a fool’s game: those sufficiently determined will always find ways to make themselves heard. Indeed, as U.S. pop star Barbra Streisand famously learned, attempts to suppress ideas usually result in the greatest publicity possible for their advocates and/or elevate them by turning fringe ideas into martyrs for free speech (I have zero doubt that all five of the targeted sites enjoyed among their highest traffic dates ever today as a result of the French targeting). But the comical futility of these efforts is exceeded by their profound dangers. Who wants governments to be able to unilaterally block websites? Isn’t the exercise of this website-blocking power what has long been cited as reasons we should regard the Bad Countries — such as China and Iran — as tyrannies (which also usually cite “counterterrorism” to justify their censorship efforts)?
  • As those and countless other examples prove, the concepts of “extremism” and “radicalizing” (like “terrorism” itself) are incredibly vague and elastic, and in the hands of those who wield power, almost always expand far beyond what you think it should mean (plotting to blow up innocent people) to mean: anyone who disseminates ideas that are threatening to the exercise of our power. That’s why powers justified in the name of combating “radicalism” or “extremism” are invariably — not often or usually, but invariably — applied to activists, dissidents, protesters and those who challenge prevailing orthodoxies and power centers. My arguments for distrusting governments to exercise powers of censorship are set forth here (in the context of a prior attempt by a different French minister to control the content of Twitter). In sum, far more damage has been inflicted historically by efforts to censor and criminalize political ideas than by the kind of “terrorism” these governments are invoking to justify these censorship powers. And whatever else may be true, few things are more inimical to, or threatening of, Internet freedom than allowing functionaries inside governments to unilaterally block websites from functioning on the ground that the ideas those sites advocate are objectionable or “dangerous.” That’s every bit as true when the censors are in Paris, London, Ottawa, and Washington as when they are in Tehran, Moscow or Beijing.

Microsoft breaks IE8 interoperability promise | The Register - 0 views

  • In March, Microsoft announced that their upcoming Internet Explorer 8 would: "use its most standards compliant mode, IE8 Standards, as the default." Note the last word: default. Microsoft argued that, in light of their newly published interoperability principles, it was the right thing to do. This declaration heralded an about-face and was widely praised by the web standards community; people were stunned and delighted by Microsoft's promise. This week, the promise was broken. It lasted less than six months. Now that IE8 beta 2 is released, we know that many, if not most, pages viewed in IE8 will not be shown in standards mode by default.
  • How many pages are affected by this change? Here's the back of my envelope: The PC market can be split into two segments — the enterprise market and the home market. The enterprise market accounts for around 60 per cent of all PCs sold, while the home market accounts for the remaining 40 per cent. Within enterprises, intranets are used for all sorts of things and account for, perhaps, 80 per cent of all page views. Thus, intranets account for about half of all page views on PCs!
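The arithmetic behind that envelope is easy to check. A minimal sketch, assuming the article's own rough figures (a 60/40 enterprise/home split, intranets at 80 per cent of enterprise page views, and page views per PC treated as comparable across segments):

    # Back-of-the-envelope check of the intranet estimate above.
    # Both inputs are the article's rough assumptions, not measured data.
    enterprise_share = 0.60   # share of all PCs in the enterprise segment
    intranet_share = 0.80     # share of enterprise page views hitting intranets

    intranet_share_of_all = enterprise_share * intranet_share
    print(f"Intranet share of all PC page views: {intranet_share_of_all:.0%}")
    # -> Intranet share of all PC page views: 48%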
  •  
    Article by Hakon Lie of Opera Software. Also note that according to the European Commission, "As for the tying of separate software products, in its Microsoft judgment of 17 September 2007, the Court of First Instance confirmed the principles that must be respected by dominant companies. In a complaint by Opera, a competing browser vendor, Microsoft is alleged to have engaged in illegal tying of its Internet Explorer product to its dominant Windows operating system. The complaint alleges that there is ongoing competitive harm from Microsoft's practices, in particular in view of new proprietary technologies that Microsoft has allegedly introduced in its browser that would reduce compatibility with open internet standards, and therefore hinder competition. In addition, allegations of tying of other separate software products by Microsoft, including desktop search and Windows Live have been brought to the Commission's attention. The Commission's investigation will therefore focus on allegations that a range of products have been unlawfully tied to sales of Microsoft's dominant operating system." http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/08/19&format=HTML&aged=0&language=EN&guiLanguage=en

Tech firms and privacy groups press for curbs on NSA surveillance powers - The Washingt... - 0 views

  • The nation’s top technology firms and a coalition of privacy groups are urging Congress to place curbs on government surveillance in the face of a fast-approaching deadline for legislative action. A set of key Patriot Act surveillance authorities expire June 1, but the effective date is May 21 — the last day before Congress breaks for a Memorial Day recess. In a letter to be sent Wednesday to the Obama administration and senior lawmakers, the coalition vowed to oppose any legislation that, among other things, does not ban the “bulk collection” of Americans’ phone records and other data.
  • “We know that there are some in Congress who think that they can get away with reauthorizing the expiring provisions of the Patriot Act without any reforms at all,” said Kevin Bankston, policy director of New America Foundation’s Open Technology Institute, a privacy group that organized the effort. “This letter draws a line in the sand that makes clear that the privacy community and the Internet industry do not intend to let that happen without a fight.” At issue is the bulk collection of Americans’ data by intelligence agencies such as the National Security Agency. The NSA’s daily gathering of millions of records logging phone call times, lengths and other “metadata” stirred controversy when it was revealed in June 2013 by former NSA contractor Edward Snowden. The records are placed in a database that can, with a judge’s permission, be searched for links to foreign terrorists. They do not include the content of conversations.
  • That program, placed under federal surveillance court oversight in 2006, was authorized by the court in secret under Section 215 of the Patriot Act — one of the expiring provisions. The public outcry that ensued after the program was disclosed forced President Obama in January 2014 to call for an end to the NSA’s storage of the data. He also appealed to Congress to find a way to preserve the agency’s access to the data for counterterrorism information.
  • Despite growing opposition in some quarters to ending the NSA’s program, a “clean” authorization — one that would enable its continuation without any changes — is unlikely, lawmakers from both parties say. Sen. Ron Wyden (D-Ore.), a leading opponent of the NSA’s program in its current format, said he would be “surprised if there are 60 votes” in the Senate for that. In the House, where there is bipartisan support for reining in surveillance, it’s a longer shot still. “It’s a toxic vote back in your district to reauthorize the Patriot Act, if you don’t get some reforms” with it, said Rep. Thomas Massie (R-Ky.). The House last fall passed the USA Freedom Act, which would have ended the NSA program, but the Senate failed to advance its own version. The House and Senate judiciary committees are working to come up with new bipartisan legislation to be introduced soon.
  • The tech firms and privacy groups’ demands are a baseline, they say. Besides ending bulk collection, they want companies to have the right to be more transparent in reporting on national security requests and greater declassification of opinions by the Foreign Intelligence Surveillance Court.
  • Some legal experts have pointed to a little-noticed clause in the Patriot Act that would appear to allow bulk collection to continue even if the authority is not renewed. Administration officials have conceded privately that a legal case probably could be made for that, but politically it would be a tough sell. On Tuesday, a White House spokesman indicated the administration would not seek to exploit that clause. “If Section 215 sunsets, we will not continue the bulk telephony metadata program,” National Security Council spokesman Edward Price said in a statement first reported by Reuters. Price added that allowing Section 215 to expire would result in the loss of a “critical national security tool” used in investigations that do not involve the bulk collection of data. “That is why we have underscored the imperative of Congressional action in the coming weeks, and we welcome the opportunity to work with lawmakers on such legislation,” he said.
  •  
    I omitted some stuff about opposition to sunsetting the provisions. They seem to forget, as does Obama, that the proponents of the FISA Court's expansive reading of section 215 have not yet come up with a single instance where 215-derived data caught a single terrorist or prevented a single act of terrorism. Which means that if that data is of some use, it ain't in fighting terrorism, the purpose of the section. Patriot Act § 215 is codified as 50 USCS § 1861, https://www.law.cornell.edu/uscode/text/50/1861 That section authorizes the FBI to obtain an order from the FISA Court "requiring the production of *any tangible things* (including books, records, papers, documents, and other items)." Specific examples (a non-exclusive list) include: "the production of library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The Court can order that the recipient of the order tell no one of its receipt of the order or its response to it. In other words, this is about way more than your telephone metadata. Do you trust the NSA with your medical records?

Can C.E.O. Satya Nadella Save Microsoft? | Vanity Fair - 0 views

  • The new world of computing is a radical break from the past. That’s because of the growth of mobile devices and cloud computing. In the old world, corporations owned and ran Windows P.C.’s and Windows servers in their own facilities, with the necessary software installed on them. Everyone used Windows, so everything was developed for Windows. It was a virtuous circle for Microsoft.
  • Now the processing power is in the cloud, and very sophisticated applications, from e-mail to tools you need to run a business, can be run by logging onto a Web site, not from pre-installed software. In addition, the way we work (and play) has shifted from P.C.’s to mobile devices—where Android and Apple’s iOS each outsell Windows by more than 10 to 1. Why develop software to run on Windows if no one is using Windows? Why use Windows if nothing you want can run on it? The virtuous circle has turned vicious.
  • Part of why Microsoft failed with devices is that competitors upended its business model. Google doesn’t charge for the operating system. That’s because Google makes its money on search. Apple can charge high prices because of the beauty and elegance of its devices, where the software and hardware are integrated in one gorgeous package. Meanwhile, Microsoft continued to force outside manufacturers, whose products simply weren’t as compelling as Apple’s, to pay for a license for Windows. And it didn’t allow Office to be used on non-Windows phones and tablets. “The whole philosophy of the company was Windows first,” says Heather Bellini, an analyst at Goldman Sachs. Of course it was: that’s how Microsoft had always made its money.
  • Right now, Windows itself is fragmented: applications developed for one Windows device, say a P.C., don’t even necessarily work on another Windows device. And if Microsoft develops a new killer application, it almost has to be released for Android and Apple phones, given their market dominance, thereby strengthening those eco-systems, too.
  • At its core, Azure uses Windows server technology. That helps existing Windows applications run seamlessly on Azure. Technologists sometimes call what Microsoft has done a “hybrid cloud” because companies can use Azure alongside their pre-existing on-site Windows servers. At the same time, Nadella also to some extent has embraced open-source software—free code that doesn’t require a license from Microsoft—so that someone could develop something using non-Microsoft technology, and it would run on Azure. That broadens Azure’s appeal.
  • “In some ways the way people think about Bill and Steve is almost a Rorschach test.” For those who romanticize the Gates era, Microsoft’s current predicament will always be Ballmer’s fault. For others, it’s not so clear. “He left Steve holding a big bag of shit,” the former executive says of Gates. In the year Ballmer officially took over, Microsoft was found to be a predatory monopolist by the U.S. government and was ordered to split into two; the cost of that to Gates and his company can never be calculated. In addition, the dotcom bubble had burst, causing Microsoft stock to collapse, which resulted in a simmering tension between longtime employees, whom the company had made rich, and newer ones, who had missed the gravy train.
  • Nadella lived this dilemma because his job at Microsoft included figuring out the cloud-based future while maintaining the highly profitable Windows server business. And so he did a bunch of things that were totally un-Microsoft-like. He went to talk to start-ups to find out why they weren’t using Microsoft. He put massive research-and-development dollars behind Azure, a cloud-based platform that Microsoft had developed in Skunk Works fashion, which by definition took resources away from the highly profitable existing business.
  • They even have a catchphrase: “Re-inventing productivity.”
  • Microsoft’s historical reluctance to open Windows and Office is why it was such a big deal when in late March, less than two months after becoming C.E.O., Nadella announced that Microsoft would offer Office for Apple’s iPad. A team at the company had been working on it for about a year. Ballmer says he would have released it eventually, but Nadella did it immediately. Nadella also announced that Windows would be free for devices smaller than nine inches, meaning phones and small tablets. “Now that we have 30 million users on the iPad using it, that is 30 million people who never used Office before [on an iPad,]” he says. “And to me that’s what really drives us.” These are small moves in some ways, and yet they are also big. “It’s the first time I have listened to a senior Microsoft executive admit that they are behind,” says one institutional investor. “The fact that they are giving away Windows, their bread and butter for 25 years—it is quite a fundamental change.”
  • And whoever does the best job of building the right software experiences to give both organizations and individuals time back so that they can get more out of their time, that’s the core of this company—that’s the soul. That’s what Bill started this company with. That’s the Office franchise. That’s the Windows franchise. We have to re-invent them. . . . That’s where this notion of re-inventing productivity comes from.”
  • Ballmer might be a complicated character, but he has nothing on Gates, whose contradictions have long fascinated Microsoft-watchers. He is someone who has no problem humiliating individuals—he might not even notice—but who genuinely cares deeply about entire populations and is deeply loyal. He is generous in the biggest ways imaginable, and yet in small things, like picking up a lunch tab, he can be shockingly cheap. He can’t make small talk and can come across as totally lacking in E.Q. “The rules of human life that allow you to get along are not complicated,” says one person who knows Gates. “He could write a book on it, but he can’t do it!”
  • At the Microsoft board meeting in late June 2013, Ballmer announced he had a handshake deal with Nokia’s management to buy the company, pending the Microsoft board’s approval, according to a source close to the events. Ballmer thought he had it and left before the post-board-meeting dinner to attend his son’s middle-school graduation. When he came back the next day, he found that the board had pulled a coup: they informed him they weren’t doing the deal, and it wasn’t up for discussion. For Ballmer, it seems, the unforgivable thing was that Gates had been part of the coup, which Ballmer saw as the ultimate betrayal.
  • what is scarce in all of this abundance is human attention
  • And the original idea of having great software people and broad software products and Office being the primary tool that people look to across all these devices, that’s as true today and as strong as ever.”
  • Meeting Room Plus
  • But he combines that with flashes of insight and humor that leave some wondering whether he can’t do it or simply chooses not to, or both. His most pronounced characteristic shouldn’t be simply labeled a competitive streak, because it is really a fierce, deep need to win. The dislike it bred among his peers in the industry is well known—“Silicon Bully” was the title of an infamous magazine story about him. And yet he left Microsoft for the philanthropic world, where there was no one to bully, only intractable problems to solve.
  • “The Irrelevance of Microsoft” is actually the title of a blog post by an analyst named Benedict Evans, who works at the Silicon Valley venture-capital firm Andreessen Horowitz. On his blog, Evans pointed out that Microsoft’s share of all computing devices that we use to connect to the Internet, including P.C.’s, phones, and tablets, has plunged from 90 percent in 2009 to just around 20 percent today. This staggering drop occurred not because Microsoft lost ground in personal computers, on which its software still dominates, but rather because it has failed to adapt its products to smartphones, where all the growth is, and tablets.
  • The board told Ballmer they wanted him to stay, he says, and they did eventually agree to a slightly different version of the deal. In September, Microsoft announced it was buying Nokia’s devices-and-services business for $7.2 billion. Why? The board finally realized the downside: without Nokia, Microsoft was effectively done in the smartphone business. But, for Ballmer, the damage was done, in more ways than one. He now says it became clear to him that despite the lack of a new C.E.O. he couldn’t stay. Cultural change, he decided, required a change at the top, and, he says,“there was too much water under the bridge with this board.” The feeling was mutual. As a source close to Microsoft says, no one, including Gates, tried to stop him from quitting.
  • in Wall Street’s eyes, Nadella can do no wrong. Microsoft’s stock has risen 30 percent since he became C.E.O., increasing its market value by $87 billion. “It’s interesting with Satya,” says one person who observes him with investors. “He is not a business guy or a financial analyst, but he finds a common language with investors, and in his short tenure, they leave going, Wow.” But the honeymoon is the easy part.
  • “He was so publicly and so early in life defined as the brilliant guy,” says a person who has observed him. “Anything that threatens that, he becomes narcissistic and defensive.” Or as another person puts it, “He throws hissy fits when he doesn’t get his way.”
  • Around three-quarters of Microsoft’s profits come from the two fabulously successful products on which the company was built: the Windows operating system, which essentially makes personal computers run, and Office, the suite of applications that includes Word, Excel, and PowerPoint. Financially speaking, Microsoft is still extraordinarily powerful. In the last 12 months the company reported sales of $86.83 billion and earnings of $22.07 billion; it has $85.7 billion of cash on its balance sheet. But the company is facing a confluence of threats that is all the more staggering given Microsoft’s sheer size. Competitors such as Google and Apple have upended Microsoft’s business model, making it unclear where Windows will fit in the world, and even challenging Office. In the Valley, there are two sayings that everyone regards as truth. One is that profits follow relevance. The other is that there’s a difference between strategic position and financial position. “It’s easy to be in denial and think the financials reflect the current reality,” says a close observer of technology firms. “They do not.”
  •  
    Awesome article describing the history of Microsoft as seen through the lives of its three CEOs: Bill Gates, Steve Ballmer, and Satya Nadella.

The punk rock internet - how DIY rebels are working to replace the tech g... - 0 views

  • What they are doing could be seen as the online world’s equivalent of punk rock: a scattered revolt against an industry that many now think has grown greedy, intrusive and arrogant – as well as governments whose surveillance programmes have fuelled the same anxieties. As concerns grow about an online realm dominated by a few huge corporations, everyone involved shares one common goal: a comprehensively decentralised internet.
  • In the last few months, they have started working with people in the Belgian city of Ghent – or, in Flemish, Gent – where the authorities own their own internet domain, complete with .gent web addresses. Using the blueprint of Heartbeat, they want to create a new kind of internet they call the indienet – in which people control their data, are not tracked and each own an equal space online. This would be a radical alternative to what we have now: giant “supernodes” that have made a few men in northern California unimaginable amounts of money thanks to the ocean of lucrative personal information billions of people hand over in exchange for their services.
  • His alternative is what he calls the Safe network: the acronym stands for “Safe Access for Everyone”. In this model, rather than being stored on distant servers, people’s data – files, documents, social-media interactions – will be broken into fragments, encrypted and scattered around other people’s computers and smartphones, meaning that hacking and data theft will become impossible. Thanks to a system of self-authentication in which a Safe user’s encrypted information would only be put back together and unlocked on their own devices, there will be no centrally held passwords. No one will leave data trails, so there will be nothing for big online companies to harvest. The financial lubricant, Irvine says, will be a cryptocurrency called Safecoin: users will pay to store data on the network, and also be rewarded for storing other people’s (encrypted) information on their devices. Software developers, meanwhile, will be rewarded with Safecoin according to the popularity of their apps. There is a community of around 7,000 interested people already working on services that will work on the Safe network, including alternatives to platforms such as Facebook and YouTube.
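The article gives no implementation detail, but the chunk-encrypt-scatter idea can be sketched in a few lines. This is a toy illustration, not MaidSafe's actual self-encryption code; the chunk size, the hash-derived chunk names, the round-robin placement, and the peers dictionary standing in for other people's devices are all invented for the example:

    import hashlib
    from cryptography.fernet import Fernet  # pip install cryptography

    def store_scattered(data: bytes, peers: dict, chunk_size: int = 1024) -> list:
        """Split data into chunks, encrypt each one, and scatter the
        ciphertext across peers. Returns the 'data map' the owner needs
        to reassemble the file: (peer, chunk name, key) triples."""
        data_map = []
        peer_ids = list(peers)
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = Fernet.generate_key()               # per-chunk key, held only by the owner
            name = hashlib.sha256(chunk).hexdigest()  # content-derived chunk name
            peer = peer_ids[(i // chunk_size) % len(peer_ids)]  # naive round-robin placement
            peers[peer][name] = Fernet(key).encrypt(chunk)
            data_map.append((peer, name, key))
        return data_map

    def fetch_and_reassemble(data_map: list, peers: dict) -> bytes:
        return b"".join(Fernet(key).decrypt(peers[peer][name])
                        for peer, name, key in data_map)

    # Three peers' storage; no peer ever holds plaintext or a whole file.
    peers = {"alice": {}, "bob": {}, "carol": {}}
    data_map = store_scattered(b"my private document " * 200, peers)
    assert fetch_and_reassemble(data_map, peers) == b"my private document " * 200

The real network layers self-authentication, redundancy, and Safecoin incentives on top, but the core property shows through even here: whoever holds the data map holds the file, while the peers hold only opaque fragments.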
  • Once MaidSafe is up and running, there will be very little any government or authority can do about it: “We can’t stop the network if we start it. If anyone turned round and said: ‘You need to stop that,’ we couldn’t. We’d have to go round to people’s houses and switch off their computers. That’s part of the whole thing. The network is like a cyber-brain; almost a lifeform in itself. And once you start it, that’s it.” Before my trip to Scotland, I tell him, I spent whole futile days signing up to some of the decentralised social networks that already exist – Steemit, Diaspora, Mastodon – and trying to approximate the kind of experience I can easily get on, say, Twitter or Facebook.
  • And herein lie two potential breakthroughs. One, according to some cryptocurrency enthusiasts, is a means of securing and protecting people’s identities that doesn’t rely on remotely stored passwords. The other is a hope that we can leave behind intermediaries such as Uber and eBay, and allow buyers and sellers to deal directly with each other. Blockstack, a startup based in New York, aims to bring blockchain technology to the masses. Like MaidSafe, its creators aim to build a new internet, and a 13,000-strong crowd of developers are already working on apps that either run on the platform Blockstack has created, or use its features. OpenBazaar is an eBay-esque service, up and running since November last year, which promises “the world’s most private, secure, and liberating online marketplace”. Casa aims to be a decentralised alternative to Airbnb; Guild is a would-be blogging service that bigs up its libertarian ethos and boasts that its founders will have “no power to remove blogs they don’t approve of or agree with”.
  • An initial version of Blockstack is already up and running. Even if data is stored on conventional drives, servers and clouds, thanks to its blockchain-based “private key” system each Blockstack user controls the kind of personal information we currently blithely hand over to Big Tech, and has the unique power to unlock it. “That’s something that’s extremely powerful – and not just because you know your data is more secure because you’re not giving it to a company,” he says. “A hacker would have to hack a million people if they wanted access to their data.”
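A minimal sketch of that user-held-key model (illustrative only; Blockstack's actual system registers identities on a blockchain and has its own storage layer, neither of which is modelled here):

    from cryptography.fernet import Fernet  # pip install cryptography

    # Each user generates and keeps their own key; the provider never sees it.
    user_keys = {name: Fernet.generate_key() for name in ("ann", "bea", "cal")}

    # All the provider holds under this model: one independently keyed
    # blob of ciphertext per user.
    cloud = {name: Fernet(key).encrypt(b"profile, messages, photos...")
             for name, key in user_keys.items()}

    # Breaching `cloud` yields nothing readable. To read a million users'
    # data an attacker must obtain a million separately held keys --
    # they would have to "hack a million people," one at a time.
    assert Fernet(user_keys["bea"]).decrypt(cloud["bea"]).startswith(b"profile")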

American Surveillance Now Threatens American Business - The Atlantic - 0 views

  • What does it look like when a society loses its sense of privacy? In the almost 18 months since the Snowden files first received coverage, writers and critics have had to guess at the answer. Does a certain trend, consumer complaint, or popular product epitomize some larger shift? Is trust in tech companies eroding, or is a subset just especially vocal about it? Polling would make those answers clear, but polling so far has been… confused. A new study, conducted by the Pew Internet Project last January and released last week, helps make the average American’s view of his or her privacy a little clearer. And their confidence in their own privacy is… low. The study’s findings, and the statistics it reports, are staggering. Vast majorities of Americans are uncomfortable with how the government uses their data, how private companies use and distribute their data, and what the government does to regulate those companies. No summary can equal a recounting of the findings. Americans are displeased with government surveillance en masse:
  • A new study finds that a vast majority of Americans trust neither the government nor tech companies with their personal data.
  • According to the study, 70 percent of Americans are “at least somewhat concerned” with the government secretly obtaining information they post to social networking sites. Eighty percent of respondents agreed that “Americans should be concerned” with government surveillance of telephones and the web. They are also uncomfortable with how private corporations use their data: Ninety-one percent of Americans believe that “consumers have lost control over how personal information is collected and used by companies,” according to the study. Eighty percent of Americans who use social networks “say they are concerned about third parties like advertisers or businesses accessing the data they share on these sites.” And even though they’re squeamish about the government’s use of data, they want it to regulate tech companies and data brokers more strictly: 64 percent wanted the government to do more to regulate private data collection. Since June 2013, American politicians and corporate leaders have fretted over how much the leaks would cost U.S. businesses abroad.
  • “It’s clear the global community of Internet users doesn’t like to be caught up in the American surveillance dragnet,” Senator Ron Wyden said last month. At the same event, Google chairman Eric Schmidt agreed with him. “What occurred was a loss of trust between America and other countries,” he said, according to the Los Angeles Times. “It's making it very difficult for American firms to do business.” But never mind the world. Americans don’t trust American social networks. More than half of the poll’s respondents said that social networks were “not at all secure.” Only 40 percent of Americans believe email or texting is at least “somewhat” secure. Indeed, Americans most trusted the communication technologies where some protections have been enshrined in law (though the report didn’t ask about snail mail). That is: talking on the telephone, whether on a landline or cell phone, is the only kind of communication that a majority of adults believe to be “very secure” or “somewhat secure.”
  • (That may seem incongruous, because making a telephone call is one area where you can be almost sure you are being surveilled: the government has requisitioned mass call records from phone companies since 2001. But Americans appear, when discussing security, to differentiate between the contents of a call and the data about it.) Last month, Ramsey Homsany, the general counsel of Dropbox, said that one big thing could take down the California tech scene. “We have built this incredible economic engine in this region of the country,” said Homsany in the Los Angeles Times, “and [mistrust] is the one thing that starts to rot it from the inside out.” According to this poll, that mistrust has already begun to corrode it, and the corrosion is in fact well advanced. We’ve always assumed that the great hurt to American business would come globally, as citizens of other nations stopped using American tech companies’ services. But the new Pew data shows that Americans suspect American businesses just as much. And while, unlike citizens of other nations, they may not have other places to turn, they may stop putting sensitive or delicate information online.