
Gary Edwards

Why Open Digital Standards Matter in Government | Marco Fioretti - 1 views

  •  
    Excellent work! Marco walks us through the many reasons why digital standards for formats and protocols are so important, with emphasis on formats and protocols being totally independent from software and software platforms. Contents: Introduction: Let's Look at Standards; The Digital Age Explained; Standards and the Problems with Digital Technology; Why Has Digital Gone Bad So Often?; The Huge Positive Potential of Digital Technologies; Free and Open Standards and Software: The Digital Basis of Open Government; Conclusions.
Paul Merrell

Reset The Net - Privacy Pack - 0 views

  • This June 5th, I pledge to take strong steps to protect my freedom from government mass surveillance. I expect the services I use to do the same.
  • Fight for the Future and Center for Rights will contact you about future campaigns. Privacy Policy
  •  
    I wound up joining this campaign at the urging of the ACLU after checking the Privacy Policy. The Reset the Net campaign seems to be endorsed by a lot of change-oriented groups, from the ACLU to Greenpeace to the Pirate Party. A fair number of groups with a Progressive agenda, but certainly not limited to them. The right answer to that situation is to urge other groups to endorse, not to avoid the campaign. Single-issue coalition-building is all about focusing on an area of agreement rather than worrying about who you are rubbing elbows with. I have been looking for a bipartisan group that's tackling government surveillance issues via mass actions but has no corporate sponsors. This might be the one. The reason: corporate types like Google have no incentive to really butt heads with the government voyeurs. They are themselves engaged in massive surveillance of their users and certainly will not carry the battle for digital privacy over to the private sector. But this *is* a battle over digital privacy, and legally defining user privacy rights in the private sector is just as important as cutting back on government surveillance. As we have learned through the Snowden disclosures, what the private internet companies have, the NSA can and does get. The big internet services successfully pushed in the U.S. for authorization to publish more numbers about how many times they pass private data to the government, but went no farther. They wanted to be able to say they did something, but there's a revolving door of staffers between the NSA and the big internet companies, and the internet service companies' data is an open book to the NSA. The big internet services are not champions of their users' privacy. If they were, they would be featuring end-to-end encryption with encryption keys unique to each user and unknown to the companies, like some startups in Europe are doing, e.g., the Wuala.com filesync service in Switzerland (first 5 GB of storage free). Compare tha
Paul Merrell

U.S. knocks plans for European communication network | Reuters - 0 views

  • The United States on Friday criticized proposals to build a European communication network to avoid emails and other data passing through the United States, warning that such rules could breach international trade laws. In its annual review of telecommunications trade barriers, the office of the U.S. Trade Representative said impediments to cross-border data flows were a serious and growing concern. It was closely watching new laws in Turkey that led to the blocking of websites and restrictions on personal data, as well as calls in Europe for a local communications network following revelations last year about U.S. digital eavesdropping and surveillance. "Recent proposals from countries within the European Union to create a Europe-only electronic network (dubbed a 'Schengen cloud' by advocates) or to create national-only electronic networks could potentially lead to effective exclusion or discrimination against foreign service suppliers that are directly offering network services, or dependent on them," the USTR said in the report.
  • Germany and France have been discussing ways to build a European network to keep data secure after the U.S. spying scandal. Even German Chancellor Angela Merkel's cell phone was reportedly monitored by American spies. The USTR said proposals by Germany's state-backed Deutsche Telekom to bypass the United States were "draconian" and likely aimed at giving European companies an advantage over their U.S. counterparts. Deutsche Telekom has suggested laws to stop data traveling within continental Europe being routed via Asia or the United States and scrapping the Safe Harbor agreement that allows U.S. companies with European-level privacy standards access to European data. (www.telekom.com/dataprotection) "Any mandatory intra-EU routing may raise questions with respect to compliance with the EU's trade obligations with respect to Internet-enabled services," the USTR said. "Accordingly, USTR will be carefully monitoring the development of any such proposals."
  • U.S. tech companies, the leaders in an e-commerce marketplace estimated to be worth up to $8 trillion a year, have urged the White House to undertake reforms to calm privacy concerns and fend off digital protectionism.
  •  
    High comedy from the office of the U.S. Trade Representative. The USTR's press release is here along with a link to its report: http://www.ustr.gov/about-us/press-office/press-releases/2014/March/USTR-Targets-Telecommunications-Trade-Barriers The USTR is upset because the E.U. is aiming to build a digital communications network that does not route internal digital traffic outside the E.U., to limit the NSA's ability to surveil Europeans' communications. Part of the plan is to build an E.U.-centric cloud that is not susceptible to U.S. court orders. This plan does not, of course, sit well with U.S.-based cloud service providers. Where the comedy comes in is that the USTR is making threats to go to the World Trade Organization to block the E.U. move under the authority of the General Agreement on Trade in Services (GATS). But that treaty provides, in Article XIV, that: "Subject to the requirement that such measures are not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where like conditions prevail, or a disguised restriction on trade in services, nothing in this Agreement shall be construed to prevent the adoption or enforcement by any Member of measures: ... (c) necessary to secure compliance with laws or regulations which are not inconsistent with the provisions of this Agreement including those relating to: ... (ii) the protection of the privacy of individuals in relation to the processing and dissemination of personal data and the protection of confidentiality of individual records and accounts[.]" http://www.wto.org/english/docs_e/legal_e/26-gats_01_e.htm#articleXIV The E.U., in its Treaty on Human Rights, has very strong privacy protections for digital communications. The USTR undoubtedly knows all this, and that the WTO Appellate Panel's judges are of the European mold, sticklers for protection of human rights and most likely do not appreciate being subjects o
gopikrishna72

Digital Transformation Consulting Firm - Digital Technology Solutions | Lera Tech - 1 views

  •  
    Lera Technologies is a leading digital transformation consulting firm, paving the path to perpetual digital transformation for businesses with next-generation digital solutions such as Big Data, Cloud Strategy, Advanced Analytics, Business Intelligence, IT Advisory Services and Human Capital Engagement. For more information, visit: https://www.lera.us/talk-to-us/
Paul Merrell

2nd Cir. Affirms That Creation of Full-Text Searchable Database of Works Is Fair Use | ... - 0 views

  • The fair use doctrine permits the unauthorized digitization of copyrighted works in order to create a full-text searchable database, the U.S. Court of Appeals for the Second Circuit ruled June 10. Affirming summary judgment in favor of a consortium of university libraries, the court also ruled that the fair use doctrine permits the unauthorized conversion of those works into accessible formats for use by persons with disabilities, such as the blind.
  • The dispute is connected to the long-running conflict between Google Inc. and various authors of books that Google included in a mass digitization program. In 2004, Google began soliciting the participation of publishers in its Google Print for Publishers service, part of what was then called the Google Print project, aimed at making information available for free over the Internet. Subsequently, Google announced a new project, Google Print for Libraries. In 2005, Google Print was renamed Google Book Search and it is now known simply as Google Books. Under this program, Google made arrangements with several of the world's largest libraries to digitize the entire contents of their collections to create an online full-text searchable database. The announcement of this program triggered a copyright infringement action by the Authors Guild that continues to this day.
  • Turning to the fair use question, the court first concluded that the full-text search function of the HathiTrust Digital Library was a “quintessentially transformative use,” and thus constituted fair use. The court said: the result of a word search is different in purpose, character, expression, meaning, and message from the page (and the book) from which it is drawn. Indeed, we can discern little or no resemblance between the original text and the results of the HDL full-text search. There is no evidence that the Authors write with the purpose of enabling text searches of their books. Consequently, the full-text search function does not “supersede[ ] the objects [or purposes] of the original creation.” Turning to the fourth fair use factor—whether the use functions as a substitute for the original work—the court rejected the argument that such use represents lost sales to the extent that it prevents the future development of a market for licensing copies of works to be used in full-text searches. However, the court emphasized that the search function “does not serve as a substitute for the books that are being searched.”
  • ...3 more annotations...
  • Part of the deal between Google and the libraries included an offer by Google to hand over to the libraries their own copies of the digitized versions of their collections. In 2011, a group of those libraries announced the establishment of a new service, called the HathiTrust digital library, to which the libraries would contribute their digitized collections. This database of copies is to be made available for full-text searching and preservation activities. Additionally, it is intended to offer free access to works to individuals who have “print disabilities.” For works under copyright protection, the search function would return only a list of page numbers that a search term appeared on and the frequency of such appearance.
  • The court also rejected the argument that the database represented a threat of a security breach that could result in the full text of all the books becoming available for anyone to access. The court concluded that HathiTrust's assertions of its security measures were unrebutted. Thus, the full-text search function was found to be protected as fair use.
  • The court also concluded that allowing those with print disabilities access to the full texts of the works collected in the HathiTrust database was protected as fair use. Support for this conclusion came from the legislative history of the Copyright Act's fair use provision, 17 U.S.C. §107.
Paul Merrell

What to Do About Lawless Government Hacking and the Weakening of Digital Security | Ele... - 0 views

  • In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security.
  • The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones. Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects.  That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  
  • Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions. This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.   
  •  
    It's not often that I disagree with EFF's positions, but on this one I do. The government should be prohibited from exploiting computer vulnerabilities and should be required to immediately report all vulnerabilities discovered to the relevant developers of hardware or software. It's been one long slippery slope since the Supreme Court first approved wiretapping in Olmstead v. United States, 277 US 438 (1928), https://goo.gl/NJevsr (.) Left undecided to this day is whether we have a right to whisper privately, a right that is undeniable. All communications intercept cases since Olmstead fly directly in the face of that right.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
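    As a rough illustration of this class-based extensibility (the class names below are invented for the example, not taken from any particular microformat):

      <!-- Plain XHTML: structurally, a paragraph is just a paragraph -->
      <p>Call me Ishmael.</p>

      <!-- The same element, extended with publisher-defined semantics
           via the class attribute (hypothetical class names) -->
      <p class="epigraph">Call me Ishmael.</p>
      <p class="chapter-summary">The narrator introduces himself.</p>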
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
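    Unzipping an ePub makes this anatomy concrete. A minimal sketch follows; apart from mimetype and META-INF/container.xml, the file names are conventions rather than requirements:

      <!--
        book.epub, unpacked:
          mimetype                 the literal string "application/epub+zip"
          META-INF/container.xml   points to the package file
          OEBPS/content.opf        metadata, manifest, and spine
          OEBPS/chapter1.xhtml     the book content, as XHTML
      -->
      <!-- META-INF/container.xml: the fixed entry point every reader checks -->
      <container version="1.0"
                 xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
                    media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>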
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
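    To make that XHTML-to-IDML mapping concrete, here is a sketch of how a single paragraph might look before and after transformation. The element and style names follow the general shape of ICML but are simplified for illustration; "Body" stands in for whatever paragraph style the InDesign template actually defines:

      <!-- Source: one paragraph of clean XHTML -->
      <p>The quick brown fox jumps over the lazy dog.</p>

      <!-- Target: the same paragraph as an ICML-style fragment; the
           applied style must match a style defined in the template -->
      <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
        <CharacterStyleRange>
          <Content>The quick brown fox jumps over the lazy dog.</Content>
        </CharacterStyleRange>
        <Br/>
      </ParagraphStyleRange>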
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
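    The cleanup looked roughly like this ("bullet" is an illustrative class name; the classes InDesign actually emits vary with export settings):

      <!-- As exported: list items flattened into classed paragraphs -->
      <p class="bullet">First point</p>
      <p class="bullet">Second point</p>

      <!-- After cleanup: structural markup, independent of any one tool -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>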
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
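    A drastically reduced sketch of what such a transform looks like — not the actual SFU script, and using the simplified ICML vocabulary from the earlier fragment:

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- xhtml2icml.xsl: map XHTML paragraphs to ICML-style ranges.
           Run with: xsltproc xhtml2icml.xsl chapter1.xhtml > chapter1.icml -->
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:h="http://www.w3.org/1999/xhtml">

        <!-- Skip the XHTML head; only body content becomes print content -->
        <xsl:template match="h:head"/>

        <!-- Each XHTML <p> becomes one paragraph range; the style name
             must exist in the target InDesign template. value-of flattens
             any inline markup to plain text, for simplicity -->
        <xsl:template match="h:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>

      </xsl:stylesheet>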
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
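    The wrapper itself is small. The package file at its heart looks roughly like this (identifiers and file names here are invented for illustration; tools like eCub generate the equivalent automatically):

      <!-- OEBPS/content.opf: metadata, manifest, and reading order -->
      <package version="2.0" xmlns="http://www.idpf.org/2007/opf"
               unique-identifier="bookid">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:0000-placeholder</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
          <item id="css" href="book.css" media-type="text/css"/>
          <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch1"/>
        </spine>
      </package>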
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

Push Pop Press: About Us - 0 views

  •  
    iOS visual eBooks and magazine "immersive media" category. Push Pop Press seeks to provide a platform for digital eBooks that are more multimedia content than text. It's more like writing in PowerPoint or Flash than Word. Think Flipboard. Another interesting term used by Push Pop Press is that this is a "layout platform" for rich content. Features (as shown in a demo of the first book powered by Push Pop Press, Al Gore's Our Choice):
    - A New Digital Publishing Platform
    - Easy to Publish: layout and publish interactive digital books without writing code
    - Mixed Media: tell rich stories using text, images, audio, video, maps and interactive graphics
    - Interactive Graphics: embed interactive graphics that use the microphone, accelerometer and more
    - Multi-Touch User Interface: edge-to-edge content without any distracting toolbars or buttons
    - Visual Table of Contents: browse through hundreds of pages quickly and easily
    - Pages Load Instantly: pages load as fast as your finger can swipe
    - Start Reading Immediately: start reading the first chapter as the rest of the book downloads in the background
    - Updatable Content: update your content without having to update the app
    - iPad, iPhone & iPod Touch: publish one universal app that can be read on an iPad, iPhone and iPod Touch
Paul Merrell

"In 10 Years, the Surveillance Business Model Will Have Been Made Illegal" - - 0 views

  • The opening panel of the Stigler Center’s annual antitrust conference discussed the source of digital platforms’ power and what, if anything, can be done to address the numerous challenges their ability to shape opinions and outcomes present. 
  • Google CEO Sundar Pichai caused a worldwide sensation earlier this week when he unveiled Duplex, an AI-driven digital assistant able to mimic human speech patterns (complete with vocal tics) to such a convincing degree that it managed to have real conversations with ordinary people without them realizing they were actually talking to a robot.   While Google presented Duplex as an exciting technological breakthrough, others saw something else: a system able to deceive people into believing they were talking to a human being, an ethical red flag (and a surefire way to get to robocall hell). Following the backlash, Google announced on Thursday that the new service will be designed “with disclosure built-in.” Nevertheless, the episode created the impression that ethical concerns were an “after-the-fact consideration” for Google, despite the fierce public scrutiny it and other tech giants faced over the past two months. “Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill and a prominent critic of tech firms.   The controversial demonstration was not the only sign that the global outrage has yet to inspire the profound rethinking critics hoped it would bring to Silicon Valley firms. In Pichai’s speech at Google’s annual I/O developer conference, the ethical concerns regarding the company’s data mining, business model, and political influence were briefly addressed with a general, laconic statement: “The path ahead needs to be navigated carefully and deliberately and we feel a deep sense of responsibility to get this right.”
  • Google’s fellow FAANGs also seem eager to put the “techlash” of the past two years behind them. Facebook, its shares now fully recovered from the Cambridge Analytica scandal, is already charging full-steam ahead into new areas like dating and blockchain.   But the techlash likely isn’t going away soon. The rise of digital platforms has had profound political, economic, and social effects, many of which are only now becoming apparent, and their sheer size and power makes it virtually impossible to exist on the Internet without using their services. As Stratechery’s Ben Thompson noted in the opening panel of the Stigler Center’s annual antitrust conference last month, Google and Facebook—already dominating search and social media and enjoying a duopoly in digital advertising—own many of the world’s top mobile apps. Amazon has more than 100 million Prime members, for whom it is usually the first and last stop for shopping online.   Many of the mechanisms that allowed for this growth are opaque and rooted in manipulation. What are those mechanisms, and how should policymakers and antitrust enforcers address them? These questions, and others, were the focus of the Stigler Center panel, which was moderated by the Economist’s New York bureau chief, Patrick Foulis.
Paul Merrell

Rural America and the 5G Digital Divide. Telecoms Expanding Their "Toxic Infrastructure... - 0 views

  • While there is considerable telecom hubris regarding the 5G rollout and increasing speculation that the next generation of wireless is not yet ready for Prime Time, the industry continues to make promises to Rural America that it has no intention of fulfilling. Decades-long promises to deliver digital Utopia to rural America by T-Mobile, Verizon and AT&T have never materialized.  
  • In 2017, the USDA reported that 29% of American farms had no internet access. The FCC says that 14 million rural Americans and 1.2 million Americans living on tribal lands do not have 4G LTE on their phones, and that 30 million rural residents do not have broadband service compared to 2% of urban residents. It’s beginning to sound like a Third World country. Despite an FCC $4.5 billion annual subsidy to carriers to provide broadband service in rural areas, the FCC reports that “over 24 million Americans do not have access to high-speed internet service, the bulk of them in rural areas,” while a Microsoft study found that “162 million people across the US do not have internet service at broadband speeds.” At the same time, only three cable companies have access to 70% of the market in a sweetheart deal to hike rates as they avoid competition and the FCC looks the other way. The FCC believes that it would cost $40 billion to bring broadband access to 98% of the country, with expansion in rural America even more expensive. While the FCC has pledged a $2 billion, ten-year plan to identify rural wireless locations, only 4 million rural American businesses and homes will be targeted, a mere drop in the bucket. Which brings us to rural mapping: since the advent of the digital age, there have been no accurate maps identifying where broadband service is available in rural America and where it is not available. The FCC has a long history of promulgating unreliable and unverified carrier-provided numbers, as the Commission has repeatedly “bungled efforts to produce accurate broadband maps” that would have facilitated rural coverage. During the Senate Commerce Committee hearing on April 10th regarding broadband mapping, critical testimony questioned whether the FCC and/or the telecom industry have either the commitment or the proficiency to provide 5G to rural America. Members of the Committee shared concerns that 5G might put rural America further behind the curve so as to never catch up with the rest of the country.
Paul Merrell

Privacy Shield Program Overview | Privacy Shield - 0 views

  • EU-U.S. Privacy Shield Program Overview The EU-U.S. Privacy Shield Framework was designed by the U.S. Department of Commerce and European Commission to provide companies on both sides of the Atlantic with a mechanism to comply with EU data protection requirements when transferring personal data from the European Union to the United States in support of transatlantic commerce. On July 12, the European Commission deemed the Privacy Shield Framework adequate to enable data transfers under EU law (see the adequacy determination). The Privacy Shield program, which is administered by the International Trade Administration (ITA) within the U.S. Department of Commerce, enables U.S.-based organizations to join the Privacy Shield Framework in order to benefit from the adequacy determination. To join the Privacy Shield Framework, a U.S.-based organization will be required to self-certify to the Department of Commerce (via this website) and publicly commit to comply with the Framework’s requirements. While joining the Privacy Shield Framework is voluntary, once an eligible organization makes the public commitment to comply with the Framework’s requirements, the commitment will become enforceable under U.S. law. All organizations interested in joining the Privacy Shield Framework should review its requirements in their entirety. To assist in that effort, Commerce’s Privacy Shield Team has compiled resources and addressed frequently asked questions below. Resources: Key New Requirements for Participating Organizations; How to Join the Privacy Shield; Privacy Policy FAQs; Frequently Asked Questions.
  •  
    I got a notice from Dropbox tonight that it is now certified under this program. This program is fallout from an E.U. Court of Justice decision following the Snowden disclosures, holding that the then-existing U.S.-E.U. framework for protecting the rights of E.U. citizens' data was invalid because that framework did not adequately protect digital privacy rights. This new framework is intended to comply with the court's decision, but one need only look at section 5 of the agreement to see that it does not. Expect follow-on litigation. The agreement is at https://www.privacyshield.gov/servlet/servlet.FileDownload?file=015t00000004qAg Section 5 lets NSA continue to intercept and read data from E.U. citizens and also allows their data to be disclosed to U.S. law enforcement. And the agreement adds nothing to U.S. citizens' digital privacy rights. In my view, this framework is a stopgap measure that will only last as long as it takes for another case to reach the Court of Justice and be ruled upon. The ox that got gored by the Court of Justice ruling was U.S. companies' ability to store E.U. citizens' data outside the E.U. and to allow internet traffic from the E.U. to pass through the U.S. Microsoft had leadership that set up new server farms in Europe under the control of a business entity beyond the jurisdiction of U.S. courts. Other U.S. internet biggies didn't follow suit. This framework is their lifeline until the next ruling by the Court of Justice.
Paul Merrell

'UK surveillance is worse than 1984' says UN privacy chief (Wired UK) - 0 views

  • The UN's newly appointed special rapporteur on privacy, Joseph Cannataci, has described digital surveillance in the UK as "worse" than anything imagined in George Orwell's totalitarian dystopia 1984. Speaking to the Guardian, Cannataci -- who doesn't own a Facebook account or use Twitter -- lambasted the oversight of British digital surveillance as "a rather bad joke at its citizens' expense". Warning against the steady erosion of privacy and increasing levels of government intrusion, he also drew sinister parallels with Orwell's vision of a mass-surveilled society, adding that today's reality was far worse than the fiction: "At least Winston [a character in Orwell's 1984] was able to go out in the countryside and go under a tree and expect there wouldn't be any screen, as it was called. Whereas today there are many parts of the English countryside where there are more cameras than George Orwell could ever have imagined."
  • Cannataci, who holds posts as a professor of technology of law at the University of Groningen, and as head of the department of Information Policy and Governance at the University of Malta, also called for a "Geneva convention-style law" for the internet. "Some people may not want to buy into it. But you know, if one takes the attitude that some countries will not play ball, then, for example, the chemical weapons agreement would never have come about."
  • As part of his new role -- which elevates digital privacy to the same level of importance as other human rights -- Cannataci has vowed to begin systematically reviewing government policies and the business models of large corporations, which he accuses of "very often taking the data that you never even knew they were taking". Although the privacy chief admits that his mandate is more than likely "impossible to achieve in the next three years", he stressed the importance of a "longer-term view" in an effort to help protect people's data and safeguard their digital rights.
Paul Merrell

He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen. - 0 views

  • The message arrived at night and consisted of three words: “Good evening sir!” The sender was a hacker who had written a series of provocative memos at the National Security Agency. His secret memos had explained — with an earthy use of slang and emojis that was unusual for an operative of the largest eavesdropping organization in the world — how the NSA breaks into the digital accounts of people who manage computer networks, and how it tries to unmask people who use Tor to browse the web anonymously. Outlining some of the NSA’s most sensitive activities, the memos were leaked by Edward Snowden, and I had written about a few of them for The Intercept. There is no Miss Manners for exchanging pleasantries with a man the government has trained to be the digital equivalent of a Navy SEAL. Though I had initiated the contact, I was wary of how he might respond. The hacker had publicly expressed a visceral dislike for Snowden and had accused The Intercept of jeopardizing lives by publishing classified information. One of his memos outlined the ways the NSA reroutes (or “shapes”) the internet traffic of entire countries, and another memo was titled “I Hunt Sysadmins.” I felt sure he could hack anyone’s computer, including mine.
  • I got lucky with the hacker, because he recently left the agency for the cybersecurity industry; it would be his choice to talk, not the NSA’s. Fortunately, speaking out is his second nature.
  • ...7 more annotations...
  • The Lamb’s memos on cool ways to hunt sysadmins triggered a strong reaction when I wrote about them in 2014 with my colleague Ryan Gallagher. The memos explained how the NSA tracks down the email and Facebook accounts of systems administrators who oversee computer networks. After plundering their accounts, the NSA can impersonate the admins to get into their computer networks and pilfer the data flowing through them. As the Lamb wrote, “sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network … who better to target than the person that already has the ‘keys to the kingdom’?” Another of his NSA memos, “Network Shaping 101,” used Yemen as a theoretical case study for secretly redirecting the entirety of a country’s internet traffic to NSA servers.
  • In recent years, two developments have helped make hacking for the government a lot more attractive than hacking for yourself. First, the Department of Justice has cracked down on freelance hacking, whether it be altruistic or malignant. If the DOJ doesn’t like the way you hack, you are going to jail. Meanwhile, hackers have been warmly invited to deploy their transgressive impulses in service to the homeland, because the NSA and other federal agencies have turned themselves into licensed hives of breaking into other people’s computers. For many, it’s a techno sandbox of irresistible delights, according to Gabriella Coleman, a professor at McGill University who studies hackers. “The NSA is a very exciting place for hackers because you have unlimited resources, you have some of the best talent in the world, whether it’s cryptographers or mathematicians or hackers,” she said. “It is just too intellectually exciting not to go there.”
  • He agreed to a video chat that turned into a three-hour discussion sprawling from the ethics of surveillance to the downsides of home improvements and the difficulty of securing your laptop.
  • “If I turn the tables on you,” I asked the Lamb, “and say, OK, you’re a target for all kinds of people for all kinds of reasons. How do you feel about being a target and that kind of justification being used to justify getting all of your credentials and the keys to your kingdom?” The Lamb smiled. “There is no real safe, sacred ground on the internet,” he replied. “Whatever you do on the internet is an attack surface of some sort and is just something that you live with. Any time that I do something on the internet, yeah, that is on the back of my mind. Anyone from a script kiddie to some random hacker to some other foreign intelligence service, each with their different capabilities — what could they be doing to me?”
  • “You know, the situation is what it is,” he said. “There are protocols that were designed years ago before anybody had any care about security, because when they were developed, nobody was foreseeing that they would be taken advantage of. … A lot of people on the internet seem to approach the problem [with the attitude of] ‘I’m just going to walk naked outside of my house and hope that nobody looks at me.’ From a security perspective, is that a good way to go about thinking? No, horrible … There are good ways to be more secure on the internet. But do most people use Tor? No. Do most people use Signal? No. Do most people use insecure things that most people can hack? Yes. Is that a bash against the intelligence community that people use stuff that’s easily exploitable? That’s a hard argument for me to make.”
  • I mentioned that lots of people, including Snowden, are now working on the problem of how to make the internet more secure, yet he seemed to do the opposite at the NSA by trying to find ways to track and identify people who use Tor and other anonymizers. Would he consider working on the other side of things? He wouldn’t rule it out, he said, but dismally suggested the game was over as far as a liberating and safe internet goes, because our laptops and smartphones will betray us no matter what we do with them. “There’s the old adage that the only secure computer is one that is turned off, buried in a box ten feet underground, and never turned on,” he said. “From a user perspective, someone trying to find holes by day and then just live on the internet by night, there’s the expectation [that] if somebody wants to have access to your computer bad enough, they’re going to get it. Whether that’s an intelligence agency or a cybercrimes syndicate, whoever that is, it’s probably going to happen.”
  • There are precautions one can take, and I did that with the Lamb. When we had our video chat, I used a computer that had been wiped clean of everything except its operating system and essential applications. Afterward, it was wiped clean again. My concern was that the Lamb might use the session to obtain data from or about the computer I was using; there are a lot of things he might have tried, if he were in a scheming mood. At the end of our three hours together, I mentioned to him that I had taken these precautions, and he approved. “That’s fair,” he said. “I’m glad you have that appreciation. … From a perspective of a journalist who has access to classified information, it would be remiss to think you’re not a target of foreign intelligence services.” He was telling me the U.S. government should be the least of my worries. He was trying to help me.
    Documents published with this article:
    Tracking Targets Through Proxies & Anonymizers
    Network Shaping 101
    Shaping Diagram
    I Hunt Sys Admins (first published in 2014)
Gary Edwards

Dropbox: The Inside Story Of Tech's Hottest Startup - Forbes - 0 views

  •  
    Excellent interview from 2011. Still worth reading. Excerpt: "In December 2009 Jobs beckoned Houston (pronounced like the New York City street, not the Texas city) and his partner, Arash Ferdowsi, for a meeting at his Cupertino office. "I mean, Steve friggin' Jobs," remembers Houston, now 28. "How do you even prepare for that?" When Houston whipped out his laptop for a demo, Jobs, in his signature jeans and black turtleneck, coolly waved him away: "I know what you do." What Houston does is Dropbox, the digital storage service that has surged to 50 million users, with another joining every second. Jobs presciently saw this sapling as a strategic asset for Apple. Houston cut Jobs' pitch short: He was determined to build a big company, he said, and wasn't selling, no matter the status of the bidder (Houston considered Jobs his hero) or the prospects of a nine-digit price (he and Ferdowsi drove to the meeting in a Zipcar Prius). Jobs smiled warmly as he told them he was going after their market. "He said we were a feature, not a product," says Houston. Courteously, Jobs spent the next half hour waxing on over tea about his return to Apple, and why not to trust investors, as the duo (or, more accurately, Houston, who plays Penn to Ferdowsi's mute Teller) peppered him with questions. When Jobs later followed up with a suggestion to meet at Dropbox's San Francisco office, Houston proposed that they instead meet in Silicon Valley. "Why let the enemy get a taste?" he now shrugs cockily. Instead, Jobs went dark on the subject, resurfacing only this June, at his final keynote speech, where he unveiled iCloud, and specifically knocked Dropbox as a half-attempt to solve the Internet's messiest dilemma: How do you get all your files, from all your devices, into one place? Houston's reaction was less cocky: "Oh, s-t." The next day he shot a missive to his staff: "We have one of the fastest-growing companies in the world," it began …"
Paul Merrell

1st International Digital Preservation Interoperability Framework Symposium - 0 views

  • A recent study by the International Data Corporation (IDC) found that the amount of digital data produced worldwide had risen to 281 exabytes (EB, 10^18 bytes) in 2007, and estimated that the total amount of digital information will grow at a rate of 58% per year, reaching 1,610 EB by 2011. Each EB is equivalent to 50,000 times the entire printed collection of the U.S. Library of Congress.
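The projection is simple compound growth, so it can be sanity-checked in a few lines. A minimal TypeScript sketch, assuming straight annual compounding from the 2007 base (IDC's own model and base year may differ):

```typescript
// Sanity check of the IDC projection: 281 EB in 2007 compounding at 58%/yr.
// Assumes straight annual compounding; an order-of-magnitude check only,
// not a reproduction of the study's methodology.
const baseEB = 281;   // exabytes produced in 2007
const rate = 0.58;    // estimated annual growth

for (let year = 2008; year <= 2011; year++) {
  const eb = baseEB * Math.pow(1 + rate, year - 2007);
  console.log(`${year}: ${Math.round(eb)} EB`);
}
// Prints roughly 444, 701, 1108, 1751 EB.

// At 50,000 Library of Congress print collections per EB, 2007's 281 EB is
// about 14 million such collections.
console.log(`${(baseEB * 50_000 / 1e6).toFixed(1)} million LoC collections`);
```

The ~1,750 EB this yields for 2011 is in the same ballpark as the study's 1,610 EB; the gap presumably reflects IDC's base year and rounding rather than the growth rate itself.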
Paul Merrell

Durham Statement on Open Access to Legal Scholarship | Berkman Center - 0 views

  • On 7 November 2008, the directors of the law libraries at the University of Chicago, Columbia University, Cornell University, Duke University, Georgetown University, Harvard University, New York University, Northwestern University, the University of Pennsylvania, Stanford University, the University of Texas, and Yale University met in Durham, North Carolina, at the Duke Law School. That meeting resulted in the "Durham Statement on Open Access to Legal Scholarship," which calls for all law schools to stop publishing their journals in print format and to rely instead on electronic publication coupled with a commitment to keep the electronic versions available in stable, open, digital formats.
  • Particularly now, with growing financial pressures on law school budgets, ending print publication of law journals deserves serious consideration. Very few law journals receive enough in subscription income and royalties to cover their costs of operation. The Statement anticipates both that the costs for printing and mailing can be eliminated, and that law libraries can reduce their costs for subscribing to, processing, and preserving print journals. There are additional benefits in improving access to journals that are not now published in open access formats and in reducing paper consumption.
  • Call to Action: We therefore urge every U.S. law school to commit to ending print publication of its journals and to making definitive versions of journals and other scholarship produced at the school immediately available upon publication in stable, open, digital formats, rather than in print. We also urge every law school to commit to keeping a repository of the scholarship published at the school in a stable, open, digital format. Some law schools may choose to use a shared regional online repository or to offer their own repositories as places for other law schools to archive the scholarship published at their school. Repositories should rely upon open standards for the archiving of works, as well as on redundant formats, such as PDF copies. We also urge law schools and law libraries to agree to and use a standard set of metadata to catalog each article to ensure easy online public indexing of legal scholarship. As a measure of redundancy, we also urge faculty members to reserve their copyrights to ensure that they too can make their own scholarship available in stable, open, digital formats. All law journals should rely upon the AALS model publishing agreement as a default and should respect author requests to retain copyrights in their scholarship.
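The Statement stops short of naming a metadata schema. As a purely hypothetical sketch of what "a standard set of metadata" per article might contain, here is a record loosely modeled on Dublin Core fields, in TypeScript; every field name and value is illustrative, not something the Statement prescribes:

```typescript
// Illustrative per-article catalog record, loosely modeled on Dublin Core.
// The Durham Statement mandates no schema; all names here are hypothetical.
interface ArticleRecord {
  title: string;
  creators: string[];    // authors, in citation order
  publisher: string;     // the journal or law school
  date: string;          // ISO 8601 publication date
  identifier: string;    // stable URL or DOI of the open-access copy
  rights: string;        // copyright/license statement retained by the author
  formats: string[];     // redundant archival formats, per the Statement
}

const example: ArticleRecord = {
  title: "An Example Article on Open Access",
  creators: ["Jane Scholar"],
  publisher: "Example Law Review",
  date: "2009-06-01",
  identifier: "https://repository.example.edu/articles/1234", // placeholder
  rights: "Copyright retained by the author",
  formats: ["application/pdf", "text/html"],
};

console.log(JSON.stringify(example, null, 2));
```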
Gary Edwards

I, Cringely » Blog Archive » iCloud's real purpose: kill Windows - Cringely o... - 0 views

  •  
    I'm not convinced that iCloud will eliminate Windows, Mac, and Linux desktops. I've been using Dropbox, SyncDocs, and Live.com while testing a number of backup-store-sync-share file services. IMHO, it's all about the apps that act on your data, and these can come from the desktop, the browser, or the device. The best app platform for cloud-hosted data seems to be moving towards HTML5-JS, not Win32, .NET, C#, Java, or Cocoa (iOS). And Google clearly has the best platform of integrated services and APIs. They are best positioned to win the cloud wars if HTML5-JS and Native Client can close the deal on cloud apps. IMHO. Excerpt: Apple's announcements yesterday about OS X 10.7 pricing (cheap), upgrading (easy), iOS 5, and iCloud storage, syncing, and media service can all be viewed as increasing ease of use, but from the perspective of Apple CEO Steve Jobs they perform an even more vital function: killing Microsoft. Here is the money line from Jobs yesterday: "We're going to demote the PC and the Mac to just be a device, just like an iPad, an iPhone or an iPod Touch. We're going to move the hub of your digital life to the cloud." Just like they used to say at Sun Microsystems, the network is the computer. Or we could go even further and say our data is the computer. This redefines digital incumbency. The incumbent platform today is Windows, because it is in Windows machines that nearly all of our data and our ability to use that data have been trapped. But the Apple announcement changes all that. Suddenly the competition isn't about platforms at all, but about data, with that data being crunched on a variety of platforms through the use of cheap downloaded apps.
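As a minimal sketch of the pattern described above (browser apps acting on cloud-hosted data), the TypeScript below reads and writes a document through a REST API. The endpoint and document shape are invented for illustration; no real service's API is implied:

```typescript
// Hypothetical "apps act on cloud-hosted data" pattern: a browser app that
// reads and writes a document through a REST API. Endpoint is a placeholder.
interface CloudDoc {
  id: string;
  content: string;
}

const API = "https://cloud.example.com/docs"; // placeholder endpoint

async function loadDoc(id: string): Promise<CloudDoc> {
  const res = await fetch(`${API}/${id}`);
  if (!res.ok) throw new Error(`load failed: ${res.status}`);
  return res.json();
}

async function saveDoc(doc: CloudDoc): Promise<void> {
  const res = await fetch(`${API}/${doc.id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(doc),
  });
  if (!res.ok) throw new Error(`save failed: ${res.status}`);
}

// Example flow (would run against a real endpoint):
// const doc = await loadDoc("report-42");
// doc.content += "\nedited in the browser";
// await saveDoc(doc);
```

Because nothing in it touches the local filesystem or OS-specific APIs, the same code runs on Windows, a Mac, Linux, or a tablet; that platform indifference is the "demotion" of the desktop that Jobs described.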
Gary Edwards

Bitcoin Consultancy - 1 views

  •  
    Amir's website for open-source Bitcoin, via Jason Calacanis' "This Week in Startups." Bitcoin is a new digital currency offering many advantages over traditional and other electronic currencies. Nobody owns the bitcoin economy, and nobody controls it. Utilising an ingenious decentralized structure, bitcoin relies on cryptography and mathematics to ensure security and reliability, and it eliminates virtually all the overhead seen in traditional banking. Currently bitcoin lingers in the underworld of the internet and lacks the outside polish that can take this cryptocurrency from the garage to commerce. Our wiki contains a short introduction to this digital currency.
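The cryptography the excerpt alludes to centers on proof-of-work: miners search for a nonce whose double-SHA256 hash meets a difficulty target, while anyone can verify a claimed solution with a single hash. Below is a toy TypeScript sketch of that idea using Node's crypto module; the real block-header format, difficulty encoding, and peer-to-peer protocol are considerably more involved:

```typescript
// Toy proof-of-work sketch. Not the real Bitcoin header format or protocol.
import { createHash } from "crypto";

// Bitcoin hashes block headers with SHA-256 applied twice.
function sha256d(data: string): Buffer {
  const first = createHash("sha256").update(data).digest();
  return createHash("sha256").update(first).digest();
}

// Toy difficulty: require the first two bytes of the hash to be zero,
// so a valid nonce takes about 65,536 attempts on average to find.
function mine(blockData: string): number {
  for (let nonce = 0; ; nonce++) {
    const hash = sha256d(blockData + nonce);
    if (hash[0] === 0 && hash[1] === 0) return nonce;
  }
}

const nonce = mine("toy block: Alice pays Bob 1 BTC");
console.log(`found nonce ${nonce}`);
// Verifying the nonce takes one hash; finding it takes tens of thousands.
// That asymmetry (expensive to find, cheap to verify) is what lets mutually
// distrustful strangers agree on a single ledger without a central bank.
```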
Gary Edwards

Google Chrome OS: Web Platform To Rule Them All -- InformationWeek - 0 views

  •  
    Some good commentary on Chrome OS from InformationWeek's Thomas Claburn. Excerpt: With Chrome OS, Google aims to make the Web the primary platform for software development. … The fact that Chrome OS applications will be written using open Web standards like JavaScript, HTML, and CSS might seem like a liability, because Web applications still aren't as capable as applications written for specific devices and operating systems. But Google is betting that will change and is working to effect the change on which its bet depends. Within a year or two, Web browsers will gain access to peripherals, through an infrastructure layer above the level of device drivers. Google's work with standards bodies is making that happen. … According to Matt Womer, the "ubiquitous Web activity lead" for the W3C, the Web standards consortium, Web protocol groups are working to codify ways to access peripherals like digital cameras, the messaging stack, calendar data, and contact data. There's now a JavaScript API that Web developers can use to get GPS information from mobile phones using the phone's browser, he points out. What that means is that device drivers for Chrome OS will emerge as HTML5 and related standards mature. Without these, consumers would never use Chrome OS, because devices like digital cameras wouldn't be able to transfer data. Womer said the standardization work could move quite quickly, but won't be done until there's an actual implementation. That would be Chrome OS. …
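The GPS API Womer mentions became the W3C Geolocation API, now standard in browsers. A small TypeScript example of the call a web page makes (the browser prompts the user for location permission before returning any coordinates):

```typescript
// Query the device's position via the W3C Geolocation API, available to any
// web page in a modern browser once the user grants location permission.
function showPosition(): void {
  if (!("geolocation" in navigator)) {
    console.log("geolocation not supported in this browser");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos: GeolocationPosition) => {
      const { latitude, longitude, accuracy } = pos.coords;
      console.log(`lat ${latitude}, lon ${longitude} (±${accuracy} m)`);
    },
    (err: GeolocationPositionError) => console.log(`error: ${err.message}`),
    { enableHighAccuracy: true, timeout: 10_000 } // wait up to 10 s for a fix
  );
}

showPosition();
```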
Paul Merrell

UN Report Finds Mass Surveillance Violates International Treaties and Privacy Rights - ... - 0 views

  • The United Nations’ top official for counter-terrorism and human rights (known as the “Special Rapporteur”) issued a formal report to the U.N. General Assembly today that condemns mass electronic surveillance as a clear violation of core privacy rights guaranteed by multiple treaties and conventions. “The hard truth is that the use of mass surveillance technology effectively does away with the right to privacy of communications on the Internet altogether,” the report concluded. Central to the Rapporteur’s findings is the distinction between “targeted surveillance” — which “depend[s] upon the existence of prior suspicion of the targeted individual or organization” — and “mass surveillance,” whereby “states with high levels of Internet penetration can [] gain access to the telephone and e-mail content of an effectively unlimited number of users and maintain an overview of Internet activity associated with particular websites.” In a system of “mass surveillance,” the report explained, “all of this is possible without any prior suspicion related to a specific individual or organization. The communications of literally every Internet user are potentially open for inspection by intelligence and law enforcement agencies in the States concerned.”
  • Mass surveillance thus “amounts to a systematic interference with the right to respect for the privacy of communications,” it declared. As a result, “it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately.” In concluding that mass surveillance impinges core privacy rights, the report was primarily focused on the International Covenant on Civil and Political Rights, a treaty enacted by the General Assembly in 1966, to which all of the members of the “Five Eyes” alliance are signatories. The U.S. ratified the treaty in 1992, albeit with various reservations that allowed for the continuation of the death penalty and which rendered its domestic law supreme. With the exception of the U.S.’s Persian Gulf allies (Saudi Arabia, UAE and Qatar), virtually every major country has signed the treaty. Article 17 of the Covenant guarantees the right of privacy, the defining protection of which, the report explained, is “that individuals have the right to share information and ideas with one another without interference by the State, secure in the knowledge that their communication will reach and be read by the intended recipients alone.”
  • The report’s key conclusion is that this core right is impinged by mass surveillance programs: “Bulk access technology is indiscriminately corrosive of online privacy and impinges on the very essence of the right guaranteed by article 17. In the absence of a formal derogation from States’ obligations under the Covenant, these programs pose a direct and ongoing challenge to an established norm of international law.” The report recognized that protecting citizens from terrorism attacks is a vital duty of every state, and that the right of privacy is not absolute, as it can be compromised when doing so is “necessary” to serve “compelling” purposes. It noted: “There may be a compelling counter-terrorism justification for the radical re-evaluation of Internet privacy rights that these practices necessitate.” But the report was adamant that no such justifications have ever been demonstrated by any member state using mass surveillance: “The States engaging in mass surveillance have so far failed to provide a detailed and evidence-based public justification for its necessity, and almost no States have enacted explicit domestic legislation to authorize its use.”
  • Instead, explained the Rapporteur, states have relied on vague claims whose validity cannot be assessed because of the secrecy behind which these programs are hidden: “The arguments in favor of a complete abrogation of the right to privacy on the Internet have not been made publicly by the States concerned or subjected to informed scrutiny and debate.” About the ongoing secrecy surrounding the programs, the report explained that “states deploying this technology retain a monopoly of information about its impact,” which is “a form of conceptual censorship … that precludes informed debate.” A June report from the High Commissioner for Human Rights similarly noted “the disturbing lack of governmental transparency associated with surveillance policies, laws and practices, which hinders any effort to assess their coherence with international human rights law and to ensure accountability.” The rejection of the “terrorism” justification for mass surveillance as devoid of evidence echoes virtually every other formal investigation into these programs. A federal judge last December found that the U.S. Government was unable to “cite a single case in which analysis of the NSA’s bulk metadata collection actually stopped an imminent terrorist attack.” Later that month, President Obama’s own Review Group on Intelligence and Communications Technologies concluded that mass surveillance “was not essential to preventing attacks” and information used to detect plots “could readily have been obtained in a timely manner using conventional [court] orders.”
  • Three Democratic Senators on the Senate Intelligence Committee wrote in The New York Times that “the usefulness of the bulk collection program has been greatly exaggerated” and “we have yet to see any proof that it provides real, unique value in protecting national security.” A study by the centrist New America Foundation found that mass metadata collection “has had no discernible impact on preventing acts of terrorism” and, where plots were disrupted, “traditional law enforcement and investigative methods provided the tip or evidence to initiate the case.” It labeled the NSA’s claims to the contrary as “overblown and even misleading.” Worthless as counter-terrorism policy, mass surveillance carries other dangers as well: the UN report warned that allowing it to persist with no transparency creates “an ever present danger of ‘purpose creep,’ by which measures justified on counter-terrorism grounds are made available for use by public authorities for much less weighty public interest purposes.” Citing the UK as one example, the report warned that, already, “a wide range of public bodies have access to communications data, for a wide variety of purposes, often without judicial authorization or meaningful independent oversight.”
  • The report was most scathing in its rejection of a key argument often made by American defenders of the NSA: that mass surveillance is justified because Americans are given special protections (the requirement of a FISA court order for targeted surveillance) which non-Americans (95% of the world) do not enjoy. Not only does this scheme fail to render mass surveillance legal, but it itself constitutes a separate violation of international treaties (emphasis added):
    The Special Rapporteur concurs with the High Commissioner for Human Rights that where States penetrate infrastructure located outside their territorial jurisdiction, they remain bound by their obligations under the Covenant. Moreover, article 26 of the Covenant prohibits discrimination on grounds of, inter alia, nationality and citizenship. The Special Rapporteur thus considers that States are legally obliged to afford the same privacy protection for nationals and non-nationals and for those within and outside their jurisdiction. Asymmetrical privacy protection regimes are a clear violation of the requirements of the Covenant.
  • That principle — that the right of internet privacy belongs to all individuals, not just Americans — was invoked by NSA whistleblower Edward Snowden when he explained in a June 2013 interview at The Guardian why he disclosed documents showing global surveillance rather than just the surveillance of Americans: “More fundamentally, the ‘US Persons’ protection in general is a distraction from the power and danger of this system. Suspicionless surveillance does not become okay simply because it’s only victimizing 95% of the world instead of 100%.” The U.N. Rapporteur was clear that these systematic privacy violations are the result of a union between governments and tech corporations: “States increasingly rely on the private sector to facilitate digital surveillance. This is not confined to the enactment of mandatory data retention legislation. Corporates [sic] have also been directly complicit in operationalizing bulk access technology through the design of communications infrastructure that facilitates mass surveillance.”
  • The latest finding adds to the growing number of international formal rulings that the mass surveillance programs of the U.S. and its partners are illegal. In January, the European parliament’s civil liberties committee condemned such programs in “the strongest possible terms.” In April, the European Court of Justice ruled that European legislation on data retention contravened EU privacy rights. A top secret memo from the GCHQ, published last year by The Guardian, explicitly stated that one key reason for concealing these programs was fear of a “damaging public debate” and specifically “legal challenges against the current regime.” The report ended with a call for far greater transparency along with new protections for privacy in the digital age. Continuation of the status quo, it warned, imposes “a risk that systematic interference with the security of digital communications will continue to proliferate without any serious consideration being given to the implications of the wholesale abandonment of the right to online privacy.” The urgency of these reforms is underscored, explained the Rapporteur, by a conclusion of the United States Privacy and Civil Liberties Oversight Board that “permitting the government to routinely collect the calling records of the entire nation fundamentally shifts the balance of power between the state and its citizens.”