
Gary Edwards

Andreessen Horowitz & the Meteor investment - 0 views

  •  
    Web site for Andreessen Horowitz VC. List of blogs for general partners. The reason for linking into a16z is the $11.2 million they invested in Meteor! Meteor is awesome. My guess is that Meteor will provide a very effective Cloud platform to replace or extend the Windows Client/Server business productivity platform. Many VC watchers are wondering if a16z can recover the investment. Say what? IMHO this is for all the marbles. Platform is everything, and Cloud Computing is certain to replace Client/Server over time. Meteor just moved that time frame from a future uncertainty to NOW. The Windows productivity platform has dominated Client/Server computing since the introduction of Windows for Workgroups (v3.11) in 1993. Key technologies that followed or were included in v3.11 were DDE, OLE, MAPI, ODBC, ActiveX, and Visual Basic scripting - to name but a few. Meteor is an open source platform that hits these technologies directly, with an approach that truly improves the complicated development of all Cloud-based Web apps - including the sacred Microsoft cow herd of client/server business productivity apps. Meteor nails OLE and ODBC like nothing I've ever seen before. Very dramatic stuff. Maybe they are nailing shut the Redmond coffin in the process - making that $11.2 million a drop in the bucket considering the opportunity Meteor has cracked open. The iron grip Microsoft has on business productivity is so tight and so far-reaching that one could easily say that Windows is the client in Client/Server. But it took years to build that empire. With this investment, Meteor could do it in months. Compound documents are the fuel in Windows business productivity and office automation systems. Tear apart a compound document, and you'll find embedded logic for OLE and ODBC. Sure, it's brittle, costly to develop, costly to maintain, and a bear to distribute. Tear apart a Meteor productivity service and you'll find the same kind of OLE-ODBC-Script
Gary Edwards

Asus shows off ARM-based Windows 8 tablet - Computerworld - 0 views

  •  
    Is Intel right? Is there a "compatibility-interoperability" problem between Windows RT Office (ARM) and legacy (x86) Windows MSOffice productivity environments? It seems to me that the entire reason iPad, Android and other ARM-based tablet systems want MSOffice and MSOffice visual document viewers is exactly because they want and expect a high level of compat-interop with legacy Windows productivity workgroups and client/server systems. What's the truth? And is there anything x86 providers like Intel and AMD can do about compat-interop and the unstoppable cloud-mobility revolution? excerpt: The Asus tablet has a quad-core Tegra 3 processor from Nvidia. Windows RT comes preloaded with Office 15, a group of widely used productivity applications. Microsoft has said it had to re-engineer Windows RT to deal with expectations for ARM-based devices, which include all-day connectivity and low power consumption. The tablet also has an 8-megapixel camera at the rear with LED flash, and a 2-megapixel camera at the front. It has 2GB of RAM, 32GB of storage, Wi-Fi and Bluetooth 4.0. Intel has already started the war of words against ARM around Windows 8, with Intel CEO Paul Otellini saying that ARM devices will be incompatible with existing Windows applications and drivers. But analysts have said that Windows RT devices will likely be attractive to users who have few ties to legacy Windows PCs. Low prices could also attract users to Windows on ARM devices.
Gary Edwards

Open Source, Android Push Evolution of Mobile Cloud Apps | Linux.com - 0 views

  •  
    Nice OpenMobster graphic! Good explanation of the Android notification advantage over iOS and Windows Phone 7, too. Note the exception that iOS 5 finally introduces support for JSON. excerpt: Why Android Rocks the Cloud. Most open source mobile-cloud projects are still in the early stages. These include the fledgling cloud-to-mobile push notifications app, SimplePush, and the pre-alpha Mirage "cloud operating system," which enables the creation of secure network applications across any Xen-ready cloud platform. The 2cloud Project, meanwhile, has the more ambitious goal of enabling complete mobile cloud platforms. All of the above apps support Android, and many support iOS. Among mobile OSes, Android is best equipped to support cloud applications, said Shah. Android supports sockets to help connect to remote services, and supplies a capable SQLite-based local database. It also offers a JSON (JavaScript Object Notation) interchange stack to help parse incoming cloud data -- something missing in iOS. Unlike iOS and Windows Phone 7, Android provides background processing, which is useful for building a robust push infrastructure, said Shah. Without it, he added, users need to configure the app to work with a third-party push service. Most importantly, Android is the only major mobile OS to support inter-application communications. "Mobile apps are focused, and tend to do one thing only," said Shah. "When they cannot communicate with each other, you lose innovation." Comment from Sohil Shah, CEO OpenMobster: "I spoke too soon. iOS 5 now supports JSON out of the box. I am still working with a third-party library which was needed in iOS 4 and earlier, to stay backward compatible with those versions. Anyways, it should have been supported a lot earlier considering the fact that, AFAIK, Android has had it since the very beginning."
Gary Edwards

How would you fix the Linux desktop? | ITworld - 0 views

  • VB integrates with COM
  • SQL Server has a DCE/RPC interface.
  • MS-Office? All the components (Excel, Word, etc.) have a COM and an OLE interface.
  •  
    Comment posted 1 week ago in reply to Zzgomes ..... by Ed Carp. Finally someone who gets it! OBTW, I replaced Windows 7 with Linux Mint over a year ago and hope to never return. The thing is, though, I am not a member of a Windows productivity workgroup, nor do I need to connect to any Windows databases or servers. Essentially I am not using any Windows business processes or systems. It's all Internet!!! 100% Web and Cloud Services systems. And that's why I can dump Windows without a blink! While working for Sursen Corp, it was a very different story. I had to have Windows XP and Windows 7, plus MSOffice 2003-2007, plus Internet Explorer with access to SharePoint, Skydrive/Live.com. It's all about the business processes and systems you're part of, or must join. And that's exactly why the Linux Desktop has failed. Give Cloud Computing the time needed to re-engineer and re-invent those many Windows business processes, and the Linux Desktop might succeed. The trick will be in advancing both the Linux Desktop and application developer layers to target the same Cloud Computing services mobility targets. ..... Windows will take care of itself. The real fight is in the great transition of business systems and processes moving from the Windows desktop/workgroup productivity model to the Cloud. Linux communities must fight to win the great transition. And yes, in the end this is all about a massive platform shift. The fourth wave of computing began with the Internet, and will finally close out the desktop client/server computing model as the Web evolves into the Cloud. excerpt: Most posters here have it completely wrong... the *real* reason Linux doesn't have a decent penetration into the desktop market is quite obvious if you look at the most successful desktop in history - Windows. All this nonsense about binary driver compatibility, distro fragmentation, CORBA, and all the other red herrings that people are talking about are completely irrelevant
Gary Edwards

Google News - 0 views

  •  
    Prepare to be blown away. I viewed a demo of Numecent today and then did some research. There is no doubt in my mind that this is the end of the shrink-wrapped Microsoft business model. It's also perhaps the end of software application design and construction as we know it. Mobile apps in particular will get blasted by the Numecent "cloud paging" concept. Extraordinary stuff. I'll leave a few useful links on Diigo "Open Web". "Numecent, a company that has a new kind of cloud computing technology that could potentially completely reorganize the way software is delivered and handled - upending the business as we know it - has another big feather in its cap. The company is showing how enterprises can use this technology to instantly put all of their enterprise software in the cloud, without renegotiating contracts and licenses with their software vendors. It signed $3 billion engineering construction company Parsons as a customer. Parsons is using Numecent's tech to deliver 4 million huge computer-aided design (CAD) files to its nearly 12,000 employees around the world. CAD drawings are bigger than video files and they can only be opened and edited by specific CAD apps like AutoCAD. Numecent offers a tech called "cloud paging" which instantly "cloudifies" any Windows app. Instead of being installed on a PC, the enterprise setup can deliver the app over the cloud. Unlike similar cloud technologies (called virtualization), this makes the app run faster and continue working even when the Internet connection goes down. "It offers a 95% reduction in download times and 95% in download network usage," CEO Osman Kent told Business Insider. "It makes 8G of memory work like 800G." It also lets enterprises check in and check out software, like a library book, so more PCs can legally share software without violating licensing terms, saving money on software license fees, Kent says. Parsons is using it to let employees share over 700 huge applications such as Au
  •  
    Sounds like Microsoft must-buy-or-kill technology.
Gary Edwards

Analyzing Your Own Style | Writing and Humanistic Studies at MIT - 0 views

  •  
    Copyblogger originally shared: These 4 Exercises Are Guaranteed to Make You a Better Writer. Your writing is good. You know how to position words to make clear sentences. You can string together sentences into meaningful paragraphs. You can take those sentences and arrange them into a persuasive post. But you've plateaued. Your writing is getting predictable, stale, and forgettable. And you're not sure how to break out of that mold. If that's you, then you need to check out these exercises from MIT, designed to help you evaluate your copy. You'll learn things like:
    - Your sentence length pattern
    - Whether you correctly emphasize the important parts of your sentences and paragraphs
    - Whether you lean on simple, complex, or compound sentences
    Analyzing your writing style will highlight your weaknesses and give you a plan to make your writing better. So, when you've got a few minutes, perform these exercises: http://writing.mit.edu/wcc/resources/writers/analyzingyourownstyle +Demian Farnworth
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
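    For concreteness, a minimal, valid XHTML 1.0 Strict document looks like this (an illustrative sketch, not drawn from the article itself):

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
          <title>Chapter 1</title>
        </head>
        <body>
          <h1>Chapter 1</h1>
          <p>Every element is properly nested and closed, so any XML parser
             - not just a browser - can consume the file.</p>
        </body>
      </html>

    Because such a file is well-formed XML, the same content feeds browsers, XSLT processors, and content management systems without conversion.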
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
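    To make that concrete, a publisher might layer semantics onto plain XHTML like this (a sketch; the class names are invented for illustration):

      <p class="epigraph">Tell all the truth but tell it slant.</p>
      <p class="summary">This chapter surveys the XML workflow debate.</p>
      <p class="bibl-entry">Maxwell, J. "XML Production Workflows?"</p>

    A browser still sees ordinary paragraphs, but a stylesheet or an XSLT transformation can target p.epigraph or p.summary as readily as a custom DocBook element.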
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
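    Unzipping a typical ePub file bears this out. Its anatomy runs along these lines (file names under OEBPS/ vary from producer to producer):

      mimetype                the literal string "application/epub+zip"
      META-INF/
        container.xml         points to the package file below
      OEBPS/
        content.opf           package metadata and file manifest
        toc.ncx               table of contents for ebook readers
        chapter01.xhtml       the book content itself: plain XHTML
        styles.css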
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
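    For reference, unpacking an IDML archive reveals a structure roughly like the following (a representative sketch; exact file names vary by document):

      designmap.xml           the root map tying the parts together
      Spreads/                layout spreads
      MasterSpreads/          master pages
      Stories/                the actual text content, one XML file per story
      Resources/Styles.xml    paragraph and character style definitions
      Resources/Graphic.xml   colours and other graphic resources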
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
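    Concretely, the clean-up amounted to conversions of this kind (an illustrative sketch; the actual class names InDesign emits may differ):

      <!-- what the Digital Editions export produces, approximately -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- the structural XHTML we converted it to -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>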
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
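    The flavour of such a script can be suggested in a few lines. The sketch below is not the SFU script itself; the ICML element and style names are representative assumptions, and a real ICML file additionally carries processing instructions and a document wrapper:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:h="http://www.w3.org/1999/xhtml">
        <xsl:output method="xml" indent="yes"/>

        <!-- wrap the converted content in a single story -->
        <xsl:template match="/">
          <Story>
            <xsl:apply-templates select="//h:body/*"/>
          </Story>
        </xsl:template>

        <!-- an XHTML paragraph becomes a styled paragraph range -->
        <xsl:template match="h:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>

        <!-- headings map onto a house heading style -->
        <xsl:template match="h:h1 | h:h2">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Run with the xsltproc tool mentioned above - for instance, xsltproc xhtml2icml.xsl chapter01.xhtml > chapter01.icml (file names hypothetical) - the output can then be "placed" into the InDesign template.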
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
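    That custom CSS need not amount to much. Something on the order of the following would cover headings and print-style paragraph indents (a hedged approximation, not the actual stylesheet):

      /* ebook.css - minimal styling for the ePub edition */
      h1 { font-size: 1.6em; margin: 1em 0 0.5em; }
      h2 { font-size: 1.3em; margin: 1em 0 0.5em; }
      p  { margin: 0; text-indent: 1.5em; }        /* book-style indents */
      h1 + p, h2 + p { text-indent: 0; }           /* no indent after a heading */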
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

CPU Wars - Intel to Play Fab for an ARM Chipmaker: Understanding What the Altera Deal M... - 0 views

  • Intel wants x86 to conquer all computing spaces -- including mobile -- and is trying to leverage its process lead to make that happen.  However, it's been slowed by a lack of inclusion of 4G cellular modems on-die and difficulties adapting to the mobile market's low component prices.  ARM, meanwhile, wants a piece of the PC and server markets, but has received a lukewarm response from consumers due to software compatibility concerns. The disappointing sales of (x86) tablet products using Microsoft Corp.'s (MSFT) Windows 8 and the flop of Windows RT (ARM) product in general somewhat unexpectedly had the net result of being a driver to maintain the status quo, allowing neither company to gain much ground.  For Intel, its partnership with Microsoft (the historic "Wintel" combo) has damaged its mobile efforts, as Windows 8 flopped in the tablet market.  Likewise ARM's efforts to score PC market share were stifled by the flop of Windows RT, which led to OEMs killing off ARM-based laptops and convertibles.
  • Both companies seem to have learned their lesson and are migrating away from Windows towards other platforms -- in ARM's case Chromebooks, and in Intel's case Android tablets/smartphones. But suffice it to say, ARM Holdings and Intel are still very much bitter enemies from a sales perspective.
  • III. Profit vs. Risk -- Understanding the Modern CPU Food Chain
  • ...16 more annotations...
  • Whether it's tablets or PCs, the processor is still one of the most expensive components onboard.  Aside from the discrete GPU -- if a device has one -- the CPU has the greatest earning potential for a large company like Intel because the CPU is the most complex component. Other components like the power supply or memory tend to either be lower margin or have more competitors.  The display, memory, and storage components are all sensitive to process, but see profit split between different parties (e.g. the company who makes the DRAM chips and the company who sells the stick of DRAM) and are primarily dependent on process technology. CPUs and GPUs remain the toughest products to make, as it's not enough to simply have the best process; you must also have the best architecture and the best optimization of that architecture for the space you're competing in. There are essentially five points of potential profit on the processor food chain:
    1. [CPU] Fabrication
    2. [CPU] Architecture design
    3. [CPU] Optimization
    4. OEM
    5. OS platform
    Of these, the fabrication/OS point is the most profitable (but is dependent on the number of OEM adopters).  The second most profitable niche is optimization (which again is dependent on OEM adopter market share), followed by OEM markups.  In terms of expense, fabrication and operating system design require the greatest capital investment and the highest risk.
  • In terms of difficulty/risk, the fabrication and operating system are the most difficult/risky points.  Hence in terms of combined risk, cost, and profitability the ranking of which points are "best" is arguably:
    1. Optimization
    2. Architecture design
    3. OS platform
    4. OEM
    5. Fabrication
    ...with the fabrication point being last largely because it's so high risk. In other words, the last thing Intel wants is to settle into a niche of playing fabs for everybody else's product, as that's an unsound approach.  If you can't keep up in terms of chip design, you typically spin off your fabs and opt for a different architecture direction -- just look at Advanced Micro Devices, Inc.'s (AMD) spinoff of GlobalFoundries and upcoming ARM product to see that.
  • IV. Top Firms' Role on That Food Chain
  • Apple has seen unbelievable profits due to this fundamental premise.  It controls the two most desirable points on the food chain -- OS and optimization -- while sharing some profit with its architecture designer (ARM Holdings) and a bit with the fabricator (Samsung Electronics Comp., Ltd. (KSC:005930)).  By choosing to play operating system maker, too, it adds to its profits, but also its risk.  Note that nearly every other first-party exclusive smartphone platform has failed or is about to fail (i.e. BlackBerry, Ltd. (TSE:BB) and the now-dead Palm).
  • Intel controls points 1, 2, and 5, currently, on the food chain.  Compared to Apple, Intel's points of control offer less risk, but also slightly less profitability. Its architecture control may be at risk, but even so, it's currently the top in its most risky/expensive point of control (fabrication), whereas Apple's most risky/expensive point of control (OS development) is much less of a clear leader (as Android has surpassed Apple in market share).  Hence Apple might be a better short-term investment, but Intel certainly appears a better long-term investment.
  • Samsung is another top company in terms of market dominance and profit.  It occupies points 1, 3, 4, and 5 -- sometimes.  Sometimes Samsung's devices use third-party optimization firms like Qualcomm Inc. (QCOM) and NVIDIA Corp. (NVDA), which hurts profitability by removing one of the most profitable roles.  But Samsung makes up for this by being one of the largest and most successful third party manufacturers.
  • Microsoft enjoys a lot of profit due to its OS dominance, as does Google Inc. (GOOG); but both companies are limited in controlling only one point which they monetize in different ways (Microsoft by direct sales; Google by giving away OS product for free in return for web services market share and by proxy search advertising revenue).
  • Qualcomm and NVIDIA are also quite profitable operating solely as optimizers, as is ARM Holdings who serves as architecture maker to Qualcomm, NVIDIA, Apple, and Samsung.
  • V. Four Scenarios in the x86 vs. ARM Competition
  • Scenario one is that x86 proves dominant in the mobile space, assuming a comparable process.
  • A second scenario is that x86 and ARM are roughly tied, assuming a comparable process.
  • A third scenario is that x86 is inferior to ARM at a comparable process, but comparable or superior to ARM when the x86 chip is built using a superior process.  From the benchmarks I've seen to date, I personally believe this is most likely.
  • A fourth scenario is that x86 is so drastically inferior to ARM architecturally that a process lead by Intel can't make up for it.
  • This is perhaps the most interesting scenario, in the sense of thinking of how Intel would react, if not overly likely.  If Intel were faced with this scenario, I believe Intel would simply bite the bullet and start making ARM chips, leveraging its process lead to become the dominant ARM chipmaker.  To make up for the revenue lost to licensing fees paid to ARM Holdings, it could focus its efforts in the OS space (its Tizen Linux OS project with Samsung hints at that).  Or it could look to make up for lost revenue by expanding its production of other basic process-sensitive components (e.g. DRAM).  I think this would be Intel's best and most likely option in this scenario.
  • VI. Why Intel is Unlikely to Play Fab For ARM Chipmakers (Even if ARM is Better)
  • From Intel's point of view, there is an entrenched but declining market for x86 chips because of Windows, and Intel will continue to support Atom chips (which will be required to run Windows 8 tablets), but growth on desktops will come from 64-bit desktop/server-class non-Windows ARM devices - Chromebooks, Android laptops, possibly Apple's desktop products as well, given they are going 64-bit ARM for their future iPhones. Even Windows has been trying to transition (unsuccessfully) to ARM. Again, the Windows server market is tied to x86, but Linux and FreeBSD servers will run on ARM as well, and ARM will take a chunk out of the server market when a decent 64-bit ARM server chip is available as a result.
  •  
    Excellent article explaining the CPU war for the future of computing, as Intel and ARM square off.  Intel's x86 architecture dominates the era of client/server computing, with their famed WinTel alliance monopolizing desktop, notebook and server implementations.  But Microsoft was a no-show in the emerging mobile computing market, and now ARM is in position to transition from their mobile dominance to challenge the desktop, notebook and server markets.  WinTel lost their shot at the mobile computing market, and now their legacy platforms are in play.  Good article!!! Well worth the read time.
Paul Merrell

Eric Holder: The Justice Department could strike deal with Edward Snowden - 0 views

  • Michael Isikoff, Chief Investigative Correspondent, July 6, 2015 -- Former Attorney General Eric Holder said today that a “possibility exists” for the Justice Department to cut a deal with former NSA contractor Edward Snowden that would allow him to return to the United States from Moscow. In an interview with Yahoo News, Holder said “we are in a different place as a result of the Snowden disclosures” and that “his actions spurred a necessary debate” that prompted President Obama and Congress to change policies on the bulk collection of phone records of American citizens. Asked if that meant the Justice Department might now be open to a plea bargain that allows Snowden to return from his self-imposed exile in Moscow, Holder replied: “I certainly think there could be a basis for a resolution that everybody could ultimately be satisfied with. I think the possibility exists.”
  • But his remarks to Yahoo News go further than any current or former Obama administration official in suggesting that Snowden’s disclosures had a positive impact and that the administration might be open to a negotiated plea that the self-described whistleblower could accept, according to his lawyer Ben Wizner.
  • It’s also not clear whether Holder’s comments signal a shift in Obama administration attitudes that could result in a resolution of the charges against Snowden. Melanie Newman, chief spokeswoman for Attorney General Loretta Lynch, Holder’s successor, immediately shot down the idea that the Justice Department was softening its stance on Snowden. “This is an ongoing case so I am not going to get into specific details but I can say our position regarding bringing Edward Snowden back to the United States to face charges has not changed,” she said in an email.
  • ...1 more annotation...
  • Three sources familiar with informal discussions of Snowden’s case told Yahoo News that one top U.S. intelligence official, Robert Litt, the chief counsel to Director of National Intelligence James Clapper, recently privately floated the idea that the government might be open to a plea bargain in which Snowden returns to the United States, pleads guilty to one felony count and receives a prison sentence of three to five years in exchange for full cooperation with the government.
Gary Edwards

Google Chrome 5 WebKit - Firefox - Opera Comparisons - BusinessWeek - 0 views

  •  
    Chrome runs as close as any browser can to the bleeding edge of Web standards. Though it uses the same open source WebKit rendering engine as Safari, it doesn't reliably support the controversial, proprietary CSS3 transformation and animation tricks that Apple's built into Safari. However, like every browser I tested, it earned a perfect score in a compatibility test for CSS3 selectors, and it joined Safari and Opera with a flawless score of 100 in the Acid3 web standards benchmark. Chrome 5 also supports both Apple's H.264 codec and Mozilla's preferred open source Ogg Theora technology for plugin-free HTML5 video, and it beautifully played back HTML5 demo videos from YouTube and Brightcove. In XHTML and CSS tests, Chrome was surprisingly slower than Safari, despite their shared rendering engine -- but the race was close. Safari rendered a local XHTML test page in 0.58 seconds to Chrome's 0.78 seconds, and a local CSS test page in 33 milliseconds to Chrome's 51 milliseconds. Note that Chrome still rendered XHTML more than twice as fast as Opera (1.67 seconds) and left Firefox (12.42 seconds--no, that's not a typo) eating its dust. In CSS, it also beat the pants off Opera (193 milliseconds) and Firefox (342 milliseconds). But Chrome shines brightest when handling JavaScript. Its V8 engine zipped through the SunSpider Javascript benchmark in 448.6 milliseconds, narrowly beating Opera's 485.8 milliseconds, and absolutely plastering Firefox's 1,161.4 milliseconds. However, Safari 5's time of 376.3 milliseconds in the SunSpider test beat Chrome 5 handily.
Gary Edwards

GSA picks Google Apps: What it means | ZDNet - 0 views

  •  
    The General Services Administration made a bold decision to move its email and collaboration systems to the cloud.  This is a huge win for cloud computing, but perhaps should have been expected, since last week the Feds announced a new requisition and purchase mandate that cloud computing had to be the FIRST consideration for federal agency purchases.  Note that the General Services Administration oversees requisitions and purchases for all Federal agencies!  This is huge.  Estimated to be worth $8 billion to cloud-computing providers. The cloud-computing market is estimated to be $30 billion, but Gartner did not anticipate or expect Federal agencies to embrace cloud computing, let alone issue a mandate for it.   In the RFP issued last June, it was easy to see their goals in the statement of objectives: This Statement of Objectives (SOO) describes the goals that GSA expects to achieve with regard to the
    1. modernization of its e-mail system;
    2. provision of an effective collaborative working environment;
    3. reduction of the government's in-house system maintenance burden by providing related business, technical, and management functions; and
    4. application of appropriate security and privacy safeguards.
    GSA announced yesterday that they chose Google Apps for email and collaboration and Unisys as the implementation partner. So what does this mean? What it means (WIM) #1: GSA employees will be using a next-generation information workplace. And that means mobile, device-agnostic, and location-agile. Gmail on an iPad? No problem. Email from a home computer? Yep. For GSA and for every other agency and most companies, it's important to give employees the tools to be productive and engage from every location on every device. "Work becomes a thing you do and not a place you go." [Thanks to Earl Newsome of Estee Lauder for that quote.] WIM #2: GSA will save 50% of the cost of email over five years. This is also what our research on the cost of email o
Gary Edwards

Why You Should Upload Documents to Office Web Apps via SkyDrive - 0 views

  •  
    Here it comes - the "rich" Web experience based on integrated but proprietary 2010 technologies from Microsoft.  Note the comparative "advantages" listed in this article describing Microsoft SkyDrive and comparing it to Google Docs. excerpt:  Do you use Microsoft Office programs for creating documents and then use Google Docs to edit these documents online or as an offsite backup? Well, now that Office 2010 and Office Web Apps are available under public beta for free, here are some reasons why you should consider uploading documents, presentations and spreadsheets into Office Web Apps via Windows Live SkyDrive in addition to your Google Docs account.
    1. Windows Live SkyDrive supports larger files
    2. Document formatting is preserved
    3. Native OpenXML file formats
    4. Public Documents are in the Lifestream
    5. Content is not 'lost in translation'
    ... When you upload a document in Office Web Apps, the application will automatically preserve all the data in that document even if a particular feature is not currently supported by the online applications. For instance, if your PowerPoint presentation contains a slide transition (e.g., Vortex) that is not supported in the online version of Office, the feature will be preserved in your presentation even if you upload it to Office Web Apps via Windows Live SkyDrive. Later, when you download and open that presentation inside PowerPoint, it will be just like the original version. The content is not 'lost in translation' with Office Web Apps. Are you using Google Docs as a document backup service?  Office Web Apps won't just preserve all the original features of your documents - you can also download entire directories of Office documents as a ZIP file with a single click.
Gary Edwards

Top Five Cloud Computing Predictions for 2011: John Savageau | SYS-CON MEDIA - 0 views

  •  
    1. ESBaaS Will Emerge in Enterprise Clouds (enterprise service bus as a service, for internal messaging and exchange between apps).
    2. Enterprise Cloud Computing will Accelerate Data Center Consolidation.
    3. Desktop Virtualization.
    4. SME Data Center Outsourcing into Public Clouds.
    5. Cloud Computing and Cloud Storage will Look to PODs and Containers.
Gary Edwards

EU Cyber Agency ENISA Issues Governmental Cloud Report | WHIR Web Hosting Industry News - 0 views

  •  
    The EU's cyber security agency ENISA (www.enisa.europa.eu) announced this week it has released a new report on governmental cloud computing. The report, which can be downloaded now on the ENISA website, is targeted at senior managers of public bodies who have to make a security and resilience decision about migrating to the cloud, if at all. The main goal of the report is to support governmental bodies in taking informed, risk-based decisions about the security of data, resilience of service and legal compliance in moving to the cloud. ENISA concludes that private and community clouds appear to be the solutions that best meet the needs of public administrations that must achieve the highest level of data governance. The report makes several recommendations to governments and public bodies, including that national governments and the EU institutions should investigate the concept of an EU governmental cloud. The report also argues that cloud computing will soon serve a significant portion of EU citizens, SMEs and public administrations, and that national governments should therefore prepare a cloud computing strategy and study the role that cloud computing will play for critical information infrastructure protection. Finally, the report states that a national cloud computing strategy should address the effects of national/supra-national interoperability and interdependencies, cascading failures, and include cloud providers in the reporting schemes of articles 4 and 13 of the new Telecom Framework Directive. Download report:  http://www.enisa.europa.eu/act/rm/emerging-and-future-risk/deliverables/security-and-resilience-in-governmental-clouds/
Gary Edwards

Escape the App Store: 4 ways to create smartphone Web apps | HTML5 - CSS - JavaScript D... - 0 views

  •  
    Excellent guidelines for developing cross-platform smartphone apps in HTML5-CSS-JavaScript.  Covers Appcelerator, Sencha, jQuery, and Drupal.  Great resource!
Paul Merrell

Oracle to gain quick OK for Sun purchase (Dealscape - Pipeline) - 0 views

  •  
    The Justice Department's antitrust division is preparing to give a quick thumbs up to Oracle Corp.'s proposed $7.4 billion purchase of Sun Microsystems Inc., according to a lawyer briefed on the case. Swift approval surprises the many antitrust experts who predicted a long, scathing review. A difficult investigation was expected for two key reasons. First, Assistant Attorney General Christine Varney, who has a special area of expertise in high technology matters, rode into office promising vigorous antitrust enforcement. Second, skepticism lingers among DOJ staff about Oracle's business practices. Oracle CEO Larry Ellison thwarted the DOJ in 2004 when he defeated an attempt by the agency to block his purchase of rival PeopleSoft Inc.
Gary Edwards

Petabytes on a budget: How to build cheap cloud storage | Backblaze Blog - 0 views

  •  
    Amazing must-read!  BackBlaze offers unlimited cloud storage/backup for $5 per month.  Now they are releasing the "storage" aspect of their service as an open source design.  The discussion introducing the design is simple to read and follow - which in itself is an achievement.  They held back on open-sourcing the BackBlaze Cloud software system, which is understandable.  But they do disclose a Debian Linux OS running Tomcat over Apache Server 5.4 with JFS and HTTPS access.  This is exciting stuff.  I hope the CAR MLS-Cloud guys take notice.  Intro: At Backblaze, we provide unlimited storage to our customers for only $5 per month, so we had to figure out how to store hundreds of petabytes of customer data in a reliable, scalable way - and keep our costs low. After looking at several overpriced commercial solutions, we decided to build our own custom Backblaze Storage Pods: 67-terabyte 4U servers for $7,867. In this post, we'll share how to make one of these storage pods, and you're welcome to use this design. Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us. Evolving and lowering costs is critical to our continuing success at Backblaze.
Paul Merrell

Report: Verizon Claimed Public Utility Status To Get Government Perks - Slashdot - 0 views

  • Research for the Public Utility Law Project (PULP) has been released which details 'how Verizon deliberately moves back and forth between regulatory regimes, classifying its infrastructure either like a heavily regulated telephone network or a deregulated information service depending on its needs. The chicanery has allowed Verizon to raise telephone rates, all the while missing commitments for high-speed internet deployment' (PDF). In short, Verizon pushed for the government to give it common carrier privileges under Title II in order to build out its fiber network with tax-payer money. Result: increased service rates on telephone users to subsidize Verizon's 'infrastructure investment.' When it comes to regulations on Verizon's fiber network, however, Verizon has been pushing the government to classify its services as that of information only — i.e., beyond Title II. Verizon has made about $4.4 billion in additional revenue in New York City alone, 'money that's funneled directly from a Title II service to an array of services that currently lie beyond Title II's reach.' And it's all legal. An attorney at advocacy group Public Knowledge said it best: 'To expect that you can come in and use public infrastructure and funds to build a network and then be free of any regulation is absurd....When Verizon itself is describing these activities as a Title II common carrier, how can the FCC look at broadband internet and continue acting as though it's not a telecommunication network?'
  •  
    Let's also not forget that what is now named "Verizon" used to be named Bell Atlantic, one of the seven Baby Bells spun off from AT&T by government order during antitrust proceedings. In other words, this is one of the companies that ratepayers financed through a heavily regulated, absolute monopoly on analog telephony. But Verizon wants to spread its wings and escape the chains of regulation as a telecommunications carrier while having its cake and eating it too, according to this article. Through a proposed rule, the FCC has given itself the flexibility to postpone a decision on net neutrality. AT&T famously was allowed to keep its R&D arm while being freed of the expense of upgrading the U.S. telephony network from analog to digital and from copper wire to fiber optic. And pay for those Baby Bells to make that transition we did. I remember monthly bills for a two-person office running as high as $1,100 a month for calls, all carried from one Baby Bell to AT&T and back to another Baby Bell, at state-regulated rates with the FCC looking the other way. But now Verizon, Comcast (originally municipally regulated cable television monopolies), and the few other "competing" survivors of that broadband rollout, having had their infrastructure paid for by the ratepayers, want to fly off and begin charging us at the other end of the pipe, via charges to content providers that will be passed on to us. That leads to the squeezing out of Mom and Pop internet businesses by the big content providers that can afford the charges and pass them on to us. This is looking more and more like another massive rip-off of the customers who already paid for that infrastructure. Is that banksters I smell, privatizing an enormous public utility in the name of free markets?
Paul Merrell

UN Report Finds Mass Surveillance Violates International Treaties and Privacy Rights - ... - 0 views

  • The United Nations’ top official for counter-terrorism and human rights (known as the “Special Rapporteur”) issued a formal report to the U.N. General Assembly today that condemns mass electronic surveillance as a clear violation of core privacy rights guaranteed by multiple treaties and conventions. “The hard truth is that the use of mass surveillance technology effectively does away with the right to privacy of communications on the Internet altogether,” the report concluded. Central to the Rapporteur’s findings is the distinction between “targeted surveillance” — which “depend[s] upon the existence of prior suspicion of the targeted individual or organization” — and “mass surveillance,” whereby “states with high levels of Internet penetration can [] gain access to the telephone and e-mail content of an effectively unlimited number of users and maintain an overview of Internet activity associated with particular websites.” In a system of “mass surveillance,” the report explained, “all of this is possible without any prior suspicion related to a specific individual or organization. The communications of literally every Internet user are potentially open for inspection by intelligence and law enforcement agencies in the States concerned.”
  • Mass surveillance thus “amounts to a systematic interference with the right to respect for the privacy of communications,” it declared. As a result, “it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately.” In concluding that mass surveillance impinges core privacy rights, the report was primarily focused on the International Covenant on Civil and Political Rights, a treaty enacted by the General Assembly in 1966, to which all of the members of the “Five Eyes” alliance are signatories. The U.S. ratified the treaty in 1992, albeit with various reservations that allowed for the continuation of the death penalty and which rendered its domestic law supreme. With the exception of the U.S.’s Persian Gulf allies (Saudi Arabia, UAE and Qatar), virtually every major country has signed the treaty. Article 17 of the Covenant guarantees the right of privacy, the defining protection of which, the report explained, is “that individuals have the right to share information and ideas with one another without interference by the State, secure in the knowledge that their communication will reach and be read by the intended recipients alone.”
  • The report’s key conclusion is that this core right is impinged by mass surveillance programs: “Bulk access technology is indiscriminately corrosive of online privacy and impinges on the very essence of the right guaranteed by article 17. In the absence of a formal derogation from States’ obligations under the Covenant, these programs pose a direct and ongoing challenge to an established norm of international law.” The report recognized that protecting citizens from terrorist attacks is a vital duty of every state, and that the right of privacy is not absolute, as it can be compromised when doing so is “necessary” to serve “compelling” purposes. It noted: “There may be a compelling counter-terrorism justification for the radical re-evaluation of Internet privacy rights that these practices necessitate.” But the report was adamant that no such justifications have ever been demonstrated by any member state using mass surveillance: “The States engaging in mass surveillance have so far failed to provide a detailed and evidence-based public justification for its necessity, and almost no States have enacted explicit domestic legislation to authorize its use.”
  • Instead, explained the Rapporteur, states have relied on vague claims whose validity cannot be assessed because of the secrecy behind which these programs are hidden: “The arguments in favor of a complete abrogation of the right to privacy on the Internet have not been made publicly by the States concerned or subjected to informed scrutiny and debate.” About the ongoing secrecy surrounding the programs, the report explained that “states deploying this technology retain a monopoly of information about its impact,” which is “a form of conceptual censorship … that precludes informed debate.” A June report from the High Commissioner for Human Rights similarly noted “the disturbing lack of governmental transparency associated with surveillance policies, laws and practices, which hinders any effort to assess their coherence with international human rights law and to ensure accountability.” The rejection of the “terrorism” justification for mass surveillance as devoid of evidence echoes virtually every other formal investigation into these programs. A federal judge last December found that the U.S. Government was unable to “cite a single case in which analysis of the NSA’s bulk metadata collection actually stopped an imminent terrorist attack.” Later that month, President Obama’s own Review Group on Intelligence and Communications Technologies concluded that mass surveillance “was not essential to preventing attacks” and information used to detect plots “could readily have been obtained in a timely manner using conventional [court] orders.”
  • Three Democratic Senators on the Senate Intelligence Committee wrote in The New York Times that “the usefulness of the bulk collection program has been greatly exaggerated” and “we have yet to see any proof that it provides real, unique value in protecting national security.” A study by the centrist New America Foundation found that mass metadata collection “has had no discernible impact on preventing acts of terrorism” and, where plots were disrupted, “traditional law enforcement and investigative methods provided the tip or evidence to initiate the case.” It labeled the NSA’s claims to the contrary as “overblown and even misleading.” Even though mass surveillance has proven worthless for counter-terrorism, the UN report warned that allowing it to persist with no transparency creates “an ever present danger of ‘purpose creep,’ by which measures justified on counter-terrorism grounds are made available for use by public authorities for much less weighty public interest purposes.” Citing the UK as one example, the report warned that, already, “a wide range of public bodies have access to communications data, for a wide variety of purposes, often without judicial authorization or meaningful independent oversight.”
  • The report was most scathing in its rejection of a key argument often made by American defenders of the NSA: that mass surveillance is justified because Americans are given special protections (the requirement of a FISA court order for targeted surveillance) which non-Americans (95% of the world) do not enjoy. Not only does this scheme fail to render mass surveillance legal, but it itself constitutes a separate violation of international treaties (emphasis added): The Special Rapporteur concurs with the High Commissioner for Human Rights that where States penetrate infrastructure located outside their territorial jurisdiction, they remain bound by their obligations under the Covenant. Moreover, article 26 of the Covenant prohibits discrimination on grounds of, inter alia, nationality and citizenship. The Special Rapporteur thus considers that States are legally obliged to afford the same privacy protection for nationals and non-nationals and for those within and outside their jurisdiction. Asymmetrical privacy protection regimes are a clear violation of the requirements of the Covenant.
  • That principle — that the right of internet privacy belongs to all individuals, not just Americans — was invoked by NSA whistleblower Edward Snowden when he explained in a June 2013 interview with The Guardian why he disclosed documents showing global surveillance rather than just the surveillance of Americans: “More fundamentally, the ‘US Persons’ protection in general is a distraction from the power and danger of this system. Suspicionless surveillance does not become okay simply because it’s only victimizing 95% of the world instead of 100%.” The U.N. Rapporteur was clear that these systematic privacy violations are the result of a union between governments and tech corporations: “States increasingly rely on the private sector to facilitate digital surveillance. This is not confined to the enactment of mandatory data retention legislation. Corporates [sic] have also been directly complicit in operationalizing bulk access technology through the design of communications infrastructure that facilitates mass surveillance.”
  • The latest finding adds to the growing number of formal international rulings that the mass surveillance programs of the U.S. and its partners are illegal. In January, the European Parliament’s civil liberties committee condemned such programs in “the strongest possible terms.” In April, the European Court of Justice ruled that European legislation on data retention contravened EU privacy rights. A top-secret memo from GCHQ, published last year by The Guardian, explicitly stated that one key reason for concealing these programs was fear of a “damaging public debate” and specifically “legal challenges against the current regime.” The report ended with a call for far greater transparency along with new protections for privacy in the digital age. Continuation of the status quo, it warned, imposes “a risk that systematic interference with the security of digital communications will continue to proliferate without any serious consideration being given to the implications of the wholesale abandonment of the right to online privacy.” The urgency of these reforms is underscored, explained the Rapporteur, by a conclusion of the United States Privacy and Civil Liberties Oversight Board that “permitting the government to routinely collect the calling records of the entire nation fundamentally shifts the balance of power between the state and its citizens.”