
Open Web / Group items tagged document-conversion


Gary Edwards

Adeptol Viewing Technology Features - 0 views

  •  
    Why Adeptol? Document support: more than 300 document types supported out of the box. Not a virtual printer: a multitenant platform for high-end document viewing. No software to install: no additional software is needed on the server. No ActiveX or plug-ins: no plug-ins, ActiveX controls, or applets need to be downloaded on the client side. Fully customizable: an advanced API offers full customization and UI changes. Any OS, any programming language: install Adeptol Server on any OS and integrate it with any programming language. Awards: Adeptol products receive industry awards and accolades year after year. That is the power of Adeptol. Adeptol can help you retain your customers and streamline your content integration efforts. Leverage Web 2.0 technologies to get a completely scalable content viewer that easily handles any type of content in virtually unlimited volume, with additional capabilities to support high-volume transaction and archive environments. Our enterprise-class infrastructure was built to meet the needs of the world's most demanding global enterprises. Based on AJAX technology, the viewer can be integrated into your application with complete ease. Support for all server platforms: can be installed on Windows (32-bit/64-bit) Server and Linux (32-bit/64-bit) Server. Integration with any programming language: whether you work in .NET, C#, PHP, ColdFusion, or JSP, the Adeptol Viewer can be integrated easily using the API set, and sample code is provided for all languages to get you started. Browser compatibility: tested and verified for compatibility with more than 99% of browsers on different platforms.
Gary Edwards

Office to finally fully support ODF, Open XML, and PDF formats | ZDNet - 0 views

  •  
    The king of clicks returns!  No doubt there was a time when the mere mention of ODF and the now legendary XML "document" format wars with Microsoft could drive click counts into the stratosphere.  Sorry to say though, those times are long gone. It's still a good story though, even if the fate of mankind and the future of the Internet no longer hinges on the outcome.  There is that question that continues to defy answer: "Did Microsoft win or lose?"  So the mere announcement of supported formats in MSOffice XX is guaranteed to rev the clicks somewhat. Veteran ODF clickmeister SVN does make an interesting observation though: "The ironic thing is that, while this was as hotly debated an issue in the mid-2000s as mobile patents and cloud implementation are today, this news was barely noticed. That's a mistake. Updegrove points out, "document interoperability and vendor neutrality matter more now than ever before as paper archives disappear and literally all of human knowledge is entrusted to electronic storage." He concluded, "Only if documents can be easily exchanged and reliably accessed on an ongoing basis will competition in the present be preserved, and the availability of knowledge down through the ages be assured. Without robust, universally adopted document formats, both of those goals will be impossible to attain." Updegrove's right of course. Don't believe me? Go into your office's archives and try to bring up documents you wrote in the 90s in WordPerfect or papers your staff created in the 80s with WordStar. If you don't want to lose your institutional memory, open document standards support is more important than ever." Sorry, but Updegrove is wrong.  Woefully wrong. The Web is the future.  Sure, interoperability matters, but only as far as the Web and the future of Cloud Computing are concerned.  Sadly, neither ODF nor Open XML is Web ready.  The language of the Web is famously HTML, now HTML5+
Gary Edwards

Google Wave Operational Transformation (Google Wave Federation Protocol) - 0 views

  • Wave document operations consist of the following mutation components: skip, insert characters, insert element start, insert element end, insert anti-element start, insert anti-element end, delete characters, delete element start, delete element end, delete anti-element start, delete anti-element end, set attributes, update attributes, commence annotation, and conclude annotation. The following is a more complex example document operation: skip 3; insert element start with tag "p" and no attributes; insert characters "Hi there!"; insert element end; skip 5; delete characters 4. From this, one can see how an entire XML document can be represented as a single document operation.
  • Wave Operations. Wave operations consist of a document operation, for modifying XML documents, and other non-document operations. Non-document operations are for tasks such as adding or removing a participant from a Wavelet. We'll focus on document operations here as they are the most central to Wave. It's worth noting that an XML document in Wave can be regarded as a single document operation that can be applied to the empty document. This section will also cover how Wave operations are particularly efficient even in the face of a large number of transforms. XML Document Support. Wave uses a streaming interface for document operations, similar to an XMLStreamWriter or a SAX handler. The document operation consists of a sequence of ordered document mutations. The mutations are applied in sequence as you traverse the document linearly. Designing document operations in this manner makes it easier to write the transformation and composition functions described later. In Wave, every 16-bit Unicode code unit (as used in JavaScript, JSON, and Java strings), start tag, or end tag in an XML document is called an item. Gaps between items are called positions. Position 0 is before the first item. A document operation can contain mutations that reference positions. For example, a "skip" mutation specifies how many positions to skip ahead in the XML document before applying the next mutation. Wave document operations also support annotations. An annotation is some metadata associated with an item range, i.e. a start position and an end position. This is particularly useful for describing text formatting and spelling suggestions, as it does not unnecessarily complicate the underlying XML document format.
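To make the mechanics concrete, here is a minimal sketch of how the example operation quoted above might be applied, assuming a toy representation in which a document is just a flat list of items (single characters plus element start/end markers). The function and tuple shapes are illustrative only, not the actual Wave data model.

```python
# Toy model of a Wave document operation applied to a flat item list.
# An item is a single character, a ("start", tag, attrs) marker, or ("end",).

def apply_op(items, mutations):
    """Apply a document operation (a sequence of mutations) to an item list."""
    out, pos = list(items), 0
    for m in mutations:
        kind = m[0]
        if kind == "skip":                      # advance the cursor n positions
            pos += m[1]
        elif kind == "insert_chars":            # one item per character
            out[pos:pos] = list(m[1])
            pos += len(m[1])
        elif kind == "insert_element_start":
            out.insert(pos, ("start", m[1], m[2]))
            pos += 1
        elif kind == "insert_element_end":
            out.insert(pos, ("end",))
            pos += 1
        elif kind == "delete_chars":            # remove the next n character items
            del out[pos:pos + m[1]]
        else:
            raise ValueError("unsupported mutation: %r" % (kind,))
    return out

# The example operation from the excerpt above, applied to a 13-character document:
doc = list("abcdefghijklm")
op = [
    ("skip", 3),
    ("insert_element_start", "p", {}),
    ("insert_chars", "Hi there!"),
    ("insert_element_end",),
    ("skip", 5),
    ("delete_chars", 4),
]
print(apply_op(doc, op))
```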
  •  
    Summary: Collaborative document editing means multiple editors being able to edit a shared document at the same time. Live and concurrent means being able to see the changes another person is making, keystroke by keystroke. Currently, there are already a number of products on the market that offer collaborative document editing. Some offer live concurrent editing, such as EtherPad and SubEthaEdit, but do not offer rich text. There are others that offer rich text, such as Google Docs, but do not offer a seamless live concurrent editing experience, as merge failures can occur. Wave stands as a solution that offers both live concurrent editing and rich text document support.  The result is that Wave allows for a very engaging conversation where you can see what the other person is typing, character by character, much like how you would converse in a cafe. This is very much like instant messaging, except you can see what the other person is typing, live. Wave also allows for a more productive collaborative document editing experience, where people don't have to worry about stepping on each other's toes and can still use common word processor functionalities such as bold, italics, bullet points, and headings. Wave is more than just rich text documents. In fact, Wave's core technology allows live concurrent modifications of XML documents, which can be used to represent any structured content, including system data that is shared between clients and backend systems. To achieve these goals, Wave uses a concurrency control system based on Operational Transformation.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
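As a rough illustration of that extensibility (the class names "epigraph" and "summary" below are made up for the example, not part of any standard or microformat vocabulary), a script can pick out class-annotated elements and treat them semantically:

```python
# Illustrative sketch: treating the XHTML "class" attribute as a lightweight
# semantic layer. The class names here are invented for the example.
from lxml import etree

xhtml = """
<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">Tell all the truth but tell it slant.</p>
  <p>An ordinary paragraph.</p>
  <p class="summary">This chapter argues that XHTML is enough.</p>
</div>
"""

root = etree.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}

# Pull out only the "semantic" paragraphs, by class.
for cls in ("epigraph", "summary"):
    for p in root.findall(f".//x:p[@class='{cls}']", namespaces=ns):
        print(cls, "->", p.text)
```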
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
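A quick way to see that anatomy for yourself is to open an ePub with an ordinary zip library. A minimal sketch, assuming a local file named book.epub:

```python
# Minimal sketch: peek inside an ePub file. "book.epub" is a placeholder name.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    # The first entry is conventionally the uncompressed "mimetype" file.
    print(epub.read("mimetype").decode("ascii"))        # application/epub+zip

    # META-INF/container.xml points at the OPF package file, which in turn
    # lists the XHTML content documents that make up the book.
    print(epub.read("META-INF/container.xml").decode("utf-8"))

    # List the declarative and content parts of the package.
    for name in epub.namelist():
        if name.endswith((".xhtml", ".html", ".opf", ".ncx")):
            print(name)
```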
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
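A minimal sketch of that kind of clean-up, assuming the export marks list items with a class attribute named "bullet" (the real class name depends on the InDesign paragraph style), using simple search-and-replace:

```python
# Rough sketch of the clean-up described above: class-tagged paragraphs become
# proper list and list-item elements. The class name "bullet" is an assumption.
import re

def listify(xhtml: str) -> str:
    # 1. Turn class-tagged paragraphs into list items.
    xhtml = re.sub(r'<p class="bullet">(.*?)</p>', r"<li>\1</li>", xhtml, flags=re.S)
    # 2. Wrap each run of consecutive <li> elements in a <ul>.
    xhtml = re.sub(r"(?:<li>.*?</li>\s*)+",
                   lambda m: "<ul>\n" + m.group(0) + "</ul>\n",
                   xhtml, flags=re.S)
    return xhtml

sample = """
<p>Before the list.</p>
<p class="bullet">First item</p>
<p class="bullet">Second item</p>
<p>After the list.</p>
"""
print(listify(sample))
```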
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe's IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
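A hedged sketch of that transformation step, shown here with Python's lxml rather than xsltproc; the stylesheet and file names are placeholders for the project's actual XHTML-to-ICML transform:

```python
# Sketch of running an XSLT transform programmatically; roughly equivalent to:
#   xsltproc -o chapter1.icml xhtml_to_icml.xsl chapter1.xhtml
# The stylesheet name and its contents stand in for the real transform.
from lxml import etree

stylesheet = etree.parse("xhtml_to_icml.xsl")
transform = etree.XSLT(stylesheet)

source = etree.parse("chapter1.xhtml")
result = transform(source)

with open("chapter1.icml", "wb") as out:
    out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))
```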
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
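To make the "wrapping" step concrete, here is a minimal hand-rolled sketch of the ePub container structure that a tool like eCub produces automatically. All file names and metadata below are placeholders, and a production file also needs a toc.ncx navigation file and richer metadata:

```python
# Minimal sketch of wrapping XHTML chapters as an ePub (EPUB 2 layout).
# Real tools such as eCub also generate a toc.ncx table of contents.
import zipfile

chapters = ["ch01.xhtml", "ch02.xhtml"]          # placeholder chapter files

container_xml = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

manifest = "\n".join(
    f'    <item id="c{i}" href="{name}" media-type="application/xhtml+xml"/>'
    for i, name in enumerate(chapters))
spine = "\n".join(f'    <itemref idref="c{i}"/>' for i in range(len(chapters)))

content_opf = f"""<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Placeholder Title</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
  </metadata>
  <manifest>
{manifest}
  </manifest>
  <spine>
{spine}
  </spine>
</package>"""

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must come first and must be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", container_xml)
    epub.writestr("OEBPS/content.opf", content_opf)
    for name in chapters:
        epub.writestr(f"OEBPS/{name}", open(name, encoding="utf-8").read())
```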
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 0 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant
  • ...17 more annotations...
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days to six months. It stores the content of communications for a shorter period of time, varying between three to 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explain in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
Gary Edwards

Cloud file-sharing for enterprise users - 1 views

  •  
    Quick review of different sync-share-store services, starting with DropBox and ending with three Open Source services. Very interesting. Things have progressed since I last worked on the SurDocs project for Sursen. No mention in this review of file formats, conversion, or viewing issues. I do know that CrocoDoc is used by nearly every sync-share-store service to convert documents to either PDF or HTML formats for viewing. No service, however, has been able to hit the "native document" sweet spot. Not even SurDocs, which was the whole purpose behind the project! "Native documents" means that the document is in its native/original application format. That format is needed for round-tripping and reloading of the document. Although most sync-share-store services work with MSOffice OXML formatted documents, only Microsoft provides a true "native" format viewer (Office 365). Office 365 enables direct edit, view, and collaboration on native documents, which is an enormous advantage given that conversion of any sort is guaranteed to "break" a native document and disrupt any related business processes or round-tripping needs. It was here that SurDoc was to provide a breakthrough technology. Sadly, we're still waiting :( excerpt: The availability of cheap, easy-to-use and accessible cloud file-sharing services means users have more freedom and choice than ever before. Dropbox pioneered simplicity and ease of use, and so quickly picked up users inside the enterprise. Similar services have followed Dropbox's lead and now there are dozens, including well-known ones such as Google Drive, SkyDrive and Ubuntu One. Valdis Filks, research director at analyst firm Gartner, explained the appeal of cloud file-sharing services. Filks said: "Enterprise employees use Dropbox and Google because they are consumer products that are simple to use, can be purchased without officially requesting new infrastructure or budget expenditure, and can be installed quickly …
  •  
    Odd that the reporter mentions the importance of security near the top of the article but gives that topic such short shrift in his evaluation of the services. For example, "secured by 256-bit AES encryption" is meaningless without discussing other factors such as: [i] who creates the encryption keys and on which side of the server/client divide; and [ii] the service's ability to decrypt the customer's content. Encryption and decryption must be done on the client side using unique keys that are unknown to the service; otherwise security is broken, and if the service does business in the U.S. or any of its territories or possessions, it is subject to gag orders to turn over the decrypted customer information. My wisdom so far is to avoid file sync services to the extent you can, boycott U.S. services until the spy agencies are encaged, and reward services that provide good security from nations with more respect for digital privacy, to give U.S.-based services an incentive to lobby *effectively* on behalf of their customers' privacy in Congress. The proof that they are not doing so is the complete absence of bills in Congress that would deal effectively with the abuse by U.S. spy agencies. From that standpoint, the Switzerland-based http://wuala.com/ file sync service is looking pretty good so far. I'm using it.
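As a sketch of the client-side model being argued for here (not any particular service's implementation), key generation and encryption happen locally, so only ciphertext ever leaves the machine:

```python
# Sketch of client-side encryption before sync: the key is generated and kept
# on the client, and the service only ever sees the encrypted blob.
# Uses the "cryptography" package; file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # created on the client, never uploaded
fernet = Fernet(key)

with open("contract.docx", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("contract.docx.enc", "wb") as f:
    f.write(ciphertext)              # only this encrypted file is synced

# Later, on another trusted client that holds the same key:
# plaintext = Fernet(key).decrypt(ciphertext)
```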
Gary Edwards

http://www.naverage.com/ - 0 views

  •  
    Florian's docx reader is now available for iOS high-touch devices.  Extreme fidelity for reading/viewing native docx documents.  I hope he is working on a Chrome extension version! The world urgently needs Web-ready, Web-viewable docx business documents. Conversion of docx to HTML sucks. The ultimate visual document system would enable users to work entirely in the native document format of the authoring system. Florian's reader can do this, but so far he's limited to iOS. Seems to me that the exploding sync-share-store market sector (DropBox, Box, Egnyte, SugarSync, etc.) really needs native document viewers that are HTML5 browser ready. "Naverage Reader HD features: designed for business documents. View your business document in unbelievable quality. Tracked Changes support: view text insertions, text deletions, and comments on your iPad. Layout fidelity: headers and footers, footnotes, tables, paragraph numbering, frames, and graphics layout optimized for business documents. Font embedding: corporate fonts on your iPad. .docx-compatible: compatible with the new Microsoft® Word format (.docx)."
Gary Edwards

Pragmatic PDF: Structured Content: PDF to HTML - 1 views

  •  
    A while back I included the following as one of the areas of interest of the PDF/D Consortium: Structured Documents and Single Sourcing, improving round-trips to document software. What did I mean by Structured Documents? For years Solid Documents has been converting PDF files to Word documents with a focus on retaining format and layout to allow customers to repurpose the content. While this is a great solution for a large number of customers, it is not the only type of reconstruction that is interesting. PDF is by nature a "document" format: the layout is in the form of pages. Content also needs to exist in alternate formats, like a continuously flowing stream. Use cases for continuously flowing content include: conversion to HTML to reflow for form factors other than "pages"; conversion to content management systems where structure is more important than layout and formatting; and conversion for alternate readers for people with disabilities (text to speech, etc.). Reconstruction for these use cases focuses more on the structure of the document than on the layout and formatting. For example, we need to take unstructured PDF files and recognize columns, tables, lists, headers and footers, etc. This allows us to organize the content in a logical structure. Ultimately, we'll recognize topics and sections too, so that we can produce logical hierarchies from plain old non-tagged PDF files. One great example of where conventional PDF pages are not the most appropriate way to read a document is on the small screens of handheld devices. For example, the typical BlackBerry has a 3"x2" screen with a resolution of something like 320x240 pixels.
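As a small illustration of that "continuously flowing stream" starting point (the library choice and file name are assumptions, not something Solid Documents uses), a PDF's text can be pulled out as a single flow before any structural reconstruction is attempted:

```python
# Minimal sketch: extract a PDF's text as one continuously flowing stream,
# the raw material for later recognizing columns, lists, headings, etc.
# Uses the pdfminer.six library; "report.pdf" is a placeholder file name.
from pdfminer.high_level import extract_text

flow = extract_text("report.pdf")   # page layout collapsed into a text stream
print(flow[:500])                   # first few hundred characters of the flow
```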
Gary Edwards

Crocodoc's HTML Document Viewer Infiltrates the Enterprise | Xconomy - 0 views

  •  
    "the core of Crocodoc's technology is a rendering engine that can reproduce pixel-perfect versions of native documents in a format that any Web browser can understand. You've probably seen a Word or PDF document displayed in a Google Docs browser window; that's actually just a big, fuzzy, graphical image of the original document. "It loads slowly and it doesn't look very good," says Damico. To create high-fidelity version of a native document that still loads quickly, you have to understand the structure of the document at a deep level, Damico says. "What is a heading, what is a paragraph, what is the kerning, what is the spacing?" Then you have to tell the browser how to reconstruct the document using nothing but style sheets and the other tools of HTML5. "We think everyone is going to be using HTML5, so we are focused on building the Ferrari of HTML5 document viewers.""
Gary Edwards

Save any Document from Microsoft Word 2007 to EPUB using a Free Add-in from Aspose - As... - 0 views

  •  
    Aspose.Words Product Family: Save any document from Microsoft Word 2007 to EPUB using a free add-in from Aspose. We are happy to announce the first release of a free add-in that allows you to convert any document opened in Microsoft Word 2007 to EPUB. To download, go to http://www.aspose.com/community/files/69/free-microsoft-office-add-ins/aspose.words-for-microsoft-word/default.aspx. Below are excerpts from the user's guide that is included in the installer. Introduction: Aspose.Words for Microsoft Word is a free utility that allows converting any document opened in Microsoft Word 2007 to the EPUB format. Microsoft Word 2007 can load documents in many formats, including DOC, DOCX, RTF, HTML, ODT, etc., and you can now easily convert them all to EPUB using Aspose.Words for Microsoft Word. About the EPUB format: EPUB is an XML-based distribution format for eBooks that is rapidly gaining adoption by publishers and distributors. EPUB is an open standard supported by the International Digital Publishing Forum (IDPF). See also the Wikipedia article. To save as EPUB: after you install Aspose.Words for Microsoft Word, the EPUB format will be listed in the Save As dialog. Saving to EPUB is just as simple as saving to any other file format available in Microsoft Word. 1. Open any document in Microsoft Word. 2. From the Save As menu select "Aspose.Words - EPUB (*.epub)", and then click Save. You can save any document from Microsoft Word to EPUB using Aspose.Words for Microsoft Word.
Gary Edwards

The Real Meaning Of Google Wave - Forbes.com - 0 views

  • Wave is a new way to build distributed applications, and it will open the door to an explosion of innovation.
  • So, if Wave is not just the demo application, what is it? Google Wave is a platform for creating distributed applications. Each Wave server can be involved in a number of conversations involving Wavelets, what most people would think of as a document. Wavelets are actually much more powerful and general because they are based on XML, which means you can have lots of depth of content, like headings and subheadings of a book, but on steroids. Adding a document repository to XMPP is just revolutionary.
  • The XMPP protocol manages the communication between the Wave servers so that all the Wavelets can synchronize as they are changed. Then Google finished the job by making Wavelets tag-able, searchable and versioned, so you can play back changes. But Google Wave goes beyond just managing the content--it also manages the programs that act on the content. At any level, a program can be assigned to a Wavelet to render it, that is, show it to a user and help manage the conversation. Google Wave also manages the distribution and management of these programs. The idea of a platform that combines management of the data and the code is really powerful.
  •  
    Good article.  One of the first to go beyond the demo, recognizing that Wave is an application platform - a wrapper for the convergence of communications and content. Excerpt: Wave is a new way to build distributed applications, and it will open the door to an explosion of innovation. What the Wave demo showed is support for a continuum from the shortest messages to longer and longer forms of content. All of it can be shared with precise control, tagged, searched. The version history is kept. No more mailing around a document. This takes the beauty of e-mail and wikis and extends it in a more flexible way to a much larger audience. Google Wave is a platform for creating distributed applications. Each Wave server can be involved in a number of conversations involving Wavelets, what most people would think of as a document. Wavelets are actually much more powerful and general because they are based on XML, which means you can have lots of depth of content, like headings and subheadings of a book, but on steroids. Adding a document repository to XMPP is just revolutionary. The XMPP protocol manages the communication between the Wave servers so that all the Wavelets can synchronize as they are changed. Then Google finished the job by making Wavelets tag-able, searchable and versioned, so you can play back changes. But Google Wave goes beyond just managing the content--it also manages the programs that act on the content. At any level, a program can be assigned to a Wavelet to render it, that is, show it to a user and help manage the conversation. Google Wave also manages the distribution and management of these programs. The idea of a platform that combines management of the data and the code is really powerful.
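    To make the "versioned, so you can play back changes" point concrete, here is a deliberately tiny sketch of replayable edits on a shared document. It is a conceptual illustration only; Wave's real protocol uses operational transformation synchronized over XMPP and is far more involved.

```python
# Toy illustration only: an append-only edit log that can be replayed to any
# earlier version, echoing Wave's "play back changes" feature. Wave's actual
# protocol (operational transformation over XMPP) is far more involved.
class Wavelet:
    def __init__(self):
        self.ops = []  # append-only log of edits

    def apply(self, edit):
        self.ops.append(edit)
        return len(self.ops)  # new version number

    def playback(self, version):
        """Reconstruct the wavelet content as of the given version."""
        return "\n".join(self.ops[:version])

wavelet = Wavelet()
wavelet.apply("Project kickoff notes.")
v2 = wavelet.apply("Agenda item added by a second participant.")
print(wavelet.playback(1))   # state after the first edit
print(wavelet.playback(v2))  # current state
```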
Gary Edwards

Kindle Format 8 Overview - 0 views

  •  
    Amazon releases a new version of the KF8 Format, with greatly improved HTML5-CSS3 capabilities.  Details of the KF8 spec can be found here: http://goo.gl/XY39v  A couple of things I'm wondering about here.  One is that the KindleGen conversion tool can convert HTML, XHTML and EPUB to KF8.  Has anyone tried to push an OpenOffice XHTML compound document through this latest KF8 version of KindleGen?  I'm thinking that perhaps the OOo HTML problem could be solved in this way. There is no doubt in my mind that HTML5 will continue to grow, and eventually replace the desktop XML "compound document" formats. The great transition from desktop client/server business productivity environments, where legacy compound documents rule the roost and fuel the engines of all business systems, to a Cloud Productivity Platform will require an HTML5 compound document format model.  Also needed will be HTML5-capable applications participating in the production of Cloud-ready compound documents.  Is KF8 a reasonable starting place? excerpt: Kindle Format 8 is Amazon's next generation file format offering a wide range of new features and enhancements - including HTML5 and CSS3 support that publishers can use to create all types of books. KF8 adds over 150 new formatting capabilities, including drop caps, numbered lists, fixed layouts, nested tables, callouts, sidebars and Scalable Vector Graphics - opening up more opportunities to create Kindle books that readers will love. Kindle Fire is the first Kindle device to support KF8 - in the coming months KF8 will be rolled out to our latest generation Kindle e-ink devices as well as our free Kindle reading apps.
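    For anyone who wants to try the OOo XHTML experiment mentioned above, KindleGen is a command-line converter and is easy to drive from script. A minimal sketch, with illustrative file names and options:

```python
# Sketch only: drive Amazon's KindleGen CLI from script to turn an EPUB or
# XHTML source into a KF8/MOBI file. File names and options are illustrative.
import subprocess

def to_kf8(source_file, output_name="book.mobi"):
    # kindlegen writes its output alongside the source; -o sets the file name.
    # Note: kindlegen can exit non-zero for mere warnings, so inspect the output.
    result = subprocess.run(["kindlegen", source_file, "-o", output_name],
                            capture_output=True, text=True)
    print(result.stdout)

to_kf8("compound-document.epub")
```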
Gary Edwards

Discoverer of JSON Recommends Suspension of HTML5 | Web Security Journal - 0 views

  •  
    Fascinating conversation between Douglas Crockford and Jeremy Geelan. The issues are XSS - the cross-site scripting capabilities of HTML - and "the painful gap" in the HTML5 specification of the interface between JavaScript and the browser. I had to use the Evernote Clearly Chrome extension to read this page. Microsoft is running a huge JavaScript advertisement/pointer that totally blocks the page with no way of closing or escaping. Incredible. Clearly was able to knock it out though. Nicely done! The HTML5-XSS problem is very important, especially if you're someone like me who sees the HTML+ format (HTML5-CSS3-JSON-JavaScript-SVG/Canvas) as the undisputed Cloud Productivity Platform "compound document" model. The XSS discussion goes right to the heart of the matter of creating an HTML compound document in much the same way that an MSOffice Productivity Compound Document worked. XSS mimics the functionality of embedded compound document components such as OLE, DDE, ODBC and Scripting. Crack open any client/server business document and it will be found to be loaded with these embedded components. It seems to me that any one of the Cloud Productivity Platform contenders could solve the HTML-XSS problem. I'm thinking Google Apps, Zoho, SalesForce.com, RackSpace and Amazon - with gApps and Zoho clearly leading the charge. Also let me add that RSS and XMPP (Jabber), while not normally mentioned with JSON, ought to be considered. Twitter uses RSS to transport and connect data. Jabber is of course a long-time favorite of mine. excerpt: The fundamental mistake in HTML5 was one of prioritization. It should have tackled the browser's most important problem first. Once the platform was secured, then shiny new features could be carefully added. There is much that is attractive about HTML5. But ultimately the thing that made the browser into a credible application delivery system was JavaScript, the ultimate workaround tool. There is a painful gap
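    For readers who want the XSS problem in concrete terms, the sketch below shows the standard server-side mitigation: escape untrusted text before it reaches HTML, and hand data to scripts as JSON rather than splicing raw strings. It is a generic illustration, not Crockford's specific proposal.

```python
# Sketch only: the standard server-side mitigation for the XSS class of bugs
# discussed above -- escape untrusted text before it is placed in HTML, and
# serialize data for scripts as JSON rather than splicing raw strings.
import html
import json

def render_comment(untrusted_text):
    # html.escape neutralizes <, >, &, and quotes so markup cannot be injected
    return f"<p class='comment'>{html.escape(untrusted_text)}</p>"

def render_bootstrap_data(untrusted_dict):
    # JSON-encode, then escape "</" so the payload cannot close the script tag
    payload = json.dumps(untrusted_dict).replace("</", "<\\/")
    return f"<script>var initialData = {payload};</script>"

print(render_comment('<img src=x onerror="alert(1)">'))
print(render_bootstrap_data({"note": "</script><script>alert(1)</script>"}))
```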
Gary Edwards

Microsoft Office to get a dose of OpenDocument - CNET News - 0 views

  •  
    While trying to help a friend understand the issues involved with exchanging MSOffice documents between the many different versions of MSOffice, I stumbled on this oldie but goodie ......... "A group of software developers have created a program to make Microsoft Office work with files in the OpenDocument format, a move that would bridge currently incompatible desktop applications. Gary Edwards, an engineer involved in the open-source OpenOffice.org project and founder of the OpenDocument Foundation, on Thursday discussed the software plug-in on the Web site Groklaw. The new program, which has been under development for about a year and finished initial testing last week, is designed to let Microsoft Office manipulate OpenDocument format (ODF) files, Edwards said. "The ODF Plugin installs on the file menu as a natural and transparent part of the 'open,' 'save,' and 'save as' sequences. As far as end users and other application add-ons are concerned, ODF Plugin renders ODF documents as if (they) were native to MS Office," according to Edwards. If the software, which is not yet available, works as described, it will be a significant twist to an ongoing contest between Microsoft and the backers of OpenDocument, a document format gaining more interest lately, particularly among governments. Microsoft will not natively support OpenDocument in Office 2007, which will come out later this year. Company executives have said that there is not sufficient demand and OpenDocument is less functional than its own Office formats. Having a third-party product to save OpenDocument files from Office could give OpenDocument-based products a bump in the marketplace, said Stephen O'Grady, a RedMonk analyst. OpenDocument is the native format for the OpenOffice open-source desktop productivity suite and is supported in others, including KOffice, Sun Microsystems' StarOffice and IBM's Workplace. "To the extent that you get people authoring documents in a format that is natively compatible with
Gary Edwards

Online Collaboration | Novell Vibe cloud service - 0 views

  •  
    Real-time co-creation and co-editing: With Novell Vibe, people in your organization can author and edit online documents together, character by character, in real time. Teams can dramatically accelerate the completion of projects that used to take weeks. Because collaboration unfolds in a shared workspace, no one has to manually merge content from multiple contributors to create a unified, finished document.
    Enterprise social messaging: As easy to use as Facebook or Twitter, Novell Vibe consolidates direct messages, chat, blogs and wikis from within Novell Vibe into one message stream. Creating new groups and inviting members from inside or outside your organization is as simple as sending an e-mail. You can even jumpstart ad-hoc conversations in seconds to tackle projects that can't wait.
    File synchronization and management: Files on your desktop, regardless of authoring application, can be synchronized to the Novell Vibe file repository based in the cloud. As a result, users always work with the latest versions of important files on their desktops and in Novell Vibe.
    The Novell Vibe unified message stream: Direct messages, social feeds and group conversations from within Novell Vibe are unified in one intuitive interface. This eliminates the need to constantly switch between locations to see all your content. Using powerful filtering, sorting and tagging capabilities, you can determine exactly what you want to see and whom you want to follow.
    Advanced information management: Novell Vibe keeps a persistent record of all your work and conversations. Its comprehensive search function quickly locates files, messages, attachments, groups and people to save time and boost productivity.
Gary Edwards

Government Market Drags Microsoft Deeper into the Cloud - 0 views

  •  
    Nice article from Scott M. Fulton describing Microsoft's iron-fisted lock on government desktop productivity systems and the great transition to a Cloud Productivity Platform.  Keep in mind that in 2005, Massachusetts tried to do the same thing with their SOA effort.  Then-Governor Romney put over $1M into a beta test that produced the now-infamous 300-page report written by Sam Hiser.  The details of this test resulted in the even more infamous da Vinci ODF plug-in for Microsoft Office desktops.   The lessons of Massachusetts are simple enough; it's not the formats or office suite applications.  It's the business process!  Conversion of documents not only breaks the document.  It also breaks the embedded "business process". The mystery here is that Microsoft owns the client side of client/server computing.  Compound documents, loaded with intertwined OLE, ODBC, ActiveX, and other embedded protocols and interface dependencies connecting data sources with workflow, are the fuel of these client/server business productivity systems.  Break a compound document and you break the business process.   Even though Massachusetts workers were wonderfully enthusiastic and supportive of an SOA-based infrastructure that would include Linux servers and desktops as well as OSS productivity applications, at the end of the day it's all about getting the work done.  Breaking the business process turned out to be a show-stopper. Cloud Computing changes all that.  The reason is that the Cloud is rapidly replacing client/server as the target architecture for new productivity developments, including data centers and transaction processing systems.  There are many reasons for the great transition, but IMHO the most important is that the Web combines communications with content, data, and collaborative computing.   Anyone who ever worked with the Microsoft desktop productivity environment knows that the desktop sucks as a communication device.  There was
Paul Merrell

Forget About Siri and Alexa - When It Comes to Voice Identification, the "NSA Reigns Su... - 0 views

  • These and other classified documents provided by former NSA contractor Edward Snowden reveal that the NSA has developed technology not just to record and transcribe private conversations but to automatically identify the speakers. Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees. The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.” It’s not difficult to see why. By intercepting and recording millions of overseas telephone conversations, video teleconferences, and internet calls — in addition to capturing, with or without warrants, the domestic conversations of Americans — the NSA has built an unrivaled collection of distinct voices. Documents from the Snowden archive reveal that analysts fed some of these recordings to speaker recognition algorithms that could connect individuals to their past utterances, even when they had used unknown phone numbers, secret code words, or multiple languages.
  • The classified documents, dating from 2004 to 2012, show the NSA refining increasingly sophisticated iterations of its speaker recognition technology. They confirm the uses of speaker recognition in counterterrorism operations and overseas drug busts. And they suggest that the agency planned to deploy the technology not just to retroactively identify spies like Pelton but to prevent whistleblowers like Snowden.
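    As a concrete, unclassified illustration of the generic voiceprint idea described in these excerpts, the sketch below derives a crude numeric print from a recording (averaged MFCC features via the open-source librosa library) and scores it against an enrolled print by cosine similarity. Real speaker-recognition systems, the NSA's included, use far more sophisticated models; the file names are illustrative.

```python
# Toy sketch of the generic voiceprint idea: derive a compact numeric
# representation of a voice and compare it against stored prints.
# Uses open-source librosa and averaged MFCC features; real speaker-ID
# systems use far more sophisticated models than this.
import numpy as np
import librosa

def voiceprint(wav_path):
    audio, sample_rate = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=20)
    return mfcc.mean(axis=1)  # one vector summarizing the recording

def similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = voiceprint("known_speaker.wav")      # illustrative file names
candidate = voiceprint("intercepted_call.wav")
print("match score:", similarity(enrolled, candidate))
```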
Gary Edwards

The better Office alternative: SoftMaker Office bests OpenOffice.org ( - Soft... - 0 views

shared by Gary Edwards on 30 Jun 09 - Cached
  • Frankly, from Microsoft's perspective, the danger may have been overstated. Though the free open source crowd talks a good fight, the truth is that they keep missing the real target. Instead of investing in new features that nobody will use, the team behind OpenOffice should take a page from the SoftMaker playbook and focus on interoperability first. Until OpenOffice works out its import/export filter issues, it'll never be taken seriously as a Microsoft alternative. More troubling (for Microsoft) is the challenge from the SoftMaker camp. These folks have gotten the file-format compatibility issue licked, and this gives them the freedom to focus on building out their product's already respectable feature set. I wouldn't be surprised if SoftMaker got gobbled up by a major enterprise player in the near future, thus creating a viable third way for IT shops seeking to kick the Redmond habit.
    • Gary Edwards
       
      This quote is an excerpt from the article :)
  •  
    Finally! Someone who gets it. For an office suite to be considered as an alternative to MSOffice, it must be designed with multiple levels of compatibility. It's not just the "feature sets" that must be comparable. The guts of the suite must be compatible at both the file format level and the environment level. Randall puts it this way: "It's the ecosystem, stupid". The reason ODF failed in Massachusetts is that neither OpenOffice nor OpenOffice ODF is designed to be compatible with legacy and existing MSOffice applications, binary formats, and the MSOffice productivity environment. Instead, OOo and OOo-ODF are designed to be competitively comparable. As an alternative to MSOffice, OpenOffice and OpenOffice ODF cannot fit into existing MSOffice workgroups and productivity environments. Because it was not designed to be compatible, OOo demands that the environment be replaced, rebuilt and re-engineered, making OOo and OOo-ODF costly and disruptive to critical day-to-day business processes. The lesson of Massachusetts is simple: compatibility matters. Conversion of workgroup/workflow documents from the MSOffice productivity environment to OpenOffice ODF will break those documents at two levels: fidelity and embedded "ecosystem" logic. Fidelity is what most end-users point to since that's the aspect of the document conversion they can see. However, it's what they can't see that is the show-stopper. The hidden side of workgroup/workflow documents is embedded logic that includes scripts, macros, formulas, OLE, data bindings, security settings, application-specific settings, and productivity environment settings. Break these aspects of the document, and you stop important business processes bound to the MSOffice productivity environment. There is no such thing as an OpenOffice productivity environment designed to be a compatible alternative to the MSOffice productivity environment. Another lesson from Massach
Gary Edwards

Memeo Connect's Take on the GDrive - 0 views

  •  
    Memeo Connect, which my colleague David Worthington tried and liked a few weeks ago, is an app that lets Google Apps users sync their documents and other files to a PC or Mac so they can get access to them even when they're offline. And as of today, it's available in a beta of version 2.0, which lets you get at synced files not only in Memeo's app but in Windows Explorer or the OS X finder, as well as in file open/save dialog boxes. The sync is two-way, so anything you drag or save into this repository gets moved back to Google Apps' storage once you're back online. And as before, Connect can handle files of all sorts and do conversions between Google Docs files and PDF and Microsoft Office formats. This virtual drive shows up in Explorer or Finder labeled as "GDrive" - a playful reference to a Google product that people have been expecting to arrive any day now for at least half a decade. (Don't tell anyone, but I've seen something called Google Web Drive in use at Google's offices; I assume it's undergoing internal testing and will get rolled out to the rest of us someday.) All in all, the new Connect competes more closely with Box.net (which launched its own syncing feature recently) and sync-focused services such as SugarSync. Memeo Connect 2.0's other major feature is full-text search of the files in your Google Docs collection: Previous versions could only search file names. The Memeo Connect 2.0 beta is free, but the final version will cost $9 per user per year. It requires a $50/year Google Apps Premier account. (I think plenty of users of Google Apps' free version would pay for it, but Google only lets third-party apps and services that access the Apps API work with the paid edition.)
Paul Merrell

The People and Tech Behind the Panama Papers - Features - Source: An OpenNews project - 0 views

  • Then we put the data up, but the problem with Solr was it didn’t have a user interface, so we used Project Blacklight, which is open source software normally used by librarians. We used it for the journalists. It’s simple because it allows you to do faceted search—so, for example, you can facet by the folder structure of the leak, by years, by type of file. There were more complex things—it supports queries in regular expressions, so the more advanced users were able to search for documents with a certain pattern of numbers that, for example, passports use. You could also preview and download the documents. ICIJ open-sourced the code of our document processing chain, created by our web developer Matthew Caruana Galizia. We also developed a batch-searching feature. So say you were looking for politicians in your country—you just run it through the system, and you upload your list to Blacklight and you would get a CSV back saying yes, there are matches for these names—not only exact matches, but also matches based on proximity. So you would say “I want Mar Cabra proximity 2” and that would give you “Mar Cabra,” “Mar whatever Cabra,” “Cabra, Mar,”—so that was good, because very quickly journalists were able to see… I have this list of politicians and they are in the data!
  • Last Sunday, April 3, the first stories emerging from the leaked dataset known as the Panama Papers were published by a global partnership of news organizations working in coordination with the International Consortium of Investigative Journalists, or ICIJ. As we begin the second week of reporting on the leak, Iceland’s Prime Minister has been forced to resign, Germany has announced plans to end anonymous corporate ownership, governments around the world launched investigations into wealthy citizens’ participation in tax havens, the Russian government announced that the investigation was an anti-Putin propaganda operation, and the Chinese government banned mentions of the leak in Chinese media. As the ICIJ-led consortium prepares for its second major wave of reporting on the Panama Papers, we spoke with Mar Cabra, editor of ICIJ’s Data & Research unit and lead coordinator of the data analysis and infrastructure work behind the leak. In our conversation, Cabra reveals ICIJ’s years-long effort to build a series of secure communication and analysis platforms in support of genuinely global investigative reporting collaborations.
  • For communication, we have the Global I-Hub, which is a platform based on open source software called Oxwall. Oxwall is a social network, like Facebook, which has a wall when you log in with the latest in your network—it has forum topics, links, you can share files, and you can chat with people in real time.
  • We had the data in a relational database format in SQL, and thanks to ETL (Extract, Transform, and Load) software Talend, we were able to easily transform the data from SQL to Neo4j (the graph-database format we used). Once the data was transformed, it was just a matter of plugging it into Linkurious, and in a couple of minutes, you have it visualized—in a networked way, so anyone can log in from anywhere in the world. That was another reason we really liked Linkurious and Neo4j—they’re very quick when representing graph data, and the visualizations were easy to understand for everybody. The not-very-tech-savvy reporter could expand the docs like magic, and more technically expert reporters and programmers could use the Neo4j query language, Cypher, to do more complex queries, like show me everybody within two degrees of separation of this person, or show me all the connected dots…
  • We believe in open source technology and try to use it as much as possible. We used Apache Solr for the indexing and Apache Tika for document processing, and it’s great because it processes dozens of different formats and it’s very powerful. Tika interacts with Tesseract, so we did the OCRing on Tesseract. To OCR the images, we created an army of 30–40 temporary servers in Amazon that allowed us to process the documents in parallel and do parallel OCR-ing. If it was very slow, we’d increase the number of servers—if it was going fine, we would decrease because of course those servers have a cost.
  • For the visualization of the Mossack Fonseca internal database, we worked with another tool called Linkurious. It’s not open source, it’s licensed software, but we have an agreement with them, and they allowed us to work with it. It allows you to represent data in graphs. We had a version of Linkurious on our servers, so no one else had the data. It was pretty intuitive—journalists had to click on dots that expanded, basically, and could search the names.
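    A few hedged sketches of the techniques Cabra describes. First, the batch search with proximity matching: Solr's proximity operator (the "~2" in her example) can be queried name by name and the hit counts written back out as a CSV. The host, core, and field names below are placeholders, not ICIJ's actual configuration.

```python
# Sketch only: batch proximity search against a Solr index, in the spirit of
# ICIJ's Blacklight batch-search feature. Host, core, and field names are
# placeholders, not ICIJ's actual configuration.
import csv
import requests

SOLR_SELECT = "http://localhost:8983/solr/leak/select"

def batch_search(names, proximity=2, out_path="matches.csv"):
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["name", "hits"])
        for name in names:
            # "Mar Cabra"~2 matches the words within two positions of each other
            params = {"q": f'text:"{name}"~{proximity}', "rows": 0, "wt": "json"}
            found = requests.get(SOLR_SELECT, params=params).json()
            writer.writerow([name, found["response"]["numFound"]])

batch_search(["Mar Cabra"])
```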
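    Second, the kind of "two degrees of separation" graph query the more technical reporters could run in Cypher once the data was in Neo4j, issued here through the official Python driver. The node labels, relationship pattern, and credentials are illustrative, not the real Mossack Fonseca schema.

```python
# Sketch only: a "two degrees of separation" Cypher query against a Neo4j
# graph, issued through the official neo4j Python driver. Labels,
# relationship pattern, and credentials are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERY = """
MATCH (p:Person {name: $name})-[*1..2]-(connected)
RETURN DISTINCT connected.name AS name
"""

with driver.session() as session:
    for record in session.run(QUERY, name="Mar Cabra"):
        print(record["name"])

driver.close()
```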
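    Finally, the per-document extraction step that ICIJ parallelized across temporary Amazon servers: Apache Tika parses dozens of formats and, when Tesseract is installed, hands images off for OCR. A minimal sketch using the tika-python client, which assumes a running Tika server and an illustrative file name.

```python
# Sketch only: per-document text extraction. Apache Tika parses many formats
# and, with Tesseract installed, OCRs images. Assumes the tika-python client
# and a running Tika server; the file name is illustrative.
from tika import parser

def extract_text(path):
    parsed = parser.from_file(path)  # returns {'metadata': ..., 'content': ...}
    return (parsed.get("content") or "").strip()

print(extract_text("scanned-company-register.tif")[:500])
```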