
Open Web: Group items tagged "content management"


Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
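To make this concrete, here is a small illustrative XHTML fragment. The class names ("epigraph", "chapter-summary") are invented for the example rather than drawn from any particular publisher's vocabulary:

```xml
<!-- Plain XHTML extended with publisher-defined class names.
     "epigraph" and "chapter-summary" are invented for illustration;
     a stylesheet or downstream transform can key off them. -->
<div class="chapter" id="ch03">
  <p class="epigraph">Everything should be made as simple as possible,
    but no simpler.</p>
  <p class="chapter-summary">This chapter surveys XML workflows built
    from ordinary Web tools.</p>
  <p>Body text continues as plain paragraphs...</p>
</div>
```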
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
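As a sketch of that anatomy: alongside the mimetype file and the XHTML chapters, a typical ePub archive carries a small META-INF/container.xml file that points the reading system at the package document. The paths below are representative placeholders, not taken from any specific book:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- META-INF/container.xml: tells the reading system where the package
     document lives; the chapters themselves are ordinary XHTML files
     referenced from that package document. Paths are placeholders. -->
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```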
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
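A hypothetical before-and-after illustrates the kind of cleanup involved (the class name shown is invented; actual export output varies):

```xml
<!-- Before: presentation-oriented export (illustrative class name) -->
<p class="bullet-list-item">Prefer free software where possible</p>
<p class="bullet-list-item">Keep editors in charge of their own domains</p>

<!-- After: structural XHTML -->
<ul>
  <li>Prefer free software where possible</li>
  <li>Keep editors in charge of their own domains</li>
</ul>
```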
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work. A minimal sketch of this kind of transform follows.
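The sketch below is illustrative only, assuming XHTML paragraphs are mapped onto ICML-style paragraph ranges; the output element and style names are simplified stand-ins, and this is not the authors' actual transformation script. It could be run with a command along the lines of `xsltproc transform.xsl chapter.xhtml > chapter.icml` (placeholder filenames).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: map XHTML paragraphs onto ICML-style
     paragraph ranges. Output element and style names are simplified
     stand-ins for the real InCopy/IDML vocabulary. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xhtml="http://www.w3.org/1999/xhtml">

  <xsl:output method="xml" indent="yes"/>

  <!-- Wrap the converted paragraphs in a single story. -->
  <xsl:template match="/">
    <Story>
      <xsl:apply-templates select="//xhtml:p"/>
    </Story>
  </xsl:template>

  <!-- Each XHTML paragraph becomes a paragraph range tied to a named
       paragraph style defined in the InDesign template. -->
  <xsl:template match="xhtml:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="normalize-space(.)"/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>

</xsl:stylesheet>
```

A real transform would additionally handle headings, lists, inline emphasis, and the full ICML document wrapper.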
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

The Real Meaning Of Google Wave - Forbes.com - 0 views

  • Wave is a new way to build distributed applications, and it will open the door to an explosion of innovation.
  • So, if Wave is not just the demo application, what is it? Google Wave is a platform for creating distributed applications. Each Wave server can be involved in a number of conversations involving Wavelets, what most people would think of as a document. Wavelets are actually much more powerful and general because they are based on XML, which means you can have lots of depth of content, like headings and subheadings of a book, but on steroids. Adding a document repository to XMPP is just revolutionary.
  • The XMPP protocol manages the communication between the Wave servers so that all the Wavelets can synchronize as they are changed. Then Google finished the job by making Wavelets tag-able, searchable and versioned, so you can play back changes. But Google Wave goes beyond just managing the content--it also manages the programs that act on the content. At any level, a program can be assigned to a Wavelet to render it, that is, show it to a user and help manage the conversation. Google Wave also manages the distribution and management of these programs. The idea of a platform that combines management of the data and the code is really powerful.
  •  
    Good article.  One of the first to go beyond the demo, recognizing that Wave is an application platform - a wrapper for the convergence of communications and content. Excerpt: Wave is a new way to build distributed applications, and it will open the door to an explosion of innovation. What the Wave demo showed is support for a continuum from the shortest messages to longer and longer forms of content. All of it can be shared with precise control, tagged, searched. The version history is kept. No more mailing around a document. This takes the beauty of e-mail and wikis and extends it in a more flexible way to a much larger audience.
Paul Merrell

Last Call Working Draft -- W3C Authoring Tool Accessibility Guidelines (ATAG) 2.0 - 0 views

  • This is a Working Draft of the Authoring Tool Accessibility Guidelines (ATAG) version 2.0. This document includes recommendations for assisting authoring tool developers to make the authoring tools that they develop more accessible to people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, motor difficulties, speech difficulties, and others. Accessibility, from an authoring tool perspective, includes addressing the needs of two (potentially overlapping) user groups with disabilities: authors of web content, whose needs are met by ensuring that the authoring tool user interface itself is accessible (addressed by Part A of the guidelines), and end users of web content, whose needs are met by ensuring that all authors are enabled, supported, and guided towards producing accessible web content (addressed by Part B of the guidelines).
  • Examples of authoring tools: ATAG 2.0 applies to a wide variety of web content generating applications, including, but not limited to:
    web page authoring tools (e.g., WYSIWYG HTML editors)
    software for directly editing source code (see note below)
    software for converting to web content technologies (e.g., "Save as HTML" features in office suites)
    integrated development environments (e.g., for web application development)
    software that generates web content on the basis of templates, scripts, command-line input or "wizard"-type processes
    software for rapidly updating portions of web pages (e.g., blogging, wikis, online forums)
    software for generating/managing entire web sites (e.g., content management systems, courseware tools, content aggregators)
    email clients that send messages in web content technologies
    multimedia authoring tools
    debugging tools for web content
    software for creating mobile web applications
  • Web-based and non-web-based: ATAG 2.0 applies equally to authoring tools of web content that are web-based, non-web-based or a combination (e.g., a non-web-based markup editor with a web-based help system, a web-based content management system with a non-web-based file uploader client).
    Real-time publishing: ATAG 2.0 applies to authoring tools with workflows that involve real-time publishing of web content (e.g., some collaborative tools). For these authoring tools, conformance to Part B of ATAG 2.0 may involve some combination of real-time accessibility supports and additional accessibility supports available after the real-time authoring session (e.g., the ability to add captions for audio that was initially published in real-time). For more information, see the Implementing ATAG 2.0 - Appendix E: Real-time content production.
    Text Editors: ATAG 2.0 is not intended to apply to simple text editors that can be used to edit source content, but that include no support for the production of any particular web content technology. In contrast, ATAG 2.0 can apply to more sophisticated source content editors that support the production of specific web content technologies (e.g., with syntax checking, markup prediction, etc.).
  •  
    Link is the latest version link so page should update when this specification graduates to a W3C recommendation.
Gary Edwards

Cloud file-sharing for enterprise users - 1 views

  •  
    Quick review of different sync-share-store services, starting with DropBox and ending with three Open Source services. Very interesting. Things have progressed since I last worked on the SurDocs project for Sursen. No mention in this review of file formats, conversion or viewing issues. I do know that CrocoDoc is used by nearly every sync-share-store service to convert documents to either pdf or html formats for viewing. No service, however, has been able to hit the "native document" sweet spot. Not even SurDocs - which was the whole purpose behind the project!!! "Native Documents" means that the document is in its native/original application format. That format is needed for the round tripping and reloading of the document. Although most sync-share-store services work with MSOffice OXML formatted documents, only Microsoft provides a true "native" format viewer (Office 365). Office 365 enables direct edit, view and collaboration on native documents, which is an enormous advantage given that conversion of any sort is guaranteed to "break" a native document and disrupt any related business processes or round tripping needs. It was here that SurDoc was to provide a breakthrough technology. Sadly, we're still waiting :( excerpt: The availability of cheap, easy-to-use and accessible cloud file-sharing services means users have more freedom and choice than ever before. Dropbox pioneered simplicity and ease of use, and so quickly picked up users inside the enterprise. Similar services have followed Dropbox's lead and now there are dozens, including well-known ones such as Google Drive, SkyDrive and Ubuntu One. Valdis Filks, research director at analyst firm Gartner, explained the appeal of cloud file-sharing services. Filks said: "Enterprise employees use Dropbox and Google because they are consumer products that are simple to use, can be purchased without officially requesting new infrastructure or budget expenditure, and can be installed qu
  •  
    Odd that the reporter mentions the importance of security near the top of the article but gives that topic such short shrift in his evaluation of the services. For example, "secured by 256-bit AES encryption" is meaningless without discussing other factors such as: [i] who creates the encryption keys and on which side of the server/client divide; and [ii] the service's ability to decrypt the customer's content. Encryption/decryption must be done on the client side using unique keys that are unknown to the service; otherwise security is broken, and if the service does business in the U.S. or any of its territories or possessions, it is subject to gag orders to turn over the decrypted customer information. My wisdom so far is to avoid file sync services to the extent you can, boycott U.S. services until the spy agencies are encaged, and reward services that provide good security from nations with more respect for digital privacy, to give U.S.-based services an incentive to lobby *effectively* on behalf of their customers' privacy in Congress. The proof that they are not doing so is the complete absence of bills in Congress that would deal effectively with the abuse by U.S. spy agencies. From that standpoint, the Switzerland-based http://wuala.com/ file sync service is looking pretty good so far. I'm using it.
Paul Merrell

NSA Director Finally Admits Encryption Is Needed to Protect Public's Privacy - 0 views

  • NSA Director Finally Admits Encryption Is Needed to Protect Public’s Privacy. The new stance denotes a growing awareness within the government that Americans are not comfortable with the State’s grip on their data. By Carey Wedler | AntiMedia | January 22, 2016
  • Rogers cited the recent Office of Personnel Management hack of over 20 million users as a reason to increase encryption rather than scale it back. “What you saw at OPM, you’re going to see a whole lot more of,” he said, referring to the massive hack that compromised the personal data of about 20 million people who obtained background checks. Rogers’ comments, while forward-thinking, signify an about-face in his stance on encryption. In February 2015, he said he “shares [FBI] Director [James] Comey’s concern” about cell phone companies’ decision to add encryption features to their products. Comey has been one of the loudest critics of encryption. However, Rogers’ comments on Thursday now directly conflict with Comey’s stated position. The FBI director has publicly chastised encryption, as well as the companies that provide it. In 2014, he claimed Apple’s then-new encryption feature could lead the world to “a very dark place.” At a Department of Justice hearing in November, Comey testified that “Increasingly, the shadow that is ‘going dark’ is falling across more and more of our work.” Though he claimed, “We support encryption,” he insisted “we have a problem that encryption is crashing into public safety and we have to figure out, as people who care about both, to resolve it. So, I think the conversation’s in a healthier place.”
  • At the same hearing, Comey and Attorney General Loretta Lynch declined to comment on whether they had proof the Paris attackers used encryption. Even so, Comey recently lobbied for tech companies to do away with end-to-end encryption. However, his crusade has fallen on unsympathetic ears, both from the private companies he seeks to control — and from the NSA. Prior to Rogers’ statements in support of encryption Thursday, former NSA chief Michael Hayden said, “I disagree with Jim Comey. I actually think end-to-end encryption is good for America.” Still another former NSA chief has criticized calls for backdoor access to information. In October, Mike McConnell told a panel at an encryption summit that the United States is “better served by stronger encryption, rather than baking in weaker encryption.” Former Department of Homeland Security chief Michael Chertoff has also spoken out against government being able to bypass encryption.
  • Regardless of these individual defenses of encryption, the Intercept explained why these statements may be irrelevant: “Left unsaid is the fact that the FBI and NSA have the ability to circumvent encryption and get to the content too — by hacking. Hacking allows law enforcement to plant malicious code on someone’s computer in order to gain access to the photos, messages, and text before they were ever encrypted in the first place, and after they’ve been decrypted. The NSA has an entire team of advanced hackers, possibly as many as 600, camped out at Fort Meade.”
  • Rogers’ statements, of course, are not a full-fledged endorsement of privacy, nor can the NSA be expected to make it a priority. Even so, his new stance denotes a growing awareness within the government that Americans are not comfortable with the State’s grip on their data. “So spending time arguing about ‘hey, encryption is bad and we ought to do away with it’ … that’s a waste of time to me,” Rogers said Thursday. “So what we’ve got to ask ourselves is, with that foundation, what’s the best way for us to deal with it? And how do we meet those very legitimate concerns from multiple perspectives?”
Paul Merrell

The Cover Pages: Alfresco Enterprise Edition v3.3 for Composite Content Applications - 0 views

  • While CMIS, cloud computing and market commoditization have left some vendors struggling to determine the future of enterprise content management (ECM), Alfresco Software today unveiled Alfresco Enterprise Edition 3.3 as the platform for composite content applications that will redefine the way organizations approach ECM. As the first commercially-supported CMIS implementation offering integrations around IBM/Lotus social software, Microsoft Outlook, Google Docs and Drupal, Alfresco Enterprise 3.3 becomes the first content services platform to deliver the features, flexibility and affordability required across the enterprise.
  • Quick and simple development environment to support new business applications
    Flexible deployment options enabling content applications to be deployed on-premise, in the cloud or on the Web
    Interoperability between business applications through open source and open standards
    The ability to link data, content, business process and context
  • Build future-proof content applications through CMIS — With the first and most complete supported implementation of the CMIS standard, Alfresco now enables companies to build new content-based applications while offering the security of the most open, flexible and future-proof content services platform.
    Repurpose content for multiple delivery channels — Advanced content formatting and transformation services allow organizations to easily repurpose content for delivery through multiple channels (web, smart phone, iPad, print, etc).
    Improve project management with content collaboration — New datalist function can be used to track project related issues, to-dos, actions and tasks, supplementing existing commenting, social tagging, discussions and project sites.
    Deploy content through replication services — Companies can replicate and deploy content, and associated information, between content platforms. Using powerful replication services, users can develop and then deploy content outside the firewall, to web servers and into the cloud.
    Develop new frameworks through Spring Surf — Building on SpringSource, the leader in Java application infrastructure used to create Java applications, Spring Surf provides a scriptable framework for developing new content-rich applications.
Gary Edwards

Adeptol Viewing Technology Features - 0 views

  •  
    Why Adeptol?
    Document Support: Support for more than 300 document types out of the box.
    Not a Virtual Printer: Multitenant platform for high-end document viewing.
    No Software: No need to install any additional software on the server.
    No ActiveX/Plugins: No plugins, ActiveX controls or applets need to be downloaded on the client side.
    Fully customizable: Advanced API offers full customization and UI changes.
    Any OS / Any Programming Language: Install Adeptol Server on any OS and integrate with any programming language.
    Awards: Adeptol products receive industry awards and accolades year after year.
    No ActiveX, no plug-in, no software to download. Any OS, any browser, any programming language. That is the power of Adeptol. Adeptol can help you retain your customers and streamline your content integration efforts. Leverage Web 2.0 technologies to get a completely scalable content viewer that easily handles any type of content in virtually unlimited volume, with additional capabilities to support high-volume transaction and archive environments. Our enterprise-class infrastructure was built to meet the needs of the world's most demanding global enterprises. Based on AJAX technology, you can easily integrate the viewer into your application with complete ease.
    Support for all server platforms: Can be installed on Windows (32-bit/64-bit) Server and Linux (32-bit/64-bit) Server.
    Integrate with any programming language: Whether you work in .NET, C#, PHP, ColdFusion or JSP, the Adeptol Viewer can be integrated easily using the easy API set. It also comes with sample code for all languages to get you started.
    Compatibility with more than 99% of browsers: Tested and verified for compatibility with 99% of the various browsers on different platforms.
Paul Merrell

Cover Pages: Content Management Interoperability Services (CMIS) - 0 views

  • On October 06, 2008, OASIS issued a public call for participation in a new technical committee chartered to define specifications for use of Web services and Web 2.0 interfaces to enable information sharing across content management repositories from different vendors. The OASIS Content Management Interoperability Services (CMIS) TC will build upon existing specifications to "define a domain model and bindings that are designed to be layered on top of existing Content Management systems and their existing programmatic interfaces. The TC will not prescribe how specific features should be implemented within those Enterprise Content Management (ECM) systems. Rather it will seek to define a generic/universal set of capabilities provided by an ECM system and a set of services for working with those capabilities." As of February 17, 2010, the CMIS technical work had received broad support through TC participation, industry analyst opinion, and declarations of interest from major companies. Some of these include Adobe, Adullact, AIIM, Alfresco, Amdocs, Anakeen, ASG Software Solutions, Booz Allen Hamilton, Capgemini, Citytech, Content Technologies, Day Software, dotCMS, Ektron, EMC, EntropySoft, ESoCE-NET, Exalead, FatWire, Fidelity, Flatirons, fme AG, Genus Technologies, Greenbytes GmbH, Harris, IBM, ISIS Papyrus, KnowledgeTree, Lexmark, Liferay, Magnolia, Mekon, Microsoft, Middle East Technical University, Nuxeo, Open Text, Oracle, Pearson, Quark, RSD, SAP, Saperion, Structured Software Systems (3SL), Sun Microsystems, Tanner AG, TIBCO Software, Vamosa, Vignette, and WeWebU Software. Early commentary from industry analysts and software engineers is positive about the value proposition in standardizing an enterprise content-centric management specification. The OASIS announcement of November 17, 2008 includes endorsements. Principal use cases motivating the CMIS technical work include collaborative content applications, portals leveraging content management repositories, mashups, and searching a content repository.
  •  
    I should have posted before about CMIS, an emerging standard with a lot of buy-in by vendors large and small. I've been watching the buzz grow via Robin Cover's Daily XML links service. It's now on my "need to watch" list.
Gary Edwards

I Want To Build A Website. Do I Need a Content Management System (CMS)? - www.htmlgoodi... - 2 views

  •  
    Although there are many open source CMSes available, we're going to focus on those that are based upon PHP. The following CMSes are thus PHP-based, and use a MySQL database. The advantages of using such a CMS include portability, support and a large developer base with frequent updates and improvements. We will discuss the following four CMSes:
    Drupal - a free open source content management system written in PHP and distributed under the GNU General Public License
    Joomla - an open source content management system platform for publishing content as a Model-view-controller (MVC) web application framework
    PHPNuke - a web-based automated news publishing and content management system based on PHP and MySQL
    Wordpress - an open source CMS, often used as a blog publishing application, and is the most popular blog software in use today
Gary Edwards

4 Pillars for Web Content Management Site & Content Optimization - 0 views

  •  
    4 Pillars for Web Content Management Site & Content Optimization.  Excellent review of the basics of WCM - DMS. Billy Cripe from Oracle.
Gary Edwards

Pragmatic PDF: Structured Content: PDF to HTML - 1 views

  •  
    A while back I included the following as one of the areas of interest of the PDF/D Consortium: Structured Documents and Single Sourcing: improving round-trips to document software. What did I mean by Structured Documents? For years Solid Documents has been converting PDF files to Word documents with a focus on retaining format and layout to allow customers to repurpose the content. While this is a great solution for a large number of customers, it is not the only type of reconstruction that is interesting. PDF is by nature a "document" format: the layout is in the form of pages. Content also needs to exist in alternate formats like a continuously flowing stream. Use cases for continuously flowing content include:
    conversion to HTML to reflow for form factors other than "pages"
    conversion to content management systems where structure is more important than layout and formatting
    conversion for alternate readers for people with disabilities (text to speech, etc.)
    Reconstruction for these use cases focuses more on the structure of the document than on the layout and formatting. For example, we need to take unstructured PDF files and recognize columns, tables, lists, headers and footers, etc. This allows us to organize the content in a logical structure. Ultimately, we'll recognize topics and sections too, so that we can produce logical hierarchies from plain old non-tagged PDF files. One great example of where conventional PDF pages are not the most appropriate way to read a document is on the small screens of handheld devices. For example, the typical Blackberry has a 3"x2" screen with a resolution of something like 320x240 pixels.
Gary Edwards

HyperOffice Expands SaaS Collaboration Suite Reach to EMEA - 1 views

  •  
    Document Management Article: HyperOffice Expands SaaS Collaboration Suite Reach to EMEA. CMSWire focuses on document management as well as enterprise content management topics, including Enterprise CMS, DAM, enterprise 2.0 and related subjects.
Paul Merrell

Data Transfer Pact Between U.S. and Europe Is Ruled Invalid - The New York Times - 0 views

  • Europe’s highest court on Tuesday struck down an international agreement that allowed companies to move digital information like people’s web search histories and social media updates between the European Union and the United States. The decision left the international operations of companies like Google and Facebook in a sort of legal limbo even as their services continued working as usual. The ruling, by the European Court of Justice, said the so-called safe harbor agreement was flawed because it allowed American government authorities to gain routine access to Europeans’ online information. The court said leaks from Edward J. Snowden, the former contractor for the National Security Agency, made it clear that American intelligence agencies had almost unfettered access to the data, infringing on Europeans’ rights to privacy. The court said data protection regulators in each of the European Union’s 28 countries should have oversight over how companies collect and use online information of their countries’ citizens. European countries have widely varying stances towards privacy.
  • Data protection advocates hailed the ruling. Industry executives and trade groups, though, said the decision left a huge amount of uncertainty for big companies, many of which rely on the easy flow of data for lucrative businesses like online advertising. They called on the European Commission to complete a new safe harbor agreement with the United States, a deal that has been negotiated for more than two years and could limit the fallout from the court’s decision.
  • Some European officials and many of the big technology companies, including Facebook and Microsoft, tried to play down the impact of the ruling. The companies kept their services running, saying that other agreements with the European Union should provide an adequate legal foundation. But those other agreements are now expected to be examined and questioned by some of Europe’s national privacy watchdogs. The potential inquiries could make it hard for companies to transfer Europeans’ information overseas under the current data arrangements. And the ruling appeared to leave smaller companies with fewer legal resources vulnerable to potential privacy violations.
  • “We can’t assume that anything is now safe,” said Brian Hengesbaugh, a privacy lawyer with Baker & McKenzie in Chicago who helped to negotiate the original safe harbor agreement. “The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.” At issue is the sort of personal data that people create when they post something on Facebook or other social media; when they do web searches on Google; or when they order products or buy movies from Amazon or Apple. Such data is hugely valuable to companies, which use it in a broad range of ways, including tailoring advertisements to individuals and promoting products or services based on users’ online activities. The data-transfer ruling does not apply solely to tech companies. It also affects any organization with international operations, such as when a company has employees in more than one region and needs to transfer payroll information or allow workers to manage their employee benefits online.
  • But it was unclear how bulletproof those treaties would be under the new ruling, which cannot be appealed and went into effect immediately. Europe’s privacy watchdogs, for example, remain divided over how to police American tech companies. France and Germany, where companies like Facebook and Google have huge numbers of users and have already been subject to other privacy rulings, are among the countries that have sought more aggressive protections for their citizens’ personal data. Britain and Ireland, among others, have been supportive of Safe Harbor, and many large American tech companies have set up overseas headquarters in Ireland.
  • “For those who are willing to take on big companies, this ruling will have empowered them to act,” said Ot van Daalen, a Dutch privacy lawyer at Project Moore, who has been a vocal advocate for stricter data protection rules. The safe harbor agreement has been in place since 2000, enabling American tech companies to compile data generated by their European clients in web searches, social media posts and other online activities.
  •  
    Another take on it from EFF: https://www.eff.org/deeplinks/2015/10/europes-court-justice-nsa-surveilance Expected since the Court's Advocate General released an opinion last week, presaging today's opinion.  Very big bucks involved behind the scenes because removing U.S.-based internet companies from the scene in the E.U. would pave the way for growth of E.U.-based companies.  The way forward for the U.S. companies is even more dicey because of a case now pending in the U.S.  The Second U.S. Circuit Court of Appeals is about to decide a related case in which Microsoft was ordered by the lower court to produce email records stored on a server in Ireland. Should the Second Circuit uphold the order and the Supreme Court deny review, then under the principles announced today by the Court in the E.U., no U.S.-based company could ever be allowed to have "possession, custody, or control" of the data of E.U. citizens. You can bet that the E.U. case will weigh heavily in the Second Circuit's deliberations.  The E.U. decision is far and away the largest legal event yet flowing out of the Edward Snowden disclosures, tectonic in scale. Up to now, Congress has succeeded in confining all NSA reforms to apply only to U.S. citizens. But now the large U.S. internet companies, Google, Facebook, Microsoft, Dropbox, etc., face the loss of all Europe as a market. Congress *will* be forced by their lobbying power to extend privacy protections to "non-U.S. persons."  Thank you again, Edward Snowden.
Gary Edwards

Paquete - 0 views

  •  
    Paquete is a packaging plugin. Paquete is a simple package-viewer JavaScript plugin which supports packages like IMS and others, where a package is a collection of related pages and content. Paquete describes the relationships within the package using a JSON manifest file. Paquete uses one page to host all of the pages of a package, which reduces overall page loading time and allows users to navigate easily. Once an individual sub-page is loaded, it is cached in the host page, making browsing much quicker. Specific pages are denoted individually, so bookmarking of an individual page is possible.
    What does Paquete do? Paquete arranges files within the package so there is a table of contents (TOC) on the left-hand side and HTML pages for viewing on the right. Users can navigate randomly via the TOC, or the arrowed navigation buttons can direct navigation through the package page by page, in a linear fashion.
    How can Paquete be applied? Paquete can be applied in various ways: on its own, to group files and display them in a browser for viewing; in a Learning Management System (LMS) like Moodle, for viewing course content; or in other web applications where navigating through grouped content is important.
Gary Edwards

Content Management for Web & Mobile Applications - Contentful - 0 views

  •  
    A Cloud platform for advanced web applications.  Good discussion about the need to separate content from presentation, and then apply multiple device-specific presentation layers.  These guys need to implement the Readability APIs!
Gary Edwards

How To Win The Cloud Wars - Forbes - 0 views

  •  
    Byron Deeter is right, but perhaps he's holding back on his reasoning.  Silicon Valley is all about platform, and platform plays only come about once every ten to twenty years.  They come like great waves of change, not replacing the previous waves as much as taking away and running with the future.   Cloud Computing is the fourth great wave.  It will replace the PC and Network Computing waves as the future.  It is the target of all developers and entrepreneurs.   The four great waves are mainframe, workstation, pc and networked pc, and the Internet.  Cloud Computing takes the Internet to such a high level of functionality that it will now replace the pc-networking wave.  It's going to be enormous.  Especially as enterprises move their business productivity and data / content apps from the desktop/workgroup to the Cloud.  Enormous. The key was the perfect storm of 2008, where mobility (iPhone) converged with the standardization of tagged PDF, which converged with the Cloud Computing application and data model, which all happened at the time of the great financial collapse.   The financial collapse of 2008 caused a tectonic shift in productivity.  Survival meant doing more with less.  Particularly less labor, since the cost of labor was and continues to be a great uncertainty.  But that's also the definition of productivity and automation.  To survive, companies were compelled to reduce labor and invest in software/hardware-based productivity.  The great leap to a new platform had its fuel: survival. Social applications and services are just the simplest manifestation of productivity through managed connectivity in the Cloud.  Wait until this new breed of productivity reaches business apps!  The platform wars have begun, and it's for all the marbles. One last thought.  The Internet was always going to win as the next computing platform wave.  It's the first time communications have been combined and integrated into content, and vast dat
Gary Edwards

Report: Next Generation Web CMS Must Unify Operations, Intelligence - 0 views

  •  
    According to Aberdeen, the next generation web content management system needs to bridge the inter-system gap - providing marketers with actionable insights that can support and enhance online engagement.  To find out more about the challenges for today's marketers and how the next generation of web content management system can support them, check out the Aberdeen Group's report (hosted by Sitecore) entitled Next Generation WCM: A Comprehensive Assessment of Current Challenges and The Future of WCM.
Gary Edwards

Online Collaboration | Novell Vibe cloud service - 0 views

  •  
    Real-time co-creation and co-editing: With Novell Vibe, people in your organization can author and edit online documents together, character by character, in real time. Teams can dramatically accelerate the completion of projects that used to take weeks. Because collaboration unfolds in a shared workspace, no one has to manually merge content from multiple contributors to create a unified, finished document.
    Enterprise social messaging: As easy to use as Facebook or Twitter, Novell Vibe consolidates direct messages, chat, blogs and wikis from within Novell Vibe into one message stream. Creating new groups and inviting members from inside or outside your organization is as simple as sending an e-mail. You can even jumpstart ad-hoc conversations in seconds to tackle projects that can't wait.
    File synchronization and management: Files on your desktop, regardless of authoring application, can be synchronized to the Novell Vibe file repository based in the cloud. As a result, users always work with the latest versions of important files on their desktops and in Novell Vibe.
    The Novell Vibe unified message stream: Direct messages, social feeds and group conversations from within Novell Vibe are unified in one intuitive interface. This eliminates the need to constantly switch between locations to see all your content. Using powerful filtering, sorting and tagging capabilities, you can determine exactly what you want to see and whom you want to follow.
    Advanced information management: Novell Vibe keeps a persistent record of all your work and conversations. Its comprehensive search function quickly locates files, messages, attachments, groups and people to save time and boost productivity.
Gary Edwards

New Box.net CMS Release Leverages Cloud, Partnerships - ecrmguide.com - 0 views

  •  
    PALO ALTO, Calif - Box.net unveiled a redesign of its cloud-based content management (CMS) and collaboration system at a press event here today at company headquarters. The new interface, features and partnership announcements with NetSuite, Samsung and VMware are all part of the company's strategy to win over more enterprise customers. "This is an all new version of Box, remade for the enterprise, enabling a new set of workflows and features," Box CEO Aaron Levie said in opening remarks. Storage in the free version of Box has been upgraded to five gigabytes (up from one) and is unlimited for enterprise users of the paid version. Box has also increased the viewing area for content by 30 percent, added real-time updates of content including new comments, edits or deletions of a document. Updates are also ranked and collated to present the user with the most important information. Another improvement is a simplified administrative console designed to improve readability and organization. Overall, Box said it has developed a much more scalable framework for its user interface that makes it easier to roll out new features.
Paul Merrell

Google Fiber: No Charge For Peering, No Fast Lanes - Slashdot - 0 views

  • "Addressing the recent controversy over Netflix paying ISPs directly for better data transfer speeds, Google's Director of Network Engineering explains how their Fiber server handles peering. He says, 'Bringing fiber all the way to your home is only one piece of the puzzle. We also partner with content providers (like YouTube, Netflix, and Akamai) to make the rest of your video's journey shorter and faster. (This doesn't involve any deals to prioritize their video 'packets' over others or otherwise discriminate among Internet traffic — we don't do that.) Like other Internet providers, Google Fiber provides the 'last-mile' Internet connection to your home. ... So that your video doesn't get caught up in this possible congestion, we invite content providers to hook up their networks directly to ours. This is called 'peering,' and it gives you a more direct connection to the content that you want. ... We don't make money from peering or colocation; since people usually only stream one video at a time, video traffic doesn't bog down or change the way we manage our network in any meaningful way — so why not help enable it?'"
  •  
    The difference between an ISP that does not also sell content and those that do. Those that do are against net neutrality so they can throttle competing content providers. 