Future of the Web: Group items tagged "operators"

Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation is programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
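    As a concrete sketch of the idea (the class names here are hypothetical house conventions, not part of any published microformat), a publisher might layer its own semantics onto plain XHTML like this:

      <div class="chapter">
        <p class="epigraph">An opening quotation, tagged and indexable as such.</p>
        <h2>Chapter One</h2>
        <p class="summary">A one-paragraph abstract of the chapter.</p>
        <p>Ordinary body text needs no special class at all.</p>
      </div>

    Any stylesheet, script, or downstream transformation can key off those class values, while browsers that know nothing about them still render the page perfectly well.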
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
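    The fixed part of that wrapper is tiny. Unzipping an ePub reveals a mimetype file, a META-INF/container.xml pointing at the package file, and the content itself. A minimal container.xml, shown below as a sketch (the OEBPS/content.opf path is a common convention, not a requirement):

      <?xml version="1.0"?>
      <container version="1.0"
          xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>

    Everything else (metadata, manifest, reading order) lives in the package file, and the book's actual text is ordinary XHTML.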
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
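    To give a flavour of what working with it looks like, here is a simplified fragment of the story markup found inside an IDML archive; the style names are illustrative, and a real file carries many more attributes:

      <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
        <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/Emphasis">
          <Content>Each run of text sits inside explicit paragraph and character styles.</Content>
          <Br/>
        </CharacterStyleRange>
      </ParagraphStyleRange>

    The structure maps almost one-to-one onto the paragraph-and-character-style model InDesign users already know, which is what makes both hand inspection and mechanical transformation feasible.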
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, yielding a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
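    The cleanup involved is mechanical. A hypothetical before-and-after (the class name is whatever the export happened to emit):

      <!-- As exported: list items flattened into classed paragraphs -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- After cleanup: the markup says what it means -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>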
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of W3C specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
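    A heavily reduced sketch of that kind of transform appears below; it is illustrative rather than the actual 500-line script, and the element mapping and style names are placeholders. It maps each XHTML paragraph onto an ICML paragraph-style range:

      <?xml version="1.0"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:x="http://www.w3.org/1999/xhtml">

        <!-- Wrap the whole document in a single story -->
        <xsl:template match="/x:html">
          <Document>
            <Story>
              <xsl:apply-templates select="x:body//x:p"/>
            </Story>
          </Document>
        </xsl:template>

        <!-- One paragraph-style range per XHTML paragraph -->
        <xsl:template match="x:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Running it with something like "xsltproc book.xsl chapter.xhtml > chapter.icml" produces a file that can then be "placed" into the InDesign template.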
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
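    The heart of that wrapper is the OPF package file, which lists every content file and the order in which a reader should present them. A minimal ePub 2 package, sketched with placeholder file names and identifier:

      <?xml version="1.0"?>
      <package version="2.0" unique-identifier="bookid"
          xmlns="http://www.idpf.org/2007/opf">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
          <item id="css" href="book.css" media-type="text/css"/>
          <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch1"/>
        </spine>
      </package>

    Tools like eCub generate this file (and the NCX table of contents) automatically; none of it is exotic, just more XML around the XHTML chapters.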
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Our system relies on the Web as a content management platform rather than as a public-facing website—of course, a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. The important point, though, is that XHTML is a browser-native application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is an application of SGML rather than a subset.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002—also known as ODF. Microsoft followed OpenOffice with an XML encoding of its application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica - 0 views

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has been long known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.
Paul Merrell

NSA Will Destroy Archived Metadata When Program Stops - 0 views

  • Four months from now, at the same time that the National Security Agency finally abandons the massive domestic telephone dragnet exposed by whistleblower Edward Snowden, it will also stop perusing the vast archive of data collected by the program. The NSA announced on Monday that it will expunge all the telephone metadata it previously swept up, citing Section 215 of the USA Patriot Act. The program was ruled illegal by a federal appeals court in May. In June, Congress voted to end the program, but gave the NSA until the end of November to phase it out. The historical metadata — records of American phone calls showing who called whom, when, and for how long — will be put out of the reach of analysts on November 29, although technical personnel will have access for three more months. The program started 14 years ago, and operated under rules requiring data be retained for five years, and then destroyed.
  • The only possible hold-up, ironically, would be if any of the civil lawsuits prompted by the program prohibit the destruction of the data. “The telephony metadata” will be “preserved solely because of preservation obligations in pending civil litigation,” the Office of the Director of National Intelligence announced. “As soon as possible, NSA will destroy the Section 215 bulk telephony metadata upon expiration of its litigation preservation obligations.” ACLU staff attorney Alex Abdo told The Intercept his organization is “pleased that the NSA intends to purge the call records it has collected illegally.” But, he added: “Even with today’s pledge, the devil may be in the details.”
Paul Merrell

CISPA is back! - 0 views

  • OPERATION: Fax Big Brother
    Congress is rushing toward a vote on CISA, the worst spying bill yet. CISA would grant sweeping legal immunity to giant companies like Facebook and Google, allowing them to do almost anything they want with your data. In exchange, they'll share even more of your personal information with the government, all in the name of "cybersecurity." CISA won't stop hackers — Congress is stuck in 1984 and doesn't understand modern technology. So this week we're sending them thousands of faxes — technology that is hopefully old enough for them to understand. Stop CISA. Send a fax now!
  • CISA: the dirty deal between government and corporate giants
    It's the dirty deal that lets much of the government, from the NSA to local police, get your private data from your favorite websites and lets them use it without due process. The government is proposing a massive bribe—they will give corporations immunity for breaking virtually any law if they do so while providing the NSA, DHS, DEA, and local police surveillance access to everyone's data in exchange for getting away with crimes, like fraud, money laundering, or illegal wiretapping. Specifically, it incentivizes companies to automatically and simultaneously transfer your data to the DHS, NSA, FBI, and local police with all of your personally identifying information by giving companies legal immunity (notwithstanding any law), and on top of that, you can't use the Freedom of Information Act to find out what has been shared.
  • The NSA and members of Congress want to pass a "cybersecurity" bill so badly, they’re using the recent hack of the Office of Personnel Management as justification for bringing CISA back up and rushing it through. In reality, the OPM hack just shows that the government has not been a good steward of sensitive data and they need to institute real security measures to fix their problems. The truth is that CISA could not have prevented the OPM hack, and no Senator could explain how it could have. Congress and the NSA are using irrational hysteria to turn the Internet into a place where the government has overly broad, unchecked powers.
    Why Faxes?
    Since 2012, online and civil liberties groups and 30,000+ sites have driven more than 2.6 million emails and hundreds of thousands of calls, tweets and more to Congress opposing overly broad cybersecurity legislation. Congress has tried to pass CISA in one form or another 4 times, and they were beaten back every time by people like you. It's clear Congress is completely out of touch with modern technology, so this week, as Congress rushes toward a vote on CISA, we are going to send them thousands of faxes, a technology from the 1980s that is hopefully antiquated enough for them to understand. Sending a fax is super easy — you can use this page to send a fax. Any tweet with the hashtag #faxbigbrother will get turned into a fax to Congress too, so what are you waiting for? Click here to send a fax now!
Paul Merrell

Sloppy Cyber Threat Sharing Is Surveillance by Another Name | Just Security - 0 views

  • Imagine you are the target of a phishing attack: Someone sends you an email attachment containing malware. Your email service provider shares the attachment with the government, so that others can configure their computer systems to spot similar attacks. The next day, your provider gets a call. It’s the Department of Homeland Security (DHS), and they’re curious. The malware appears to be from Turkey. Why, DHS wants to know, might someone in Turkey be interested in attacking you? So, would your email company please share all your emails with the government? Knowing more about you, investigators might better understand the attack. Normally, your email provider wouldn’t be allowed to give this information over without your consent or a search warrant. But that could soon change. The Senate may soon make another attempt at passing the Cybersecurity Information Sharing Act, a bill that would waive privacy laws in the name of cybersecurity. In April, the US House of Representatives passed by strong majorities two similar “cyber threat” information sharing bills. These bills grant companies immunity for giving DHS information about network attacks, attackers, and online crimes.
  • Sharing information about security vulnerabilities is a good idea. Shared vulnerability data empowers other system operators to check and see if they, too, have been attacked, and also to guard against being similarly attacked in the future. I’ve spent most of my career fighting for researchers’ rights to share this kind of information against threats from companies that didn’t want their customers to know their products were flawed. But, these bills gut legal protections against government fishing expeditions exactly at a time when individuals and Internet companies need privacy laws to get stronger, not weaker. 
  • Worse, the bills aren’t needed. Private companies share threat data with each other, and even with the government, all the time. The threat data that security professionals use to protect networks from future attacks is a far more narrow category of information than those included in the bills being considered by Congress, and will only rarely contain private information. And none of the recent cyberattacks — not Sony, not Target, and not the devastating grab of sensitive background check interviews on government employees at the Office of Personnel Management — would have been mitigated by these bills.
Gonzalo San Gil, PhD.

US, China reach cyberespionage agreement | ITworld - 1 views

  •  
    "e U.S. and China have reached their first ever cybercrime and cyberespionage agreement, but the deal is quite general and how it will translate into actions is still unclear."
Paul Merrell

NSA head: We need bulk collection | TheHill - 0 views

  • The head of the National Security Agency on Thursday told Senate lawmakers that preventing his agency from collecting Americans’ information in bulk would make it harder to do its job. Under questioning before the Senate Intelligence Committee, Adm. Michael Rogers agreed that ending bulk collection would “significantly reduce [his] operational capabilities.” “Right now, bulk collection gives us the ability ... to generate insights as to what’s going on,” Rogers told the committee. The NSA head also referenced a January report from the National Academy of Sciences that concluded there is “no software technique that will fully substitute for bulk collection” because of the ability to search through the storehouse of old information. “That independent, impartial, scientifically-founded body came back and said no, under the current structure there is no real replacement,” Rogers said. Rogers was questioned on Thursday by Sen. Ron Wyden (D-Ore.), a member of the Intelligence Committee who has become its most vocal privacy hawk.
  • In response to the NSA head’s comments, Wyden pointed to a 2013 White House review group, which found that one controversial NSA bulk collection program “was not essential to preventing attacks” and that the information obtained by the NSA “could readily have been obtained in a timely manner using” other means. The debate follows a congressional clash earlier this year over the NSA’s bulk collection of records about the phone calls of millions of Americans. The records contained information about whom people called and when but not what they talked about.
  • After a brief lapsing of some portions of the Patriot Act, Congress eventually reined in the NSA by forcing it to go through the courts to search private phone companies’ records for a narrower set of records. Many privacy advocates treated the new law, called the USA Freedom Act, as a significant victory, though national security hawks worried that it would make it harder for the NSA to track terrorists. Under the new system — which has not gone into effect yet — the amount of time it takes to obtain those records “is probably going to be longer I suspect,” Rogers said. Though the phone records database has been the NSA’s most prominent bulk collection program, it is not the only one. The agency’s collection of vast amounts of Internet data has alarmed many privacy advocates and is the target of a current lawsuit from Wikipedia and the American Civil Liberties Union.
Paul Merrell

Revealed: How DOJ Gagged Google over Surveillance of WikiLeaks Volunteer - The Intercept - 0 views

  • The Obama administration fought a legal battle against Google to secretly obtain the email records of a security researcher and journalist associated with WikiLeaks. Newly unsealed court documents obtained by The Intercept reveal the Justice Department won an order forcing Google to turn over more than one year’s worth of data from the Gmail account of Jacob Appelbaum, a developer for the Tor online anonymity project who has worked with WikiLeaks as a volunteer. The order also gagged Google, preventing it from notifying Appelbaum that his records had been provided to the government. The surveillance of Appelbaum’s Gmail account was tied to the Justice Department’s long-running criminal investigation of WikiLeaks, which began in 2010 following the transparency group’s publication of a large cache of U.S. government diplomatic cables. According to the unsealed documents, the Justice Department first sought details from Google about a Gmail account operated by Appelbaum in January 2011, triggering a three-month dispute between the government and the tech giant. Government investigators demanded metadata records from the account showing email addresses of those with whom Appelbaum had corresponded between the period of November 2009 and early 2011; they also wanted to obtain information showing the unique IP addresses of the computers he had used to log in to the account.
  • The Justice Department argued in the case that Appelbaum had “no reasonable expectation of privacy” over his email records under the Fourth Amendment, which protects against unreasonable searches and seizures. Rather than seeking a search warrant that would require it to show probable cause that he had committed a crime, the government instead sought and received an order to obtain the data under a lesser standard, requiring only “reasonable grounds” to believe that the records were “relevant and material” to an ongoing criminal investigation. Google repeatedly attempted to challenge the demand, and wanted to immediately notify Appelbaum that his records were being sought so he could have an opportunity to launch his own legal defense. Attorneys for the tech giant argued in a series of court filings that the government’s case raised “serious First Amendment concerns.” They noted that Appelbaum’s records “may implicate journalistic and academic freedom” because they could “reveal confidential sources or information about WikiLeaks’ purported journalistic or academic activities.” However, the Justice Department asserted that “journalists have no special privilege to resist compelled disclosure of their records, absent evidence that the government is acting in bad faith,” and refused to concede Appelbaum was in fact a journalist. It claimed it had acted in “good faith throughout this criminal investigation, and there is no evidence that either the investigation or the order is intended to harass the … subscriber or anyone else.” Google’s attempts to fight the surveillance gag order angered the government, with the Justice Department stating that the company’s “resistance to providing the records” had “frustrated the government’s ability to efficiently conduct a lawful criminal investigation.”
  • Google accused the government of hyperbole and argued that the backlash over the Twitter order did not justify secrecy related to the Gmail surveillance. “Rather than demonstrating how unsealing the order will harm its well-publicized investigation, the government lists a parade of horribles that have allegedly occurred since it unsealed the Twitter order, yet fails to establish how any of these developments could be further exacerbated by unsealing this order,” wrote Google’s attorneys. “The proverbial toothpaste is out of the tube, and continuing to seal a materially identical order will not change it.” But Google’s attempt to overturn the gag order was denied by magistrate judge Ivan D. Davis in February 2011. The company launched an appeal against that decision, but this too was rebuffed, in March 2011, by District Court judge Thomas Selby Ellis, III.
  • The Justice Department wanted to keep the surveillance secret largely because of an earlier public backlash over its WikiLeaks investigation. In January 2011, Appelbaum and other WikiLeaks volunteers – including Icelandic parliamentarian Birgitta Jonsdottir – were notified by Twitter that the Justice Department had obtained data about their accounts. This disclosure generated widespread news coverage and controversy; the government says in the unsealed court records that it “failed to anticipate the degree of damage that would be caused” by the Twitter disclosure and did not want to “exacerbate this problem” when it went after Appelbaum’s Gmail data. The court documents show the Justice Department said the disclosure of its Twitter data grab “seriously jeopardized the [WikiLeaks] investigation” because it resulted in efforts to “conceal evidence” and put public pressure on other companies to resist similar surveillance orders. It also claimed that officials named in the subpoena ordering Twitter to turn over information were “harassed” after a copy was published by Intercept co-founder Glenn Greenwald at Salon in 2011. (The only specific evidence of the alleged harassment cited by the government is an email that was sent to an employee of the U.S. Attorney’s office that purportedly said: “You guys are fucking nazis trying to controll [sic] the whole fucking world. Well guess what. WE DO NOT FORGIVE. WE DO NOT FORGET. EXPECT US.”)
  • The government agreed to unseal some of the court records on Apr. 1 this year, and they were apparently turned over to Appelbaum on May 14 through a notification sent to his Gmail account. The files were released on condition that they would contain some redactions, which are bizarre and inconsistent, in some cases censoring the name of “WikiLeaks” from cited public news reports. Not all of the documents in the case – such as the original surveillance orders contested by Google – were released as part of the latest disclosure. Some contain “specific and sensitive details of the investigation” and “remain properly sealed while the grand jury investigation continues,” according to the court records from April this year. Appelbaum, an American citizen who is based in Berlin, called the case “a travesty that continues at a slow pace” and said he felt it was important to highlight “the absolute madness in these documents.”
  • He told The Intercept: “After five years, receiving such legal documents is neither a shock nor a needed confirmation. … Will we ever see the full documents about our respective cases? Will we even learn the names of those signing so-called legal orders against us in secret sealed documents? Certainly not in a timely manner and certainly not in a transparent, just manner.” The 32-year-old, who has recently collaborated with Intercept co-founder Laura Poitras to report revelations about National Security Agency surveillance for German news magazine Der Spiegel, said he plans to remain in Germany “in exile, rather than returning to the U.S. to experience more harassment of a less than legal kind.”
  • “My presence in Berlin ensures that the cost of physically harassing me or politically harassing me is much higher than when I last lived on U.S. soil,” Appelbaum said. “This allows me to work as a journalist freely from daily U.S. government interference. It also ensures that any further attempts to continue this will be forced into the open through [a Mutual Legal Assistance Treaty] and other international processes. The German government is less likely to allow the FBI to behave in Germany as they do on U.S. soil.” The Justice Department’s WikiLeaks investigation is headed by prosecutors in the Eastern District of Virginia. Since 2010, the secretive probe has seen activists affiliated with WikiLeaks compelled to appear before a grand jury and the FBI attempting to infiltrate the group with an informant. Earlier this year, it was revealed that the government had obtained the contents of three core WikiLeaks staffers’ Gmail accounts as part of the investigation.
Gonzalo San Gil, PhD.

OS showdown: Windows 10 vs Linux | TechRadar - 2 views

  •  
    "By Neil Mohr 2 days agoOperating systems Redmond's new OS goes toe-to-toe with Linux"
Paul Merrell

Ubuntu Goes Enterprise - CIO.com - Business Technology Leadership - 0 views

  • Canonical, Ubuntu's parent company, is finally taking serious action on its long-announced plans to become a serious enterprise Linux player. The Isle of Man-based Linux distributor isn't just targeting data center servers, although that's on its list.
  • The plan is for VARs (value added resellers) and system integrators to brand the complete package under their own names.
  •  
    Ubuntu Enterprise to ship with Zimbra, Alfresco, and Unison in easy-to-install packages, plus IBM Notes-related collaboration software packages.
Paul Merrell

Home - Pencil Project - 0 views

  • The Pencil Project's unique mission is to build a free and open-source tool for making diagrams and GUI prototypes that everyone can use.
  • Built-in stencils for diagramming and prototyping
    - Multi-page documents with background pages
    - On-screen text editing with rich-text support
    - PNG rasterizing
    - Undo/redo support
    - Installing user-defined stencils
    - Standard drawing operations: aligning, z-ordering, scaling, rotating...
    - Cross-platform
    - Adding external objects
    - And much more...
  •  
    Interesting application for prototyping GUIs. Runs as a Firefox 3 extension or standalone on Linux and Windows using XULRunner.
Paul Merrell

Microsoft's plans for post-Windows OS revealed - Software Development Times On The Web - 0 views

  • Microsoft is incubating a componentized non-Windows operating system known as Midori, which is being architected from the ground up to tackle challenges that Redmond has determined cannot be met by simply evolving its existing technology. SD Times has viewed internal Microsoft documents that outline Midori’s proposed design, which is Internet-centric and predicated on the prevalence of connected systems.
  • The Midori documents foresee applications running across a multitude of topologies, ranging from client-server and multi-tier deployments to peer-to-peer at the edge, and in the cloud data center. Those topologies form a heterogeneous mesh where capabilities can exist at separate places.
  •  
    Supposedly, Midori is going to do cloud computing. Note that the Sun-MSFT deal expires in 2012.
Paul Merrell

Office Business Applications for Store Operations - 0 views

  • Service orientation addresses these challenges by centering on rapidly evolving XML and Web services standards that are revolutionizing how developers compose systems and integrate them over distributed networks. No longer are developers forced to make do with rigid and proprietary languages and object models that used to be the norm before service orientation came into play. The emergence of this new methodology is helping to develop new approaches specifically for Web-based distributed computing. This revolution is transforming the business by integrating disparate systems to establish a real-time enterprise. Making information available where it is needed to simplify merchandising processes requires a methodology that is based on loosely coupled integration between various in-store and back-end applications. This demand makes an architecture based on service orientation critical for integration between disparate applications. In addition, surfacing information at the right place requires the ability to compose dynamic applications using an array of underlying services. The Office Business Applications platform provides this ability to create composite applications, such as dashboards for the store, regional, and corporate managers.
  •  
    Summary: Changing market conditions require agility in business applications. Service orientation answers the challenge by centering on XML and Web services standards that revolutionize how developers compose systems and integrate them over distributed networks. Once integrated, how is the information presented to the decision makers? (36 printed pages)
Gary Edwards

The Monkey On Microsoft's Back - Forbes.com - 0 views

  • The new technology, dubbed TraceMonkey, promises to speed up Firefox's ability to deliver complex applications. The move heightens the threat posed by a nascent group of online alternatives to Microsoft's most profitable software: PC applications, like Microsoft Office, whose profits allow Microsoft to burn hundreds of millions of dollars on efforts to seize control of the online world. Microsoft's Business Division, which gets 90% of its revenues from sales of Microsoft Office, spat out $12.4 billion in operating income for the fiscal year ending June 30. Google (Nasdaq: GOOG), however, is playing a parallel game, using profits from its online advertising business to fund alternatives to Microsoft's desktop offerings. Google already says it has "millions" of users for its free, Web-based alternative to desktop staples, including Microsoft's Word, Excel and PowerPoint software. The next version of Firefox, which could debut by the end of this year, promises to speed up such applications, thanks to a new technology built into the developer's version of the software last week. Right now, rich Web applications such as Google Gmail rely on a technology known as JavaScript to turn them from lifeless Web pages into applications that respond as users mouse about a Web page. TraceMonkey aims to turn the most frequently used chunks of JavaScript code embedded in Web pages into binary form—allowing computers to hustle through the most-used bits of code—without waiting around to render all of the code into binary form.
  •  
    I did send a very lengthy comment to Brian Caulfield, the Forbes author of this article. Of course, I disagreed with his perspective. TraceMonkey is great, accelerating JavaScript in Firefox in much the same way that SquirrelFish accelerates WebKit browsers. What Brian misses, though, is the RIA war taking place both inside and outside the browser (RIA = fully functional Web applications that WILL replace the "client/server" apps model).
Paul Merrell

Would a VMware Acquisition of Red Hat Go Anywhere? | OStatic - 0 views

  • Is there any chance that virtualization giant VMware might have its eyes on Red Hat as an acquisition? This article reports that "VMware CEO Diane Greene, ousted by her board in July, had set up meetings with Red Hat in part to position VMware as friendly to open source and possibly as a prelude to a buyout discussion, according to a person familiar with the conversations." While both companies have declined to comment, the prospect could make a lot of sense for VMware for several reasons.
  • Maritz would know that what is going on with virtualization offerings is following the same path that software utilities have always followed. They end up free in the operating system. This happened with backup software, file managers, disk defraggers, and countless other utilities. Virtualization is becoming commoditized in this way--expected in the OS.
Gary Edwards

Olympics set the stage for Web tech fight | Tech News on ZDNet - 0 views

  • Microsoft is approaching Silverlight from the opposite direction. It plans to take advantage of its legions of outside developers experienced in writing for its ubiquitous Windows operating system. The next version of Silverlight, being tested now and due later this year, will support Microsoft's .NET framework -- tools used by developers to create desktop applications that work on Windows.
  •  
    Adobe vs. Microsoft
    Gartner analyst Ray Valdes said 90 percent of the top global 1,000 companies have yet to deploy any sort of RIA, while 90 percent of the top 100 consumer Web sites have already done so using the nonproprietary and simpler AJAX format. That opportunity has Microsoft eyeing current leader Adobe for business that extends beyond Silverlight and into the sale of design tools along with server and database software to enable these new applications.
Paul Merrell

Microsoft Loses E.U. Antitrust Case - washingtonpost.com - 0 views

  • It ordered the software giant to untie the browser from its operating system in the 27-nation E.U.
  • The commission's investigation into Microsoft's Web-surfing software began a year ago, after the Norwegian browser-maker Opera Software filed a complaint. Opera argued that Microsoft hurt competitors not only by bundling the software, in effect giving away the browser, but also by not following accepted Web standards. That meant programmers who built Web pages would have to tweak their code for different browsers. In many cases, they simply designed pages that worked with market-leading Internet Explorer but showed up garbled on competing browsers.
  • At the time of the complaint, Opera said it was asking E.U. regulators to either force Microsoft to market a version of Windows without the browser, or to include other browsers with Windows.
  •  
    The Post, too, says that DG Competition ordered the unbundling of MSIE from Windows, but again gives no attribution for the statement. They also leave the impression that Opera's complaint regarding the undermining of open web standards was upheld, something not stated in either the Microsoft or DG Competition announcements. So the questions of the day are: [i] did the Commission order the unbundling of MSIE from Windows; and [ii] did the Commission also rule on the undermining of open web standards? The latter question could be of critical importance in the still-ongoing proceeding regarding the ECIS complaint about the undermining of ODF by Microsoft's pushing of OOXML.
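
The per-browser tweaking Opera complained about typically looked like the fork below: branching between Internet Explorer's proprietary event model and the W3C standard one. A sketch, not production code; attachEvent is the real legacy IE API, the rest is illustrative.

```typescript
// Pre-standards event wiring: IE 5-8 exposed attachEvent instead of
// the W3C addEventListener, so cross-browser pages had to branch.
function addClickHandler(el: Element, handler: (e: Event) => void): void {
  const legacy = el as Element & {
    attachEvent?: (type: string, fn: (e: Event) => void) => void;
  };
  if (typeof legacy.attachEvent === "function") {
    legacy.attachEvent("onclick", handler); // IE's proprietary API
  } else {
    el.addEventListener("click", handler); // the W3C standard
  }
}

// Pages that skipped the branch and coded only the IE path "worked with
// market-leading Internet Explorer but showed up garbled" elsewhere.
```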
Paul Merrell

Update: EU hits Microsoft with new antitrust charges - 0 views

  • January 16, 2009 (Computerworld) Microsoft Corp. confirmed today that European Union regulators have formally accused the company of breaking antitrust laws by including the company's Internet Explorer (IE) browser with the Windows operating system. "Yesterday, Microsoft received a Statement of Objections from the Directorate General for Competition of the European Commission," the company said in a statement on Friday. "The Statement of Objections expresses the Commission's preliminary view that the inclusion of Internet Explorer in Windows since 1996 has violated European competition law." According to Microsoft, the EU claimed that "other browsers are foreclosed from competing because Windows includes Internet Explorer."
Paul Merrell

EU considers spending €1 billion for satellite broadband technology - Interna... - 0 views

  • The €200 billion economic rescue plan being considered this week by European Union leaders includes a proposal to spend €1 billion on bringing high-speed Internet access to rural areas. The proposal is likely to pit the Continent's telecommunications operators against satellite companies, which say they are uniquely suited to expand the broadband, or high-speed, network to underserved parts of Eastern Europe and the Alps by the end of 2010.
  • But support for the plan among EU government leaders, who begin a two-day meeting Thursday to consider the rescue plan, is not assured. The money would come from unspent funds in the current EU budget, which under EU rules would normally revert to member countries. Germany, which contributes the most to the EU budget and stands to get the largest refund if the project is rejected, opposes the expenditure.
  • Across the EU, 21.7 percent of residents had broadband Internet access in July, according to the commission; 107.6 million received service from a telephone DSL line or a cable television connection and 130,592 via satellite. Only 6 percent of EU residents on average received broadband via mobile phones.
  • ...1 more annotation...
  • Until now, Baugh said, satellite broadband had been hindered by the relatively high cost of the hardware consumers needed to gain access to the service. But recent advances have lowered the cost to roughly €400, including installation, from several thousand euros a few years ago. At about €30 a month, service packages are comparable to those of DSL and cable.
  •  
    A billion euros is chicken feed compared to other portions of the E.U. economic stimulus initiatives in the works in response to the major recession under way. Still, this could be a significant foot in the door for satellite broadband in the E.U., perhaps enough to build out the infrastructure for a more serious challenge to cable and telephony broadband. But I wonder whether a billion euros buys enough redundancy to gracefully handle the death of a satellite once it carries far more broadband users. (A back-of-the-envelope comparison of the quoted prices follows.)
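
As a rough check on Baugh's figures: at €400 up front plus €30 a month, satellite's fixed hardware premium shrinks as a share of total cost the longer the service runs. Only the satellite numbers come from the article; the DSL setup fee below is an assumption for illustration.

```typescript
// Total cost of ownership over time, using the article's satellite
// figures (EUR 400 hardware + install, EUR 30/month). The DSL setup
// fee (EUR 50) is a hypothetical placeholder.
interface Plan { name: string; upfront: number; monthly: number; }

const satellite: Plan = { name: "satellite", upfront: 400, monthly: 30 };
const dsl: Plan = { name: "DSL", upfront: 50, monthly: 30 };

const totalCost = (p: Plan, months: number): number =>
  p.upfront + p.monthly * months;

for (const months of [12, 24, 36]) {
  console.log(
    `${months} months: satellite EUR ${totalCost(satellite, months)}, ` +
      `DSL EUR ${totalCost(dsl, months)}`
  );
}
// 12 months: satellite EUR 760, DSL EUR 410
// 24 months: satellite EUR 1120, DSL EUR 770
// 36 months: satellite EUR 1480, DSL EUR 1130
```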
Paul Merrell

Introducing the Open XML Format External File Converter for 2007 Microsoft Office Syste... - 0 views

  • In other words, revising the Open XML Format converter interfaces by adding new functionality does not require any recompilation of existing clients. This guarantees backward compatibility as these converter interfaces are upgraded.
    • Paul Merrell
       
      But what does it do for forward compatibility? OOXML is a moving interoperability target. (A sketch of the interface-versioning convention this compatibility claim rests on appears after this item's notes.)
  • In addition to allowing converters to override external file formats, the applications allow converters to override OpenDocument Format-related formats (such as .odt). For example, if you specify a converter to be the default converter for .odt, Word 2007 SP2 invokes the specified converter whenever a user tries to open an .odt file from the Windows Shell instead of going through the native load path for Word 2007 SP2.
    • Paul Merrell
       
      How wonderful. Developers can bypass the forthcoming Microsoft native file support for ODF. Perhaps to convert Excel formulas to OpenFormula?
  • Open XML Format converters for Word 2007 SP2, Excel 2007 SP2, or PowerPoint 2007 SP2 are implemented as out-of-process COM servers. Out-of-process converters have the benefit of running in their own process space, which means issues or crashes within converters do not affect the application process space. In addition, out-of-process 32-bit converters can function on 64-bit operating systems in Microsoft Windows on Windows 64-bit (WoW64) mode without the need for converters to be compiled in 64-bit.
    • Paul Merrell
       
      Pretty lame excuses for not documenting the native file support APIs. I.e., the native file support APIs already throw "can't open file" error messages for problematic documents without crashing the app. The bit about not needing to recompile converters for 64-bit Windoze is a complete red herring. This is only a benefit if one requires conversion in an external process; it wouldn't be an issue if the native file support APIs were documented and their intermediate formats were the interop targets. (A sketch of the process-isolation argument appears after this item's notes.)
    • Paul Merrell
       
      I.e., one need not recompile the Office app if a supported native format is added. The OpenDocument Foundation and Sun plug-ins for MS Office proved that.
  • ...3 more annotations...
  • To begin developing a converter, you should familiarize yourself with the Open XML standard. For more information, see: Standard ECMA-376: Office Open XML File Formats.
    • Paul Merrell
       
      Note that they specify Ecma 376 rather than ISO/IEC 29500:2008 Office Open XML. So you get to rewrite your converters when Microsoft adds support for the official standard in the next major release of Office.
  • External files are imported into Word 2007 SP2, Excel 2007 SP2, or PowerPoint 2007 SP2 by converting the external file to Open XML Formats. External files are exported from Word 2007 SP2, Excel 2007 SP2, or PowerPoint by converting Open XML Formats to external files. The success of either the import or export conversion depends upon the accurate generation and interpretation of Open XML Formats by the converter.
    • Paul Merrell
       
      Note that this is a process external to the native file support APIs and their intermediate formats. The real APIs apparently will remain obfuscated. This forces others to develop support for Ecma 376 rather than working directly with the native file support APIs. In other words, more incentive for others to chase the moving target of OOXML rather than the more stable intermediate formats.
  • Summary: Get the details about the interfaces that you need to use to create an Open XML Format External File Converter for the 2007 Microsoft Office system Service Pack 2 (SP2). (16 Printed Pages)
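
On the backward-compatibility claim quoted above ("revising the ... converter interfaces by adding new functionality does not require any recompilation of existing clients"): that property follows from the standard COM convention of never changing a published interface, only adding new ones. A TypeScript sketch of the convention, with all interface and method names invented:

```typescript
// Sketch of the COM-style versioning rule behind the "no recompilation"
// claim: a published interface never changes; new capability ships as a
// new interface extending the old one. All names here are invented.
interface IFileConverter {
  importToOpenXml(sourcePath: string, destPath: string): boolean;
  exportFromOpenXml(sourcePath: string, destPath: string): boolean;
}

// A later revision adds functionality without touching IFileConverter.
interface IFileConverter2 extends IFileConverter {
  reportProgress(percentComplete: number): void;
}

// A client written against the original interface keeps working when
// handed a newer converter -- it simply never asks for the new methods.
function runImport(converter: IFileConverter, src: string, dest: string): void {
  if (!converter.importToOpenXml(src, dest)) {
    console.error(`conversion of ${src} failed`);
  }
}

// A converter implementing the newer interface still satisfies old callers.
const converter: IFileConverter2 = {
  importToOpenXml: () => true,
  exportFromOpenXml: () => true,
  reportProgress: (pct) => console.log(`${pct}% complete`),
};
runImport(converter, "notes.odt", "notes.docx");
```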
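
And on the out-of-process design questioned above: the isolation argument is simply that a converter crash becomes an error or exit code in the host process rather than a host crash. A Node/TypeScript sketch under that assumption; the converter executable name and its flags are hypothetical.

```typescript
// Why an out-of-process converter can't take the host app down: the
// conversion runs in a child process, and a crash there surfaces as an
// error here. "odf-converter" and its flags are invented for illustration.
import { execFile } from "node:child_process";

function convertOutOfProcess(src: string, dest: string): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile("odf-converter", ["--import", src, "--out", dest], (err) => {
      // A converter crash sets err; this host process keeps running.
      if (err) reject(new Error(`converter failed on ${src}: ${err.message}`));
      else resolve();
    });
  });
}

convertOutOfProcess("report.odt", "report.docx")
  .then(() => console.log("converted"))
  .catch((e: Error) => console.error(e.message));
```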