Open Web Group: items matching "Cover" in title, tags, annotations or url
Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
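The claim that clean XHTML "plays seamlessly with everything else" can be demonstrated concretely: because valid XHTML is well-formed XML, any generic XML parser can read it with no tag-soup heuristics. A minimal sketch using Python's standard library (the document content here is invented for illustration):

```python
# A valid XHTML document is also a well-formed XML document, so a
# generic XML parser handles it directly -- no HTML-specific fixups.
import xml.etree.ElementTree as ET

xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter One</title></head>
  <body><p>It was a dark and stormy night.</p></body>
</html>"""

root = ET.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}

# Generic XML tooling sees the document structure directly.
title = root.find("x:head/x:title", ns).text
first_para = root.find("x:body/x:p", ns).text
```

The same content fed to an HTML-only toolchain would also render fine in a browser, which is the "plays both ways" property the argument relies on.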
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
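The class-attribute extensibility described above can be sketched in a few lines. The class names here ("epigraph", "summary") are illustrative inventions, not any published microformat vocabulary; the point is that tools aware of the convention recover the semantics, while everything else still sees plain valid XHTML:

```python
# Layering publisher-specific semantics onto XHTML paragraphs via the
# "class" attribute, in the spirit of microformats. Class names are
# hypothetical examples, not a standard vocabulary.
import xml.etree.ElementTree as ET

chapter = """<body xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">All the world's a stage.</p>
  <p>Ordinary narrative prose follows.</p>
  <p class="summary">In which our argument is restated.</p>
</body>"""

ns = {"x": "http://www.w3.org/1999/xhtml"}
body = ET.fromstring(chapter)

# A convention-aware tool can pull out just the epigraphs; a
# convention-unaware tool simply renders three paragraphs.
epigraphs = [p.text for p in body.findall("x:p", ns)
             if p.get("class") == "epigraph"]
```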
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
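The "zip archive in disguise" anatomy is easy to verify with standard tools. The following is a deliberately simplified toy, not a conforming ePub: a real file also needs the `mimetype` entry stored uncompressed and first, a `META-INF/container.xml`, and an OPF manifest. The file names used are assumptions for illustration:

```python
# Toy illustration of ePub anatomy: a zip archive holding metadata
# plus XHTML content. Simplified -- a conforming ePub has additional
# required parts (container.xml, OPF manifest, stored mimetype entry).
import io
import zipfile

content = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body><p>Chapter text lives here, as ordinary XHTML.</p></body>
</html>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epub:
    epub.writestr("mimetype", "application/epub+zip")
    epub.writestr("OEBPS/chapter1.xhtml", content)

# Reading it back shows the "Web page in a wrapper" structure.
with zipfile.ZipFile(buf) as epub:
    names = epub.namelist()
    mimetype = epub.read("mimetype")
```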
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
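The search-and-replace cleanup described above can be sketched with a regular expression. The `class="list-item"` name is an assumption standing in for whatever class the export actually emitted; the shape of the fix (tagged paragraphs rewritten as a proper list) is the point:

```python
# Sketch of the cleanup step: the export tags list items as paragraphs
# with a class attribute; we rewrite a run of them into a proper <ul>.
# The class name "list-item" is hypothetical.
import re

exported = (
    '<p class="list-item">First point</p>\n'
    '<p class="list-item">Second point</p>'
)

# Convert each tagged paragraph to a list item...
items = re.sub(
    r'<p class="list-item">(.*?)</p>',
    r'<li>\1</li>',
    exported,
)
# ...and wrap the run in a list element, yielding structural markup
# that no longer depends on any particular software to interpret it.
cleaned = "<ul>\n" + items + "\n</ul>"
```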
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
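The "trivial" character of the transformation comes from the fact that each XHTML block element maps mechanically onto an InCopy styled range. The prototype used XSLT via xsltproc; the sketch below mimics the same mapping in Python stdlib so the idea is runnable here. The ICML element and style names are simplified assumptions, not a verified rendition of Adobe's schema:

```python
# Mimicking the XHTML-to-ICML mapping in stdlib Python (the actual
# prototype used an XSLT stylesheet run through xsltproc). Element
# and style names below are simplified assumptions.
import xml.etree.ElementTree as ET

xhtml = """<body xmlns="http://www.w3.org/1999/xhtml">
  <h1>Chapter One</h1>
  <p>It was a dark and stormy night.</p>
</body>"""

# Map XHTML tags to hypothetical InDesign paragraph style names.
styles = {"h1": "ParagraphStyle/Heading", "p": "ParagraphStyle/Body"}

story = ET.Element("Story")
for el in ET.fromstring(xhtml):
    tag = el.tag.split("}")[1]  # strip the XHTML namespace prefix
    rng = ET.SubElement(story, "ParagraphStyleRange",
                        AppliedParagraphStyle=styles[tag])
    ET.SubElement(rng, "Content").text = el.text

icml = ET.tostring(story, encoding="unicode")
```

Because both vocabularies are flat and well documented, the whole mapping stays a simple table of correspondences, which is why the real script came in under 500 lines.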
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Paul Merrell

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has been long known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.
Paul Merrell

Whistleblowers File $100 Million Suit against NSA, FBI - WhoWhatWhy

  • In a $100 million lawsuit that has garnered virtually no public attention, five National Security Agency (NSA) whistleblowers are accusing the federal government of illegally retaliating against them for alerting the NSA and Congress to a waste of taxpayer funds that benefitted a well-connected contractor. The lawsuit tells the story of the infancy of the NSA’s efforts to surveil the Internet. Back then, there were two programs for the spying agency to choose from — and the first was called ThinThread. It had been developed internally, was comparatively inexpensive, had been tested and proven to be effective, and included safeguards preventing the spying on Americans without a court warrant. The other was called Trailblazer. It did not include such safeguards, had not yet been shown to be effective, and cost 1,000 times more than ThinThread. Instead of being developed internally, it was to be outsourced to Science Applications International Corporation (SAIC), a politically connected contractor. The NSA chose Trailblazer.
  • In response, four NSA employees who had worked on ThinThread, as well as a congressional staffer, alerted Congress and the Office of the Inspector General of the NSA that the agency was wasting taxpayer funds. That is when their troubles began, according to the lawsuit. It alleges that the defendants, which include the NSA, FBI, and the Department of Justice, as well as individuals associated with them, “knowingly and intentionally fabricated” a claim that the plaintiffs leaked classified information to New York Times reporters Eric Lichtblau and James Risen. “[The defendants] used this fabricated claim for retaliation, illegal searches and seizures, physical invasion of their residences and places of business, temporary false imprisonment, the confiscation of their property, cancellation of security clearances leading to the loss of their jobs and employment, intentional infliction of emotional distress, harassment and intimidation,” the lawsuit alleges. It also states that the defendants should have known that the plaintiffs were not the source of the leaks because the NSA “was tracking all domestic telephone calls for the supposed purpose of protecting national security.”
  • The plaintiffs are former NSA employees Thomas Drake, Ed Loomis, J. Kirk Wiebe, William Binney, and former congressional staffer Diane Roark. They seek “punitive damages in excess of $100 million because of Defendants [sic] callous and reckless indifference and malicious acts …” as well as an additional $15 million for lost wages and to cover costs. Larry Klayman, the prominent conservative public interest attorney and founder of Judicial Watch, filed the suit on August 20th. However, it is expected to be amended this week, and it is possible that additional publicity for the case will be sought then.
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them | John Naughton | Comment is free | The Guardian

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Paul Merrell

Fourth Circuit adopts mosaic theory, holds that obtaining "extended" cell-site records requires a warrant - The Washington Post - 0 views

  • A divided Fourth Circuit has ruled, in United States v. Graham, that “the government conducts a search under the Fourth Amendment when it obtains and inspects a cell phone user’s historical [cell-site location information] for an extended period of time” and that obtaining such records requires a warrant. The new case creates multiple circuit splits, which may lead to Supreme Court review. Specifically, the decision creates a clear circuit split with the Fifth and Eleventh Circuits on whether acquiring cell-site records is a search. It also creates an additional clear circuit split with the Eleventh Circuit on whether, if cell-site records are protected, a warrant is required. Finally, it also appears to deepen an existing split between the Fifth and Third Circuits on whether the Stored Communications Act allows the government to choose whether to obtain an intermediate court order or a warrant for cell-site records. This post will cover the reasoning of the new case in detail.
Paul Merrell

The Cover Pages: Alfresco Enterprise Edition v3.3 for Composite Content Applications - 0 views

  • While CMIS, cloud computing and market commoditization have left some vendors struggling to determine the future of enterprise content management (ECM), Alfresco Software today unveiled Alfresco Enterprise Edition 3.3 as the platform for composite content applications that will redefine the way organizations approach ECM. As the first commercially-supported CMIS implementation offering integrations around IBM/Lotus social software, Microsoft Outlook, Google Docs and Drupal, Alfresco Enterprise 3.3 becomes the first content services platform to deliver the features, flexibility and affordability required across the enterprise.
  • Quick and simple development environment to support new business applications
    Flexible deployment options enabling content applications to be deployed on-premise, in the cloud or on the Web
    Interoperability between business applications through open source and open standards
    The ability to link data, content, business process and context
  • Build future-proof content applications through CMIS — With the first and most complete supported implementation of the CMIS standard, Alfresco now enables companies to build new content-based applications while offering the security of the most open, flexible and future-proof content services platform.
    Repurpose content for multiple delivery channels — Advanced content formatting and transformation services allow organizations to easily repurpose content for delivery through multiple channels (web, smart phone, iPad, print, etc).
    Improve project management with content collaboration — New datalist function can be used to track project related issues, to-dos, actions and tasks, supplementing existing commenting, social tagging, discussions and project sites.
    Deploy content through replication services — Companies can replicate and deploy content, and associated information, between content platforms. Using powerful replication services, users can develop and then deploy content outside the firewall, to web servers and into the cloud.
    Develop new frameworks through Spring Surf — Building on SpringSource, the leader in Java application infrastructure used to create Java applications, Spring Surf provides a scriptable framework for developing new content-rich applications.
Paul Merrell

InfoQ: WS-I closes its doors. What does this mean for WS-*? - 0 views

  • The Web Services Interoperability Organization (WS-I) has just announced that it has completed its mission and will be transitioning all further efforts to OASIS. As their recent press release states: The release of WS-I member approved final materials for Basic Profile (BP) 1.2 and 2.0, and Reliable Secure Profile (RSP) 1.0 fulfills WS-I’s last milestone as an organization. By publishing the final three profiles, WS-I marks the completion of its work. Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards. Now at any other time this kind of statement from a standards organization might pass without much comment. However, with the rise of REST, a range of non-WS approaches to SOA and the fact that most of the WS-* standards have not been covered by WS-I, is this a reflection of the new position Web Services finds itself in, over a decade after it began? Perhaps this was inevitable given that over the past few years there has been a lot more emphasis on interoperability within the various WS-* working groups? Or are the days of interactions across heterogeneous SOAP implementations in the past?
  • So the question remains: has interoperability pretty much been achieved for WS-* through WS-I and the improvements made with the way in which the specifications and standards are developed today, or has the real interoperability challenge moved elsewhere, still to be addressed?
Paul Merrell

Cover Pages: XML Daily Newslink: Friday, 12 November 2010 - 0 views

  • HTTP Framework for Time-Based Access to Resource States: Memento
    Herbert Van de Sompel, Michael Nelson, Robert Sanderson; IETF I-D
    Representatives of Los Alamos National Laboratory and Old Dominion University have published a first IETF Working Draft of HTTP Framework for Time-Based Access to Resource States: Memento. According to the editor's iMinds blog: "While the days of human time travel as described in many a science fiction novel are yet to come, time travel on the Web has recently become a reality thanks to the Memento project. In essence, Memento adds a time dimension to the Web: enter the Web address of a resource in your browser and set a time slider to a desired moment in the Web's past, and see what the resource looked like around that time... Technically, Memento achieves this by: (a) Leveraging systems that host archival Web content, including Web archives, content management systems, and software versioning systems; (b) Extending the Web's most commonly used protocol (HTTP) with the capability to specify a datetime in protocol requests, and by applying an existing HTTP capability (content negotiation) in a new dimension: 'time'. The result is a Web in which navigating the past is as seamless as navigating the present... The Memento concepts have attracted significant international attention since they were first published in November 2009, and compliant tools are already emerging. For example, at the client side there is the MementoFox add-on for FireFox, and a Memento app for Android; at the server side, there is a plug-in for MediaWiki servers, and the Wayback software that is widely used by Web archives, worldwide, was recently enhanced with Memento support..."
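The datetime negotiation the draft describes is easy to picture in code. Below is a minimal sketch of building (not sending) a Memento-style request: the `Accept-Datetime` header comes from the draft, while the TimeGate URL pattern and the helper name are illustrative assumptions.

```python
from datetime import datetime, timezone
from email.utils import format_datetime
from urllib.request import Request

def memento_request(timegate_url, original_url, when):
    """Build a request asking a TimeGate for the state of a resource at `when`."""
    req = Request(timegate_url + original_url)
    # Memento's core idea: content negotiation in the 'time' dimension.
    req.add_header("Accept-Datetime", format_datetime(when, usegmt=True))
    return req

req = memento_request("http://timegate.example.org/timegate/",  # hypothetical TimeGate
                      "http://www.example.com/",
                      datetime(2009, 11, 1, tzinfo=timezone.utc))
print(req.get_header("Accept-datetime"))  # → Sun, 01 Nov 2009 00:00:00 GMT
```

A compliant TimeGate would answer such a request by redirecting to the archived snapshot (the "Memento") closest to the requested datetime.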
Paul Merrell

Cover Pages: Content Management Interoperability Services (CMIS) - 0 views

  • On October 06, 2008, OASIS issued a public call for participation in a new technical committee chartered to define specifications for use of Web services and Web 2.0 interfaces to enable information sharing across content management repositories from different vendors. The OASIS Content Management Interoperability Services (CMIS) TC will build upon existing specifications to "define a domain model and bindings that are designed to be layered on top of existing Content Management systems and their existing programmatic interfaces. The TC will not prescribe how specific features should be implemented within those Enterprise Content Management (ECM) systems. Rather it will seek to define a generic/universal set of capabilities provided by an ECM system and a set of services for working with those capabilities." As of February 17, 2010, the CMIS technical work had received broad support through TC participation, industry analyst opinion, and declarations of interest from major companies. Some of these include Adobe, Adullact, AIIM, Alfresco, Amdocs, Anakeen, ASG Software Solutions, Booz Allen Hamilton, Capgemini, Citytech, Content Technologies, Day Software, dotCMS, Ektron, EMC, EntropySoft, ESoCE-NET, Exalead, FatWire, Fidelity, Flatirons, fme AG, Genus Technologies, Greenbytes GmbH, Harris, IBM, ISIS Papyrus, KnowledgeTree, Lexmark, Liferay, Magnolia, Mekon, Microsoft, Middle East Technical University, Nuxeo, Open Text, Oracle, Pearson, Quark, RSD, SAP, Saperion, Structured Software Systems (3SL), Sun Microsystems, Tanner AG, TIBCO Software, Vamosa, Vignette, and WeWebU Software. Early commentary from industry analysts and software engineers is positive about the value proposition in standardizing an enterprise content-centric management specification. The OASIS announcement of November 17, 2008 includes endorsements. 
Principal use cases motivating the CMIS technical work include collaborative content applications, portals leveraging content management repositories, mashups, and searching a content repository.
  •  
    I should have posted before about CMIS, an emerging standard with a lot of buy-in by vendors large and small. I've been watching the buzz grow via Robin Cover's Daily XML links service. It's now on my "need to watch" list.
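To make the interoperability pitch concrete: CMIS defines an SQL-like query language, so the same query can in principle be posed to any compliant repository. The sketch below builds (without sending) such a query request; the endpoint URL is a hypothetical placeholder, and the exact XML shape shown here is my recollection of the draft's query document, so treat it as illustrative rather than normative.

```python
from urllib.request import Request

# CMIS query language: SQL-like, addressing virtual tables such as cmis:document.
CMIS_QUERY = """<?xml version="1.0" encoding="UTF-8"?>
<cmis:query xmlns:cmis="http://docs.oasis-open.org/ns/cmis/core/200908/">
  <cmis:statement>SELECT cmis:name FROM cmis:document WHERE cmis:name LIKE 'report%'</cmis:statement>
  <cmis:maxItems>10</cmis:maxItems>
</cmis:query>"""

req = Request("http://repo.example.com/cmis/query",   # hypothetical repository endpoint
              data=CMIS_QUERY.encode("utf-8"),
              headers={"Content-Type": "application/cmisquery+xml"},
              method="POST")
print(req.get_method(), req.get_header("Content-type"))
```

The point of the standard is precisely that a client like this need not know which vendor's ECM system sits behind the endpoint.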
Paul Merrell

Durham Statement on Open Access to Legal Scholarship | Berkman Center - 0 views

  • On 7 November 2008, the directors of the law libraries at the University of Chicago, Columbia University, Cornell University, Duke University, Georgetown University, Harvard University, New York University, Northwestern University, the University of Pennsylvania, Stanford University, the University of Texas, and Yale University met in Durham, North Carolina at the Duke Law School. That meeting resulted in the "Durham Statement on Open Access to Legal Scholarship," which calls for all law schools to stop publishing their journals in print format and to rely instead on electronic publication coupled with a commitment to keep the electronic versions available in stable, open, digital formats.
  • Particularly now, with growing financial pressures on law school budgets, ending print publication of law journals deserves serious consideration. Very few law journals receive enough in subscription income and royalties to cover their costs of operation. The Statement anticipates both that the costs for printing and mailing can be eliminated, and that law libraries can reduce their costs for subscribing to, processing, and preserving print journals. There are additional benefits in improving access to journals that are not now published in open access formats and in reducing paper consumption.
  • Call to Action: We therefore urge every U.S. law school to commit to ending print publication of its journals and to making definitive versions of journals and other scholarship produced at the school immediately available upon publication in stable, open, digital formats, rather than in print. We also urge every law school to commit to keeping a repository of the scholarship published at the school in a stable, open, digital format. Some law schools may choose to use a shared regional online repository or to offer their own repositories as places for other law schools to archive the scholarship published at their school. Repositories should rely upon open standards for the archiving of works, as well as on redundant formats, such as PDF copies. We also urge law schools and law libraries to agree to and use a standard set of metadata to catalog each article to ensure easy online public indexing of legal scholarship. As a measure of redundancy, we also urge faculty members to reserve their copyrights to ensure that they too can make their own scholarship available in stable, open, digital formats. All law journals should rely upon the AALS model publishing agreement as a default and should respect author requests to retain copyrights in their scholarship.
Gary Edwards

Google Wave Operational Transformation (Google Wave Federation Protocol) - 0 views

  • Wave document operations consist of the following mutation components:
    skip
    insert characters
    insert element start
    insert element end
    insert anti-element start
    insert anti-element end
    delete characters
    delete element start
    delete element end
    delete anti-element start
    delete anti-element end
    set attributes
    update attributes
    commence annotation
    conclude annotation
    The following is a more complex example document operation:
    skip 3
    insert element start with tag "p" and no attributes
    insert characters "Hi there!"
    insert element end
    skip 5
    delete characters 4
    From this, one could see how an entire XML document can be represented as a single document operation.
  • Wave Operations
    Wave operations consist of a document operation, for modifying XML documents, and other non-document operations. Non-document operations are for tasks such as adding or removing a participant to a Wavelet. We'll focus on document operations here as they are the most central to Wave. It's worth noting that an XML document in Wave can be regarded as a single document operation that can be applied to the empty document. This section will also cover how Wave operations are particularly efficient even in the face of a large number of transforms.
    XML Document Support
    Wave uses a streaming interface for document operations. This is similar to an XMLStreamWriter or a SAX handler. The document operation consists of a sequence of ordered document mutations. The mutations are applied in sequence as you traverse the document linearly. Designing document operations in this manner makes it easier to write the transformation function and composition function described later.
    In Wave, every 16-bit Unicode code unit (as used in JavaScript, JSON, and Java strings), start tag or end tag in an XML document is called an item. Gaps between items are called positions. Position 0 is before the first item. A document operation can contain mutations that reference positions. For example, a "Skip" mutation specifies how many positions to skip ahead in the XML document before applying the next mutation.
    Wave document operations also support annotations. An annotation is some meta-data associated with an item range, i.e. a start position and an end position. This is particularly useful for describing text formatting and spelling suggestions, as it does not unnecessarily complicate the underlying XML document format.
  •  
    Summary: Collaborative document editing means multiple editors being able to edit a shared document at the same time.. Live and concurrent means being able to see the changes another person is making, keystroke by keystroke. Currently, there are already a number of products on the market that offer collaborative document editing. Some offer live concurrent editing, such as EtherPad and SubEthaEdit, but do not offer rich text. There are others that offer rich text, such as Google Docs, but do not offer a seamless live concurrent editing experience, as merge failures can occur. Wave stands as a solution that offers both live concurrent editing and rich text document support.  The result is that Wave allows for a very engaging conversation where you can see what the other person is typing, character by character much like how you would converse in a cafe. This is very much like instant messaging except you can see what the other person is typing, live. Wave also allows for a more productive collaborative document editing experience, where people don't have to worry about stepping on each others toes and still use common word processor functionalities such as bold, italics, bullet points, and headings. Wave is more than just rich text documents. In fact, Wave's core technology allows live concurrent modifications of XML documents which can be used to represent any structured content including system data that is shared between clients and backend systems. To achieve these goals, Wave uses a concurrency control system based on Operational Transformation.
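The streaming mutation model described in these excerpts can be sketched in a few lines. The version below models only the character-level components (skip, insert characters, delete characters) applied to a plain string; element mutations, annotations, and the transformation and composition functions that make OT work are deliberately omitted, and the function name is my own.

```python
def apply_doc_op(doc, ops):
    """Apply a simplified Wave-style document operation to a string.

    `ops` is a list of (component, argument) pairs, traversed linearly,
    mirroring the streaming interface described above.
    """
    out, pos = [], 0
    for kind, arg in ops:
        if kind == "skip":                 # move past `arg` items unchanged
            out.append(doc[pos:pos + arg])
            pos += arg
        elif kind == "insert characters":  # emit new items, consume nothing
            out.append(arg)
        elif kind == "delete characters":  # consume `arg` items, emit nothing
            pos += arg
    out.append(doc[pos:])                  # retain any trailing items
    return "".join(out)

result = apply_doc_op("Hello world", [
    ("skip", 5),
    ("insert characters", ","),
    ("skip", 1),
    ("delete characters", 5),
    ("insert characters", "Wave"),
])
print(result)  # → Hello, Wave
```

Because every mutation is expressed relative to a position in a linear traversal, two concurrent operations can be transformed against each other by walking them in step, which is the heart of the operational-transformation scheme the article describes.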
Paul Merrell

The HTML 5 Layout Elements Rundown - 0 views

  • HTML 5 is an interesting beastie. The specification was not planned; the W3C was committed to HTML 4.01 as the last word in HTML. As such, most of the requests for HTML 5 came from the HTML user community itself, largely through the advent of the Web Hypertext Application Technology Working Group (WHATWG). The push from WHATWG was strong enough to prompt the formation of a HTML 5 working group a couple of years ago. Since then, the HTML 5 working group has slowly gone through the process of taking a somewhat hand-waving specification and recasting it in W3C terms, along with all the politics that the process entails. On April 23, 2009, the HTML 5 group released the most recent draft of the specification. Overall, it represents a considerable simplification from the previous release, especially as a number of initially proposed changes to the specification have been scaled back. The group defined roles for the proposed changes elsewhere. HTML 5 is a broad specification, and consequently, dozens of distinct changes—more than a single article can reasonably cover in any detail—occurred between HTML 4 and 5. This article focuses on the HTML 5 layout elements. Subsequent articles will examine forms-related changes (which are substantial), the new media elements, and DOM-related changes.
Paul Merrell

No, Department of Justice, 80 Percent of Tor Traffic Is Not Child Porn | WIRED - 0 views

  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography.
  • At the State of the Net conference in Washington on Tuesday, US assistant attorney general Leslie Caldwell discussed what she described as the dangers of encryption and cryptographic anonymity tools like Tor, and how those tools can hamper law enforcement. Her statements are the latest in a growing drumbeat of federal criticism of tech companies and software projects that provide privacy and anonymity at the expense of surveillance. And as an example of the grave risks presented by that privacy, she cited a study she said claimed an overwhelming majority of Tor’s anonymous traffic relates to pedophilia. “Tor obviously was created with good intentions, but it’s a huge problem for law enforcement,” Caldwell said in comments reported by Motherboard and confirmed to me by others who attended the conference. “We understand 80 percent of traffic on the Tor network involves child pornography.” That statistic is horrifying. It’s also baloney.
  • In a series of tweets that followed Caldwell’s statement, a Department of Justice flack said Caldwell was citing a University of Portsmouth study WIRED covered in December. He included a link to our story. But I made clear at the time that the study claimed 80 percent of traffic to Tor hidden services related to child pornography, not 80 percent of all Tor traffic. That is a huge, and important, distinction. The vast majority of Tor’s users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what’s often referred to as the “dark web,” use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software’s creators at the non-profit Tor Project. The University of Portsmouth study dealt exclusively with visits to hidden services. In contrast to Caldwell’s 80 percent claim, the Tor Project’s director Roger Dingledine pointed out last month that the study’s pedophilia findings refer to something closer to a single percent of Tor’s overall traffic.
  • So to whoever at the Department of Justice is preparing these talking points for public consumption: Thanks for citing my story. Next time, please try reading it.
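The arithmetic behind the correction is worth spelling out. Using the two figures cited in the excerpts above (hidden-service visits are roughly 1.5 percent of all Tor traffic, and the Portsmouth study's 80 percent figure applies only to those visits), the overall share comes out near one percent, not 80:

```python
# Back-of-the-envelope check of the article's correction.
hidden_service_share = 0.015  # Tor Project: share of all Tor traffic visiting hidden services
study_share_of_hs = 0.80      # University of Portsmouth: share of hidden-service visits at issue
overall = hidden_service_share * study_share_of_hs
print(f"{overall:.1%} of all Tor traffic")  # → 1.2% of all Tor traffic
```

Which is why Roger Dingledine could describe the figure as "closer to a single percent" of Tor's overall traffic.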
Paul Merrell

FBI Flouts Obama Directive to Limit Gag Orders on National Security Letters - The Intercept - 0 views

  • Despite the post-Snowden spotlight on mass surveillance, the intelligence community’s easiest end-run around the Fourth Amendment since 2001 has been something called a National Security Letter. FBI agents can demand that an Internet service provider, telephone company or financial institution turn over its records on any number of people — without any judicial review whatsoever — simply by writing a letter that says the information is needed for national security purposes. The FBI at one point was cranking out over 50,000 such letters a year; by the latest count, it still issues about 60 a day. The letters look like this:
  • Recipients are legally required to comply — but it doesn’t stop there. They also aren’t allowed to mention the order to anyone, least of all the person whose data is being searched. Ever. That’s because National Security Letters almost always come with eternal gag orders. Here’s that part:
  • Despite the use of the word “now” in that first sentence, however, the FBI has yet to do any such thing. It has not announced any such change, nor explained how it will implement it, or when. Media inquiries were greeted with stalling and, finally, a no comment — ostensibly on advice of legal counsel. “There is pending litigation that deals with a lot of the same questions you’re asking, out of the Ninth Circuit,” FBI spokesman Chris Allen told me. “So for now, we’ll just have to decline to comment.” FBI lawyers are working on a court filing for that case, and “it will address” the new policy, he said. He would not say when to expect it.
  • That means the NSL process utterly disregards the First Amendment as well. More than a year ago, President Obama announced that he was ordering the Justice Department to terminate gag orders “within a fixed time unless the government demonstrates a real need for further secrecy.” And on Feb. 3, when the Office of the Director of National Intelligence announced a handful of baby steps resulting from its “comprehensive effort to examine and enhance [its] privacy and civil liberty protections” one of the most concrete was — finally — to cap the gag orders: In response to the President’s new direction, the FBI will now presumptively terminate National Security Letter nondisclosure orders at the earlier of three years after the opening of a fully predicated investigation or the investigation’s close. Continued nondisclosures orders beyond this period are permitted only if a Special Agent in Charge or a Deputy Assistant Director determines that the statutory standards for nondisclosure continue to be satisfied and that the case agent has justified, in writing, why continued nondisclosure is appropriate.
  • There is indeed a significant case currently before the federal appeals court in San Francisco. Oral arguments were in October. A decision could come any time. But in that case, the Electronic Frontier Foundation (EFF), which is representing two unnamed communications companies that received NSLs, is calling for the entire NSL statute to be thrown out as unconstitutional — not for a tweak to the gag. And it has a March 2013 district court ruling in its favor. “The gag is a prior restraint under the First Amendment, and prior restraints have to meet an extremely high burden,” said Andrew Crocker, a legal fellow at EFF. That means going to court and meeting the burden of proof — not just signing a letter. Or as the Cato Institute’s Julian Sanchez put it, “To have such a low bar for denying persons or companies the right to speak about government orders they have been served with is anathema. And it is not very good for accountability.”
  • In a separate case, a wide range of media companies (including First Look Media, the non-profit digital media venture that produces The Intercept) are supporting a lawsuit filed by Twitter, demanding the right to say specifically how many NSLs it has received. But simply releasing companies from a gag doesn’t assure the kind of accountability that privacy advocates are saying is required by the Constitution. “What the public has to remember is a NSL is asking for your information, but it’s not asking it from you,” said Michael German, a former FBI agent who is now a fellow with the Brennan Center for Justice. “The vast majority of these things go to the very large telecommunications and financial companies who have a large stake in maintaining a good relationship with the government because they’re heavily regulated entities.”
  • So, German said, “the number of NSLs that would be exposed as a result of the release of the gag order is probably very few. The person whose records are being obtained is the one who should receive some notification.” A time limit on gags going forward also raises the question of whether past gag orders will now be withdrawn. “Obviously there are at this point literally hundreds of thousands of National Security Letters that are more than three years old,” said Sanchez. Individual review is therefore unlikely, but there ought to be some recourse, he said. And the further back you go, “it becomes increasingly implausible that a significant percentage of those are going to entail some dire national security risk.” The NSL program has a troubled history. The absolute secrecy of the program and resulting lack of accountability led to systemic abuse as documented by repeated inspector-general investigations, including improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, requests for information based on First Amendment protected activity, “after-the-fact” blanket NSLs to “cover” illegal requests, and hundreds of NSLs for “community of interest” or “calling circle” information without any determination that the telephone numbers were relevant to authorized national security investigations.
  • Obama’s own hand-selected “Review Group on Intelligence and Communications Technologies” recommended in December 2013 that NSLs should only be issued after judicial review — just like warrants — and that any gag should end within 180 days barring judicial re-approval. But FBI director James Comey objected to the idea, calling NSLs “a very important tool that is essential to the work we do.” His argument evidently prevailed with Obama.
  • NSLs have managed to stay largely under the American public’s radar. But, Crocker says, “pretty much every time I bring it up and give the thumbnail, people are shocked. Then you go into how many are issued every year, and they go crazy.” Want to send me your old NSL and see if we can set a new precedent? Here’s how to reach me. And here’s how to leak to me.
Paul Merrell

Internet privacy, funded by spooks: A brief history of the BBG | PandoDaily - 0 views

  • For the past few months I’ve been covering U.S. government funding of popular Internet privacy tools like Tor, CryptoCat and Open Whisper Systems. During my reporting, one agency in particular keeps popping up: An agency with one of those really bland names that masks its wild, bizarre history: the Broadcasting Board of Governors, or BBG. The BBG was formed in 1999 and runs on a $721 million annual budget. It reports directly to Secretary of State John Kerry and operates like a holding company for a host of Cold War-era CIA spinoffs and old school “psychological warfare” projects: Radio Free Europe, Radio Free Asia, Radio Martí, Voice of America, Radio Liberation from Bolshevism (since renamed “Radio Liberty”) and a dozen other government-funded radio stations and media outlets pumping out pro-American propaganda across the globe. Today, the Congressionally-funded federal agency is also one of the biggest backers of grassroots and open-source Internet privacy technology. These investments started in 2012, when the BBG launched the “Open Technology Fund” (OTF) — an initiative housed within and run by Radio Free Asia (RFA), a premier BBG property that broadcasts into communist countries like North Korea, Vietnam, Laos, China and Myanmar. The BBG endowed Radio Free Asia’s Open Technology Fund with a multimillion dollar budget and a single task: “to fulfill the U.S. Congressional global mandate for Internet freedom.”
  • Here’s a small sample of what the Broadcasting Board of Governors funded (through Radio Free Asia and then through the Open Technology Fund) between 2012 and 2014: Open Whisper Systems, maker of free encrypted text and voice mobile apps like TextSecure and Signal/RedPhone, got a generous $1.35-million infusion. (Facebook recently started using Open Whisper Systems to secure its WhatsApp messages.) CryptoCat, an encrypted chat app made by Nadim Kobeissi and promoted by EFF, received $184,000. LEAP, an email encryption startup, got just over $1 million. LEAP is currently being used to run secure VPN services at RiseUp.net, the radical anarchist communication collective. A Wikileaks alternative called GlobaLeaks (which was endorsed by the folks at Tor, including Jacob Appelbaum) received just under $350,000. The Guardian Project — which makes an encrypted chat app called ChatSecure, as well a mobile version of Tor called Orbot — got $388,500. The Tor Project received over $1 million from OTF to pay for security audits, traffic analysis tools and set up fast Tor exit nodes in the Middle East and South East Asia.
  •  
    But can we trust them?
commonpromo

digital fabric printing - 0 views

  •  
    there is no requirement of screens. Depending on the fabric being used, there may be a possible requirement of pre-treating the fabric to ensure that the color is more long- lasting and available in a larger range of shades.
Paul Merrell

How to Protect Yourself from NSA Attacks on 1024-bit DH | Electronic Frontier Foundation - 0 views

  • In a post on Wednesday, researchers Alex Halderman and Nadia Heninger presented compelling research suggesting that the NSA has developed the capability to decrypt a large number of HTTPS, SSH, and VPN connections using an attack on common implementations of the Diffie-Hellman key exchange algorithm with 1024-bit primes. Earlier in the year, they were part of a research group that published a study of the Logjam attack, which leveraged overlooked and outdated code to enforce "export-grade" (downgraded, 512-bit) parameters for Diffie-Hellman. By performing a cost analysis of the algorithm with stronger 1024-bit parameters and comparing that with what we know of the NSA "black budget" (and reading between the lines of several leaked documents about NSA interception capabilities) they concluded that it's likely the NSA has been breaking 1024-bit Diffie-Hellman for some time now. The good news is, in the time since this research was originally published, the major browser vendors (IE, Chrome, and Firefox) have removed support for 512-bit Diffie-Hellman, addressing the biggest vulnerability. However, 1024-bit Diffie-Hellman remains supported for the foreseeable future despite its vulnerability to NSA surveillance. In this post, we present some practical tips to protect yourself from the surveillance machine, whether you're using a web browser, an SSH client, or VPN software. Disclaimer: This is not a complete guide, and not all software is covered.
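In the spirit of the post's guidance, the core mitigation is to stop relying on widely shared 1024-bit Diffie-Hellman groups. A minimal sketch for a server operator follows; the file paths, the nginx directive placement, and the exact key-exchange list are illustrative assumptions, so check your own server's documentation before applying them.

```shell
# Generate a fresh, non-shared 2048-bit Diffie-Hellman group.
# Precomputation attacks of the kind described target a handful of
# widely reused 1024-bit primes, so a unique, larger group sidesteps them.
openssl dhparam -out dhparams.pem 2048

# Point a TLS server at the new group, e.g. in nginx:
#   ssl_dhparam /etc/nginx/dhparams.pem;

# For OpenSSH, prefer elliptic-curve key exchange in /etc/ssh/sshd_config:
#   KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
```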
Paul Merrell

Evidence of Google blacklisting of left and progressive sites continues to mount - World Socialist Web Site - 0 views

  • A growing number of leading left-wing websites have confirmed that their search traffic from Google has plunged in recent months, adding to evidence that Google, under the cover of a fraudulent campaign against fake news, is implementing a program of systematic and widespread censorship. Truthout, a not-for-profit news website that focuses on political, social, and ecological developments from a left progressive standpoint, had its readership plunge by 35 percent since April. The Real News, a nonprofit video news and documentary service, has had its search traffic fall by 37 percent. Another site, Common Dreams, last week told the WSWS that its search traffic had fallen by up to 50 percent. As extreme as these sudden drops in search traffic are, they do not equal the nearly 70 percent drop in traffic from Google seen by the WSWS. “This is political censorship of the worst sort; it’s just an excuse to suppress political viewpoints,” said Robert Epstein, a former editor in chief of Psychology Today and noted expert on Google. Epstein said that at this point, the question was whether the WSWS had been flagged specifically by human evaluators employed by the search giant, or whether those evaluators had influenced the Google Search engine to demote left-wing sites. “What you don’t know is whether this was the human evaluators who are demoting you, or whether it was the new algorithm they are training,” Epstein said.
  • Richard Stallman, the world-renowned technology pioneer and a leader of the free software movement, said he had read the WSWS’s coverage on Google’s censorship of left-wing sites. He warned about the immense control exercised by Google over the Internet, saying, “For people’s main way of finding articles about a topic to be run by a giant corporation creates an obvious potential for abuse.” According to data from the search optimization tool SEMRush, search traffic to Mr. Stallman’s personal website, Stallman.org, fell by 24 percent, while traffic to gnu.org, operated by the Free Software Foundation, fell 19 percent. Eric Maas, a search engine optimization consultant working in the San Francisco Bay Area, said his team has surveyed a wide range of alternative news sites affected by changes in Google’s algorithms since April. “While the update may be targeting specific site functions, there is evidence that this update is promoting only large mainstream news organizations. What I find problematic with this is that it appears that some sites have been targeted and others have not.” The massive drop in search traffic to the WSWS and other left-wing sites followed the implementation of changes in Google’s search evaluation protocols. In a statement issued on April 25, Ben Gomes, the company’s vice president for engineering, stated that Google’s update of its search engine would block access to “offensive” sites, while working to surface more “authoritative content.” In a set of guidelines issued to Google evaluators in March, the company instructed its search evaluators to flag pages returning “conspiracy theories” or “upsetting” content unless “the query clearly indicates the user is seeking an alternative viewpoint.”
Paul Merrell

Internet users raise funds to buy lawmakers' browsing histories in protest | TheHill - 0 views

  • “Great news! The House just voted to pass SJR34. We will finally be able to buy the browser history of all the Congresspeople who voted to sell our data and privacy without our consent!” he wrote on the fundraising page. Another activist from Tennessee has raised more than $152,000 from more than 9,800 people. A bill on its way to President Trump’s desk would allow internet service providers (ISPs) to sell users’ data and Web browsing history. It has not taken effect, which means there is no browsing history data yet to purchase. A Washington Post reporter also wrote it would be possible to buy the data “in theory, but probably not in reality.” A former enforcement bureau chief at the Federal Communications Commission told the newspaper that most internet service providers would cover up this information under their privacy policies. If they did sell any individual's personal data in violation of those policies, a state attorney general could take the ISPs to court.
Paul Merrell

WikiLeaks - Vault 7: Projects - 0 views

  • Today, March 31st 2017, WikiLeaks releases Vault 7 "Marble" -- 676 source code files for the CIA's secret anti-forensic Marble Framework. Marble is used to hamper forensic investigators and anti-virus companies from attributing viruses, trojans and hacking attacks to the CIA. Marble does this by hiding ("obfuscating") text fragments used in CIA malware from visual inspection. This is the digital equivalent of a specialized CIA tool to place covers over the English-language text on U.S.-produced weapons systems before giving them to insurgents secretly backed by the CIA. Marble forms part of the CIA's anti-forensics approach and the CIA's Core Library of malware code. It is "[D]esigned to allow for flexible and easy-to-use obfuscation" as "string obfuscation algorithms (especially those that are unique) are often used to link malware to a specific developer or development shop." The Marble source code also includes a deobfuscator to reverse CIA text obfuscation. Combined with the revealed obfuscation techniques, a pattern or signature emerges which can assist forensic investigators in attributing previous hacking attacks and viruses to the CIA. Marble was in use at the CIA during 2016. It reached 1.0 in 2015.
  • The source code shows that Marble has test examples not just in English but also in Chinese, Russian, Korean, Arabic and Farsi. This would permit a forensic attribution double game, for example by pretending that the spoken language of the malware creator was not American English but Chinese, then showing attempts to conceal the use of Chinese, drawing forensic investigators even more strongly to the wrong conclusion. There are other possibilities as well, such as hiding fake error messages. The Marble Framework is used for obfuscation only and does not contain any vulnerabilities or exploits by itself.
  •  
    But it was the Russians who hacked the 2016 U.S. election. Really.
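The string-obfuscation idea described in the excerpt above can be sketched in a few lines. This toy XOR scheme, its key, and the sample string are illustrative assumptions only; they are not Marble's actual algorithm or data, just the general pattern of hiding text fragments from visual inspection together with a matching deobfuscator.

```python
def obfuscate(text: str, key: bytes = b"\x7a") -> bytes:
    """XOR each byte of the string with a repeating key so the plain
    text no longer appears literally in the compiled artifact."""
    data = text.encode("utf-8")
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def deobfuscate(blob: bytes, key: bytes = b"\x7a") -> str:
    """Reverse the XOR transform to recover the original string at runtime."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob)).decode("utf-8")


if __name__ == "__main__":
    secret = "connect-back.example.org"  # hypothetical string a scanner might flag
    blob = obfuscate(secret)
    assert secret.encode("utf-8") not in blob  # plain text is no longer visible
    assert deobfuscate(blob) == secret         # and it round-trips exactly
    print(blob.hex())
```

As the release notes, a shared scheme like this cuts both ways: once the transform is known, the same few lines let investigators strip the obfuscation and use it as a fingerprint.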