
Gary Edwards

Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Rea... - 0 views

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications. In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors, and technology companies from around the world, including organizations based in France, Germany, Norway, the U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership, and the first three projects being developed by members and other contributors are all nearing "1.0" status. The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms (Android, iOS, OS X, and Windows), and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud-based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the
Paul Merrell

This project aims to make '404 not found' pages a thing of the past - 0 views

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, considered a serious loss surrounding the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if both the date of reference and the location of a copy of the targeted content are known to an author, the new code would carry both alongside the original hyperlink. The 404-No-More project’s goals are numerous, but the ultimate one is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
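The proposal's actual markup is not reproduced in the excerpt above, so the following is a hypothetical sketch only: it assumes an mset attribute holding comma-separated "date archived-URL" pairs next to the ordinary href, and shows how a consumer could read the archived copies back out.

```python
from html.parser import HTMLParser

# Hypothetical sketch: the real 404-No-More proposal defines its own syntax;
# here we assume mset = comma-separated "date archived-URL" pairs.
class MsetLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        copies = []
        for entry in attrs.get("mset", "").split(","):
            entry = entry.strip()
            if entry:
                date, url = entry.split(" ", 1)
                copies.append({"date": date, "archived_url": url})
        self.links.append({"href": attrs.get("href"), "copies": copies})

parser = MsetLinkParser()
parser.feed('<a href="http://example.com/page" '
            'mset="2014-04-01 http://archive.example/20140401/page">source</a>')
```

A browser or crawler holding this data could fall back to the archived copy whenever the live href returns a 404 — the date-plus-archive pairing is exactly the information the project's authors describe.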
Paul Merrell

We finally gave Congress email addresses - Sunlight Foundation Blog - 0 views

  • On OpenCongress, you can now email your representatives and senators just as easily as you would a friend or colleague. We've added a new feature to OpenCongress. It's not flashy. It doesn't use D3 or integrate with social media. But we still think it's pretty cool. You might've already heard of it. Email. This may not sound like a big deal, but it's been a long time coming. A lot of people are surprised to learn that Congress doesn't have publicly available email addresses. It's the number one feature request that we hear from users of our APIs. Until recently, we didn't have a good response. That's because members of Congress typically put their feedback mechanisms behind captchas and zip code requirements. Sometimes these forms break; sometimes their requirements improperly lock out actual constituents. And they always make it harder to email your congressional delegation than it should be.
  • This is a real problem. According to the Congressional Management Foundation, 88% of Capitol Hill staffers agree that electronic messages from constituents influence their bosses' decisions. We think that it's inappropriate to erect technical barriers around such an essential democratic mechanism. Congress itself is addressing the problem. That effort has just entered its second decade, and people are feeling optimistic that a launch to a closed set of partners might be coming soon. But we weren't content to wait. So when the Electronic Frontier Foundation (EFF) approached us about this problem, we were excited to really make some progress. Building on groundwork first done by the Participatory Politics Foundation and more recent work within Sunlight, a network of 150 volunteers collected the data we needed from congressional websites in just two days. That information is now on Github, available to all who want to build the next generation of constituent communication tools. The EFF is already working on some exciting things to that end.
  • But we just wanted to be able to email our representatives like normal people. So now, if you visit a legislator's page on OpenCongress, you'll see an email address in the right-hand sidebar that looks like Sen.Reid@opencongress.org or Rep.Boehner@opencongress.org. You can also email myreps@opencongress.org to email both of your senators and your House representatives at once. The first time we get an email from you, we'll send one back asking for some additional details. This is necessary because our code submits your message by navigating those aforementioned congressional webforms, and we don't want to enter incorrect information. But for emails after the first one, all you'll have to do is click a link that says, "Yes, I meant to send that email."
  • ...1 more annotation...
  • One more thing: For now, our system will only let you email your own representatives. A lot of people dislike this. We do, too. In an age of increasing polarization, party discipline means that congressional leaders must be accountable to citizens outside their districts. But the unfortunate truth is that Congress typically won't bother reading messages from non-constituents — that's why those zip code requirements exist in the first place. Until that changes, we don't want our users to waste their time. So that's it. If it seems simple, it's because it is. But we think that unbreaking how Congress connects to the Internet is important. You should be able to send a call to action in a tweet, easily forward a listserv message to your representative and interact with your government using the tools you use to interact with everyone else.
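The address convention described in the post (Sen.Reid@opencongress.org, Rep.Boehner@opencongress.org) can be sketched as a tiny helper. The function name and the normalization rules are assumptions for illustration, not Sunlight Foundation code.

```python
# Minimal sketch of the OpenCongress address pattern described above.
# The chamber keys and title-casing rule are assumptions, not actual code.
def opencongress_address(chamber, last_name):
    """Build a Sen.X@opencongress.org / Rep.X@opencongress.org address."""
    prefix = {"senate": "Sen", "house": "Rep"}[chamber]
    return "{}.{}@opencongress.org".format(prefix, last_name.title())

print(opencongress_address("senate", "reid"))  # Sen.Reid@opencongress.org
```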
Gonzalo San Gil, PhD.

Poll: Should open source projects adopt community codes of conduct? | Opensource.com - 0 views

  •  
    "Posted 23 Jun 2014 by Opensource.com (Red Hat) [ Be respectful Assume good faith Be collaborative Try to be concise Be open] "
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com - 0 views

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • ...8 more annotations...
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-;-)”
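The Perfect Forward Secrecy mentioned in the excerpts above rests on ephemeral key exchange: each session derives a fresh, throwaway secret, so a later compromise of a long-term key cannot unlock recorded past traffic. A toy Diffie-Hellman sketch (deliberately tiny, insecure parameters, for illustration only) shows the mechanism:

```python
# Toy Diffie-Hellman exchange illustrating the idea behind Perfect Forward
# Secrecy. The parameters are tiny and insecure; real systems use 2048-bit
# groups or elliptic curves (the ECDHE cipher suites in TLS).
P = 4294967291   # a small prime modulus (2**32 - 5), illustration only
G = 2            # public generator

def session_secret(a_priv, b_priv):
    a_pub = pow(G, a_priv, P)    # Alice publishes g^a mod p
    b_pub = pow(G, b_priv, P)    # Bob publishes g^b mod p
    s_a = pow(b_pub, a_priv, P)  # Alice derives (g^b)^a
    s_b = pow(a_pub, b_priv, P)  # Bob derives (g^a)^b
    assert s_a == s_b            # both sides agree on the shared secret
    return s_a

# Fresh ephemeral exponents for each session yield unrelated secrets,
# so recording today's traffic does not help decrypt it tomorrow:
s1 = session_secret(123457, 987653)
s2 = session_secret(246813, 135791)
```

Because the private exponents are discarded after each session, there is no long-term key whose theft would retroactively expose s1 or s2 — that is what makes reading stored communications so labor intensive.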
Gonzalo San Gil, PhD.

Just-released WordPress 0day makes it easy to hijack millions of websites [Updated] | A... - 0 views

  •  
    "Update: About two hours after this post went live, WordPress released a critical security update that fixes the 0day vulnerability described below. The WordPress content management system used by millions of websites is vulnerable to two newly discovered threats that allow attackers to take full control of the Web server. Attack code has been released that targets one of the latest versions of WordPress, making it a zero-day exploit that could touch off a series of site hijackings throughout the Internet."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges. In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation is programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform. The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers can instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
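As a sketch of that kind of class-based extensibility, the minimal extractor below pulls semantically tagged chunks out of XHTML. The class names ("epigraph", "summary") are illustrative stand-ins, not a published microformat.

```python
from html.parser import HTMLParser

# Sketch of class-attribute extensibility: paragraphs carrying a class are
# treated as semantic chunks; plain paragraphs are ignored.
class SemanticExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current = None
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "p" and cls:
            self.current = (cls, [])

    def handle_data(self, data):
        if self.current:
            self.current[1].append(data)

    def handle_endtag(self, tag):
        if tag == "p" and self.current:
            cls, parts = self.current
            self.chunks.append((cls, "".join(parts)))
            self.current = None

x = SemanticExtractor()
x.feed('<p class="epigraph">Tell all the truth but tell it slant.</p>'
       '<p>Ordinary prose.</p>'
       '<p class="summary">Chapter overview.</p>')
```

The same content remains perfectly ordinary XHTML for every browser and CMS, while tools that care about the richer semantics can recover them from the class attribute.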
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
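That anatomy can be demonstrated with nothing more than a zip library. The sketch below builds a toy archive in memory; it is not a conformant ePub (the real spec also constrains how the mimetype entry is stored, among other details), but it shows the "Web page in a wrapper" structure.

```python
import io
import zipfile

# Toy illustration of ePub anatomy: a zip archive wrapping XHTML content
# plus metadata files. NOT a spec-conformant ePub.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")
    z.writestr("META-INF/container.xml",
               '<?xml version="1.0"?><container/>')  # placeholder metadata
    z.writestr("OEBPS/chapter1.xhtml",
               "<html><body><p>An ePub book is a Web page in a wrapper.</p>"
               "</body></html>")

# Reading it back is just unzipping:
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    content = z.read("OEBPS/chapter1.xhtml").decode()
```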
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
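The cleanup step can be sketched with a standard XML parser. In this illustration the class name "list-item" stands in for whatever InDesign's export actually emits; the point is the structural rewrite from class-marked paragraphs to a real XHTML list:

```python
import xml.etree.ElementTree as ET

# Presentation-oriented export: bulleted items tagged as paragraphs with
# a class attribute. The class name here is illustrative, not InDesign's
# actual output.
src = """<body>
  <p>Intro paragraph.</p>
  <p class="list-item">first point</p>
  <p class="list-item">second point</p>
  <p>Closing paragraph.</p>
</body>"""

body = ET.fromstring(src)
cleaned = ET.Element("body")
ul = None
for p in body:
    if p.get("class") == "list-item":
        if ul is None:          # open a new list at the first list item
            ul = ET.SubElement(cleaned, "ul")
        li = ET.SubElement(ul, "li")
        li.text = p.text
    else:
        ul = None               # a normal paragraph closes the list
        cleaned.append(p)

result = ET.tostring(cleaned, encoding="unicode")
print(result)
```

In practice we did this with search-and-replace rather than a script, but the transformation is the same: runs of class-marked paragraphs become `ul`/`li` structures that any XHTML consumer understands.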
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of W3C specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
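As a rough illustration of what the XSLT script produces, here is the same mapping sketched in Python. The element names (ParagraphStyleRange, CharacterStyleRange, Content) follow ICML's story markup, but the applied style names are assumptions that would have to match the stylesheet definitions in the InDesign template:

```python
import xml.etree.ElementTree as ET

def xhtml_to_icml(xhtml):
    """Map XHTML paragraphs onto ICML-style story markup (a sketch, not
    the full transformation; the real script also handles lists, inline
    formatting, and headings)."""
    body = ET.fromstring(xhtml)
    story = ET.Element("Story")
    for p in body.iter("p"):
        pr = ET.SubElement(story, "ParagraphStyleRange",
                           AppliedParagraphStyle="ParagraphStyle/Body")
        cr = ET.SubElement(pr, "CharacterStyleRange",
                           AppliedCharacterStyle="CharacterStyle/$ID/[No character style]")
        content = ET.SubElement(cr, "Content")
        content.text = p.text
        ET.SubElement(cr, "Br")  # paragraph break in ICML
    return ET.tostring(story, encoding="unicode")

icml = xhtml_to_icml("<body><p>One.</p><p>Two.</p></body>")
print(icml)
```

Because both sides of the mapping are plain, documented XML, the whole conversion reduces to renaming elements and wrapping text runs, which is why the real script stayed under 500 lines.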
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
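The wrapper itself is small enough to sketch directly. Per the ePub container (OCF) rules, the first zip entry must be an uncompressed file named mimetype; the package and chapter file names below are placeholders, and a real build would also need the OPF manifest, spine, and table-of-contents file that a tool like eCub generates for you:

```python
import io
import zipfile

# META-INF/container.xml points the reading system at the package file.
# The package name "content.opf" is a placeholder.
container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # mimetype must be the FIRST entry and must be stored uncompressed
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml", container,
               compress_type=zipfile.ZIP_DEFLATED)
    z.writestr("chapter1.xhtml",
               "<html><body><p>Chapter one.</p></body></html>",
               compress_type=zipfile.ZIP_DEFLATED)

with zipfile.ZipFile(buf) as z:
    first_entry = z.infolist()[0].filename
    mimetype = z.read("mimetype").decode()
print(first_entry, mimetype)
```

With clean chapter-level XHTML already in hand, wrapping it this way is exactly the "almost too easy" step described above.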
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Kano - The Kano Kit [Via x Open source Rules...] - 1 views

    • Gonzalo San Gil, PhD.
       
      [# ! Via, thanks x #share, # ! Robert Ryan -> FB's 'P2P Community...]
  •  
    "Kano is a computer you build and code yourself. Lego simple, Raspberry Pi powerful, and hugely fun."
Gonzalo San Gil, PhD.

Replace SourceForge with these Better Alternatives - Linux Links - The Linux Portal Site - 1 views

  •  
    "SourceForge is a long established web-based service that offers source code repository, downloads mirrors, bug tracker and other features. It acts as a centralized location for software developers to control and manage free and open-source software development. "
Gary Edwards

The Monkey On Microsoft's Back - Forbes.com - 0 views

  • The new technology, dubbed TraceMonkey, promises to speed up Firefox's ability to deliver complex applications. The move heightens the threat posed by a nascent group of online alternatives to Microsoft's most profitable software: PC applications, like Microsoft Office, that allow Microsoft to burn hundreds of millions of dollars on efforts to seize control of the online world. Microsoft's Business Division, which gets 90% of its revenues from sales of Microsoft Office, spat out $12.4 billion in operating income for the fiscal year ending June 30. Google (nasdaq: GOOG - news - people ), however, is playing a parallel game, using profits from its online advertising business to fund alternatives to Microsoft's desktop offerings. Google already says it has "millions" of users for its free, Web-based alternative to desktop staples, including Microsoft's Word, Excel and PowerPoint software. The next version of Firefox, which could debut by the end of this year, promises to speed up such applications, thanks to a new technology built into the developer's version of the software last week. Right now, rich Web applications such as Google Gmail rely on a technology known as Javascript to turn them from lifeless Web pages into applications that respond as users mouse about a Web page. TraceMonkey aims to turn the most frequently used chunks of Javascript code embedded into Web pages into binary form--allowing computers to hustle through the most used bits of code--without waiting around to render all of the code into binary form.
  •  
    I did send a very lengthy comment to Brian Caulfield, the Forbes author of this article. Of course, I disagreed with his perspective. TraceMonkey is great, accelerating JavaScript in Firefox in much the same way that SquirrelFish accelerates WebKit browsers. What Brian misses, though, is the RIA war taking place both inside and outside the browser (RIA = fully functional Web applications that WILL replace the "client/server" apps model)
Paul Merrell

Carakan - By Opera Core Concerns - 0 views

  • Over the past few months, a small team of developers and testers have been working on implementing a new ECMAScript/JavaScript engine for Opera. When Opera's current ECMAScript engine, called Futhark, was first released in a public version, it was the fastest engine on the market. That engine was developed to minimize code footprint and memory usage, rather than to achieve maximum execution speed. This has traditionally been a correct trade-off on many of the platforms Opera runs on. The Web is a changing environment however, and tomorrow's advanced web applications will require faster ECMAScript execution, so we have now taken on the challenge to once again develop the fastest ECMAScript engine on the market.
  • So how fast is Carakan? Using a regular cross-platform switch dispatch mechanism (without any generated native code) Carakan is currently about two and a half times faster at the SunSpider benchmark than the ECMAScript engine in Presto 2.2 (Opera 10 Alpha). Since Opera is ported to many different hardware architectures, this cross-platform improvement is on its own very important. The native code generation in Carakan is not yet ready for full-scale testing, but the few individual benchmark tests that it is already compatible with run between 5 and 50 times faster, so it is looking promising so far.
Paul Merrell

Doug Mahugh : Miscellaneous links for 12-09-2008 - 0 views

  • If you've been at one of the recent DII workshops, you may recall that some of us from Microsoft have been talking about an upcoming converter interface that will allow you to add support for other formats to Office. I'm pleased to report that we've now published the documentation on MSDN for the External File Converter for SP2. The basic concept is that you convert incoming files to the Open XML format, and on save you convert Open XML to your format. Using this API, you can extend Office to support any format you'd like. The details are not for the faint of heart, but there is sample C++ source code available to help you get started.
  •  
    So now we learn some details about the new MS Office API(s) for unsupported file formats Microsoft promised a few months ago. Surprise, surprise! They're not for native file support. They're external process tools for converting to and from OOXML. That makes it sound as though Microsoft has no intention of coughing up the documentation for the native file support APIs despite its claim that it would document all APIs for Office (also required by U.S. v. Microsoft). The extra conversion step also practically guarantees more conversion artifacts. Do the new APIs provide interop for embedded scripts, etc.? My guess is no. There has to be a reason Microsoft chose to externalize the process rather than documenting the existing APIs. Limiting features available is still the most plausible scenario.
Gonzalo San Gil, PhD.

6 tips to increase government agency adoption of open source software | Opensource.com - 0 views

  •  
    "Open source code drives collaborative innovation from a larger pool of developers at a lower cost, which is why federal agencies are adopting the "open source first" model."
Gonzalo San Gil, PhD.

Achieving Impossible Things with Free Culture and Commons-Based Enterprise : Terry Hanc... - 0 views

  •  
    "Author: Terry Hancock Keywords: free software; open source; free culture; commons-based peer production; commons-based enterprise; Free Software Magazine; Blender Foundation; Blender Open Movies; Wikipedia; Project Gutenberg; Open Hardware; One Laptop Per Child; Sugar Labs; licensing; copyleft; hosting; marketing; design; online community; Debian GNU/Linux; GNU General Public License; Creative Commons Attribution-ShareAlike License; TAPR Open Hardware License; collective patronage; women in free software; Creative Commons; OScar; C,mm,n; Free Software Foundation; Open Source Initiative; Freedom Defined; Free Software Definition; Debian Free Software Guidelines; Sourceforge; Google Code; digital rights management; digital restrictions management; technological protection measures; DRM; TPM; linux; gnu; manifesto Publisher: Free Software Magazine Press Year: 2009 Language: English Collection: opensource"
Gonzalo San Gil, PhD.

European Commission to update its open source policy | Joinup - 0 views

  •  
    " Steps up efforts to contribute code upstream The European Commission wants to make it easier for its software developers to submit patches and add new functionalities to open source projects. Contributing to open source communities will be made central to the EC's new open source policy, expects Pierre Damas, Head of Sector at the Directorate General for IT (DIGIT)."
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  •  
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel, and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.
Gonzalo San Gil, PhD.

GNU projects for network services | GNU social and GNU FM - 0 views

  •  
    "Welcome to the home of the GNU social and GNU FM projects. We are projects that work as network services, either through a web browser or over the Internet in some way. All of our code is free software, licensed under the GNU Affero General Public License."
Gonzalo San Gil, PhD.

Google Code Shutting Down - InternetNews. - 0 views

  •  
    "It wasn't Google shooting itself in the foot that is the prime cause of Google Code's demise, but rather the extreme success of Github. "
Paul Merrell

Use Tor or 'EXTREMIST' Tails Linux? Congrats, you're on the NSA's list * The Register - 0 views

  • Alleged leaked documents about the NSA's XKeyscore snooping software appear to show the paranoid agency is targeting Tor and Tails users, Linux Journal readers – and anyone else interested in online privacy. Apparently, this configuration file for XKeyscore is in the divulged data, which was obtained and studied by members of the Tor project and security specialists for German broadcasters NDR and WDR. In their analysis of the alleged top-secret documents, they claim the NSA is, among other things: specifically targeting Tor directory servers; reading email contents for mentions of Tor bridges; logging IP addresses used to search for privacy-focused websites and software; and possibly breaking international law in doing so. We already know from leaked Snowden documents that Western intelligence agents hate Tor for its anonymizing abilities. But what the aforementioned leaked source code, written in a rather strange custom language, shows is that not only is the NSA targeting the anonymizing network Tor specifically, it is also taking digital fingerprints of any netizens who are remotely interested in privacy.
  • These include readers of the Linux Journal site, anyone visiting the website for the Tor-powered Linux operating system Tails – described by the NSA as "a comsec mechanism advocated by extremists on extremist forums" – and anyone looking into combining Tails with the encryption tool Truecrypt. If something as innocuous as Linux Journal is on the NSA's hit list, it's a distinct possibility that El Reg is too, particularly in light of our recent exclusive report on GCHQ – which led to a Ministry of Defence advisor coming round our London office for a chat.
  • If you take even the slightest interest in online privacy or have Googled a Linux Journal article about a broken package, you are earmarked in an NSA database for further surveillance, according to these latest leaks. This is assuming the leaked file is genuine, of course. Other monitored sites, we're told, include HotSpotShield, FreeNet, Centurian, FreeProxies.org, MegaProxy, privacy.li and an anonymous email service called MixMinion. The IP address of computer users even looking at these sites is recorded and stored on the NSA's servers for further analysis, and it's up to the agency how long it keeps that data. The XKeyscore code, we're told, includes microplugins that target Tor servers in Germany, at MIT in the United States, in Sweden, in Austria, and in the Netherlands. In doing so it may fall foul not only of German law but also of the US's Fourth Amendment.
  • ...2 more annotations...
  • The nine Tor directory servers receive especially close monitoring from the NSA's spying software, which states the "goal is to find potential Tor clients connecting to the Tor directory servers." Tor clients linking into the directory servers are also logged. "This shows that Tor is working well enough that Tor has become a target for the intelligence services," said Sebastian Hahn, who runs one of the key Tor servers. "For me this means that I will definitely go ahead with the project."
  • While the German reporting team has published part of the XKeyscore scripting code, it doesn't say where it comes from. NSA whistleblower Edward Snowden would be a logical pick, but security experts are not so sure. "I do not believe that this came from the Snowden documents," said security guru Bruce Schneier. "I also don't believe the TAO catalog came from the Snowden documents. I think there's a second leaker out there." If so, the NSA is in for much more scrutiny than it ever expected.
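To make the fingerprinting behaviour described above concrete – flagging and logging the source IP of any request that touches a watched privacy-related site – here is a minimal sketch in Python. This is not the NSA's actual rule language (the leaked scripts use a custom DSL), and the function name and watchlist entries here are purely hypothetical illustrations:

```python
# Hypothetical sketch of selector-based fingerprinting as described in the
# reporting: record the source IP of any request to a watched domain.
# The watchlist and function are illustrative, not the real XKeyscore rules.

WATCHED_SITES = {
    "linuxjournal.com",   # sites named in the article as monitored
    "torproject.org",
    "tails.boum.org",
}

def fingerprint(src_ip, host, db):
    """If host matches the watchlist, log src_ip in db and return True."""
    domain = host.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    # Match the domain itself or any subdomain of a watched site.
    if any(domain == s or domain.endswith("." + s) for s in WATCHED_SITES):
        db.setdefault(src_ip, set()).add(domain)
        return True
    return False

db = {}
fingerprint("203.0.113.7", "www.linuxjournal.com", db)  # logged
fingerprint("203.0.113.8", "example.com", db)           # ignored
```

The point of the sketch is how indiscriminate such a rule is: a single page visit is enough to put an IP address in the database, with retention left entirely to the operator.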
Gonzalo San Gil, PhD.

10 years of podcasting: Code, comedy, and patent lawsuits | Ars Technica - 0 views

  •  
    ""Thank you very much for taking the time to download this MP3 file." by Cyrus Farivar - Aug 13 2014, 6:49pm CEST"