
Future of the Web / Group items matching "issue" in title, tags, annotations or url


How The Sirius XM Ruling Upsets Decades Of Copyright Law Consensus | Techdirt - 1 views

  •  
    "from the activist-judges... dept We recently wrote about district court judge Philip Gutierrez ruling against Sirius XM on the issue of streaming pre-1972 recordings. As we noted at the time, the ruling appeared to upset what was considered more or less a settled issue. "

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
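    A minimal sketch of what this class-based extensibility might look like in practice (the class names here are hypothetical, chosen for illustration rather than taken from any published microformat):

        <!-- Plain XHTML; publisher-defined semantics are layered on via the class attribute -->
        <div class="chapter">
          <p class="epigraph">To the browser this is just a paragraph,</p>
          <p>but a stylesheet, an XSLT script, or an indexing tool</p>
          <p class="summary">can treat epigraphs and summaries differently from ordinary body text.</p>
        </div>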
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
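    As a concrete illustration of that anatomy, the one file every ePub zip archive must carry at a fixed path is META-INF/container.xml, which does nothing but point at the package metadata file; the OEBPS/content.opf location shown below is a common convention rather than a requirement:

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <!-- The package file declares the book metadata, the manifest, and the reading order of the XHTML chapters -->
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>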
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
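    The kind of cleanup described above, sketched with hypothetical class names (the exact markup InDesign emits varies with the version and export settings):

        <!-- Presentation-oriented export: list items tagged as styled paragraphs -->
        <p class="bullet-item">First point</p>
        <p class="bullet-item">Second point</p>

        <!-- After search-and-replace cleanup: structural XHTML that any tool can interpret -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>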
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
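    A heavily simplified sketch of what such a transformation might look like; it is not the authors' script, it omits the Document and Story wrapper elements a real ICML file requires, and the style names are placeholders that would have to match the styles defined in the InDesign template:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Run with any XSLT 1.0 processor, for example: xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>
          <!-- Map each XHTML paragraph to an ICML paragraph range carrying a named InDesign style -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
              <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
                <Content><xsl:value-of select="normalize-space(.)"/></Content>
                <Br/>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>
          <!-- Suppress any text nodes not explicitly handled (navigation, scripts, and so on) -->
          <xsl:template match="text()"/>
        </xsl:stylesheet>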
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is a browser-oriented XML document type, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002, now known as ODF. Microsoft followed OpenOffice with an XML encoding of its own application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process, but I think that is something much easier to handle than trying to

YouTube - Lessig on McCain on Tech - 0 views

  •  
    Last week, the John McCain campaign finally published its technology platform plank. Larry Lessig takes a critical look at it on three major issues: [i] U.S. broadband uptake relative to other nations; [ii] network neutrality, as it relates to the freedom of internet users to select what use they wish to make of the internet; and [iii] how the McCain platform contrasts with the Obama technology plank. Lessig's usual lucidity and in-depth research shine brightly in this informative video presentation to make his point that the future of the Web is now a presidential campaign issue.

Coding In Paradise: Fixing the Web, Part I - 0 views

  •  
    Must read: "This blog post is part of a new, semi-regular series called Fixing the Web. The goal is to highlight these issues, identify potential solutions, and have a dialogue. I don't claim to have the answers for the situation we are in. However, I do know this -- if there is any community that potentially has what it takes to solve these issues I believe it's the Ajax and JavaScript communities, which is why this is a perfect place to have these discussions. To start, I see four areas that are broken that must be fixed: ..... "

[SURVEY PREVIEW MODE] EuroDIG Agenda Survey - 2 views

  •  
    [The Pan-European dialogue on Internet governance (EuroDIG) is an open platform for informal and inclusive discussion and exchange on public policy issues related to Internet Governance (IG) between stakeholders from all over Europe. It was created in 2008 by a number of key stakeholders representing various European stakeholder groups working in the field of IG. EuroDIG is a network which is open to all European stakeholders that are interested in contributing to an open and interactive discussion on IG issues. ...] *Send Personalized Suggestions thru The Page, fill The Survey...

Director Wants His Film on The Pirate Bay, Pirates Deliver... | TorrentFreak - 0 views

  •  
    " Ernesto on July 24, 2014 C: 38 News A few days ago a Dutch movie director asked people to upload a copy of one of his older films onto The Pirate Bay. The filmmaker had become fed up with the fact that copyright issues made his work completely unavailable through legal channels. To his surprise, pirates were quick to deliver. suzyDutch movie director Martin Koolhoven sent out an unusual request on Twitter a few days ago. "

FBI Flouts Obama Directive to Limit Gag Orders on National Security Letters - The Inter... - 0 views

  • Despite the post-Snowden spotlight on mass surveillance, the intelligence community’s easiest end-run around the Fourth Amendment since 2001 has been something called a National Security Letter. FBI agents can demand that an Internet service provider, telephone company or financial institution turn over its records on any number of people — without any judicial review whatsoever — simply by writing a letter that says the information is needed for national security purposes. The FBI at one point was cranking out over 50,000 such letters a year; by the latest count, it still issues about 60 a day. The letters look like this:
  • Recipients are legally required to comply — but it doesn’t stop there. They also aren’t allowed to mention the order to anyone, least of all the person whose data is being searched. Ever. That’s because National Security Letters almost always come with eternal gag orders. Here’s that part:
  • That means the NSL process utterly disregards the First Amendment as well. More than a year ago, President Obama announced that he was ordering the Justice Department to terminate gag orders “within a fixed time unless the government demonstrates a real need for further secrecy.” And on Feb. 3, when the Office of the Director of National Intelligence announced a handful of baby steps resulting from its “comprehensive effort to examine and enhance [its] privacy and civil liberty protections,” one of the most concrete was — finally — to cap the gag orders: In response to the President’s new direction, the FBI will now presumptively terminate National Security Letter nondisclosure orders at the earlier of three years after the opening of a fully predicated investigation or the investigation’s close. Continued nondisclosure orders beyond this period are permitted only if a Special Agent in Charge or a Deputy Assistant Director determines that the statutory standards for nondisclosure continue to be satisfied and that the case agent has justified, in writing, why continued nondisclosure is appropriate.
  • ...6 more annotations...
  • Despite the use of the word “now” in that first sentence, however, the FBI has yet to do any such thing. It has not announced any such change, nor explained how it will implement it, or when. Media inquiries were greeted with stalling and, finally, a no comment — ostensibly on advice of legal counsel. “There is pending litigation that deals with a lot of the same questions you’re asking, out of the Ninth Circuit,” FBI spokesman Chris Allen told me. “So for now, we’ll just have to decline to comment.” FBI lawyers are working on a court filing for that case, and “it will address” the new policy, he said. He would not say when to expect it.
  • There is indeed a significant case currently before the federal appeals court in San Francisco. Oral arguments were in October. A decision could come any time. But in that case, the Electronic Frontier Foundation (EFF), which is representing two unnamed communications companies that received NSLs, is calling for the entire NSL statute to be thrown out as unconstitutional — not for a tweak to the gag. And it has a March 2013 district court ruling in its favor. “The gag is a prior restraint under the First Amendment, and prior restraints have to meet an extremely high burden,” said Andrew Crocker, a legal fellow at EFF. That means going to court and meeting the burden of proof — not just signing a letter. Or as the Cato Institute’s Julian Sanchez put it, “To have such a low bar for denying persons or companies the right to speak about government orders they have been served with is anathema. And it is not very good for accountability.”
  • In a separate case, a wide range of media companies (including First Look Media, the non-profit digital media venture that produces The Intercept) are supporting a lawsuit filed by Twitter, demanding the right to say specifically how many NSLs it has received. But simply releasing companies from a gag doesn’t assure the kind of accountability that privacy advocates are saying is required by the Constitution. “What the public has to remember is a NSL is asking for your information, but it’s not asking it from you,” said Michael German, a former FBI agent who is now a fellow with the Brennan Center for Justice. “The vast majority of these things go to the very large telecommunications and financial companies who have a large stake in maintaining a good relationship with the government because they’re heavily regulated entities.”
  • So, German said, “the number of NSLs that would be exposed as a result of the release of the gag order is probably very few. The person whose records are being obtained is the one who should receive some notification.” A time limit on gags going forward also raises the question of whether past gag orders will now be withdrawn. “Obviously there are at this point literally hundreds of thousands of National Security Letters that are more than three years old,” said Sanchez. Individual review is therefore unlikely, but there ought to be some recourse, he said. And the further back you go, “it becomes increasingly implausible that a significant percentage of those are going to entail some dire national security risk.” The NSL program has a troubled history. The absolute secrecy of the program and resulting lack of accountability led to systemic abuse as documented by repeated inspector-general investigations, including improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, requests for information based on First Amendment protected activity, “after-the-fact” blanket NSLs to “cover” illegal requests, and hundreds of NSLs for “community of interest” or “calling circle” information without any determination that the telephone numbers were relevant to authorized national security investigations.
  • Obama’s own hand-selected “Review Group on Intelligence and Communications Technologies” recommended in December 2013 that NSLs should only be issued after judicial review — just like warrants — and that any gag should end within 180 days barring judicial re-approval. But FBI director James Comey objected to the idea, calling NSLs “a very important tool that is essential to the work we do.” His argument evidently prevailed with Obama.
  • NSLs have managed to stay largely under the American public’s radar. But, Crocker says, “pretty much every time I bring it up and give the thumbnail, people are shocked. Then you go into how many are issued every year, and they go crazy.” Want to send me your old NSL and see if we can set a new precedent? Here’s how to reach me. And here’s how to leak to me.

U.S. Net Neutrality Has a Massive Copyright Loophole | TorrentFreak - 0 views

  •  
    # ! [... Fingers crossed…. ] " Ernesto on March 15, 2015 (Opinion) After years of debating, U.S. Internet subscribers now have Government-regulated Net Neutrality. A huge step forward according to some, but the full order released a few days ago reveals some worrying caveats. While the rules prevent paid prioritization, they do very little to prevent BitTorrent blocking, the very issue that got the net neutrality debate started."

Issue in focus: the impact of intellectual property regimes on the enjoyment of right t... - 0 views

  •  
    "The Special Rapporteur has decided to devote her next thematic report to the Human Rights Council (March 2015) to the issue of the impact of intellectual property regimes on the enjoyment of right to science and culture, as enshrined in particular in article 15 of the International Covenant on Economic, Social and Cultural Rights. "

Section 215 and "Fruitless" (?!?) Constitutional Adjudication | Just Security - 0 views

  • This morning, the Second Circuit issued a follow-on ruling to its May decision in ACLU v. Clapper (which had held that the NSA’s bulk telephone records program was unlawful insofar as it had not properly been authorized by Congress). In a nutshell, today’s ruling rejects the ACLU’s request for an injunction against the continued operation of the program for the duration of the 180-day transitional period (which ends on November 29) from the old program to the quite different collection regime authorized by the USA Freedom Act. As the Second Circuit (in my view, quite correctly) concluded, “Regardless of whether the bulk telephone metadata program was illegal prior to May, as we have held, and whether it would be illegal after November 29, as Congress has now explicitly provided, it is clear that Congress intended to authorize it during the transitionary period.” So far, so good. But remember that the ACLU’s challenge to bulk collection was mounted on both statutory and constitutional grounds, the latter of which the Second Circuit was able to avoid in its earlier ruling because of its conclusion that, prior to the enactment of the USA Freedom Act, bulk collection was unauthorized by Congress. Now that it has held that it is authorized during the transitional period, that therefore tees up, quite unavoidably, whether bulk collection violates the Fourth Amendment. But rather than decide that (momentous) question, the Second Circuit ducked:
  • We agree with the government that we ought not meddle with Congress’s considered decision regarding the transition away from bulk telephone metadata collection, and also find that addressing these issues at this time would not be a prudent use of judicial authority. We need not, and should not, decide such momentous constitutional issues based on a request for such narrow and temporary relief. To do so would take more time than the brief transition period remaining for the telephone metadata program, at which point, any ruling on the constitutionality of the demised program would be fruitless. In other words, because any constitutional violation is short-lived, and because it results from the “considered decision” of Congress, it would be fruitless to actually resolve the constitutionality of bulk collection during the transitional period.
  • Hopefully, it won’t take a lot of convincing for folks to understand just how wrong-headed this is. For starters, if the plaintiffs are correct, they are currently being subjected to unconstitutional government surveillance for which they are entitled to a remedy. The fact that this surveillance has a limited shelf-life (and/or that Congress was complicit in it) doesn’t in any way ameliorate the constitutional violation — which is exactly why the Supreme Court has, for generations, recognized an exception to mootness doctrine for constitutional violations that, owing to their short duration, are “capable of repetition, yet evading review.” Indeed, in this very same opinion, the Second Circuit first held that the ACLU’s challenge isn’t moot, only to then invoke mootness-like principles to justify not resolving the constitutional claim. It can’t be both; either the constitutional challenge is moot, or it isn’t. But more generally, the notion that constitutional adjudication of a claim with a short shelf-life is “fruitless” utterly misses the significance of the establishment of forward-looking judicial precedent, especially in a day and age in which courts are allowed to (and routinely do) avoid resolving the merits of constitutional claims in cases in which the relevant precedent is not “clearly established.” Maybe, if this were the kind of constitutional question that was unlikely to recur, there’d be more to the Second Circuit’s avoidance of the issue in this case. But whether and to what extent the Fourth Amendment applies to information we voluntarily provide to third parties is hardly that kind of question, and the Second Circuit’s unconvincing refusal to answer that question in a context in which it is quite squarely presented is nothing short of feckless.

German Parliament Says No More Software Patents | Electronic Frontier Foundation - 0 views

  • The German Parliament recently took a huge step that would eliminate software patents (PDF) when it issued a joint motion requiring the German government to ensure that computer programs are only covered by copyright. Put differently, in Germany, software cannot be patented. The Parliament's motion follows a similar announcement made by New Zealand's government last month (PDF), in which it determined that computer programs were not inventions or a manner of manufacture and, thus, cannot be patented.
  • The crux of the German Parliament's motion rests on the fact that software is already protected by copyright, and developers are afforded "exploitation rights." These rights, however, become confused when broad, abstract patents also cover general aspects of computer programs. These two intellectual property systems are at odds. The clearest example of this clash is with free software. The motion recognizes this issue and therefore calls upon the government "to preserve the precedence of copyright law so that software developers can also publish their work under open source license terms and conditions with legal security." The free software movement relies upon the fact that software can be released under a copyright license that allows users to share it and build upon others' works. Patents, as Parliament finds, inhibit this fundamental spread.
  • Just like in the New Zealand order, the German Parliament carved out one type of software that could be patented: when the computer program serves merely as a replaceable equivalent for a mechanical or electro-mechanical component, as is the case, for instance, when software-based washing machine controls can replace an electromechanical program control unit consisting of revolving cylinders which activate the control circuits for the specific steps of the wash cycle. This allows for software that is tied to (and controls part of) another invention to be patented. In other words, if a claimed process is purely a computer program, then it is not patentable. (New Zealand's order uses a similar washing machine example.) The motion ends by calling upon the German government to push for this approach to be standard across all of Europe. We hope policymakers in the United States will also consider fundamental reform that deals with the problems caused by low-quality software patents. Ultimately, any real reform must address this issue.
  •  
    Note that an unofficial translation of the parliamentary motion is linked from the article. This adds substantially to the international pressure to end software patents, because Germany has been the strongest defender of software patents in Europe. The same legal grounds would not apply in the U.S. The strongest argument for non-patentability in the U.S., in my opinion, is that software patents embody both prior art and obviousness. A general-purpose computer can accomplish nothing unforeseen by the prior art of the computing device. And it is impossible for software to do more than cause different sequences of bit register states to be executed. This is the province of "skilled artisans" using known methods to produce predictable results. There is a long line of Supreme Court decisions holding that an "invention" with such traits is non-patentable. I have summarized that argument with citations at . 

Despite Administration's Promises, Most Government Transparency Still The Work Of Whist... - 0 views

  •  
    "from the the-government-can-doxx-you,-but-not-itself dept Meet the new transparency/Same as the old transparency: The Justice Department has kept classified at least 74 opinions, memos and letters on national security issues, including interrogation, detention and surveillance, according to a report released Tuesday by the Brennan Center for Justice."

EFF in 2015 - Annual Report - 0 views

  •  
    [The Electronic Frontier Foundation was founded in 1990 to protect the rights of technology users, a mission that expands dramatically as digital devices and networks transform modern life and culture. With over 25,000 dues-paying members around the world and a social media reach of well over 1 million followers across different social networks, EFF engages directly with digital users worldwide and provides leadership on cutting-edge issues of free expression, privacy, and human rights. Our annual report features reflections from several EFF staff members about some of our most significant efforts, as well as financial information for the fiscal year ending June 2015. To learn more, read our Year in Review series. ...]

Lynis 2.2.0 Released - Security Auditing and Scanning Tool for Linux Systems - 0 views

  •  
    " Lynis is an open source and much powerful auditing tool for Unix/Linux like operating systems. It scans system for security information, general system information, installed and available software information, configuration mistakes, security issues, user accounts without password, wrong file permissions, firewall auditing, etc."

Google, ACLU call to delay government hacking rule | TheHill - 0 views

  • A coalition of 26 organizations, including the American Civil Liberties Union (ACLU) and Google, signed a letter Monday asking lawmakers to delay a measure that would expand the government’s hacking authority. The letter asks Senate Majority Leader Mitch McConnell (R-Ky.), Minority Leader Harry Reid (D-Nev.), House Speaker Paul Ryan (R-Wis.), and House Minority Leader Nancy Pelosi (D-Calif.) to further review proposed changes to Rule 41 and delay their implementation until July 1, 2017. The Department of Justice’s alterations to the rule would allow law enforcement to use a single warrant to hack multiple devices beyond the jurisdiction in which the warrant was issued. The FBI used such a tactic to apprehend users of the child pornography dark website Playpen: it took control of the site for two weeks and, after securing two warrants, installed malware on Playpen users’ computers to acquire their identities. But the signatories of the letter, which include advocacy groups, companies and trade associations, are raising questions about the effects of the change.
  •  
    ".. no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." Fourth Amendment. The changes to Rule 41 ignore the particularity requirement by allowing the government to search computers that are not particularly identified in multiple locations not particularly identifed, in other words, a general warrant that is precisely the reason the particularity requirement was adopted to outlaw.

http://www.linux-server-security.com/linux_servers_howtos/linux_monitor_network_nload.html - 0 views

  •  
    "©2016 Chris Binnie On a continually changing network it is often difficult to spot issues due to the amount of noise generated by expected network traffic. Even when communications are seemingly quiet a packet sniffer will display screeds of noisy data. That data might be otherwise unseen broadcast traffic being sent to all hosts willing to listen and respond on a local network. Make no mistake, noise on a network link can cause all sorts of headaches because it can be impossible to identify trends quickly, especially if a host or the network itself is under attack. Packet sniffers will clearly display more traffic for the busiest connections which ultimately obscures the activities of less busy hosts."

Fix Copyright! | Help us Reform Copyright - 0 views

  •  
    "01 DYSFUNCTIONAL & NOT FIT FOR THE DIGITAL WORLD Copyright reform is needed to adapt to the digital world we live in. Under the current system everything tends to fall under copyright unless it is covered by a specific exception in the law. The trouble is that these exceptions are narrow, specific and technologically outdated: the list was written in 2001! This was well before YouTube and Facebook were created. As a result, everyday habits of online users could be considered illegal today. A blogger linking to copyrighted content, a meme based on a copyrighted image, a video with some footage from an existing movie or a song: all of that could create issues for the user that posted them."

Kremlin Denies Claim It Considered Giving Snowden As 'Gift' To Trump - 0 views

  • Amid reports that Moscow is considering handing over NSA whistleblower Edward Snowden as a “gift” to U.S. President Donald Trump, a Russian government spokesperson said Monday that the Kremlin and the White House have not discussed the matter, Russia’s state TASS agency reported. “No, this issue (Snowden’s fate) was not raised,” presidential spokesperson Dmitry Peskov told reporters Monday, adding that Russian officials have not taken a position on whether Snowden should be extradited to the U.S. or granted Russian citizenship. “The issue was not raised (during the Russian-US contacts),” Peskov said. “At the moment it is not among bilateral issues.” The statement comes after Snowden — who has lived in Russia since 2013, first with one-year temporary asylum then a residence permit — revealed in recent days that he is “not afraid” of being handed over to the United States, where he faces espionage charges for his explosive 2013 leak of documents on secret U.S. mass surveillance programs.
  • However, Snowden also said in an interview with Yahoo News that talk of a possible trade between Moscow and Washington makes him feel “encouraged” because it vindicates him in the face of accusations that he has been a spy for Russia by laying bare the fact that he has always been independent and “worked on behalf of the United States.” “Finally: irrefutable evidence that I never cooperated with Russian intel,” he tweeted on Friday. “No country trades away spies, as the rest would fear they’re next.” In the U.S., Snowden faces charges of theft of government property and violation of the Espionage Act on two counts, which each carry a maximum sentence of 10 years.
  • “What I am proud of,” Snowden told Yahoo News, “is the fact that every decision that I have made I can defend.” Snowden is set to be eligible to apply for Russian citizenship next year, according to his lawyer. Last month, Moscow extended his residence permit, which is now valid until 2020.
  •  
    One of the bravest patriots in U.S. history, forced to live abroad. Ain't that life?

Law Professor Claims Any Internet Company 'Research' On Users Without Review Board Appr... - 1 views

  •  
    "from the you-sure-you-want-to-go-there dept For many years I've been a huge fan of law professor James Grimmelmann. His legal analysis on various issues is often quite valuable, and I've quoted him more than a few times. However, he's now arguing that the now infamous Facebook happiness experiment and the similarly discussed OkCupid "hook you up with someone you should hate" experiments weren't just unethical, but illegal."

Free Software Foundation statement on the GNU Bash "shellshock" vulnerability - Free So... - 0 views

  •  
    "by Free Software Foundation - Published on Sep 25, 2014 04:51 PM A major security vulnerability has been discovered in the free software shell GNU Bash. The most serious issues have already been fixed, and a complete fix is well underway. GNU/Linux distributions are working quickly to release updated packages for their users. All Bash users should upgrade immediately, and audit the list of remote network services running on their systems. " [# ! + http://security.stackexchange.com/questions/68168/is-there-a-short-command-to-test-if-my-server-is-secure-against-the-shellshock-b]