Future of the Web: Group items tagged "Research"

Paul Merrell

Open Access Can't Wait. Pass FASTR Now. | Electronic Frontier Foundation - 1 views

  • When you pay for federally funded research, you should be allowed to read it. That’s the idea behind the Fair Access to Science and Technology Research Act (S.1701, H.R.3427), which was recently reintroduced in both houses of Congress. FASTR was first introduced in 2013, and while it has strong support in both parties, it has never gained enough momentum to pass. We need to change that. Let’s tell Congress that passing an open access law should be a top priority.
  • Tell Congress: It’s time to move FASTR
    The proposal is pretty simple: Under FASTR, every federal agency that spends more than $100 million on grants for research would be required to adopt an open access policy. The bill gives each agency flexibility to implement an open access policy suited to the work it funds, so long as research is available to the public after an “embargo period” of a year or less. One of the major points of contention around FASTR is how long that embargo period should be. Last year, the Senate Homeland Security and Governmental Affairs Committee approved FASTR unanimously, but only after extending that embargo period from six months to 12, putting FASTR in line with the 2013 White House open access memo. That’s the version that was recently reintroduced in the Senate. The House bill, by contrast, sets the embargo period at six months. EFF supports a shorter period. Part of what’s important about open access is that it democratizes knowledge: when research is available to the public, you don’t need expensive journal subscriptions or paid access to academic databases in order to read it. A citizen scientist can use and build on the same body of knowledge as someone with institutional connections. But in the fast-moving world of scientific research, 12 months is an eternity. A shorter embargo is far from a radical proposition, especially in 2017. The landscape for academic publishing is very different from what it was when FASTR was first introduced, thanks in large part to nongovernmental funders who already enforce open access mandates. Major foundations like Ford, Gates, and Hewlett have adopted strong open access policies requiring that research be not only available to the public, but also licensed to allow republishing and reuse by anyone.
  • Just last year, the Gates Foundation made headlines when it dropped the embargo period from its policy entirely, requiring that research be published openly immediately. After a brief standoff, major publishers began to accommodate Gates’ requirements. As a result, we finally have public confirmation of what we’ve always known: open access mandates don’t put publishers out of business; they push them to modernize their business models. Imagine how a strong open access mandate for government-funded research—with a requirement that that research be licensed openly—could transform publishing. FASTR may not be that law, but it’s a huge step in the right direction, and it’s the best option on the table today. Let’s urge Congress to pass a version of FASTR with an embargo period of six months or less, and then use it as a foundation for stronger open access in the future.
thinkahol *

Citizen Scientist 2.0 - 4 views

  •  
    What does the future of science look like? About a year ago, I was asked this question. My response then was: transdisciplinary collaboration. Researchers from a variety of domains (biology, philosophy, psychology, neuroscience, economics, law) all coming together, using inputs from each specialized area to generate the best comprehensive solutions to society's most persistent problems. Indeed, it appears as if I was on the right track, as more and more academic research departments, as well as industries, are seeing the value in this type of partnership. Now let's take this a step further. Not only do I think we will be relying on inputs from researchers and experts from multiple domains to solve scientific problems, but I see society itself getting involved on a much more significant level as well. And I don't just mean science awareness. I'm talking about actually participating in the research itself. Essentially, I see a huge boom in the future for Citizen Science.
Paul Merrell

The Wifi Alliance, Coming Soon to Your Neighborhood: 5G Wireless | Global Research - Ce... - 0 views

  • Like any new technology that claims to offer the most advanced development, promising that its definition of progress will cure society’s ills or make life easier by eliminating the drudgery of antiquated appliances, the Wifi Alliance was organized as a worldwide wireless network to connect “everyone and everything, everywhere” as it promised “improvements to nearly every aspect of daily life.” The Alliance, which makes no mention of potential health or environmental concerns, further proclaimed (and it may be correct) that there are “more wifi devices than people on earth”. It is that inescapable exposure to ubiquitous wireless technologies wherein lies the problem.
  • Even prior to the 1997 introduction of commercially available wifi devices, which have since saturated every industrialized country, EMF wifi hot spots were everywhere. Today, with the addition of cell and cordless phones and towers, broadcast antennas, smart meters and pervasive computer wifi, both adults and especially vulnerable children are surrounded 24-7 by an inescapable presence, with little recognition that all radiation exposure is cumulative.
  • The National Toxicology Program (NTP), a branch of the US National Institutes of Health (NIH), conducted the world’s largest study on the radiofrequency radiation used by the US telecommunications industry and found a “statistically significant increase in brain and heart cancers” in animals exposed to EMF (electromagnetic fields). The NTP study confirmed the connection between mobile and wireless phone use and human brain cancer risks, and its conclusions were supported by other epidemiological peer-reviewed studies. Of special note is that the studies citing biological risk to human health involved exposures below accepted international standards.
  •  
    “…what this means is that the current safety standards are off by a factor of about 7 million.” Pointing out that a recent FCC Chair was a former lobbyist for the telecom industry: “I know how they've attacked various people. In the U.S. … the funding for the EMF research [by the Environmental Protection Agency] was cut off starting in 1986 … The U.S. Office of Naval Research had been funding a fair amount of research in this area [in the '70s]. They [also] … stopped funding new grants in 1986 … And then the NIH a few years later followed the same path …” As if all this were not reason enough for concern or even downright panic, the next generation of wireless technology, known as 5G (fifth generation) and representing the innocuous-sounding Internet of Things, promises a quantum leap in power and exceedingly more damaging health impacts, with mandatory exposures. The immense expansion of radiation emissions from the current wireless EMF frequency band to 5G, about to be perpetrated on an unsuspecting American public, should be criminal. Developed by the US military for non-lethal perimeter and crowd control, the Active Denial System emits high-density, high-frequency wireless radiation comparable to 5G, in the neighborhood of 90 GHz. Current pre-5G frequency band emissions used in today's commercial wireless range from 300 MHz to 3 GHz, while 5G will become the first wireless system to utilize millimeter waves, with frequencies ranging from 30 to 300 GHz. One example of the differential is that a current LAN (local area network) uses 2.4 GHz. Hidden behind these numbers is an utterly devastating increase in health effects, of impacts so stunning as to numb the senses. In 2017, the international Environmental Health Trust recommended an EU moratorium "on the roll-out of the fifth generation, 5G, for telecommunication until potential hazards for human health and the environment have …
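The numbers in the highlight above put the "millimeter wave" label in perspective: free-space wavelength is just the speed of light divided by frequency, so moving from today's bands to 5G's upper range shrinks wavelengths from roughly a decimeter to a millimeter. A quick sketch of that arithmetic:

```python
# Free-space wavelength: lambda = c / f.
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_hz):
    """Return the free-space wavelength in millimetres."""
    return C / freq_hz * 1000

print(round(wavelength_mm(2.4e9)))   # 2.4 GHz Wi-Fi LAN band -> ~125 mm
print(round(wavelength_mm(30e9)))    # bottom of the 30-300 GHz band -> ~10 mm
print(round(wavelength_mm(300e9)))   # top of the band -> ~1 mm
```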
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
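As a minimal illustration of that extensibility, here is a sketch of XHTML carrying publisher semantics in "class" attributes, and a few lines that recover them. The class names ("epigraph", "summary") are invented for the example, not drawn from any standard or microformat:

```python
import xml.etree.ElementTree as ET

# XHTML keeps its generic vocabulary, but the "class" attribute can carry
# publisher-specific semantics (the class names here are hypothetical).
xhtml = """<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">A quotation opening the chapter.</p>
  <p>An ordinary paragraph.</p>
  <p class="summary">A closing summary.</p>
</div>"""

NS = "{http://www.w3.org/1999/xhtml}"
root = ET.fromstring(xhtml)

# Pull out only the semantically tagged paragraphs.
tagged = {p.get("class"): p.text
          for p in root.iter(NS + "p") if p.get("class")}
print(tagged)  # the epigraph and summary paragraphs, keyed by class
```

Any XML-aware tool downstream can make the same distinction, which is the whole point: semantics ride along inside plain, valid XHTML.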
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
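That anatomy can be sketched with nothing more than a stock zip library. The file contents below are abbreviated placeholders (a real ePub needs a valid container.xml and OPF package document), but the structure is the real one:

```python
import zipfile, io

# An ePub is a zip archive: a declared mimetype, some metadata files,
# and the book content as XHTML.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # The mimetype entry comes first and is stored uncompressed.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml",
               "<container>...</container>")   # points at the OPF file (abbreviated)
    z.writestr("OEBPS/content.opf",
               "<package>...</package>")       # book metadata (abbreviated)
    z.writestr("OEBPS/chapter1.xhtml",
               "<html><body><p>The book's content.</p></body></html>")

with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # mimetype first, then metadata, then XHTML content
```

Unzip any ePub you own and you will find the same shape: a Web page in a wrapper.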
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
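A search-and-replace cleanup of that kind might look like the following sketch. The "list-item" class name is illustrative of what such an export could emit, not InDesign's actual output:

```python
import re

# Presentation-oriented export: list items tagged as classed paragraphs.
exported = ('<p class="list-item">First point</p>\n'
            '<p class="list-item">Second point</p>\n'
            '<p>Ordinary paragraph.</p>')

# Step 1: turn each classed paragraph into a proper list-item element.
html = re.sub(r'<p class="list-item">(.*?)</p>', r'<li>\1</li>', exported)

# Step 2: wrap each run of consecutive <li> elements in a <ul>.
html = re.sub(r'((?:<li>.*?</li>\n?)+)', r'<ul>\n\1</ul>\n', html)
print(html)
```

The guiding principle stated above applies here too: the output depends on no particular software to interpret it, just standard XHTML list structure.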
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (XSL Transformations) to do the work. XSLT is part of the XML family of W3C specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
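The gist of such a transformation is a mapping from XHTML elements to styled ranges that reference the template's stylesheet. The sketch below uses Python rather than XSLT for brevity, and is only loosely modeled on ICML's ParagraphStyleRange/Content structure; a real script must also emit ICML's document preamble, and the style names here are hypothetical:

```python
import xml.etree.ElementTree as ET

# Map XHTML block elements to paragraph styles defined in the InDesign
# template (the style names are hypothetical).
STYLE_MAP = {"h1": "Heading", "p": "Body"}

def xhtml_to_icml(xhtml_fragment):
    """Sketch: XHTML fragment -> ICML-like styled ranges."""
    src = ET.fromstring(xhtml_fragment)
    story = ET.Element("Story")
    for child in src:
        style = STYLE_MAP.get(child.tag, "Body")
        rng = ET.SubElement(story, "ParagraphStyleRange",
                            AppliedParagraphStyle=f"ParagraphStyle/{style}")
        ET.SubElement(rng, "Content").text = child.text or ""
    return ET.tostring(story, encoding="unicode")

icml = xhtml_to_icml("<body><h1>Chapter One</h1><p>Opening text.</p></body>")
print(icml)
```

Because both vocabularies are this regular, the element-by-element mapping is exactly the kind of work XSLT was designed for, which is why the real script stayed under 500 lines.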
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is the browser-oriented XML formulation of HTML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
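The pivot-format idea above can be illustrated without a full XSLT processor: the same structural mapping (legacy outline elements to XHTML elements) can be written with Python's standard-library ElementTree. The `<node>`/`<text>` legacy shape here is a hypothetical stand-in, not the real NCP format, and a production pipeline would do this declaratively with an XSLT stylesheet:

```python
import xml.etree.ElementTree as ET

# Hypothetical legacy outline document, standing in for an NCP-style format.
LEGACY = '<notecase><node title="Intro"><text>Hello</text></node></notecase>'

def to_xhtml(legacy_xml: str) -> str:
    """Map a toy legacy outline into XHTML, the proposed 'universal pivot'."""
    root = ET.fromstring(legacy_xml)
    html = ET.Element("html", xmlns="http://www.w3.org/1999/xhtml")
    body = ET.SubElement(html, "body")
    for node in root.iter("node"):
        # Each outline node becomes a heading plus its text paragraphs.
        heading = ET.SubElement(body, "h2")
        heading.text = node.get("title", "")
        for t in node.iter("text"):
            p = ET.SubElement(body, "p")
            p.text = t.text
    return ET.tostring(html, encoding="unicode")

print(to_xhtml(LEGACY))
```

Once the content is in XHTML, downstream conversions to ODF, ePUB or HTML/CSS can be layered on as further transforms.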
Gonzalo San Gil, PhD.

Software Piracy Hurts Linux Adoption, Research Finds - TorrentFreak [# ! Note...] - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! No way. Piracy has nothing to do with Linux. It's just another 'biased' press #vain #try to #identify #opensource with '#crime'...
  •  
    " Ernesto on February 21, 2016 C: 24 News New research suggests that software piracy has a detrimental effect on the adoption of Linux desktop operating systems. Piracy is one of the reasons why Windows has been able to maintain its dominant market position, making open source alternatives "forgotten victims" of copyright infringement."
Gonzalo San Gil, PhD.

Shutting Down Pirate Sites is Ineffective, European Commission Finds | TorrentFreak - 0 views

  •  
    " Ernesto on May 14, 2015 C: 0 Breaking Shutting down pirate websites such as The Pirate Bay is high on the agenda of the entertainment industries. However, according to research published by the European Commission's Joint Research Centre, these raids are relatively ineffective and potentially counterproductive."
Gonzalo San Gil, PhD.

MPAA Research: Blocking The Pirate Bay Works, So..... | TorrentFreak - 1 views

  •  
    " Ernesto on August 28, 2014 C: 61 News Hollywood has helped to get The Pirate Bay blocked in many countries, but not on its home turf. There are now various signs that this may change in the near future. Among other things, the MPAA has conducted internal research to show that site blocking is rather effective."
  •  
    Domain blocking is largely a non-starter in the U.S. because of the Constitution's First Amendment, although it has been allowed in some circumstances. Over-generalizing, but the more legal content a site has, the less susceptible it is to domain-blocking. It's even more difficult at the ISP level because of statutory protections that immunize ISPs from private content-related suit. Major U.S. ISPs zealously protect those protections in Congress. At the request of Hollywood, President Obama convened a meeting that persuaded major ISPs to voluntarily block download of particular movies, using DRM filters. But my understanding is that users can still download them if they are using the Tor browser. I haven't checked because there's nothing Hollywood releases that I can't wait until it's available on my cable television service. Even then, I mainly use the television to find something just interesting enough to persuade me to look up from my computer monitors for a moment, to reduce eye strain from monitor glare. I'm not a movie buff nor am I enamored of thinly veiled propaganda. So Hollywood does not figure largely in my life. As yet, there is no comparable blocking on music downloads.
Paul Merrell

Glassholes: A Mini NSA on Your Face, Recorded by the Spy Agency | Global Research - 2 views

  • eOnline reports: A new app will allow total strangers to ID you and pull up all your information, just by looking at you and scanning your face with their Google Glass. The app is called NameTag and it sounds CREEPY. The “real-time facial recognition” software “can detect a face using the Google Glass camera, send it wirelessly to a server, compare it to millions of records, and in seconds return a match complete with a name, additional photos and social media profiles.” The information listed could include your name, occupation, any social media profiles you have set up and whether or not you have a criminal record (“CRIMINAL HISTORY FOUND” pops up in bright red letters according to the demo).
  • Since the NSA is tapping into all of our digital communications, it is not unreasonable to assume that all of the info from your digital glasses – yup, everything – may be recorded by the spy agency. Are we going to have millions of mini NSAs walking around recording everything … glassholes? It doesn’t help inspire confidence that America’s largest police force and Taser are beta-testing Google Glasses. Postscript: I love gadgets and tech, and previously discussed the exciting possibilities of Google Glasses. But the NSA is ruining the fun, just like it’s harming U.S. Internet business.
  •  
    Thankfully, there's budding technology to block computer facial-recognition algorithms. http://tinyurl.com/mzfyfra On the other hand, used Hallowe'en masks can usually be purchased inexpensively from some nearby school kids at this time of year. Now if I could just put together a few near-infrared LEDs to fry a license plate-scanner's view ...
Gonzalo San Gil, PhD.

Search Engines Can Diminish Online Piracy, Research Finds | TorrentFreak - 1 views

    • Gonzalo San Gil, PhD.
       
      # ! This siege against Search Engines # ! is just a bunch of punitive measures related to # ! other issues.
    • Gonzalo San Gil, PhD.
       
      # ! People are already aware that (so-called) 'Legal' sites # ! have a manipulated -limited- supply. # ! Industry has to '#monetize' -Fairly!- free file-sharing...
  •  
    "It has to be noted that Professor Telang and his colleagues received a generous donation from the MPAA for their research program. However, the researchers suggest that their work is carried out independently." "As a word of caution the researchers point out that meddling with search results in the real world may be much more challenging. False positives could lead to significant social costs and should be avoided, for example."
Gonzalo San Gil, PhD.

Law Professor Claims Any Internet Company 'Research' On Users Without Review Board Appr... - 1 views

  •  
    "from the you-sure-you-want-to-go-there dept For many years I've been a huge fan of law professor James Grimmelmann. His legal analysis on various issues is often quite valuable, and I've quoted him more than a few times. However, he's now arguing that the now infamous Facebook happiness experiment and the similarly discussed OkCupid "hook you up with someone you should hate" experiments weren't just unethical, but illegal."
Paul Merrell

US spy lab hopes to geotag every outdoor photo on social media | Ars Technica - 0 views

  • Imagine if someone could scan every image on Facebook, Twitter, and Instagram, then instantly determine where each was taken. The ability to combine this location data with information about who appears in those photos—and any social media contacts tied to them—would make it possible for government agencies to quickly track terrorist groups posting propaganda photos. (And, really, just about anyone else.) That's precisely the goal of Finder, a research program of the Intelligence Advanced Research Projects Agency (IARPA), the Office of the Director of National Intelligence's dedicated research organization. For many photos taken with smartphones (and with some consumer cameras), geolocation information is saved with the image by default. The location is stored in the Exif (Exchangeable Image File Format) data of the photo itself unless geolocation services are turned off. If you have used Apple's iCloud photo store or Google Photos, you've probably created a rich map of your pattern of life through geotagged metadata. However, this location data is pruned off for privacy reasons when images are uploaded to some social media services, and privacy-conscious photographers (particularly those concerned about potential drone strikes) will purposely disable geotagging on their devices and social media accounts.
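The Exif GPS fields mentioned above store latitude and longitude as degree/minute/second values plus a hemisphere reference; turning them into the decimal coordinates a mapping service uses is simple arithmetic. A minimal sketch (pure Python; extracting the raw tags from an image file would additionally need a library such as Pillow, which exposes them via `Image.getexif()`):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert Exif GPS degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Example: 37° 46' 30" N, 122° 25' 6" W (roughly downtown San Francisco)
lat = dms_to_decimal(37, 46, 30, "N")
lon = dms_to_decimal(122, 25, 6, "W")
print(lat, lon)
```

This is why stripping or disabling the GPSInfo tag, as the privacy-conscious photographers above do, is enough to defeat naive geotag harvesting.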
Paul Merrell

The best way to read Glenn Greenwald's 'No Place to Hide' - 0 views

  • Journalist Glenn Greenwald just dropped a pile of new secret National Security Agency documents onto the Internet. But this isn’t just some haphazard WikiLeaks-style dump. These documents, leaked to Greenwald last year by former NSA contractor Edward Snowden, are key supplemental reading material for his new book, No Place to Hide, which went on sale Tuesday. Now, you could just go buy the book in hardcover and read it like you would any other nonfiction tome. Thanks to all the additional source material, however, if any work should be read on an e-reader or computer, this is it. Here are all the links and instructions for getting the most out of No Place to Hide.
  • Greenwald has released two versions of the accompanying NSA docs: a compressed version and an uncompressed version. The only difference between these two is the quality of the PDFs. The uncompressed version clocks in at over 91MB, while the compressed version is just under 13MB. For simple reading purposes, just go with the compressed version and save yourself some storage space. Greenwald also released additional “notes” for the book, which are just citations. Unless you’re doing some scholarly research, you can skip this download.
  • No Place to Hide is, of course, available on a wide variety of ebook formats—all of which are a few dollars cheaper than the hardcover version, I might add. Pick your e-poison: Amazon, Nook, Kobo, iBooks. Flipping back and forth Each page of the documents includes a corresponding page number for the book, to allow readers to easily flip between the book text and the supporting documents. If you use the Amazon Kindle version, you also have the option of reading Greenwald’s book directly on your computer using the Kindle for PC app or directly in your browser. Yes, that may be the worst way to read a book. In this case, however, it may be the easiest way to flip back and forth between the book text and the notes and supporting documents. Of course, you can do the same on your e-reader—though it can be a bit of a pain. Those of you who own a tablet are in luck, as they provide the best way to read both ebooks and PDF files. Simply download the book using the e-reader app of your choice, download the PDFs from Greenwald’s website, and dig in. If you own a Kindle, Nook, or other ereader, you may have to convert the PDFs into a format that works well with your device. The Internet is full of tools and how-to guides for how to do this. Here’s one:
  • ...1 more annotation...
  • Kindle users also have the option of using Amazon’s Whispernet service, which converts PDFs into a format that functions best on the company’s e-reader. That will cost you a small fee, however—$0.15 per megabyte, which means the compressed Greenwald docs will cost you a whopping $1.95.
Paul Merrell

NSA contractors use LinkedIn profiles to cash in on national security | Al Jazeera America - 0 views

  • NSA spies need jobs, too. And that is why many covert programs could be hiding in plain sight. Job websites such as LinkedIn and Indeed.com contain hundreds of profiles that reference classified NSA efforts, posted by everyone from career government employees to low-level IT workers who served in Iraq or Afghanistan. They offer a rare glimpse into the intelligence community's projects and how they operate. Now some researchers are using the same kinds of big-data tools employed by the NSA to scrape public LinkedIn profiles for classified programs. But the presence of so much classified information in public view raises serious concerns about security — and about the intelligence industry as a whole. “I’ve spent the past couple of years searching LinkedIn profiles for NSA programs,” said Christopher Soghoian, the principal technologist with the American Civil Liberties Union’s Speech, Privacy and Technology Project.
  • On Aug. 3, The Wall Street Journal published a story about the FBI’s growing use of hacking to monitor suspects, based on information Soghoian provided. The next day, Soghoian spoke at the Defcon hacking conference about how he uncovered the existence of the FBI’s hacking team, known as the Remote Operations Unit (ROU), using the LinkedIn profiles of two employees at James Bimen Associates, with which the FBI contracts for hacking operations. “Had it not been for the sloppy actions of a few contractors updating their LinkedIn profiles, we would have never known about this,” Soghoian said in his Defcon talk. Those two contractors were not the only ones being sloppy.
  • And there are many more. A quick search of Indeed.com using three code names unlikely to return false positives — Dishfire, XKeyscore and Pinwale — turned up 323 résumés. The same search on LinkedIn turned up 48 profiles mentioning Dishfire, 18 mentioning XKeyscore and 74 mentioning Pinwale. Almost all these people appear to work in the intelligence industry.
    Network-mapping the data
    Fabio Pietrosanti of the Hermes Center for Transparency and Digital Human Rights noticed all the code names on LinkedIn last December. While sitting with M.C. McGrath at the Chaos Communication Congress in Hamburg, Germany, Pietrosanti began searching the website for classified program names — and getting serious results. McGrath was already developing Transparency Toolkit, a Web application for investigative research, and knew he could improve on Pietrosanti's off-the-cuff methods.
  • ...2 more annotations...
  • “I was, like, huh, maybe there’s more we can do with this — actually get a list of all these profiles that have these results and use that to analyze the structure of which companies are helping with which programs, which people are helping with which programs, try to figure out in what capacity, and learn more about things that we might not know about,” McGrath said. He set up a computer program called a scraper to search LinkedIn for public profiles that mention known NSA programs, contractors or jargon — such as SIGINT, the agency’s term for “signals intelligence” gleaned from intercepted communications. Once the scraper found the name of an NSA program, it searched nearby for other words in all caps. That allowed McGrath to find the names of unknown programs, too. Once McGrath had the raw data — thousands of profiles in all, with 70 to 80 different program names — he created a network graph that showed the relationships between specific government agencies, contractors and intelligence programs. Of course, the data are limited to what people are posting on their LinkedIn profiles. Still, the network graph gives a sense of which contractors work on several NSA programs, which ones work on just one or two, and even which programs military units in Iraq and Afghanistan are using. And that is just the beginning.
  • Click on the image to view an interactive network illustration of the relationships between specific national security surveillance programs in red, and government organizations or private contractors in blue.
  •  
    What a giggle, public spying on NSA and its contractors using Big Data. The interactive network graph with its sidebar display of relevant data derived from LinkedIn profiles is just too delightful. 
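The harvesting heuristic McGrath describes — find a known program name in a profile, then collect other all-caps tokens nearby as candidate code names — can be sketched in a few lines. The token window and filters here are guesses at the approach, not his actual Transparency Toolkit code:

```python
import re

# A few publicly reported NSA program names to seed the search.
KNOWN = {"XKEYSCORE", "DISHFIRE", "PINWALE"}

def candidate_codenames(text: str, window: int = 8) -> set:
    """Once a known program name appears in profile text, harvest other
    all-caps tokens within `window` words of it as candidate code names."""
    tokens = re.findall(r"[A-Za-z]+", text)
    found = set()
    for i, tok in enumerate(tokens):
        if tok.upper() in KNOWN:
            for near in tokens[max(0, i - window): i + window + 1]:
                # Keep longer all-caps tokens; short ones (NSA, CIA) are noise.
                if near.isupper() and len(near) > 3 and near not in KNOWN:
                    found.add(near)
    return found
```

Run over thousands of scraped profiles, a heuristic like this yields both new program names and the company/program relationships used to build the network graph.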
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over-the-top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But the linked Scientific American article notes that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of sensationalist information not in the Nature/Scientific American version, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
Paul Merrell

BitTorrent Sync creates private, peer-to-peer Dropbox, no cloud required | Ars Technica - 6 views

  • BitTorrent today released folder syncing software that replicates files across multiple computers using the same peer-to-peer file sharing technology that powers BitTorrent clients. The free BitTorrent Sync application is labeled as being in the alpha stage, so it's not necessarily ready for prime-time, but it is publicly available for download and working as advertised on my home network. BitTorrent, Inc. (yes, there is a legitimate company behind BitTorrent) took to its blog to announce the move from a pre-alpha, private program to the publicly available alpha. Additions since the private alpha include one-way synchronization, one-time secrets for sharing files with a friend or colleague, and the ability to exclude specific files and directories.
  • BitTorrent Sync provides "unlimited, secure file-syncing," the company said. "You can use it for remote backup. Or, you can use it to transfer large folders of personal media between users and machines; editors and collaborators. It’s simple. It’s free. It’s the awesome power of P2P, applied to file-syncing." File transfers are encrypted, with private information never being stored on an external server or in the "cloud." "Since Sync is based on P2P and doesn’t require a pit-stop in the cloud, you can transfer files at the maximum speed supported by your network," BitTorrent said. "BitTorrent Sync is specifically designed to handle large files, so you can sync original, high quality, uncompressed files."
  •  
    Direct P2P encrypted file syncing, no cloud intermediate, which should translate to far more secure exchange of files, with less opportunity for snooping by governments or others, than with cloud-based services. 
  • ...5 more comments...
  •  
    Hey Paul, is there an open source document management system that I could hook the BitTorrent Sync to?
  •  
    More detail please. What do you want to do with the doc management system? Platform? Server-side or stand-alone? Industrial strength and highly configurable or lightweight and simple? What do you mean by "hook?" Not that I would be able to answer anyway. I really know very little about BitTorrent Sync. In fact, as far as I'd gone before your question was to look at the FAQ. It's linked from . But there's a link to a forum on the same page. Giving the first page a quick scan confirms that this really is alpha-state software. But that would probably be a better place to ask. (Just give them more specific information of what you'd like to do.) There are other projects out there working on getting around the surveillance problem. I2P is one that is a farther along than BitTorrent Sync and quite a bit more flexible. See . (But I haven't used it, so caveat emptor.)
  •  
    There is a great list of PRISM Proof software at http://prism-break.org/. Includes a link to I2P. I want to replace gmail though, but would like another Web-based system since I need multi-device access. Of course, I need to replace my Google Apps / Google Docs system. That's why I asked about a PRISM Proof sync-share-store DMS. My guess is that there are many users similarly seeking a PRISM Proof platform of communications, content and collaborative computing systems. BusinessInsider.com is crushed with articles about Google struggling to squirm out from under the NSA PRISM boot-on-the-back-of-their-neck situation. As if blaming the NSA makes up for the dragnet that they consented/allowed/conceded to cover their entire platform. Perhaps we should be watching Germany? There must be tons of startup operations underway, all seeking to replace Google, Amazon, FaceBook, Microsoft, Skype and so many others. It's a great day for Libertyware :)
  •  
    Is the NSA involvement the "Kiss of Death"? Google seems to think so. I'm wondering what the impact would be if ZOHO were to announce a PRISM Proof productivity platform?
  •  
    It is indeed. The E.U. has far more protective digital privacy rights than we do (none). If you're looking for a Dropbox replacement (you should be), for a cloud-based solution take a look at . Unlike Dropbox, all of the encryption/decryption happens on your local machine; Wuala never sees your files unencrypted. Dropbox folks have admitted that there's no technical barrier to them looking at your files. Their encrypt/decrypt operations are done in the cloud (if they actually bother) and they have the key. Which makes it more chilling that the PRISM docs Snowden leaked make reference to Dropbox being the next cloud service NSA plans to add to their collection. Wuala also is located (as are its servers) in Switzerland, which also has far stronger digital data privacy laws than the U.S. Plus the Swiss are well along the path to E.U. membership; they've ratified many of the E.U. treaties including the treaty on Human Rights, which as I recall is where the digital privacy sections are. I've begun to migrate from Dropbox to Wuala. It seems to be neck and neck with Dropbox on features and supported platforms, with the advantage of a far more secure approach and 5 GB free. But I'd also love to see more approaches akin to I2P and BitTorrent Sync that provide the means to bypass the cloud. Don't depend on government to ensure digital privacy, route around the government voyeurs. Hmmm ... I wonder if the NSA has the computer capacity to handle millions of people switching to encrypted communication? :-) Thanks for the link to the software list.
  •  
    Re: Google. I don't know if it's the "kiss of death" but they're definitely going to take a hit, particularly outside the U.S. BTW, I'm remembering from a few years back when the ODF Foundation was still kicking. I did a fair bit of research on the bureaucratic forces in the E.U. that were pushing for the Open Document Exchange Formats. That grew out of a then-ongoing push to get all of the E.U. nations connected via a network that is not dependent on the Internet. It was fairly complete at the time down to the national level and was branching out to the local level, and the plan from there was to push connections to business and then to Joe Sixpack and wife. Interop was key, hence ODEF. The E.U. might not be that far away from an ability to sever the digital connections with the U.S. Say a bunch of daisy-chained proxy anonymizers for communications with the U.S. Of course they'd have to block the UK from the network and treat it like it is the U.S. There's a formal signals intelligence service collaboration/integration dating back to WW 2, as I recall, among the U.S., the U.K., Canada, Australia, and New Zealand. Don't remember its name. But it's the same group of nations that were collaborating on Echelon. So the E.U. wouldn't want to let the UK fox inside their new chicken coop. Ah, it's just a fantasy. The U.S. and the E.U. are too interdependent. I have no idea how hard it would be for the Zoho folk to come up with desktop-side encryption/decryption. And I don't know whether their servers are located outside the reach of a U.S. court's search warrant. But I think Google is going to have to move in that direction fast if it wants to minimize the damage. Or get way out in front of the hounds chomping at the NSA's ankles and reduce the NSA to compost. OTOH, Google might be a government covert op. for all I know. :-) I'm really enjoying watching the NSA show. Who knows what facet of their Big Brother operation gets revealed next?
  •  
    ZOHO is an Indian company with USA marketing offices. No idea where the server farm is located, but they were not on the NSA list. I've known Raju Vegesna for years, mostly from the old Web 2.0 and Office 2.0 Conferences. Raju runs the USA offices in Santa Clara. I'll try to catch up with him on Thursday. How he could miss this once-in-a-lifetime moment to clean out Google, Microsoft and SalesForce.com is something I'd like to find out about. Thanks for the Wuala tip. You sent me that years ago, when I was working on research and design for the SurDocs project. Incredible that all our notes, research, designs and correspondence were left to rot in Google Wave! Too too funny. I recall telling Alex from SurDocs that he had to use a USA host, like Amazon, that could be trusted by USA customers to keep their docs safe and secure. Now look what I've done! I've tossed his entire company information set into the laps of the NSA and their cabal of connected corporatists :)
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commo... - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” That broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it.
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grow, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation by virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property: the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable: airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences."
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Gary Edwards

Everything You Need to Know About the Bitcoin Protocol - 0 views

  • In this research paper we hope to explain that the bitcoin currency itself is ‘just’ the next phase in the evolution of money – from dumb to smart money. It’s the underlying platform, the Bitcoin protocol aka Bitcoin 2.0, that holds the real transformative power. That is where the revolution starts. According to our research there are several reasons why this new technology is going to disrupt our economy and society as we have never experienced before:
  • From dumb to smart money
  • Like TCP/IP, HTTP, and SMTP in their infancy, the Bitcoin protocol is at an early evolutionary stage. But unlike the early days of the Internet, when only a few people had a computer, nowadays everybody has a supercomputer in their pocket. It’s Moore’s Law all over again. Bitcoin is going to disrupt the economy and society with breathtaking speed. For the first time in history, technology makes it possible to transfer property rights (such as shares, certificates, digital money, etc.) quickly, transparently, and very securely. Moreover, these transactions can take place without the involvement of a trusted intermediary such as a government, notary, or bank. Companies and governments are no longer needed as the “middle man” in all kinds of financial agreements. The Internet of Things gives machines a digital identity; Bitcoin APIs (machine-to-machine interfaces) give them an economic identity as well. Next to people and corporations, machines will become a new type of agent in the economy.
  • The Bitcoin protocol flips automation upside down. From now on, automation within companies can start top down, making white-collar employees obsolete. Corporate missions can be encoded on top of the protocol. Machines can manage a corporation all by themselves. Bitcoin introduces the world to the new nature of the firm: the Distributed Autonomous Corporation (DAC). This new type of corporation also adds a new perspective to the discussion on technological unemployment. The DAC might even turn technological unemployment into structural unemployment. Bitcoin is key to the success of the Collaborative Economy. Bitcoin enables a frictionless and transparent way of sharing ideas, media, products, services and technology between people without the interference of corporations and governments.
  •  
    A series of eleven pages discussing Bitcoin and the extraordinary impact it will have on the world economy. Excellent article and a worthy follow-up to the previous Marc Andreessen discussion of Bitcoin.
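The annotations above describe the Bitcoin protocol as a way to record transfers of property rights without a trusted intermediary. The core idea, an append-only ledger in which each entry commits to the hash of the one before it, can be sketched in a few lines of Python. This is a toy illustration only, not the actual Bitcoin protocol (which adds digital signatures, proof-of-work, and peer-to-peer replication); the class and asset names here are hypothetical.

```python
import hashlib
import json

def entry_hash(entry):
    # Hash the canonical (sorted-key) JSON form of a ledger entry.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    """Append-only, hash-chained record of asset transfers (illustrative only)."""

    def __init__(self):
        self.entries = []

    def transfer(self, asset, sender, receiver):
        # Each new entry commits to the hash of the previous entry,
        # so the whole history is tamper-evident.
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        self.entries.append({"asset": asset, "from": sender, "to": receiver, "prev": prev})

    def verify(self):
        # Recompute the chain: editing any past entry breaks every later link.
        for i in range(1, len(self.entries)):
            if self.entries[i]["prev"] != entry_hash(self.entries[i - 1]):
                return False
        return True

ledger = ToyLedger()
ledger.transfer("share-certificate-42", "alice", "bob")
ledger.transfer("share-certificate-42", "bob", "carol")
assert ledger.verify()

ledger.entries[0]["to"] = "mallory"   # tamper with history
assert not ledger.verify()
```

Because each entry's `prev` field commits to the entire history before it, rewriting any past transfer invalidates every subsequent link; that property is what lets participants trust the shared ledger itself rather than a bank or notary.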
Paul Merrell

Profiled: From Radio to Porn, British Spies Track Web Users' Online Identities | Global ... - 0 views

  • One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps. The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens, all without a court order or judicial warrant.
  • The power of KARMA POLICE was illustrated in 2009, when GCHQ launched a top-secret operation to collect intelligence about people using the Internet to listen to radio shows. The agency used a sample of nearly 7 million metadata records, gathered over a period of three months, to observe the listening habits of more than 200,000 people across 185 countries, including the U.S., the U.K., Ireland, Canada, Mexico, Spain, the Netherlands, France, and Germany.
  • GCHQ’s documents indicate that the plans for KARMA POLICE were drawn up between 2007 and 2008. The system was designed to provide the agency with “either (a) a web browsing profile for every visible user on the Internet, or (b) a user profile for every visible website on the Internet.” The origin of the surveillance system’s name is not discussed in the documents. But KARMA POLICE is also the name of a popular song released in 1997 by the Grammy Award-winning British band Radiohead, suggesting the spies may have been fans. A verse repeated throughout the hit song includes the lyric, “This is what you’ll get, when you mess with us.”
  • GCHQ vacuums up the website browsing histories using “probes” that tap into the international fiber-optic cables that transport Internet traffic across the world. A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” (a term the agency uses to refer to metadata records), with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held (41 percent) was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.” There was a simple aim at the heart of the top-secret program: record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs.
  • The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
Paul Merrell

The Fundamentals of US Surveillance: What Edward Snowden Never Told Us? | Global Resear... - 0 views

  • Former US intelligence contractor Edward Snowden’s revelations rocked the world. According to his detailed reports, the US had launched massive spying programs and was scrutinizing the communications of American citizens in a manner which could only be described as extreme and intense. The US’s reaction was swift and to the point. “Nobody is listening to your telephone calls,” President Obama said when asked about the NSA. As quoted in The Guardian, Obama went on to say that surveillance programs were “fully overseen not just by Congress but by the Fisa court, a court specially put together to evaluate classified programs to make sure that the executive branch, or government generally, is not abusing them”. However, it appears that Snowden may have missed a pivotal part of the US surveillance program. And in stating that “nobody” is listening to our calls, President Obama may have been fudging quite a bit.
  • In fact, Great Britain maintains a “listening post” at NSA HQ. The laws restricting live wiretaps do not apply to foreign countries, and thus this listening post is not subject to US law. In other words, the restrictions upon wiretaps, etc. do not apply to the British listening post. So when Great Britain hands over the recordings to the NSA, technically speaking, a law is not being broken and technically speaking, the US is not eavesdropping on our each and every call. It is Great Britain which is doing the eavesdropping and turning over these records to US intelligence. According to John Loftus, formerly an attorney with the Department of Justice and author of a number of books concerning US intelligence activities, back in the late seventies, the USDOJ issued a memorandum proposing an amendment to FISA. Loftus, who recalls seeing the memo, stated in conversation this week that the DOJ proposed inserting the words “by the NSA” into the FISA law so the scope of the law would only restrict surveillance by the NSA, not by the British. Any subsequent sharing of the data culled through the listening posts was strictly outside the arena of FISA. Obama was less than forthcoming when he insisted that “What I can say unequivocally is that if you are a US person, the NSA cannot listen to your telephone calls, and the NSA cannot target your emails … and have not.”
  • According to Loftus, the NSA is indeed listening as Great Britain is turning over the surveillance records en masse to that agency. Loftus states that the arrangement is reciprocal, with the US maintaining a parallel listening post in Great Britain. In an interview this past week, Loftus told this reporter that he believes that Snowden simply did not know about the arrangement between Britain and the US. As a contractor, said Loftus, Snowden would not have had access to this information and thus his detailed reports on the extent of US spying, including such programs as XKeyscore, which analyzes internet data based on global demographics, and PRISM, under which the telecommunications companies, such as Google, Facebook, et al, are mandated to collect our communications, missed the critical issue of the FISA loophole.
  • U.S. government officials have defended the program by asserting it cannot be used on domestic targets without a warrant. But once again, the FISA courts and their super-secret warrants do not apply to foreign government surveillance of US citizens. So all this sturm and drang about whether or not the US is eavesdropping on our communications is, in fact, irrelevant and diversionary.
  • In fact, the USA Freedom Act reinstituted a number of the surveillance protocols of Section 215, including authorization for roving wiretaps and tracking “lone wolf terrorists.” While mainstream media heralded the passage of the bill as restoring privacy rights which were shredded under 215, privacy advocates have maintained that the bill will do little, if anything, to reverse the surveillance situation in the US. The NSA went on the record as supporting the Freedom Act, stating it would end bulk collection of telephone metadata. However, in light of the reciprocal agreement between the US and Great Britain, the entire hoopla over NSA surveillance, Section 215, FISA courts and the USA Freedom Act could be seen as a giant smokescreen. If Great Britain is collecting our real-time phone conversations and turning them over to the NSA, outside the realm or reach of the above stated laws, then all this posturing over the privacy rights of US citizens and surveillance laws expiring and being resurrected doesn’t amount to a hill of CDs.
Gonzalo San Gil, PhD.

Piracy Can Boost Digital Music Sales, Research Shows - TorrentFreak - 0 views

  •  
    Ernesto on January 21, 2016: A new academic paper published by the Economics Department of Queen's University examines the link between BitTorrent downloads and music album sales. The study shows that, depending on the circumstances, piracy can hurt sales or boost them through free promotion.