
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML,
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
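The kind of class-based extensibility described above can be sketched with Python's standard library; the class names (`epigraph`, `summary`) are hypothetical examples for illustration, not part of any published microformat:

```python
# Minimal sketch: recovering semantic roles from XHTML "class" attributes.
# The role names used here are invented examples, not a real microformat.
import xml.etree.ElementTree as ET

xhtml = """<div>
  <p class="epigraph">To begin at the beginning.</p>
  <p>Ordinary body text.</p>
  <p class="summary">A one-line recap.</p>
</div>"""

root = ET.fromstring(xhtml)
# Collect only the paragraphs that carry a semantic class.
roles = {p.get("class"): p.text for p in root.findall("p") if p.get("class")}
print(roles)
```

Because the markup stays plain XHTML, browsers and CMSes render it normally, while any tool that knows the class vocabulary can recover the extra semantics.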
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
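The "Web page in a wrapper" anatomy described above can be sketched with Python's standard `zipfile` module; the file names follow the ePub container conventions, but the chapter content here is a stub for illustration:

```python
# Sketch of the ePub anatomy: a zip archive whose first entry is an
# uncompressed "mimetype" file, plus container metadata and XHTML content.
import zipfile

with zipfile.ZipFile("book.epub", "w") as z:
    # The mimetype entry must come first and be stored uncompressed.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    # Container metadata pointing at the package file.
    z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    # The book's actual content: plain XHTML.
    z.writestr("OEBPS/chapter1.xhtml",
               "<html xmlns='http://www.w3.org/1999/xhtml'>"
               "<body><p>Hello.</p></body></html>")

print(zipfile.ZipFile("book.epub").namelist())
```

Unzipping any ePub from a retailer shows the same shape: declarations, metadata, and XHTML chapters.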
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
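The cleanup step described above can be sketched in Python; the class name `list-item` is an assumption standing in for whatever InDesign's export actually emits:

```python
# Minimal sketch: rewrap paragraphs tagged with a list-item class (the
# class name "list-item" is a hypothetical stand-in) as a proper <ul>/<li>
# structure, as the cleanup step above describes.
import xml.etree.ElementTree as ET

src = """<body>
  <p>Intro paragraph.</p>
  <p class="list-item">first</p>
  <p class="list-item">second</p>
</body>"""

body = ET.fromstring(src)
ul = ET.Element("ul")
for p in list(body):
    if p.get("class") == "list-item":
        li = ET.SubElement(ul, "li")
        li.text = p.text
        body.remove(p)
body.append(ul)
print(ET.tostring(body, encoding="unicode"))
```

The real work was done with ordinary search-and-replace rather than a script, but the transformation is the same: presentation-oriented tagging in, structural XHTML out.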
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the W3C's family of XML specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
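To illustrate the shape of such a transform, here is a heavily simplified XSLT sketch that maps XHTML paragraphs onto a styled output vocabulary; the output element and style names (`Para`, `BodyText`) are placeholders invented for illustration, not actual IDML/ICML markup:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:h="http://www.w3.org/1999/xhtml">
  <!-- Map each XHTML paragraph to a styled paragraph in the target
       vocabulary. "Para" and "BodyText" are placeholder names. -->
  <xsl:template match="h:p">
    <Para style="BodyText">
      <xsl:apply-templates/>
    </Para>
  </xsl:template>
</xsl:stylesheet>
```

An invocation along the lines of `xsltproc transform.xsl chapter.xhtml > chapter.icml` is all the machinery such a pipeline needs.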
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is the Web's native application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, later known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Internet users raise funds to buy lawmakers' browsing histories in protest | TheHill - 0 views

  • House passes bill undoing Obama internet privacy rule (TheHill.com)
  • “Great news! The House just voted to pass SJR34. We will finally be able to buy the browser history of all the Congresspeople who voted to sell our data and privacy without our consent!” he wrote on the fundraising page. Another activist from Tennessee has raised more than $152,000 from more than 9,800 people. A bill on its way to President Trump’s desk would allow internet service providers (ISPs) to sell users’ data and Web browsing history. It has not taken effect, which means there is no browsing history data yet to purchase. A Washington Post reporter also wrote it would be possible to buy the data “in theory, but probably not in reality.” A former enforcement bureau chief at the Federal Communications Commission told the newspaper that most internet service providers would be barred from selling this information under their privacy policies. If they did sell any individual's personal data in violation of those policies, a state attorney general could take the ISPs to court.
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commons | Cato Unbound - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” That broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • ...17 more annotations...
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grow, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
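The settlement arithmetic described above can be made concrete with a minimal sketch. The $2,500-per-location demand and the 9.56-cent FRAND rate come from the passage; the trial-cost figure is a hypothetical assumption added here for illustration, since actual defense costs vary widely.

```python
# Illustrative sketch of the troll's pricing logic: the demand dwarfs the
# fair royalty owed, but undercuts the cost of fighting, so paying is rational.
FRAND_RATE = 0.0956               # fair per-device royalty in dollars (from the passage)
DEMAND_PER_LOCATION = 2500.0      # Innovatio's settlement demand (from the passage)
ASSUMED_TRIAL_COST = 2_000_000.0  # hypothetical cost of defending at trial

def rational_to_settle(devices_per_location: int) -> bool:
    """Return True when paying the demand is cheaper than litigating."""
    fair_royalty = devices_per_location * FRAND_RATE
    overcharge = DEMAND_PER_LOCATION / fair_royalty
    print(f"fair royalty: ${fair_royalty:.2f} (demand is {overcharge:,.0f}x that)")
    return DEMAND_PER_LOCATION < ASSUMED_TRIAL_COST

# A coffee shop with a handful of Wi-Fi devices still settles:
print(rational_to_settle(5))
```

The asymmetry is the point: for five devices the fair royalty is under fifty cents, yet settling at thousands of times that amount remains the defendant's cheapest option.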
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation in virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of  the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property: the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable; airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Paul Merrell

Prepare to Hang Up the Phone, Forever - WSJ.com - 0 views

  • At decade's end, the trusty landline telephone could be nothing more than a memory. Telecom giants AT&T and Verizon Communications …
  • The two providers want to lay the crumbling POTS to rest and replace it with Internet Protocol-based systems that use the same wired and wireless broadband networks that bring Web access, cable programming and, yes, even your telephone service, into your homes. You may think you have a traditional landline because your home phone plugs into a jack, but if you have bundled your phone with Internet and cable services, you're making calls over an IP network, not twisted copper wires. California, Florida, Texas, Georgia, North Carolina, Wisconsin and Ohio are among states that agree telecom resources would be better redirected into modern telephone technologies and innovations, and will kill copper-based technologies in the next three years or so. Kentucky and Colorado are weighing similar laws, which force people to go wireless whether they want to or not. In Mantoloking, N.J., Verizon wants to replace the landline system, which Hurricane Sandy wiped out, with its wireless Voice Link. That would make it the first entire town to go landline-less, a move that isn't sitting well with all residents.
  • New Jersey's legislature, worried about losing data applications such as credit-card processing and alarm systems that wireless systems can't handle, wants a one-year moratorium to block that switch. It will vote on the measure this month. (Verizon tried a similar change in Fire Island, N.Y., when its copper lines were destroyed, but public opposition persuaded Verizon to install fiber-optic cable.) It's no surprise that landlines are unfashionable, considering many of us already have or are preparing to ditch them. More than 38% of adults and 45.5% of children live in households without a landline telephone, says the Centers for Disease Control and Prevention. That means two in every five U.S. homes, or 39%, are wireless, up from 26.6% three years ago. Moreover, a scant 8.5% of households relied only on a landline, while 2% were phoneless in 2013. Metropolitan residents have few worries about the end of landlines. High-speed wire and wireless services are abundant and work well, despite occasional dropped calls. Those living in rural areas, where cell towers are few and 4G capability limited, face different issues.
  • ...2 more annotations...
  • Safety is one of them. Call 911 from a landline and the emergency operator pinpoints your exact address, down to the apartment number. Wireless phones lack those specifics, and even with GPS navigation aren't as precise. Matters are worse in rural and even suburban areas that signals don't reach, sometimes because they're blocked by buildings or the landscape. That's of concern to the Federal Communications Commission, which oversees all forms of U.S. communications services. Universal access is a tenet of its mission, and, despite the state-by-state degradation of the mandate, it's unwilling to let telecom companies simply drop geographically undesirable customers. Telecom firms need FCC approval to ax services completely, and can't do so unless there is a viable competitor to pick up the slack. Last year AT&T asked to turn off its legacy network, which could create gaps in universal coverage and will force people off the grid to get a wireless provider.
  • AT&T and the FCC will soon begin trials to explore life without copper-wired landlines. Consumers will voluntarily test IP-connected networks and their impact on towns like Carbon Hill, Ala., population 2,071. They want to know how households will reach 911, how small businesses will connect to customers, how people with medical-monitoring devices or home alarms know they will always be connected to a reliable network, and what the costs are. "We cannot be a nation of opportunity without networks of opportunity," said FCC Chairman Tom Wheeler in unveiling the plan. "This pilot program will help us learn how fiber might be deployed where it is not now deployed…and how new forms of wireless can reach deep into the interior of rural America."
Gary Edwards

» 21 Facts About NSA Snooping That Every American Should Know Alex Jones' Infowars: There's a war on for your mind! - 0 views

  •  
    NSA-PRISM-Echelon in a nutshell. The list below is a short sample. Each fact is documented, and well worth the time reading. "The following are 21 facts about NSA snooping that every American should know…" #1 According to CNET, the NSA told Congress during a recent classified briefing that it does not need court authorization to listen to domestic phone calls… #2 According to U.S. Representative Loretta Sanchez, members of Congress learned "significantly more than what is out in the media today" about NSA snooping during that classified briefing. #3 The content of all of our phone calls is being recorded and stored.  The following is from a transcript of an exchange between Erin Burnett of CNN and former FBI counterterrorism agent Tim Clemente which took place just last month… #4 The chief technology officer at the CIA, Gus Hunt, made the following statement back in March… "We fundamentally try to collect everything and hang onto it forever." #5 During a Senate Judiciary Oversight Committee hearing in March 2011, FBI Director Robert Mueller admitted that the intelligence community has the ability to access emails "as they come in"… #6 Back in 2007, Director of National Intelligence Michael McConnell told Congress that the president has the "constitutional authority" to authorize domestic spying without warrants no matter what the law says. #7 The Director Of National Intelligence James Clapper recently told Congress that the NSA was not collecting any information about American citizens.  When the media confronted him about his lie, he explained that he "responded in what I thought was the most truthful, or least untruthful manner". #8 The Washington Post is reporting that the NSA has four primary data collection systems… MAINWAY, MARINA, METADATA, PRISM #9 The NSA knows pretty much everything that you are doing on the Internet.  The following is a short excerpt from a recent Yahoo article… #10 The NSA is suppose
Gonzalo San Gil, PhD.

Linux Kernel 4.0.5 Is Out with x86, ARM, and XFS Improvements, Updated Drivers - Softpedia - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! Towards the 4.1...
  •  
    "All users of the 4.0 kernel branch must update. The fifth maintenance release of Linux kernel 4.0 arrived today, June 6, as announced by Greg Kroah-Hartman, a renowned kernel developer, on Linux kernel's official mailing lists."
Paul Merrell

European Lawmakers Demand Answers on Phone Key Theft - The Intercept - 0 views

  • European officials are demanding answers and investigations into a joint U.S. and U.K. hack of the world’s largest manufacturer of mobile SIM cards, following a report published by The Intercept Thursday. The report, based on leaked documents provided by NSA whistleblower Edward Snowden, revealed the U.S. spy agency and its British counterpart Government Communications Headquarters, GCHQ, hacked the Franco-Dutch digital security giant Gemalto in a sophisticated heist of encrypted cell-phone keys. The European Parliament’s chief negotiator on the European Union’s data protection law, Jan Philipp Albrecht, said the hack was “obviously based on some illegal activities.” “Member states like the U.K. are frankly not respecting the [law of the] Netherlands and partner states,” Albrecht told the Wall Street Journal. Sophie in ’t Veld, an EU parliamentarian with D66, the Netherlands’ largest opposition party, added, “Year after year we have heard about cowboy practices of secret services, but governments did nothing and kept quiet […] In fact, those very same governments push for ever-more surveillance capabilities, while it remains unclear how effective these practices are.”
  • “If the average IT whizzkid breaks into a company system, he’ll end up behind bars,” In ’t Veld added in a tweet Friday. The EU itself is barred from undertaking such investigations, leaving individual countries responsible for looking into cases that impact their national security matters. “We even get letters from the U.K. government saying we shouldn’t deal with these issues because it’s their own issue of national security,” Albrecht said. Still, lawmakers in the Netherlands are seeking investigations. Gerard Schouw, a Dutch member of parliament, also with the D66 party, has called on Ronald Plasterk, the Dutch minister of the interior, to answer questions before parliament. On Tuesday, the Dutch parliament will debate Schouw’s request. Additionally, European legal experts tell The Intercept, public prosecutors in EU member states that are both party to the Cybercrime Convention, which prohibits computer hacking, and home to Gemalto subsidiaries could pursue investigations into the breach of the company’s systems.
  • According to secret documents from 2010 and 2011, a joint NSA-GCHQ unit penetrated Gemalto’s internal networks and infiltrated the private communications of its employees in order to steal encryption keys, embedded on tiny SIM cards, which are used to protect the privacy of cellphone communications across the world. Gemalto produces some 2 billion SIM cards a year. The company’s clients include AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers. “[We] believe we have their entire network,” GCHQ boasted in a leaked slide, referring to the Gemalto heist.
  • ...4 more annotations...
  • While Gemalto was indeed another casualty in Western governments’ sweeping effort to gather as much global intelligence advantage as possible, the leaked documents make clear that the company was specifically targeted. According to the materials published Thursday, GCHQ used a specific codename — DAPINO GAMMA — to refer to the operations against Gemalto. The spies also actively penetrated the email and social media accounts of Gemalto employees across the world in an effort to steal the company’s encryption keys. Evidence of the Gemalto breach rattled the digital security community. “Almost everyone in the world carries cell phones and this is an unprecedented mass attack on the privacy of citizens worldwide,” said Greg Nojeim, senior counsel at the Center for Democracy & Technology, a non-profit that advocates for digital privacy and free online expression. “While there is certainly value in targeted surveillance of cell phone communications, this coordinated subversion of the trusted technical security infrastructure of cell phones means the US and British governments now have easy access to our mobile communications.”
  • For Gemalto, evidence that their vaunted security systems and the privacy of customers had been compromised by the world’s top spy agencies made an immediate financial impact. The company’s shares took a dive on the Paris bourse Friday, falling $500 million. In the U.S., Gemalto’s shares fell as much as 10 percent Friday morning. They had recovered somewhat — down 4 percent — by the close of trading on the Euronext stock exchange. Analysts at Dutch financial services company Rabobank speculated in a research note that Gemalto could be forced to recall “a large number” of SIM cards. The French daily L’Express noted today that Gemalto board member Alex Mandl was a founding trustee of the CIA-funded venture capital firm In-Q-Tel. Mandl resigned from In-Q-Tel’s board in 2002, when he was appointed CEO of Gemplus, which later merged with another company to become Gemalto. But the CIA connection still dogged Mandl, with the French press regularly insinuating that American spies could infiltrate the company. In 2003, a group of French lawmakers tried unsuccessfully to create a commission to investigate Gemplus’s ties to the CIA and its implications for the security of SIM cards. Mandl, an Austrian-American businessman who was once a top executive at AT&T, has denied that he had any relationship with the CIA beyond In-Q-Tel. In 2002, he said he did not even have a security clearance.
  • AT&T, T-Mobile and Verizon could not be reached for comment Friday. Sprint declined to comment. Vodafone, the world’s second largest telecom provider by subscribers and a customer of Gemalto, said in a statement, “[W]e have no further details of these allegations which are industrywide in nature and are not focused on any one mobile operator. We will support industry bodies and Gemalto in their investigations.” Deutsche Telekom AG, a German company, said it has changed encryption algorithms in its Gemalto SIM cards. “We currently have no knowledge that this additional protection mechanism has been compromised,” the company said in a statement. “However, we cannot rule out this completely.”
  • Update: Asked about the SIM card heist, White House press secretary Josh Earnest said he did not expect the news would hurt relations with the tech industry: “It’s hard for me to imagine that there are a lot of technology executives that are out there that are in a position of saying that they hope that people who wish harm to this country will be able to use their technology to do so. So, I do think in fact that there are opportunities for the private sector and the federal government to coordinate and to cooperate on these efforts, both to keep the country safe, but also to protect our civil liberties.”
  •  
    Watch for massive class action product defect litigation to be filed against the phone companies and mobile device manufacturers. In most U.S. jurisdictions, proof that the vendors/manufacturers knew of the product defect is not required, only proof of the defect. Also, this is a golden opportunity for anyone who wants to get out of a pricey cellphone contract, since providing a compromised cellphone is a material breach of warranty, whether explicit or implied.
Gonzalo San Gil, PhD.

WordPress 4.4.1 Updates for XSS (and 52 other issues) - InternetNews. [# ! Note] - 0 views

  •  
    "January 07, 2016 The first WordPress update of 2016 is out and, like many other incremental updates, it is being triggered by a security vulnerability. The single security issue being patched in WordPress 4.4.1 is a cross-site scripting vulnerability that could have potentially enabled a site compromise."
Paul Merrell

Rural America and the 5G Digital Divide. Telecoms Expanding Their "Toxic Infrastructure" - Global ResearchGlobal Research - Centre for Research on Globalization - 0 views

  • While there is considerable telecom hubris regarding the 5G rollout and increasing speculation that the next generation of wireless is not yet ready for Prime Time, the industry continues to make promises to Rural America that it has no intention of fulfilling. Decades-long promises to deliver digital Utopia to rural America by T-Mobile, Verizon and AT&T have never materialized.  
  • In 2017, the USDA reported that 29% of American farms had no internet access. The FCC says that 14 million rural Americans and 1.2 million Americans living on tribal lands do not have 4G LTE on their phones, and that 30 million rural residents do not have broadband service compared to 2% of urban residents. It’s beginning to sound like a Third World country. Despite an FCC $4.5 billion annual subsidy to carriers to provide broadband service in rural areas, the FCC reports that “over 24 million Americans do not have access to high-speed internet service, the bulk of them in rural areas,” while a Microsoft study found that “162 million people across the US do not have internet service at broadband speeds.” At the same time, only three cable companies have access to 70% of the market in a sweetheart deal to hike rates as they avoid competition and the FCC looks the other way. The FCC believes that it would cost $40 billion to bring broadband access to 98% of the country, with expansion in rural America even more expensive. While the FCC has pledged a $2 billion, ten-year plan to identify rural wireless locations, only 4 million rural American businesses and homes will be targeted, a mere drop in the bucket. Which brings us to rural mapping: Since the advent of the digital age, there have been no accurate maps identifying where broadband service is available in rural America and where it is not available. The FCC has a long history of promulgating unreliable and unverified carrier-provided numbers, as the Commission has repeatedly “bungled efforts to produce accurate broadband maps” that would have facilitated rural coverage. During the Senate Commerce Committee hearing on April 10th regarding broadband mapping, critical testimony questioned whether the FCC and/or the telecom industry have either the commitment or the proficiency to provide 5G to rural America.
Members of the Committee shared concerns that 5G might put rural America so far behind the curve that it never catches up with the rest of the country.
Paul Merrell

Why the Sony hack is unlikely to be the work of North Korea. | Marc's Security Ramblings - 0 views

  • Everyone seems to be eager to pin the blame for the Sony hack on North Korea. However, I think it’s unlikely. Here’s why: 1. The broken English looks deliberately bad and doesn’t exhibit any of the classic comprehension mistakes you actually expect to see in “Konglish”. i.e. it reads to me like an English speaker pretending to be bad at writing English. 2. The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don’t speak traditional “Korean” in North Korea, they speak their own dialect and traditional Korean is forbidden. This is one of the key things that has made communication with North Korean refugees difficult. I would find the presence of Chinese far more plausible.
  • 3. It’s clear from the hard-coded paths and passwords in the malware that whoever wrote it had extensive knowledge of Sony’s internal architecture and access to key passwords. While it’s plausible that an attacker could have built up this knowledge over time and then used it to make the malware, Occam’s razor suggests the simpler explanation of an insider. It also fits with the pure revenge tact that this started out as. 4. Whoever did this is in it for revenge. The info and access they had could have easily been used to cash out, yet, instead, they are making every effort to burn Sony down. Just think what they could have done with passwords to all of Sony’s financial accounts? With the competitive intelligence in their business documents? From simple theft, to the sale of intellectual property, or even extortion – the attackers had many ways to become rich. Yet, instead, they chose to dump the data, rendering it useless. Likewise, I find it hard to believe that a “Nation State” which lives by propaganda would be so willing to just throw away such an unprecedented level of access to the beating heart of Hollywood itself.
  • 5. The attackers only latched onto “The Interview” after the media did – the film was never mentioned by GOP right at the start of their campaign. It was only after a few people started speculating in the media that this and the communication from DPRK “might be linked” that suddenly it became linked. I think the attackers both saw this as an opportunity for “lulz” and as a way to misdirect everyone into thinking it was a nation state. After all, if everyone believes it’s a nation state, then the criminal investigation will likely die.
  • ...4 more annotations...
  • 6. Whoever is doing this is VERY net and social media savvy. That, and the sophistication of the operation, do not match with the profile of DPRK up until now. Grugq did an excellent analysis of this aspect; his findings are here – http://0paste.com/6875#md 7. Finally, blaming North Korea is the easy way out for a number of folks, including the security vendors and Sony management who are under the microscope for this. Let’s face it – most of today’s so-called “cutting edge” security defenses are either so specific, or so brittle, that they really don’t offer much meaningful protection against a sophisticated attacker or group of attackers.
  • 8. It probably also suits a number of political agendas to have something that justifies sabre-rattling at North Korea, which is why I’m not that surprised to see politicians starting to point their fingers at the DPRK also. 9. It’s clear from the leaked data that Sony has a culture which doesn’t take security very seriously. From plaintext password files, to using “password” as the password in business critical certificates, through to just the sheer volume of aging unclassified yet highly sensitive data left out in the open. This isn’t a simple slip-up or a “weak link in the chain” – this is a serious organization-wide failure to implement anything like a reasonable security architecture.
  • The reality is, as things stand, Sony has little choice but to burn everything down and start again. Every password, every key, every certificate is tainted now and that’s a terrifying place for an organization to find itself. This hack should be used as the definitive lesson in why security matters and just how bad things can get if you don’t take it seriously. 10. Who do I think is behind this? My money is on a disgruntled (possibly ex) employee of Sony.
  • EDIT: This appears (at least in part) to be substantiated by a conversation the Verge had with one of the alleged hackers – http://www.theverge.com/2014/11/25/7281097/sony-pictures-hackers-say-they-want-equality-worked-with-staff-to-break-in Finally for an EXCELLENT blow by blow analysis of the breach and the events that followed, read the following post by my friends from Risk Based Security – https://www.riskbasedsecurity.com/2014/12/a-breakdown-and-analysis-of-the-december-2014-sony-hack EDIT: Also make sure you read my good friend Krypt3ia’s post on the hack – http://krypt3ia.wordpress.com/2014/12/18/sony-hack-winners-and-losers/
  •  
    Seems that the FBI overlooked a few clues before it told Obama to go ahead and declare war against North Korea. 
Paul Merrell

Data Transfer Pact Between U.S. and Europe Is Ruled Invalid - The New York Times - 0 views

  • Europe’s highest court on Tuesday struck down an international agreement that allowed companies to move digital information like people’s web search histories and social media updates between the European Union and the United States. The decision left the international operations of companies like Google and Facebook in a sort of legal limbo even as their services continued working as usual.The ruling, by the European Court of Justice, said the so-called safe harbor agreement was flawed because it allowed American government authorities to gain routine access to Europeans’ online information. The court said leaks from Edward J. Snowden, the former contractor for the National Security Agency, made it clear that American intelligence agencies had almost unfettered access to the data, infringing on Europeans’ rights to privacy. The court said data protection regulators in each of the European Union’s 28 countries should have oversight over how companies collect and use online information of their countries’ citizens. European countries have widely varying stances towards privacy.
  • Data protection advocates hailed the ruling. Industry executives and trade groups, though, said the decision left a huge amount of uncertainty for big companies, many of which rely on the easy flow of data for lucrative businesses like online advertising. They called on the European Commission to complete a new safe harbor agreement with the United States, a deal that has been negotiated for more than two years and could limit the fallout from the court’s decision.
  • Some European officials and many of the big technology companies, including Facebook and Microsoft, tried to play down the impact of the ruling. The companies kept their services running, saying that other agreements with the European Union should provide an adequate legal foundation. But those other agreements are now expected to be examined and questioned by some of Europe’s national privacy watchdogs. The potential inquiries could make it hard for companies to transfer Europeans’ information overseas under the current data arrangements. And the ruling appeared to leave smaller companies with fewer legal resources vulnerable to potential privacy violations.
  • ...3 more annotations...
  • “We can’t assume that anything is now safe,” said Brian Hengesbaugh, a privacy lawyer with Baker & McKenzie in Chicago who helped to negotiate the original safe harbor agreement. “The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.” At issue is the sort of personal data that people create when they post something on Facebook or other social media; when they do web searches on Google; or when they order products or buy movies from Amazon or Apple. Such data is hugely valuable to companies, which use it in a broad range of ways, including tailoring advertisements to individuals and promoting products or services based on users’ online activities. The data-transfer ruling does not apply solely to tech companies. It also affects any organization with international operations, such as when a company has employees in more than one region and needs to transfer payroll information or allow workers to manage their employee benefits online.
  • But it was unclear how bulletproof those treaties would be under the new ruling, which cannot be appealed and went into effect immediately. Europe’s privacy watchdogs, for example, remain divided over how to police American tech companies. France and Germany, where companies like Facebook and Google have huge numbers of users and have already been subject to other privacy rulings, are among the countries that have sought more aggressive protections for their citizens’ personal data. Britain and Ireland, among others, have been supportive of Safe Harbor, and many large American tech companies have set up overseas headquarters in Ireland.
  • “For those who are willing to take on big companies, this ruling will have empowered them to act,” said Ot van Daalen, a Dutch privacy lawyer at Project Moore, who has been a vocal advocate for stricter data protection rules. The safe harbor agreement has been in place since 2000, enabling American tech companies to compile data generated by their European clients in web searches, social media posts and other online activities.
  •  
    Another take on it from EFF: https://www.eff.org/deeplinks/2015/10/europes-court-justice-nsa-surveilance Expected since the Court's Advocate General released an opinion last week, presaging today's opinion. Very big bucks involved behind the scenes because removing U.S.-based internet companies from the scene in the E.U. would pave the way for growth of E.U.-based companies. The way forward for the U.S. companies is even more dicey because of a case now pending in the U.S. The Second U.S. Circuit Court of Appeals is about to decide a related case in which Microsoft was ordered by the lower court to produce email records stored on a server in Ireland. Should the Second Circuit uphold the order and the Supreme Court deny review, then under the principles announced today by the Court in the E.U., no U.S.-based company could ever be allowed to have "possession, custody, or control" of the data of E.U. citizens. You can bet that the E.U. case will weigh heavily in the Second Circuit's deliberations. The E.U. decision is by far and away the largest legal event yet flowing out of the Edward Snowden disclosures, tectonic in scale. Up to now, Congress has succeeded in confining all NSA reforms to apply only to U.S. citizens. But now the large U.S. internet companies, Google, Facebook, Microsoft, Dropbox, etc., face the loss of all Europe as a market. Congress *will* be forced by their lobbying power to extend privacy protections to "non-U.S. persons." Thank you again, Edward Snowden.
Gonzalo San Gil, PhD.

Dance to the Holy Man: Silencers: Music @ Amazon.com - 3 views

  •  
    - 37$ for a Disc from 1991 is Thievery. Full Stop. Who Is Killing Music....? @ll We Know. [ Silencers | Format: Audio CD | 5.0 out of 5 stars (1 customer review) | Price: $37.60 & this item ships for FREE with Super Saver Shipping ]
Paul Merrell

No Fake Internet - 0 views

  • Zuckerberg's Internet.org will control what billions do online People in countries like India, Zimbabwe, Brazil, and Paraguay are speaking out about Facebook's so-called Internet.org platform and its ability to control what billions of Internet users can do online. Zuckerberg's partnership with telecom giants, Internet.org, provides access to a fake Internet where selected services are prioritized over others. This scheme threatens innovation, free expression, and privacy online. It blocks many of the websites, apps, and services the world loves from being made available on equal terms. The fake Internet will also restrict access to local service providers struggling to get a foothold online. We all deserve access to the real open Internet. Stand with people around the world demanding Zuckerberg stops restricting access to the open Internet.
Gary Edwards

Do we need two open source office suites? | TalkBack on ZDNet - 0 views

  • Symphony isn't based on Lotus 1-2-3 and AmiPro (WordPro). It's originally based on OpenOffice 1.1.4, and has since been updated by Sun's StarOffice group to OpenOffice 2 something. The history here is that IBM ripped off the OpenOffice 1.1.4 code base when it was still under the dual SISSL/LGPL license. Here it languished as IBM "WorkPlace", finally to be released as Lotus Symphony.
  •  
    Response to ZDNet article about Lotus Symphony and OpenOffice. Dana gets it terribly wrong, claiming that Lotus Symphony is "open Source". I respond by setting the record straight. Couldn't help myself though. I dove into the whole "rip out and replace", government mandates, ODF vs. OOXML thing. ending of course with the transition from client/server to client/Web-Stack/server and the future of the Web.
Gonzalo San Gil, PhD.

Internet piracy traffic - Havocscope [# ! + Note] - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! perhaps 1 of every 4 Dollars/Users from Internet Busines -for the ISPs included- comes from such 'pirates'... (# ! that's what can be named as 'A #Culture'... # ! ... to be '#Protected...)
  •  
    [ Counterfeit Goods: Piracy on the Internet of movies, music, video games and television shows makes up to 24 percent of all Internet traffic worldwide. Source: Gautham Nagesh, "Study: 24 percent of Web traffic involves piracy," Hillicon Valley Blog, The Hill, February 1, 2011. ]
Gary Edwards

Duke Engines' incredibly compact, lightweight valveless axial engine - 0 views

  • The Duke engine is an axial design, meaning that its five cylinders encircle the drive shaft and run parallel with it. The pistons drive a star-shaped reciprocator, which nutates around the drive shaft, kind of like a spinning coin coming to rest on a table.
  • The reciprocator's center point is used to drive the central drive shaft, which rotates in the opposite direction to the reciprocator. "That counter-rotation keeps it in tidy balance," says Duke co-founder John Garvey. "If you lay your hand on it while it's running, you can barely detect any motion at all, it's quite remarkable." That's borne out by the video below, where the engine revving doesn't even cause enough vibrations to tip a coin off its side.
  • Instead of cam- or pneumatically-operated intake and outlet valves, the cylinders rotate past intake and outlet ports in a stationary head ring. The spark plugs are also mounted in this stationary ring – the cylinders simply slide past each port or plug at the stage of the cycle it's needed for and move on. In this way, Duke eliminates all the complexity of valve operation and manages to run a five-cylinder engine with just three spark plugs and three fuel injectors. The Duke engine ends up delivering as many power strokes per revolution as a six cylinder engine, but with huge weight savings and a vast reduction in the number of engine parts.
  • ...1 more annotation...
  • The engine has shown excellent resistance to pre-ignition (or detonation) – potentially because its cylinders tend to run cooler than comparable engines. Duke has run compression ratios as high as 14:1 with regular 91-octane gasoline. This suggests that further developments will pull even more power out of a given amount of fuel, increasing the overall efficiency of the unit.
  •  
    Watch the second video! This is extraordinary. "New Zealand's Duke Engines has been busy developing and demonstrating excellent results with a bizarre axial engine prototype that completely does away with valves, while delivering excellent power and torque from an engine much smaller, lighter and simpler than the existing technology. We spoke with Duke co-founder John Garvey to find out how the Duke Axial Engine project is going."
Paul Merrell

Firefox, YouTube and WebM ✩ Mozilla Hacks - the Web developer blog - 1 views

  • 1. Google will be releasing VP8 under an open source and royalty-free basis. VP8 is a high-quality video codec that Google acquired when they purchased the company On2. The VP8 codec represents a vast improvement in quality-per-bit over Theora and is comparable in quality to H.264.
    2. The VP8 codec will be combined with the Vorbis audio codec and a subset of the Matroska container format to build a new standard for Open Video on the web called WebM. You can find out more about the project at its new site: http://www.webmproject.org/.
    3. We will include support for WebM in Firefox. You can get super-early WebM builds of Firefox 4 pre-alpha today. WebM will also be included in Google Chrome and Opera.
    4. Every video on YouTube will be transcoded into WebM. They have about 1.2 million videos available today and will be working through their back catalog over time. But they have committed to supporting everything.
    5. This is something that is supported by many partners, not just Google. Content providers like Brightcove have signed up to support WebM as part of a full HTML5 video solution. Hardware companies, encoding providers and other parts of the video stack are all part of the list of companies backing WebM. Even Adobe will be supporting WebM in Flash. Firefox, with its market share and principled leadership, and YouTube, with its video reach, are the most important partners in this solution, but we are only a small part of the larger ecosystem of video.
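    The WebM stack described above (VP8 video plus Vorbis audio in a Matroska-derived container) is what browsers advertise through the standard HTMLMediaElement.canPlayType API, which returns "probably", "maybe", or "". The sketch below is a minimal, hypothetical helper, not code from the post: the source list, file names, and MIME strings are illustrative, and in a real page the probe would be the browser's own video.canPlayType.

    ```javascript
    // Hypothetical helper: given a canPlayType-style probe, pick the source
    // the browser is most confident it can play. MIME strings follow the
    // WebM (VP8 + Vorbis) and H.264 forms discussed in the post.
    function pickSource(sources, canPlayType) {
      // canPlayType returns "probably", "maybe", or "" per the HTML5 spec.
      const rank = { probably: 2, maybe: 1, "": 0 };
      let best = null;
      let bestRank = 0;
      for (const src of sources) {
        const r = rank[canPlayType(src.type)] ?? 0;
        if (r > bestRank) { best = src; bestRank = r; }
      }
      return best;
    }

    // Illustrative source list (URLs are placeholders, not from the article).
    const sources = [
      { url: "clip.webm", type: 'video/webm; codecs="vp8, vorbis"' },
      { url: "clip.mp4",  type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' },
    ];

    // Simulated probe for a WebM-capable browser; in a real page you would use:
    //   const video = document.createElement("video");
    //   pickSource(sources, (t) => video.canPlayType(t));
    const webmBrowser = (t) => (t.startsWith("video/webm") ? "probably" : "maybe");
    console.log(pickSource(sources, webmBrowser).url); // → "clip.webm"
    ```

    The ranking mirrors the spec's intent: "probably" beats "maybe", and an empty string means the type is unsupported, so a browser without WebM support would fall through to the H.264 source.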
Gary Edwards

MS finally to bring Office to the Web, Windows smart phones - NYTimes.com - 0 views

  •  
    Last week, Microsoft reported that revenue from the Microsoft business division (MBD) grew 20% year over year to US$4.95 billion in the most recent quarter. That is more than Microsoft's client division, which makes Windows. Most of MBD's revenue comes from Office, though Microsoft doesn't break out an exact percentage. Windows has 1 billion users; Office has only 500 million.
    Consumers will be able to subscribe to Office Web and even get it at a discount price, provided they are willing to view Web ads. Business customers seeking "more manageability and control" will be able to buy subscriptions to Office Web similar to the subscription Microsoft offers for a bundle combining Web-based versions of Exchange and SharePoint. That costs $3 per user per month. Enterprises may also get Office Web through conventional volume licensing contracts, which will allow them either to install Office on desktop and other client PCs or to have Microsoft host it on their servers.
    Unlike non-Microsoft products (Google Docs, ZOHO, BuzzWord), Office Web will guarantee that the "viewing experience is fantastic" and that formatting and metadata from Office documents don't "get munged up." Office Web will provide a superior "end-to-end solution" by letting users view and edit documents whenever they want to, in browsers such as Firefox, Internet Explorer and Safari and on Windows Mobile smart phones. The Office Web focus will be on business productivity, according to Chris Capossela. The Office Web experience can be enhanced by Silverlight (Microsoft RIA).
Paul Merrell

Court Approves F.C.C. Plan to Subsidize Rural Broadband Service - NYTimes.com - 0 views

  • A federal appeals court on Friday upheld the Federal Communications Commission’s effort to convert its $4.5 billion program that pays for telephone service in rural parts of the country into one that subsidizes high-speed Internet service in high-cost areas.The program, known as Connect America, is the largest portion of the $8 billion Universal Service Fund, which pays for a variety of efforts to provide telecommunications links to schools, low-income families and others.In October 2011, the F.C.C. approved an overhaul of the fund. Soon after its approval, however, the effort was challenged in court by dozens of phone companies. Many were small carriers that provided service in rural areas and that stood to lose annual subsidies because of the changes.The United States Court of Appeals for the Tenth Circuit, in Denver, rejected the phone companies’ arguments because their claims were “either unpersuasive or barred from judicial review.”