Future of the Web / Group items tagged "case"


Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commo... - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” That broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • ...17 more annotations...
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grow, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation in virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property-the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable-airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Gonzalo San Gil, PhD.

How 2 Legal Cases May Decide the Future of Open Source Software | Network World [# ! Pe... - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! This is it: the 'problem' is not 'Open Source' but # ! 'those' who misuse it...
    • Gonzalo San Gil, PhD.
       
      # ! The attacks on Open Source continue... # ! I wonder why... and take a stand for freedom.
  •  
    [ The open source universe may soon be less collaborative and more litigious. Two cases now in the courts could open the legal floodgates. By Paul Rubens, CIO | Mar 6, 2015 6:00 AM PT ...]
Paul Merrell

Australia bans reporting of multi-nation corruption case involving Malaysia, Indonesia ... - 0 views

  • Today, 29 July 2014, WikiLeaks releases an unprecedented Australian censorship order concerning a multi-million dollar corruption case explicitly naming the current and past heads of state of Indonesia, Malaysia and Vietnam, their relatives and other senior officials. The super-injunction invokes “national security” grounds to prevent reporting about the case, by anyone, in order to “prevent damage to Australia's international relations”. The court-issued gag order follows the secret 19 June 2014 indictment of seven senior executives from subsidiaries of Australia's central bank, the Reserve Bank of Australia (RBA). The case concerns allegations of multi-million dollar inducements made by agents of the RBA subsidiaries Securency and Note Printing Australia in order to secure contracts for the supply of Australian-style polymer bank notes to the governments of Malaysia, Indonesia, Vietnam and other countries. Read the full press release here. Download the full Australia-wide censorship order for corruption case involving Malaysia, Indonesia and Vietnam.
Paul Merrell

Court upholds NSA snooping | TheHill - 0 views

  • A district court in California has issued a ruling in favor of the National Security Agency in a long-running case over the spy agency’s collection of Internet records. The challenge against the controversial Upstream program was tossed out because additional defense from the government would have required “impermissible disclosure of state secret information,” Judge Jeffrey White wrote in his decision. Under the program — details of which were revealed through leaks from Edward Snowden and others — the NSA taps into the fiber cables that make up the backbone of the Internet and gathers information about people's online and phone communications. The agency then filters out communications of U.S. citizens, whose data is protected with legal defenses not extended to foreigners, and searches for “selectors” tied to a terrorist or other target. In 2008, the Electronic Frontier Foundation (EFF) sued the government over the program on behalf of five AT&T customers, who said that the collection violated the constitutional protections to privacy and free speech.
  • But “substantial details” about the program still remain classified, White, an appointee under former President George W. Bush, wrote in his decision. Moving forward with the merits of a trial would risk “exceptionally grave damage to national security,” he added. The government has been “persuasive” in using its state secrets privilege, he continued, which allows it to withhold evidence from a case that could severely jeopardize national security. In addition to saying that the program appeared constitutional, the judge also found that the AT&T customers did not even have the standing to sue the NSA over its data gathering. While they may be AT&T customers, White wrote that the evidence presented to the court was “insufficient to establish that the Upstream collection process operates in the manner” that they say it does, which makes it impossible to tell if their information was indeed collected in the NSA program. The decision is a stinging rebuke to critics of the NSA, who have seen public interest in their cause slowly fade in the months since Snowden’s revelations.
  • The EFF on Tuesday evening said that it was considering next steps and noted that the court focused on just one program, not the totality of the NSA’s controversial operations. “It would be a travesty of justice if our clients are denied their day in court over the ‘secrecy’ of a program that has been front-page news for nearly a decade,” the group said in a statement. “We will continue to fight to end NSA mass surveillance.” The name of the case is Jewel v. NSA.
  •  
    The article should have mentioned that the decision was on cross-motions for *partial* summary judgment. The Jewel case will proceed on other plaintiff claims. 
Paul Merrell

Dept. of Justice Accuses Google of Illegally Protecting Monopoly - The New York Times - 1 views

  • The Justice Department accused Google on Tuesday of illegally protecting its monopoly over search and search advertising, the government’s most significant challenge to a tech company’s market power in a generation and one that could reshape the way consumers use the internet. In a much-anticipated lawsuit, the agency accused Google of locking up deals with giant partners like Apple and throttling competition through exclusive business contracts and agreements. Google’s deals with Apple, mobile carriers and other handset makers to make its search engine the default option for users accounted for most of its dominant market share in search, the agency said, a figure that it put at around 80 percent. “For many years,” the agency said in its 57-page complaint, “Google has used anticompetitive tactics to maintain and extend its monopolies in the markets for general search services, search advertising and general search text advertising — the cornerstones of its empire.” The lawsuit, which may stretch on for years, could set off a cascade of other antitrust lawsuits from state attorneys general. About four dozen states and jurisdictions, including New York and Texas, have conducted parallel investigations and some of them are expected to bring separate complaints against the company’s grip on technology for online advertising. Eleven state attorneys general, all Republicans, signed on to support the federal lawsuit.
  • The Justice Department did not immediately put forward remedies, such as selling off parts of the company or unwinding business contracts, in the lawsuit. Such actions are typically pursued in later stages of a case. Ryan Shores, an associate deputy attorney general, said “nothing is off the table” in terms of remedies.
  • Democratic lawmakers on the House Judiciary Committee released a sprawling report on the tech giants two weeks ago, also accusing Google of controlling a monopoly over online search and the ads that come up when users enter a query.
  • ...1 more annotation...
  • Google last faced serious scrutiny from an American antitrust regulator nearly a decade ago, when the Federal Trade Commission investigated whether it had abused its power over the search market. The agency’s staff recommended bringing charges against the company, according to a memo reported on by The Wall Street Journal. But the agency’s five commissioners voted in 2013 not to bring a case. Other governments have been more aggressive toward the big tech companies. The European Union has brought three antitrust cases against Google in recent years, focused on its search engine, advertising business and Android mobile operating system. Regulators in Britain and Australia are examining the digital advertising market, in inquiries that could ultimately implicate the company. “It’s the most newsworthy monopolization action brought by the government since the Microsoft case in the late ’90s,” said Bill Baer, a former chief of the Justice Department’s antitrust division. “It’s significant in that the government believes that a highly successful tech platform has engaged in conduct that maintains its monopoly power unlawfully, and as a result injures consumers and competition.”
Gonzalo San Gil, PhD.

Time to #Fixcopyright and Free the Panorama Across EU - infojustice - 0 views

  •  
    "[Anna Mazgal, Communia Association, Link (CC-0)] Freedom of panorama is a fundamental element of European cultural heritage and visual history. Rooted in freedom of expression, it allows painters, photographers, filmmakers, journalists and tourists alike to document public spaces, create masterpieces of art and memories of beautiful places, and freely share it with others. Within the Best Case Scenarios for Copyright series we present Portugal as the best example for freedom of panorama. Below you can find the basic facts and for more evidence check the Best Case Scenario for Copyright - Freedom of Panorama in Portugal legal study. EU, it's time to #fixcopyright!"
Gonzalo San Gil, PhD.

SCO Again Returns From Dead, Plans Appeal | FOSS Force - 1 views

  •  
    "FOSS Force Staff FOSS Force has learned that we shouldn't write obituaries until we actually see a death certificate. SCO intends to file an appeal over the dismissal of its case against IBM."
Gary Edwards

Be Prepared: 5 Android Apps You Must Have in Case of Emergency - Best Android Apps - 2 views

  •  
    "WHEN CALAMITY STRIKES The unpredictable nature of most emergencies often puts us in dangerous situations. But today's technology has come far enough to inform, alert, secure and guide us through dangerous situation, if we happen to face any. Emergencies may come in the form of environmental or circumstantial problems. Therefore it is always better to accompany yourself with at least 5 apps that can help you in case of any emergency. Here is the list of top 5 emergency apps for Android."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
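    A minimal sketch of that kind of extension follows; the class names (chapter, epigraph, summary) are invented for the example, not any standard vocabulary:

        <!-- Plain XHTML, with semantics layered on through the class attribute -->
        <div class="chapter">
          <h2>Chapter One</h2>
          <p class="epigraph">An opening quotation sets the tone.</p>
          <p class="summary">A one-paragraph abstract of the chapter.</p>
          <p>Ordinary body text stays an ordinary paragraph.</p>
        </div>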
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
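    The anatomy is easy to verify: rename an .epub file to .zip and unpack it. One of the standard component parts, META-INF/container.xml, does nothing more than point a reading system at the package file. The sketch below follows the EPUB container convention; the OEBPS/content.opf path is the customary choice rather than a requirement:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- META-INF/container.xml: tells the reading system where the package document lives -->
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>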
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
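    Unpacking an .idml file (again, just a renamed zip) reveals a layout roughly like the listing below; the exact member names vary from document to document, so treat it as indicative rather than definitive:

        <!-- Typical members of an unpacked IDML archive (names are illustrative):
             mimetype
             designmap.xml                   the map that ties the other parts together
             MasterSpreads/MasterSpread_*.xml
             Spreads/Spread_*.xml            layout geometry
             Stories/Story_*.xml             the actual text content
             Resources/Styles.xml            paragraph and character style definitions
             Resources/Fonts.xml, Resources/Graphic.xml
             META-INF/container.xml
        -->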
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
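    The flavour of that cleanup, in a made-up fragment (the "bullet-list" class name stands in for whatever the export actually emitted):

        <!-- Before: list items exported as styled paragraphs -->
        <p class="bullet-list">First point</p>
        <p class="bullet-list">Second point</p>

        <!-- After: structural XHTML, independent of any particular class name -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>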
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
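    The heart of such a script can be sketched in a single template. The ICML element names below follow Adobe's InCopy markup, but the style paths ("ParagraphStyle/Body" and so on) are assumptions that must match styles defined in the InDesign template, and a real script also has to emit the ICML document wrapper and handle inline markup. It could be run with, for example, xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml (file names hypothetical):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>

          <!-- Map an ordinary XHTML paragraph onto an ICML paragraph range;
               AppliedParagraphStyle must name a style defined in the InDesign template -->
          <xsl:template match="xhtml:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>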
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
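    The wrapper that eCub generates boils down to a package document along the lines of this EPUB 2-style sketch; the title, identifier and file names are placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- OEBPS/content.opf: metadata, a manifest of every file, and the reading order (spine) -->
        <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
          <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Book Publishing 1</dc:title>
            <dc:language>en</dc:language>
            <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
          </metadata>
          <manifest>
            <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
            <item id="css" href="book.css" media-type="text/css"/>
            <item id="chap01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
            <item id="chap02" href="chapter02.xhtml" media-type="application/xhtml+xml"/>
          </manifest>
          <spine toc="ncx">
            <itemref idref="chap01"/>
            <itemref idref="chap02"/>
          </spine>
        </package>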
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

A Year Ago, The European Supreme Court Appears To Have Ruled The Whole Web To Be In The... - 1 views

  •  
    "On February 13, 2014, the European Court of Justice - the Supreme Court of the European Union - appears to have ruled that anything published on the web may be re-published freely by anybody else. The case concerned linking, but the court went beyond linking in its ruling. This case has not really been noticed, nor have its effects been absorbed by the community at large."
Paul Merrell

EU/Antitrust cases from 39514 to 39592 - 0 views

  • COMP/39.530 - Microsoft (Tying). 14.01.2008, Memo: Antitrust: Commission initiates formal investigations against Microsoft in two cases of suspected abuse of dominant market position. 14.01.2008: Opening of Proceedings. Concerns economic activity: C33.2
  •  
    When the DG Competition statement of objections regarding the tying of MSIE to Windows appears, it should appear here, under COMP/39.530.
Paul Merrell

Rapid - Press Releases - EUROPA - 0 views

  • Did the Commission co-operate with the United States on this case? The Commission and the United States Federal Trade Commission have kept each other regularly and closely informed on the state of play of their respective Intel investigations. These discussions have been held in a co-operative and friendly atmosphere, and have been substantively fruitful in terms of sharing experiences on issues of common interest.
  • Where does the money go? Once final judgment has been delivered in any appeals before the Court of First Instance (CFI) and the Court of Justice, the money goes into the EU’s central budget, thus reducing the contributions that Member States pay to the EU. Does Intel have to pay the fine if it appeals to the European Court of First Instance (CFI)? Yes. In case of appeals to the CFI, it is normal practice that the fine is paid into a blocked bank account pending the final outcome of the appeals process. Any fine that is provisionally paid will produce interest based on the interest rate applied by the European Central Bank to its main refinancing operations. In exceptional circumstances, companies may be allowed to cover the amount of the fine by a bank guarantee at a higher interest rate. What percentage of Intel's turnover does the fine represent? The fine represents 4.15 % of Intel's turnover in 2008. This is less than half the allowable maximum, which is 10% of a company's annual turnover.
  • How long is the Decision? The Decision is 542 pages long. When is the Decision going to be published? The Decision in English (the official language version of the Decision) will be made available as soon as possible on DG Competition’s website (once relevant business secrets have been taken out). French and German translations will also be made available on DG Competition’s website in due course. A summary of the Decision will be published in the EU's Official Journal L series in all languages (once the translations are available).
Gonzalo San Gil, PhD.

The energy and greenhouse-gas implications of internet video streaming in the United St... - 0 views

  •  
    [# ! Via Francisco Manuel Hernandez Sosa's FB...] OPEN ACCESS - Arman Shehabi, Ben Walker and Eric Masanet: "The rapid growth of streaming video entertainment has recently received attention as a possibly less energy intensive alternative to the manufacturing and transportation of digital video discs (DVDs). This study utilizes a life-cycle assessment approach to estimate the primary energy use and greenhouse-gas emissions associated with video viewing through both traditional DVD methods and online video streaming. Base-case estimates for 2011 video viewing energy and CO2(e) emission intensities indicate video streaming can be more efficient than DVDs, depending on DVD viewing method."
Paul Merrell

DOJ Pushes to Expand Hacking Abilities Against Cyber-Criminals - Law Blog - WSJ - 0 views

  • The U.S. Department of Justice is pushing to make it easier for law enforcement to get warrants to hack into the computers of criminal suspects across the country. The move, which would alter federal court rules governing search warrants, comes amid increases in cases related to computer crimes. Investigators say they need more flexibility to get warrants to allow hacking in such cases, especially when multiple computers are involved or the government doesn’t know where the suspect’s computer is physically located. The Justice Department effort is raising questions among some technology advocates, who say the government should focus on fixing the holes in computer software that allow such hacking instead of exploiting them. Privacy advocates also warn government spyware could end up on innocent people’s computers if remote attacks are authorized against equipment whose ownership isn’t clear.
  • The government’s push for rule changes sheds light on law enforcement’s use of remote hacking techniques, which are being deployed more frequently but have been protected behind a veil of secrecy for years. In documents submitted by the government to the judicial system’s rule-making body this year, the government discussed using software to find suspected child pornographers who visited a U.S. site and concealed their identity using a strong anonymization tool called Tor. The government’s hacking tools—such as sending an email embedded with code that installs spying software — resemble those used by criminal hackers. The government doesn’t describe these methods as hacking, preferring instead to use terms like “remote access” and “network investigative techniques.” Right now, investigators who want to search property, including computers, generally need to get a warrant from a judge in the district where the property is located, according to federal court rules. In a computer investigation, that might not be possible, because criminals can hide behind anonymizing technologies. In cases involving botnets—groups of hijacked computers—investigators might also want to search many machines at once without getting that many warrants.
  • Some judges have already granted warrants in cases when authorities don’t know where the machine is. But at least one judge has denied an application in part because of the current rules. The department also wants warrants to be allowed for multiple computers at the same time, as well as for searches of many related storage, email and social media accounts at once, as long as those accounts are accessed by the computer being searched. “Remote searches of computers are often essential to the successful investigation” of computer crimes, Acting Assistant Attorney General Mythili Raman wrote in a letter to the judicial system’s rulemaking authority requesting the change in September. The government tries to obtain these “remote access warrants” mainly to “combat Internet anonymizing techniques,” the department said in a memo to the authority in March. Some groups have raised questions about law enforcement’s use of hacking technologies, arguing that such tools mean the government is failing to help fix software problems exploited by criminals. “It is crucial that we have a robust public debate about how the Fourth Amendment and federal law should limit the government’s use of malware and spyware within the U.S.,” said Nathan Wessler, a staff attorney at the American Civil Liberties Union who focuses on technology issues.
  • A Texas judge who denied a warrant application last year cited privacy concerns associated with sending malware when the location of the computer wasn’t known. He pointed out that a suspect opening an email infected with spyware could be doing so on a public computer, creating a risk that information would be collected from innocent people. A former computer crimes prosecutor serving on an advisory committee of the U.S. Judicial Conference, which is reviewing the request, said he was concerned that allowing the search of multiple computers under a single warrant would violate the Fourth Amendment’s protections against overly broad searches. The proposed rule is set to be debated by the Judicial Conference’s Advisory Committee on Criminal Rules in early April, after which it would be opened to public comment.
Gonzalo San Gil, PhD.

Infamous "podcast patent" heads to trial | Ars Technica - 0 views

  •  
    [# ! Patents turned into a collecting business instead of an invention promotion mechanism :/ ...] "A few years later, "monetizing" patents through lawsuits turned into an industry of its own. Logan turned his patents into a powerhouse licensing machine, beginning with a case filed against Apple in 2009. He went to trial and won $8 million. Settlements with other industry giants, like Samsung and Amazon, followed."
Paul Merrell

After Brit spies 'snoop' on families' lawyers, UK govt admits: We flouted human rights ... - 0 views

  • The British government has admitted that its practice of spying on confidential communications between lawyers and their clients was a breach of the European Convention on Human Rights (ECHR). Details of the controversial snooping emerged in November: lawyers suing Blighty over its rendition of two Libyan families to be tortured by the late and unlamented Gaddafi regime claimed Her Majesty's own lawyers seemed to have access to the defense team's emails. The families' briefs asked for a probe by the secretive Investigatory Powers Tribunal (IPT), a move that led to Wednesday's admission. "The concession the government has made today relates to the agencies' policies and procedures governing the handling of legally privileged communications and whether they are compatible with the ECHR," a government spokesman said in a statement to the media, via the Press Association. "In view of recent IPT judgments, we acknowledge that the policies applied since 2010 have not fully met the requirements of the ECHR, specifically Article 8. This includes a requirement that safeguards are made sufficiently public."
  • The guidelines revealed by the investigation showed that MI5 – which handles the UK's domestic security – had free rein to spy on highly private and sensitive lawyer-client conversations between April 2011 and January 2014. MI6, which handles foreign intelligence, had no rules on the matter either until 2011, and even those were considered void if "extremists" were involved. Britain's answer to the NSA, GCHQ, had rules against such spying, but they too were relaxed in 2011. "By allowing the intelligence agencies free rein to spy on communications between lawyers and their clients, the Government has endangered the fundamental British right to a fair trial," said Cori Crider, a director at the non-profit Reprieve and one of the lawyers for the Libyan families. "For too long, the security services have been allowed to snoop on those bringing cases against them when they speak to their lawyers. In doing so, they have violated a right that is centuries old in British common law. Today they have finally admitted they have been acting unlawfully for years."
  • Crider said it now seemed probable that UK snoopers had been listening in on the communications over the Libyan case. The British government hasn't admitted guilt, but it has at least acknowledged that it was doing something wrong – sort of. "It does not mean that there was any deliberate wrongdoing on the part of the security and intelligence agencies, which have always taken their obligation to protect legally privileged material extremely seriously," the government spokesman said. "Nor does it mean that any of the agencies' activities have prejudiced or in any way resulted in an abuse of process in any civil or criminal proceedings. The agencies will now work with the independent Interception of Communications Commissioner to ensure their policies satisfy all of the UK's human rights obligations." So that's all right, then.
  •  
    If you follow the "November" link you'll learn that yes, indeed, the UK government lawyers were happily getting the content of their adversaries' privileged attorney-client communications. Conspicuously, the promises of reform make no mention of what is surely a disbarment offense in the U.S. I doubt that it's different in the UK. Discovery rules of procedure strictly limit how parties may obtain information from the other side. Wiretapping the other side's lawyers is not a permitted form of discovery. Hopefully, at least the government lawyers in the case in which the misbehavior was discovered have been referred for disciplinary action.
Gonzalo San Gil, PhD.

Lawsuit threatens to break new ground on the GPL and software licensing issues | Openso... - 3 views

  •  
    "When Versata Software sued Ameriprise Financial Services for breaching its software license, it unwittingly unearthed a GPL violation of its own and touched off another lawsuit that could prove to be a leading case on free and open source software licensing. This post takes a look at the legal issues raised by both cases and what they mean for FOSS producers and users."
Gonzalo San Gil, PhD.

4shared Wins Court Case to Overcome Piracy Blockade - TorrentFreak - 0 views

  •  
    " Ernesto on February 19, 2016 C: 12 Breaking Popular file-hosting service 4shared has won a court case against the South Korean authorities who placed the site on a national piracy blocklist. While 4shared's users occasionally host pirated files, the court concludes that it can't be seen as a service that is setup specifically to facilitate copyright infringement."
Paul Merrell

Apple's New Challenge: Learning How the U.S. Cracked Its iPhone - The New York Times - 0 views

  • Now that the United States government has cracked open an iPhone that belonged to a gunman in the San Bernardino, Calif., mass shooting without Apple’s help, the tech company is under pressure to find and fix the flaw. But unlike other cases where security vulnerabilities have cropped up, Apple may face a higher set of hurdles in ferreting out and repairing the particular iPhone hole that the government hacked. The challenges start with the lack of information about the method that the law enforcement authorities, with the aid of a third party, used to break into the iPhone of Syed Rizwan Farook, an attacker in the San Bernardino rampage last year. Federal officials have refused to identify the person, or organization, who helped crack the device, and have declined to specify the procedure used to open the iPhone. Apple also cannot obtain the device to reverse-engineer the problem, the way it would in other hacking situations.
  •  
    It would make a very interesting Freedom of Information Act case if Apple sued under that Act to force disclosure of the iPhone security defect the FBI exploited. I know of no interpretation of the law enforcement FOIA exemption that would justify FBI withholding of the information. It might be alleged that the information is the trade secret of the company that disclosed the defect and exploit to the FBI, but there's a very strong argument that the fact that the information was shared with the FBI waived the trade secrecy claim. And the notion that the government is entitled to collect product security defects and exploit them without informing the affected company of the specific defect is extremely weak.  Were I Tim Cook, I would have already told my lawyers to get cracking on filing the FOIA request with the FBI to get the legal ball rolling. 
Paul Merrell

Judge "Disturbed" To Learn Google Tracks 'Incognito' Users, Demands Answers | ZeroHedge - 1 views

  • A US District Judge in San Jose, California says she was "disturbed" over Google's data collection practices, after learning that the company still collects and uses data from users in its Chrome browser's so-called 'incognito' mode - and has demanded an explanation "about what exactly Google does," according to Bloomberg.
  • In a class-action lawsuit that describes the company's private browsing claims as a "ruse" - and "seeks $5,000 in damages for each of the millions of people whose privacy has been compromised since June of 2016," US District Judge Lucy Koh said she finds it "unusual" that the company would make the "extra effort" to gather user data if it doesn't actually use the information for targeted advertising or to build user profiles. Koh has a long history with the Alphabet Inc. subsidiary, previously forcing the Mountain View, California-based company to disclose its scanning of emails for the purposes of targeted advertising and profile building. In this case, Google is accused of relying on pieces of its code within websites that use its analytics and advertising services to scrape users’ supposedly private browsing history and send copies of it to Google’s servers. Google makes it seem like private browsing mode gives users more control of their data, Amanda Bonn, a lawyer representing users, told Koh. In reality, “Google is saying there’s basically very little you can do to prevent us from collecting your data, and that’s what you should assume we’re doing,” Bonn said. Andrew Schapiro, a lawyer for Google, argued the company’s privacy policy “expressly discloses” its practices. “The data collection at issue is disclosed,” he said. Another lawyer for Google, Stephen Broome, said website owners who contract with the company to use its analytics or other services are well aware of the data collection described in the suit. -Bloomberg
  • Koh isn't buying it - arguing that the company is effectively tricking users who are under the impression that their information is not being transmitted to the company. "I want a declaration from Google on what information they’re collecting on users to the court’s website, and what that’s used for," Koh demanded. The case is Brown v. Google, 20-cv-03664, U.S. District Court, Northern District of California (San Jose), via Bloomberg.
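  • For readers unfamiliar with the mechanics at issue, here is a minimal sketch (plain TypeScript, not Google's actual code; the collector URL and payload fields are hypothetical) of how a script embedded in a web page can report each visit to a remote server. Incognito mode keeps the browser from saving history and cookies locally, but it does not block network requests made by scripts running on the page, which is why embedded analytics code can still send data out.

    // Hypothetical page-tracking snippet for illustration only.
    const COLLECTOR_URL = "https://collector.example.com/hit"; // invented endpoint

    function reportPageView(): void {
      // Gather the kind of information an analytics tag typically sees.
      const payload = JSON.stringify({
        url: window.location.href,   // page being visited
        referrer: document.referrer, // page the visitor came from
        timestamp: Date.now(),
      });
      // sendBeacon queues the request even if the user navigates away;
      // private browsing does not prevent it from being sent.
      if (!navigator.sendBeacon(COLLECTOR_URL, payload)) {
        // Fall back to fetch if the beacon could not be queued.
        void fetch(COLLECTOR_URL, { method: "POST", body: payload, keepalive: true });
      }
    }

    reportPageView();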