


Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently gives the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently under public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework does.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an insurmountable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential 2015 inquiry into British surveillance, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Fundamental Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Gonzalo San Gil, PhD.

The Linux desktop battle (and why it matters) - TechRepublic - 2 views

  •  
    Jack Wallen ponders the problem with the ever-lagging acceptance of the Linux desktop and poses a radical solution.
  •  
    "Jack Wallen ponders the problem with the ever-lagging acceptance of the Linux desktop and poses a radical solution. Linux desktop I have been using Ubuntu Unity for a very long time. In fact, I would say that this is, by far, the longest I've stuck with a single desktop interface. Period. That doesn't mean I don't stop to smell the desktop roses along the Linux path. In fact, I've often considered other desktops as a drop-in replacement for Unity. GNOME and Budgie have vied for my attention of late. Both are solid takes on the desktop that offer a minimalistic, modern look and feel (something I prefer) and help me get my work done with an efficiency other desktops can't match. What I see across the Linux landscape, however, often takes me by surprise. While Microsoft and Apple continue to push the idea of the user interface forward, a good amount of the Linux community seems bent on holding us in a perpetual state of "90s computing." Consider Xfce, Mate, and Cinnamon -- three very popular Linux desktop interfaces that work with one very common thread... not changing for the sake of change. Now, this can be considered a very admirable cause when it's put in place to ensure that user experience (UX) is as positive as possible. What this idea does, however, is deny the idea that change can affect an even more efficient and positive UX. When I spin up a distribution that makes use of Xfce, Mate, or Cinnamon, I find the environments work well and get the job done. At the same time, I feel as if the design of the desktops is trapped in the wrong era. At this point, you're certainly questioning the validity and path of this post. If the desktops work well and help you get the job done, what's wrong? It's all about perception. Let me offer you up a bit of perspective. The only reason Apple managed to rise from the ashes and become one of the single most powerful forces in technology is because they understood the concept of perception. They re-invented th
Paul Merrell

After Brit spies 'snoop' on families' lawyers, UK govt admits: We flouted human rights ... - 0 views

  • The British government has admitted that its practice of spying on confidential communications between lawyers and their clients was a breach of the European Convention on Human Rights (ECHR). Details of the controversial snooping emerged in November: lawyers suing Blighty over its rendition of two Libyan families to be tortured by the late and unlamented Gaddafi regime claimed Her Majesty's own lawyers seemed to have access to the defense team's emails. The families' briefs asked for a probe by the secretive Investigatory Powers Tribunal (IPT), a move that led to Wednesday's admission. "The concession the government has made today relates to the agencies' policies and procedures governing the handling of legally privileged communications and whether they are compatible with the ECHR," a government spokesman said in a statement to the media, via the Press Association. "In view of recent IPT judgments, we acknowledge that the policies applied since 2010 have not fully met the requirements of the ECHR, specifically Article 8. This includes a requirement that safeguards are made sufficiently public."
  • The guidelines revealed by the investigation showed that MI5 – which handles the UK's domestic security – had free rein to spy on highly private and sensitive lawyer-client conversations between April 2011 and January 2014. MI6, which handles foreign intelligence, had no rules on the matter either until 2011, and even those were considered void if "extremists" were involved. Britain's answer to the NSA, GCHQ, had rules against such spying, but they too were relaxed in 2011. "By allowing the intelligence agencies free rein to spy on communications between lawyers and their clients, the Government has endangered the fundamental British right to a fair trial," said Cori Crider, a director at the non-profit Reprieve and one of the lawyers for the Libyan families. "For too long, the security services have been allowed to snoop on those bringing cases against them when they speak to their lawyers. In doing so, they have violated a right that is centuries old in British common law. Today they have finally admitted they have been acting unlawfully for years."
  • Crider said it now seemed probable that UK snoopers had been listening in on the communications over the Libyan case. The British government hasn't admitted guilt, but it has at least acknowledged that it was doing something wrong – sort of. "It does not mean that there was any deliberate wrongdoing on the part of the security and intelligence agencies, which have always taken their obligation to protect legally privileged material extremely seriously," the government spokesman said. "Nor does it mean that any of the agencies' activities have prejudiced or in any way resulted in an abuse of process in any civil or criminal proceedings. The agencies will now work with the independent Interception of Communications Commissioner to ensure their policies satisfy all of the UK's human rights obligations." So that's all right, then.
  •  
    If you follow the "November" link you'll learn that yes, indeed, the UK government lawyers were happily getting the content of their adversaries' privileged attorney-client communications. Conspicuously, the promises of reform make no mention of what is surely a disbarment offense in the U.S.; I doubt that it's different in the UK. Discovery rules of procedure strictly limit how parties may obtain information from the other side. Wiretapping the other side's lawyers is not a permitted form of discovery. Hopefully, at least the government lawyers in the case in which the misbehavior was discovered have been referred for disciplinary action.
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commo... - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming its prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” Those broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grows, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that read on Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to its portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. (A back-of-the-envelope sketch of this settlement math appears after these annotations.) Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of its customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation in virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property-the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable-airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
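    To make the settlement asymmetry in the Innovatio annotation above concrete, here is a hedged back-of-the-envelope comparison in Python. Only the $2,500 per-location demand and the 9.56-cent per-device FRAND rate come from the annotation; the device count and the defense-cost figure are invented assumptions for illustration.

```python
# Back-of-the-envelope on the Innovatio demand letters (a sketch, not data).
frand_rate = 0.0956         # $/device, FRAND-committed royalty (from the annotation)
settlement_demand = 2500.0  # $/location demanded in the letters (from the annotation)
defense_cost = 500_000.0    # assumed cost of litigating a patent suit to judgment
devices = 50                # assumed Wi-Fi devices passing through one coffee shop

fair_royalty = frand_rate * devices
print(f"FRAND-implied liability per location: ${fair_royalty:.2f}")       # $4.78
print(f"Demand is {settlement_demand / fair_royalty:.0f}x the fair rate")  # ~523x
print(f"Settling saves ${defense_cost - settlement_demand:,.0f} vs. trial")
```

    Under any plausible numbers the demand dwarfs the committed royalty yet stays far below the cost of fighting, so settling is individually rational for every target; that is exactly the litigation asymmetry the annotation describes.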
Paul Merrell

European Lawmakers Demand Answers on Phone Key Theft - The Intercept - 0 views

  • European officials are demanding answers and investigations into a joint U.S. and U.K. hack of the world’s largest manufacturer of mobile SIM cards, following a report published by The Intercept Thursday. The report, based on leaked documents provided by NSA whistleblower Edward Snowden, revealed the U.S. spy agency and its British counterpart Government Communications Headquarters, GCHQ, hacked the Franco-Dutch digital security giant Gemalto in a sophisticated heist of encrypted cell-phone keys. The European Parliament’s chief negotiator on the European Union’s data protection law, Jan Philipp Albrecht, said the hack was “obviously based on some illegal activities.” “Member states like the U.K. are frankly not respecting the [law of the] Netherlands and partner states,” Albrecht told the Wall Street Journal. Sophie in ’t Veld, an EU parliamentarian with D66, the Netherlands’ largest opposition party, added, “Year after year we have heard about cowboy practices of secret services, but governments did nothing and kept quiet […] In fact, those very same governments push for ever-more surveillance capabilities, while it remains unclear how effective these practices are.”
  • “If the average IT whizzkid breaks into a company system, he’ll end up behind bars,” In ’t Veld added in a tweet Friday. The EU itself is barred from undertaking such investigations, leaving individual countries responsible for looking into cases that impact their national security matters. “We even get letters from the U.K. government saying we shouldn’t deal with these issues because it’s their own issue of national security,” Albrecht said. Still, lawmakers in the Netherlands are seeking investigations. Gerard Schouw, a Dutch member of parliament, also with the D66 party, has called on Ronald Plasterk, the Dutch minister of the interior, to answer questions before parliament. On Tuesday, the Dutch parliament will debate Schouw’s request. Additionally, European legal experts tell The Intercept, public prosecutors in EU member states that are both party to the Cybercrime Convention, which prohibits computer hacking, and home to Gemalto subsidiaries could pursue investigations into the breach of the company’s systems.
  • According to secret documents from 2010 and 2011, a joint NSA-GCHQ unit penetrated Gemalto’s internal networks and infiltrated the private communications of its employees in order to steal encryption keys, embedded on tiny SIM cards, which are used to protect the privacy of cellphone communications across the world. (A toy sketch of why key theft defeats that protection appears after these annotations.) Gemalto produces some 2 billion SIM cards a year. The company’s clients include AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers. “[We] believe we have their entire network,” GCHQ boasted in a leaked slide, referring to the Gemalto heist.
  • While Gemalto was indeed another casualty in Western governments’ sweeping effort to gather as much global intelligence advantage as possible, the leaked documents make clear that the company was specifically targeted. According to the materials published Thursday, GCHQ used a specific codename — DAPINO GAMMA — to refer to the operations against Gemalto. The spies also actively penetrated the email and social media accounts of Gemalto employees across the world in an effort to steal the company’s encryption keys. Evidence of the Gemalto breach rattled the digital security community. “Almost everyone in the world carries cell phones and this is an unprecedented mass attack on the privacy of citizens worldwide,” said Greg Nojeim, senior counsel at the Center for Democracy & Technology, a non-profit that advocates for digital privacy and free online expression. “While there is certainly value in targeted surveillance of cell phone communications, this coordinated subversion of the trusted technical security infrastructure of cell phones means the US and British governments now have easy access to our mobile communications.”
  • For Gemalto, evidence that their vaunted security systems and the privacy of customers had been compromised by the world’s top spy agencies made an immediate financial impact. The company’s shares took a dive on the Paris bourse Friday, falling $500 million. In the U.S., Gemalto’s shares fell as much as 10 percent Friday morning. They had recovered somewhat — down 4 percent — by the close of trading on the Euronext stock exchange. Analysts at Dutch financial services company Rabobank speculated in a research note that Gemalto could be forced to recall “a large number” of SIM cards. The French daily L’Express noted today that Gemalto board member Alex Mandl was a founding trustee of the CIA-funded venture capital firm In-Q-Tel. Mandl resigned from In-Q-Tel’s board in 2002, when he was appointed CEO of Gemplus, which later merged with another company to become Gemalto. But the CIA connection still dogged Mandl, with the French press regularly insinuating that American spies could infiltrate the company. In 2003, a group of French lawmakers tried unsuccessfully to create a commission to investigate Gemplus’s ties to the CIA and its implications for the security of SIM cards. Mandl, an Austrian-American businessman who was once a top executive at AT&T, has denied that he had any relationship with the CIA beyond In-Q-Tel. In 2002, he said he did not even have a security clearance.
  • AT&T, T-Mobile and Verizon could not be reached for comment Friday. Sprint declined to comment. Vodafone, the world’s second largest telecom provider by subscribers and a customer of Gemalto, said in a statement, “[W]e have no further details of these allegations which are industrywide in nature and are not focused on any one mobile operator. We will support industry bodies and Gemalto in their investigations.” Deutsche Telekom AG, a German company, said it has changed encryption algorithms in its Gemalto SIM cards. “We currently have no knowledge that this additional protection mechanism has been compromised,” the company said in a statement. “However, we cannot rule out this completely.”
  • Update: Asked about the SIM card heist, White House press secretary Josh Earnest said he did not expect the news would hurt relations with the tech industry: “It’s hard for me to imagine that there are a lot of technology executives that are out there that are in a position of saying that they hope that people who wish harm to this country will be able to use their technology to do so. So, I do think in fact that there are opportunities for the private sector and the federal government to coordinate and to cooperate on these efforts, both to keep the country safe, but also to protect our civil liberties.”
  •  
    Watch for massive class action product defect litigation to be filed against the phone companies and mobile device manufacturers. In most U.S. jurisdictions, proof that the vendors/manufacturers knew of the product defect is not required, only proof of the defect. Also, this is a golden opportunity for anyone who wants to get out of a pricey cellphone contract, since providing a compromised cellphone is a material breach of warranty, whether explicit or implied.
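    A brief technical note on why the stolen keys matter: in GSM-style systems the SIM and the carrier share a long-term per-card secret (Ki), and each call's traffic key is derived from Ki plus a random challenge that is broadcast in the clear. Anyone holding Ki can therefore recover session keys passively. The Python sketch below illustrates only that failure mode; it uses HMAC-SHA256 as a stand-in for the SIM's real A3/A8 derivation algorithms, and all values are invented.

```python
# Toy model of SIM session-key derivation (illustrative stand-in, not real GSM crypto).
import hashlib
import hmac
import os

def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: derive a traffic key from the SIM secret and the
    network's random challenge (the challenge travels over the air in the clear)."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:8]

ki = os.urandom(16)    # burned into the SIM card and held by the carrier
rand = os.urandom(16)  # per-call challenge, observable by any eavesdropper

subscriber_key = derive_session_key(ki, rand)

# An adversary who stole Ki from the card manufacturer needs no further exploit:
# feeding the observed challenge into the same derivation yields the same key.
eavesdropper_key = derive_session_key(ki, rand)
assert subscriber_key == eavesdropper_key  # passive decryption of calls becomes possible
```

    Because the compromise lives in the long-term key rather than in any device bug, there is no patch; short of reissuing the affected SIM cards the exposure persists, which bears directly on the recall speculation and the warranty argument above.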
Gonzalo San Gil, PhD.

How to Save the Net | Magazine | WIRED - 1 views

  •  
    By Wired Magazine, 08.19.14: "It's impossible to overstate how much the Internet matters. It has forever altered how we share information and store it for safekeeping, how we communicate with political leaders, how we document atrocities and hold wrongdoers accountable, how we consume entertainment and create it, even how we meet others and maintain relationships."
Gary Edwards

» 21 Facts About NSA Snooping That Every American Should Know Alex Jones' Inf... - 0 views

  •  
    NSA-PRISM-Echelon in a nutshell. The list below is a short sample; each fact is documented, and well worth the time reading. "The following are 21 facts about NSA snooping that every American should know…"
    #1 According to CNET, the NSA told Congress during a recent classified briefing that it does not need court authorization to listen to domestic phone calls…
    #2 According to U.S. Representative Loretta Sanchez, members of Congress learned "significantly more than what is out in the media today" about NSA snooping during that classified briefing.
    #3 The content of all of our phone calls is being recorded and stored. The following is from a transcript of an exchange between Erin Burnett of CNN and former FBI counterterrorism agent Tim Clemente which took place just last month…
    #4 The chief technology officer at the CIA, Gus Hunt, made the following statement back in March: "We fundamentally try to collect everything and hang onto it forever."
    #5 During a Senate Judiciary Oversight Committee hearing in March 2011, FBI Director Robert Mueller admitted that the intelligence community has the ability to access emails "as they come in"…
    #6 Back in 2007, Director of National Intelligence Michael McConnell told Congress that the president has the "constitutional authority" to authorize domestic spying without warrants no matter what the law says.
    #7 Director of National Intelligence James Clapper recently told Congress that the NSA was not collecting any information about American citizens. When the media confronted him about his lie, he explained that he "responded in what I thought was the most truthful, or least untruthful manner".
    #8 The Washington Post is reporting that the NSA has four primary data collection systems: MAINWAY, MARINA, METADATA, PRISM.
    #9 The NSA knows pretty much everything that you are doing on the Internet. The following is a short excerpt from a recent Yahoo article…
    #10 The NSA is supposed…
Paul Merrell

Prepare to Hang Up the Phone, Forever - WSJ.com - 0 views

  • At decade's end, the trusty landline telephone could be nothing more than a memory, if telecom giants AT&T and Verizon Communications get their way.
  • The two providers want to lay the crumbling POTS to rest and replace it with Internet Protocol-based systems that use the same wired and wireless broadband networks that bring Web access, cable programming and, yes, even your telephone service, into your homes. You may think you have a traditional landline because your home phone plugs into a jack, but if you have bundled your phone with Internet and cable services, you're making calls over an IP network, not twisted copper wires. California, Florida, Texas, Georgia, North Carolina, Wisconsin and Ohio are among states that agree telecom resources would be better redirected into modern telephone technologies and innovations, and will kill copper-based technologies in the next three years or so. Kentucky and Colorado are weighing similar laws, which force people to go wireless whether they want to or not. In Mantoloking, N.J., Verizon wants to replace the landline system, which Hurricane Sandy wiped out, with its wireless Voice Link. That would make it the first entire town to go landline-less, a move that isn't sitting well with all residents.
  • New Jersey's legislature, worried about losing data applications such as credit-card processing and alarm systems that wireless systems can't handle, wants a one-year moratorium to block that switch. It will vote on the measure this month. (Verizon tried a similar change in Fire Island, N.Y., when its copper lines were destroyed, but public opposition persuaded Verizon to install fiber-optic cable.) It's no surprise that landlines are unfashionable, considering many of us already have or are preparing to ditch them. More than 38% of adults and 45.5% of children live in households without a landline telephone, says the Centers for Disease Control and Prevention. That means two in every five U.S. homes, or 39%, are wireless, up from 26.6% three years ago. Moreover, a scant 8.5% of households relied only on a landline, while 2% were phoneless in 2013. Metropolitan residents have few worries about the end of landlines. High-speed wire and wireless services are abundant and work well, despite occasional dropped calls. Those living in rural areas, where cell towers are few and 4G capability limited, face different issues.
  • ...2 more annotations...
  • Safety is one of them. Call 911 from a landline and the emergency operator pinpoints your exact address, down to the apartment number. Wireless phones lack those specifics, and even with GPS navigation aren't as precise. Matters are worse in rural and even suburban areas that signals don't reach, sometimes because they're blocked by buildings or the landscape. That's of concern to the Federal Communications Commission, which oversees all forms of U.S. communications services. Universal access is a tenet of its mission, and, despite the state-by-state degradation of the mandate, it's unwilling to let telecom companies simply drop geographically undesirable customers. Telecom firms need FCC approval to ax services completely, and can't do so unless there is a viable competitor to pick up the slack. Last year AT&T asked to turn off its legacy network, which could create gaps in universal coverage and will force people off the grid to get a wireless provider.
  • AT&T and the FCC will soon begin trials to explore life without copper-wired landlines. Consumers will voluntarily test IP-connected networks and their impact on towns like Carbon Hill, Ala., population 2,071. They want to know how households will reach 911, how small businesses will connect to customers, how people with medical-monitoring devices or home alarms know they will always be connected to a reliable network, and what the costs are. "We cannot be a nation of opportunity without networks of opportunity," said FCC Chairman Tom Wheeler in unveiling the plan. "This pilot program will help us learn how fiber might be deployed where it is not now deployed…and how new forms of wireless can reach deep into the interior of rural America."
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over-the-top stuff, but: [i] this is subject matter I've maintained an interest in over the years, and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was written by known closet chemical industry advocates long since discredited, totally lacking in scientific validity, and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But the linked Scientific American article notes that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that contains lots of sensationalist information not in the Nature/Scientific American version, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional computer."
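    Worth grounding the annotations above in what a D-Wave-style machine actually accepts as input: not arbitrary programs, but optimization problems, typically in QUBO (quadratic unconstrained binary optimization) form. The sketch below is a minimal, hedged illustration; the 3-variable instance and the brute-force check are invented, and real annealers matter only at scales where enumerating all 2^n assignments is impossible.

```python
# Minimal QUBO example: minimize x^T Q x over binary vectors x.
# The instance encodes "pick exactly one of three options" via the
# penalty (x0 + x1 + x2 - 1)^2, expanded into linear and pairwise terms.
from itertools import product

Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms: -x_i
    (0, 1): 2.0, (0, 2): 2.0, (1, 2): 2.0,     # pairwise penalties: +2*x_i*x_j
}

def energy(x):
    """Evaluate the QUBO objective for one binary assignment."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force is fine for 3 variables; an annealer is the point at scale.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # (0, 0, 1) -1.0, a one-hot assignment, as intended
```

    Machine-learning questions get "usefully framed" for such hardware, as the Nature excerpt puts it, precisely by this kind of encoding: the answer you want becomes the lowest-energy configuration of a set of binary variables.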
Paul Merrell

How Edward Snowden Changed Everything | The Nation - 0 views

  • Ben Wizner, who is perhaps best known as Edward Snowden’s lawyer, directs the American Civil Liberties Union’s Speech, Privacy & Technology Project. Wizner, who joined the ACLU in August 2001, one month before the 9/11 attacks, has been a force in the legal battles against torture, watch lists, and extraordinary rendition since the beginning of the global “war on terror.” Ad Policy On October 15, we met with Wizner in an upstate New York pub to discuss the state of privacy advocacy today. In sometimes sardonic tones, he talked about the transition from litigating on issues of torture to privacy advocacy, differences between corporate and state-sponsored surveillance, recent developments in state legislatures and the federal government, and some of the obstacles impeding civil liberties litigation. The interview has been edited and abridged for publication.
  • Many of the technologies, both military technologies and surveillance technologies, that are developed for purposes of policing the empire find their way back home and get repurposed. You saw this in Ferguson, where we had military equipment in the streets to police nonviolent civil unrest, and we’re seeing this with surveillance technologies, where things that are deployed for use in war zones are now commonly in the arsenals of local police departments. For example, a cellphone surveillance tool that we call the StingRay—which mimics a cellphone tower and communicates with all the phones around—was really developed as a military technology to help identify targets. Now, because it’s so inexpensive, and because there is a surplus of these things that are being developed, it ends up getting pushed down into local communities without local democratic consent or control.
  • SG & TP: How do you see the current state of the right to privacy? BW: I joked when I took this job that I was relieved that I was going to be working on the Fourth Amendment, because finally I’d have a chance to win. That was intended as gallows humor; the Fourth Amendment had been a dishrag for the last several decades, largely because of the war on drugs. The joke in civil liberties circles was, “What amendment?” But I was able to make this joke because I was coming to Fourth Amendment litigation from something even worse, which was trying to sue the CIA for torture, or targeted killings, or various things where the invariable outcome was some kind of non-justiciability ruling. We weren’t even reaching the merits at all. It turns out that my gallows humor joke was prescient.
  • The truth is that over the last few years, we’ve seen some of the most important Fourth Amendment decisions from the Supreme Court in perhaps half a century. Certainly, I think the Jones decision in 2012 [U.S. v. Jones], which held that GPS tracking was a Fourth Amendment search, was the most important Fourth Amendment decision since Katz in 1967 [Katz v. United States], in terms of starting a revolution in Fourth Amendment jurisprudence signifying that changes in technology were not just differences in degree, but they were differences in kind, and require the Court to grapple with it in a different way. Just two years later, you saw the Court holding that police can’t search your phone incident to an arrest without getting a warrant [Riley v. California]. Since 2012, at the level of Supreme Court jurisprudence, we’re seeing a recognition that technology has required a rethinking of the Fourth Amendment at the state and local level. We’re seeing a wave of privacy legislation that’s really passing beneath the radar for people who are not paying close attention. It’s not just happening in liberal states like California; it’s happening in red states like Montana, Utah, and Wyoming. And purple states like Colorado and Maine. You see as many libertarians and conservatives pushing these new rules as you see liberals. It really has cut across at least party lines, if not ideologies. My overall point here is that with respect to constraints on government surveillance—I should be more specific—law-enforcement government surveillance—momentum has been on our side in a way that has surprised even me.
  • Do you think that increased privacy protections will happen on the state level before they happen on the federal level? BW: I think so. For example, look at what occurred with the death penalty and the Supreme Court’s recent Eighth Amendment jurisprudence. The question under the Eighth Amendment is, “Is the practice cruel and unusual?” The Court has looked at what it calls “evolving standards of decency” [Trop v. Dulles, 1958]. It matters to the Court, when it’s deciding whether a juvenile can be executed or if a juvenile can get life without parole, what’s going on in the states. It was important to the litigants in those cases to be able to show that even if most states allowed the bad practice, the momentum was in the other direction. The states that were legislating on this most recently were liberalizing their rules, were making it harder to execute people under 18 or to lock them up without the possibility of parole. I think you’re going to see the same thing with Fourth Amendment and privacy jurisprudence, even though the Court doesn’t have a specific doctrine like “evolving standards of decency.” The Court uses this much-maligned test, “Do individuals have a reasonable expectation of privacy?” We’ll advance the argument, I think successfully, that part of what the Court should look at in considering whether an expectation of privacy is reasonable is showing what’s going on in the states. If we can show that a dozen or eighteen state legislatures have enacted a constitutional protection that doesn’t exist in federal constitutional law, I think that that will influence the Supreme Court.
  • The question is will it also influence Congress. I think there the answer is also “yes.” If you’re a member of the House or the Senate from Montana, and you see that your state legislature and your Republican governor have enacted privacy legislation, you’re not going to be worried about voting in that direction. I think this is one of those places where, unlike civil rights, where you saw most of the action at the federal level and then getting forced down to the states, we’re going to see more action at the state level getting funneled up to the federal government.
  •  
    A must-read. Ben Wizner discusses the current climate in the courts in government surveillance cases and how Edward Snowden's disclosures have affected that, and much more. Wizner is not only Edward Snowden's lawyer, he is also the coordinator of all ACLU litigation on electronic surveillance matters.
Paul Merrell

Comcast Plans to Drop Time Warner Cable Deal - Bloomberg Business - 0 views

  • Fourteen months after unveiling a $45.2 billion merger that would create a new Internet and cable giant, Comcast Corp. is planning to walk away from its proposed takeover of Time Warner Cable Inc., people with knowledge of the matter said. The decision marks a swift unraveling of a deal that awaited federal approval for more than a year. Opposition from the U.S. Justice Department and Federal Communications Commission took shape over the past week, leaving officials of the two companies to conclude the deal wouldn’t pass muster.
  • Comcast’s board will meet to finalize the decision on Thursday, and an announcement may come as soon as Friday, said one of the people, who asked not to be identified because the information is private. Time Warner Cable executives plan to tell shareholders on an earnings conference call next Thursday how the company can survive independently, the person said.
  • On Wednesday, FCC staff joined lawyers at the Justice Department opposing the transaction. That day, FCC officials told representatives of the two companies they are leaning toward concluding the merger doesn’t help consumers, a person with knowledge of the matter said. The FCC’s plan to call a hearing effectively killed the deal’s chances of success. An FCC hearing can take months to complete and drag out the approval process beyond the companies’ time frame for completion. Bloomberg News reported last week that Justice Department staff was leaning against the deal. Senators including Al Franken, a Democrat from Minnesota, also voiced opposition. “Comcast’s withdrawal of its proposed merger with Time Warner Cable would be spectacularly good news for consumers,” Michael Copps, a Democratic former FCC commissioner working with Common Cause to oppose the deal, said in a statement.
  •  
    Looks like all that online lobbying from the internet community worked. 
Paul Merrell

Memo to Potential Whistleblowers: If You See Something, Say Something | Global Research - 0 views

  • Blowing the whistle on wrongdoing creates a moral frequency that vast numbers of people are eager to hear. We don’t want our lives, communities, country and world continually damaged by the deadening silences of fear and conformity. I’ve met many whistleblowers over the years, and they’ve been extraordinarily ordinary. None were applying for halos or sainthood. All experienced anguish before deciding that continuous inaction had a price that was too high. All suffered negative consequences as well as relief after they spoke up and took action. All made the world better with their courage. Whistleblowers don’t sign up to be whistleblowers. Almost always, they begin their work as true believers in the system that conscience later compels them to challenge. “It took years of involvement with a mendacious war policy, evidence of which was apparent to me as early as 2003, before I found the courage to follow my conscience,” Matthew Hoh recalled this week. “It is not an easy or light decision for anyone to make, but we need members of our military, development, diplomatic and intelligence community to speak out if we are ever to have a just and sound foreign policy.”
  • Hoh describes his record this way: “After over 11 continuous years of service with the U.S. military and U.S. government, nearly six of those years overseas, including service in Iraq and Afghanistan, as well as positions within the Secretary of the Navy’s Office as a White House Liaison, and as a consultant for the State Department’s Iraq Desk, I resigned from my position with the State Department in Afghanistan in protest of the escalation of war in 2009.” Another former Department of State official, the ex-diplomat and retired Army colonel Ann Wright, who resigned in protest of the Iraq invasion in March 2003, is crossing paths with Hoh on Friday as they do the honors at a ribbon-cutting — half a block from the State Department headquarters in Washington — for a billboard with a picture of Pentagon Papers whistleblower Daniel Ellsberg. Big-lettered words begin by referring to the years he waited before releasing the Pentagon Papers in 1971. “Don’t do what I did,” Ellsberg says on the billboard. “Don’t wait until a new war has started, don’t wait until thousands more have died, before you tell the truth with documents that reveal lies or crimes or internal projections of costs and dangers. You might save a war’s worth of lives.”
  • The billboard – sponsored by the ExposeFacts organization, which launched this week — will spread to other prominent locations in Washington and beyond. As an organizer for ExposeFacts, I’m glad to report that outreach to potential whistleblowers is just getting started. (For details, visit ExposeFacts.org.) We’re propelled by the kind of hopeful determination that Hoh expressed the day before the billboard ribbon-cutting when he said: “I trust ExposeFacts and its efforts will encourage others to follow their conscience and do what is right.” The journalist Kevin Gosztola, who has astutely covered a range of whistleblower issues for years, pointed this week to the imperative of opening up news media. “There is an important role for ExposeFacts to play in not only forcing more transparency, but also inspiring more media organizations to engage in adversarial journalism,” he wrote. “Such journalism is called for in the face of wars, environmental destruction, escalating poverty, egregious abuses in the justice system, corporate control of government, and national security state secrecy. Perhaps a truly successful organization could inspire U.S. media organizations to play much more of a watchdog role than a lapdog role when covering powerful institutions in government.”
  • Overall, we desperately need to nurture and propagate a steadfast culture of outspoken whistleblowing. A central motto of the AIDS activist movement dating back to the 1980s – Silence = Death – remains urgently relevant in a vast array of realms. Whether the problems involve perpetual war, corporate malfeasance, climate change, institutionalized racism, patterns of sexual assault, toxic pollution or countless other ills, none can be alleviated without bringing grim realities into the light. “All governments lie,” Ellsberg says in a video statement released for the launch of ExposeFacts, “and they all like to work in the dark as far as the public is concerned, in terms of their own decision-making, their planning — and to be able to allege, falsely, unanimity in addressing their problems, as if no one who had knowledge of the full facts inside could disagree with the policy the president or the leader of the state is announcing.” Ellsberg adds: “A country that wants to be a democracy has to be able to penetrate that secrecy, with the help of conscientious individuals who understand in this country that their duty to the Constitution and to the civil liberties and to the welfare of this country definitely surmount their obligation to their bosses, to a given administration, or in some cases to their promise of secrecy.”
  • Right now, our potential for democracy owes a lot to people like NSA whistleblowers William Binney and Kirk Wiebe, and EPA whistleblower Marsha Coleman-Adebayo. When they spoke at the June 4 news conference in Washington that launched ExposeFacts, their brave clarity was inspiring. Antidotes to the poisons of cynicism and passive despair can emerge from organizing to help create a better world. The process requires applying a single standard to the real actions of institutions and individuals, no matter how big their budgets or grand their power. What cannot withstand the light of day should not be suffered in silence. If you see something, say something.
  •  
    While some governments -- my own included -- attempt to impose an Orwellian Dark State of ubiquitous secret surveillance, secret wars, the rule of oligarchs, and public ignorance, the Edward Snowden leaks fanned the flames of the countering War on Ignorance that had been kept alive by civil libertarians. Only days after the U.S. Supreme Court denied review in a case where a reporter had been ordered to reveal his source of information for a book on the Dark State under the penalties for contempt of court (a long stretch in jail), a new web site is launched for communications between sources and journalists where sources’ names never need to be revealed. This article is part of the publicity for that new weapon fielded by the civil libertarian side in the War Against Ignorance.  Hurrah!
Paul Merrell

Google Says Website Encryption Will Now Influence Search Rankings - 0 views

  • Google will begin using website encryption, or HTTPS, as a ranking signal – a move which should prompt website developers who have dragged their heels on increased security measures, or who debated whether their website was “important” enough to require encryption, to make a change. Initially, HTTPS will only be a lightweight signal, affecting fewer than 1% of global queries, says Google. That means that the new signal won’t carry as much weight as other factors, including the quality of the content, the search giant noted, as Google means to give webmasters time to make the switch to HTTPS. Over time, however, encryption’s effect on search ranking may strengthen, as the company places more importance on website security. Google also promises to publish a series of best practices around TLS (HTTPS is also known as HTTP over TLS, or Transport Layer Security) so website developers can better understand what they need to do in order to implement the technology and what mistakes they should avoid. These tips will include things like what certificate type is needed, how to use relative URLs for resources on the same secure domain, best practices around allowing for site indexing, and more.
  • In addition, website developers can test their current HTTPS-enabled website using the Qualys Lab tool, says Google, and can direct further questions to Google’s Webmaster Help Forums where the company is already in active discussions with the broader community. The announcement has drawn a lot of feedback from website developers and those in the SEO industry – for instance, Google’s own blog post on the matter, shared in the early morning hours on Thursday, is already nearing 1,000 comments. For the most part, the community seems to support the change, or at least acknowledge that they felt that something like this was in the works and are not surprised. Google itself has been making moves to better secure its own traffic in recent months, which have included encrypting traffic between its own servers. Gmail now always uses an encrypted HTTPS connection which keeps mail from being snooped on as it moves from a consumer’s machine to Google’s data centers.
  • While HTTPS and site encryption have been a best practice in the security community for years, the revelation that the NSA has been tapping the cables, so to speak, to mine user information directly has prompted many technology companies to consider increasing their own security measures, too. Yahoo, for example, also announced in November its plans to encrypt its data center traffic. Now Google is helping to push the rest of the web to do the same.
  •  
    The Internet continues to harden in the wake of the NSA revelations. This is a nice nudge by Google.
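    As a concrete aside: the first thing the new signal rewards is simply completing a TLS handshake with a valid certificate. Here is a minimal sketch in Python (standard library only; the hostname is a placeholder) of checking that a site clears that bar:

        import socket
        import ssl

        def https_ok(host, port=443, timeout=5.0):
            """Return True if `host` completes a TLS handshake with a valid certificate."""
            context = ssl.create_default_context()  # verifies the cert chain and hostname
            try:
                with socket.create_connection((host, port), timeout=timeout) as sock:
                    with context.wrap_socket(sock, server_hostname=host) as tls:
                        print(host, "negotiated", tls.version())
                        return True
            except (ssl.SSLError, OSError):
                return False

        print(https_ok("example.com"))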
Paul Merrell

Edward Snowden Explains How To Reclaim Your Privacy - 0 views

  • Micah Lee: What are some operational security practices you think everyone should adopt? Just useful stuff for average people. Edward Snowden: [Opsec] is important even if you’re not worried about the NSA. Because when you think about who the victims of surveillance are, on a day-to-day basis, you’re thinking about people who are in abusive spousal relationships, you’re thinking about people who are concerned about stalkers, you’re thinking about children who are concerned about their parents overhearing things. It’s to reclaim a level of privacy. The first step that anyone could take is to encrypt their phone calls and their text messages. You can do that through the smartphone app Signal, by Open Whisper Systems. It’s free, and you can just download it immediately. And anybody you’re talking to now, their communications, if it’s intercepted, can’t be read by adversaries. [Signal is available for iOS and Android, and, unlike a lot of security tools, is very easy to use.] You should encrypt your hard disk, so that if your computer is stolen the information isn’t obtainable to an adversary — pictures, where you live, where you work, where your kids are, where you go to school. [I’ve written a guide to encrypting your disk on Windows, Mac, and Linux.] Use a password manager. One of the main things that gets people’s private information exposed, not necessarily to the most powerful adversaries, but to the most common ones, are data dumps. Your credentials may be revealed because some service you stopped using in 2007 gets hacked, and your password that you were using for that one site also works for your Gmail account. A password manager allows you to create unique passwords for every site that are unbreakable, but you don’t have the burden of memorizing them. [The password manager KeePassX is free, open source, cross-platform, and never stores anything in the cloud.]
  • The other thing there is two-factor authentication. The value of this is if someone does steal your password, or it’s left or exposed somewhere … [two-factor authentication] allows the provider to send you a secondary means of authentication — a text message or something like that. [If you enable two-factor authentication, an attacker needs both your password as the first factor and a physical device, like your phone, as your second factor, to login to your account. Gmail, Facebook, Twitter, Dropbox, GitHub, Battle.net, and tons of other services all support two-factor authentication.]
  • We should armor ourselves using systems we can rely on every day. This doesn’t need to be an extraordinary lifestyle change. It doesn’t have to be something that is disruptive. It should be invisible, it should be atmospheric, it should be something that happens painlessly, effortlessly. This is why I like apps like Signal, because they’re low friction. It doesn’t require you to re-order your life. It doesn’t require you to change your method of communications. You can use it right now to talk to your friends.
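    Picking up the password-manager point above, here is a minimal sketch (Python standard library, purely illustrative; not a substitute for an audited manager like KeePassX) of the "unique, high-entropy password per site" idea:

        import secrets
        import string

        ALPHABET = string.ascii_letters + string.digits + string.punctuation

        def generate_password(length=24):
            """Return a cryptographically random password of `length` characters."""
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        # One distinct credential per service, so a dump of one site's
        # database never exposes the others.
        vault = {site: generate_password() for site in ("mail", "bank", "forum")}
        for site, password in vault.items():
            print(site, password)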
  • Lee: What do you think about Tor? Do you think that everyone should be familiar with it, or do you think that it’s only a use-it-if-you-need-it thing? Snowden: I think Tor is the most important privacy-enhancing technology project being used today. I use Tor personally all the time. We know it works from at least one anecdotal case that’s fairly familiar to most people at this point. That’s not to say that Tor is bulletproof. What Tor does is it provides a measure of security and allows you to disassociate your physical location. … But the basic idea, the concept of Tor that is so valuable, is that it’s run by volunteers. Anyone can create a new node on the network, whether it’s an entry node, a middle router, or an exit point, on the basis of their willingness to accept some risk. The voluntary nature of this network means that it is survivable, it’s resistant, it’s flexible. [Tor Browser is a great way to selectively use Tor to look something up and not leave a trace that you did it. It can also help bypass censorship when you’re on a network where certain sites are blocked. If you want to get more involved, you can volunteer to run your own Tor node, as I do, and support the diversity of the Tor network.]
  • Lee: So that is all stuff that everybody should be doing. What about people who have exceptional threat models, like future intelligence-community whistleblowers, and other people who have nation-state adversaries? Maybe journalists, in some cases, or activists, or people like that? Snowden: So the first answer is that you can’t learn this from a single article. The needs of every individual in a high-risk environment are different. And the capabilities of the adversary are constantly improving. The tooling changes as well. What really matters is to be conscious of the principles of compromise. How can the adversary, in general, gain access to information that is sensitive to you? What kinds of things do you need to protect? Because of course you don’t need to hide everything from the adversary. You don’t need to live a paranoid life, off the grid, in hiding, in the woods in Montana. What we do need to protect are the facts of our activities, our beliefs, and our lives that could be used against us in manners that are contrary to our interests. So when we think about this for whistleblowers, for example, if you witnessed some kind of wrongdoing and you need to reveal this information, and you believe there are people that want to interfere with that, you need to think about how to compartmentalize that.
  • Tell no one who doesn’t need to know. [Lindsay Mills, Snowden’s girlfriend of several years, didn’t know that he had been collecting documents to leak to journalists until she heard about it on the news, like everyone else.] When we talk about whistleblowers and what to do, you want to think about tools for protecting your identity, protecting the existence of the relationship from any type of conventional communication system. You want to use something like SecureDrop, over the Tor network, so there is no connection between the computer that you are using at the time — preferably with a non-persistent operating system like Tails, so you’ve left no forensic trace on the machine you’re using, which hopefully is a disposable machine that you can get rid of afterward, that can’t be found in a raid, that can’t be analyzed or anything like that — so that the only outcome of your operational activities are the stories reported by the journalists. [SecureDrop is a whistleblower submission system. Here is a guide to using The Intercept’s SecureDrop server as safely as possible.]
  • And this is to be sure that whoever has been engaging in this wrongdoing cannot distract from the controversy by pointing to your physical identity. Instead they have to deal with the facts of the controversy rather than the actors that are involved in it. Lee: What about for people who are, like, in a repressive regime and are trying to … Snowden: Use Tor. Lee: Use Tor? Snowden: If you’re not using Tor you’re doing it wrong. Now, there is a counterpoint here where the use of privacy-enhancing technologies in certain areas can actually single you out for additional surveillance through the exercise of repressive measures. This is why it’s so critical for developers who are working on security-enhancing tools to not make their protocols stand out.
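    For the curious: once a local Tor client is running, routing ordinary web requests through it takes only a proxy setting. A sketch assuming the requests library with SOCKS support (pip install requests[socks]) and Tor listening on its default port 9050:

        import requests

        # socks5h resolves DNS through Tor as well, avoiding DNS leaks.
        proxies = {
            "http": "socks5h://127.0.0.1:9050",
            "https": "socks5h://127.0.0.1:9050",
        }
        resp = requests.get("https://check.torproject.org/", proxies=proxies)
        print(resp.status_code)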
  •  
    Lots more in the interview that I didn't highlight. This is a must-read.
Paul Merrell

China's quantum satellite enables first totally secure long-range messages - 2 views

  • In the middle of the night, invisible to anyone but special telescopes in two Chinese observatories, satellite Micius sends particles of light to Earth to establish the world’s most secure communication link. Named after the ancient Chinese philosopher also known as Mozi, Micius is the world’s first quantum communications satellite and has, for several years, been at the forefront of quantum encryption. Scientists have now reported using this technology to reach a major milestone: long-range secure communication you could trust even without trusting the satellite it runs through. Launched in 2016, Micius has already produced a number of breakthroughs under its operating team led by Pan Jian-Wei, China’s “Father of Quantum”. The satellite serves as the source of pairs of entangled photons, twinned light particles whose properties remain intertwined no matter how far apart they are. If you manipulate one of the photons, the other will be similarly affected at the very same moment.
  • It is this property that lies at the heart of the most secure forms of quantum cryptography, the entanglement-based quantum key distribution. If you use one of the entangled particles to create a key for encoding messages, only the person with the other particle can decode them.
  • Secure long-distance links such as this one will be the foundation of the quantum internet, the future global network with added security powered by laws of quantum mechanics, unmatched by classical cryptographic methods. The launch of Micius and the records set by the scientists and engineers building quantum communication systems with its help have been compared to the effect Sputnik had on the space race in the 20th century. In a similar way, the quantum race has political and military implications that are hard to ignore.
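    A toy illustration of the sifting step behind entanglement-based key distribution. This is a classical simulation of the bookkeeping only (Python, idealized to perfect correlation, no eavesdropper): when both parties happen to measure their halves of a pair in the same randomly chosen basis, the outcome becomes a shared key bit.

        import secrets

        def random_bits(n):
            return [secrets.randbelow(2) for _ in range(n)]

        n_pairs = 32
        alice_bases = random_bits(n_pairs)
        bob_bases = random_bits(n_pairs)
        outcomes = random_bits(n_pairs)  # the shared outcome when bases agree

        # Sifting: keep only the rounds where the two bases matched.
        key = [outcomes[i] for i in range(n_pairs) if alice_bases[i] == bob_bases[i]]
        print("sifted key:", "".join(map(str, key)))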
Paul Merrell

What to Do About Lawless Government Hacking and the Weakening of Digital Security | Ele... - 0 views

  • In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security.
  • The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones. Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects.  That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  
  • Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions. This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.   
  •  
    "In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. "
  •  
    It's not often that I disagree with EFF's positions, but on this one I do. The government should be prohibited from exploiting computer vulnerabilities and should be required to immediately report all vulnerabilities discovered to the relevant developers of hardware or software. It's been one long slippery slope since the Supreme Court first approved wiretapping in Olmstead v. United States, 277 US 438 (1928), https://goo.gl/NJevsr (.) Left undecided to this day is whether we have a right to whisper privately, a right that is undeniable. All communications intercept cases since Olmstead fly directly in the face of that right.
Gonzalo San Gil, PhD.

The value of open source is the open development process: Scott Wilson OSS Watch | Open... - 0 views

  •  
    "Scott Wilson agrees that open source matters because of open code, but just as important is the process in which the code is made. Open development of code is in the social nature of many programmers, hackers, documentors, and project managers. So, what is it about open development? "
Gonzalo San Gil, PhD.

Techdirt Reading List: The Idealist: Aaron Swartz And The Rise Of Free Culture On The I... - 0 views

  •  
    "from the free-culture-matters dept We're back again with another in our weekly reading list posts of books we think our community will find interesting and thought provoking. Once again, buying the book via the Amazon links in this story also helps support Techdirt. "
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
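    A sketch of what that extensibility buys in practice: with lxml installed (an assumption; the markup and the "epigraph" class name are invented for illustration), semantically tagged chunks can be pulled straight out of ordinary XHTML.

        from lxml import html

        doc = html.fromstring("""
        <div>
          <p class="epigraph">All governments lie.</p>
          <p>Ordinary body text.</p>
          <p class="epigraph">Silence = Death.</p>
        </div>
        """)

        # The class attribute carries the semantics; XPath recovers them.
        for p in doc.xpath('//p[@class="epigraph"]'):
            print(p.text)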
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
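    That anatomy is easy to verify directly. A tiny sketch (Python standard library; the filename is a placeholder):

        import zipfile

        # An .epub is a zip: expect mimetype, META-INF/container.xml, XHTML content.
        with zipfile.ZipFile("book.epub") as epub:
            for name in epub.namelist():
                print(name)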
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
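    That cleanup can also be scripted rather than done by hand with search-and-replace. A sketch assuming lxml; the "list-item" class follows the article's example, the rest is invented. It rewrites runs of list-item paragraphs into a proper ul:

        from lxml import etree, html

        doc = html.fromstring(
            '<div><p class="list-item">one</p><p class="list-item">two</p></div>')

        for p in doc.xpath('//p[@class="list-item"]'):
            prev = p.getprevious()
            if prev is not None and prev.tag == "ul":
                ul = prev                      # extend the list we just started
            else:
                ul = etree.Element("ul")       # start a new list before this item
                p.addprevious(ul)
            li = etree.SubElement(ul, "li")
            li.text = p.text
            p.getparent().remove(p)

        print(html.tostring(doc, pretty_print=True).decode())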
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
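    The shape of that transformation is easy to see in miniature. A sketch using lxml in place of xsltproc; the stylesheet below is a trivial stand-in that only renames p elements, not the authors' 500-line script:

        from lxml import etree

        xslt = etree.XML(b"""
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="p">
            <ParagraphStyleRange><xsl:apply-templates/></ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>""")

        transform = etree.XSLT(xslt)
        source = etree.XML(b"<body><p>Chapter text.</p></body>")
        print(str(transform(source)))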
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
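    The mechanics of the wrapper fit in a few lines. A sketch of the zip layout only (Python standard library; a real ePub also needs the OPF manifest and a table of contents, which eCub generates and this omits). The mimetype entry must come first and be stored uncompressed:

        import zipfile

        CONTAINER = """<?xml version="1.0"?>
        <container version="1.0"
            xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>"""

        with zipfile.ZipFile("book.epub", "w") as epub:
            # Order and compression of `mimetype` are mandated by the spec.
            epub.writestr("mimetype", "application/epub+zip",
                          compress_type=zipfile.ZIP_STORED)
            epub.writestr("META-INF/container.xml", CONTAINER,
                          compress_type=zipfile.ZIP_DEFLATED)
            epub.writestr("OEBPS/chapter1.xhtml",
                          '<html xmlns="http://www.w3.org/1999/xhtml">'
                          '<body><p>Chapter text.</p></body></html>')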
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the Web Kit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Vodafone reveals existence of secret wires that allow state surveillance | Business | T... - 0 views

  • Vodafone, one of the world's largest mobile phone groups, has revealed the existence of secret wires that allow government agencies to listen to all conversations on its networks, saying they are widely used in some of the 29 countries in which it operates in Europe and beyond. The company has broken its silence on government surveillance in order to push back against the increasingly widespread use of phone and broadband networks to spy on citizens, and will publish its first Law Enforcement Disclosure Report on Friday. At 40,000 words, it is the most comprehensive survey yet of how governments monitor the conversations and whereabouts of their people. The company said wires had been connected directly to its network and those of other telecoms groups, allowing agencies to listen to or record live conversations and, in certain cases, track the whereabouts of a customer. Privacy campaigners said the revelations were a "nightmare scenario" that confirmed their worst fears on the extent of snooping.
  • Vodafone's group privacy officer, Stephen Deadman, said: "These pipes exist, the direct access model exists."We are making a call to end direct access as a means of government agencies obtaining people's communication data. Without an official warrant, there is no external visibility. If we receive a demand we can push back against the agency. The fact that a government has to issue a piece of paper is an important constraint on how powers are used."Vodafone is calling for all direct-access pipes to be disconnected, and for the laws that make them legal to be amended. It says governments should "discourage agencies and authorities from seeking direct access to an operator's communications infrastructure without a lawful mandate".
  • In America, Verizon and AT&T have published data, but only on their domestic operations. Deutsche Telekom in Germany and Telstra in Australia have also broken ground at home. Vodafone is the first to produce a global survey.
  • ...2 more annotations...
  • Peter Micek, policy counsel at the campaign group Access, said: "In a sector that has historically been quiet about how it facilitates government access to user data, Vodafone has for the first time shone a bright light on the challenges of a global telecom giant, giving users a greater understanding of the demands governments make of telcos. Vodafone's report also highlights how few governments issue any transparency reports, with little to no information about the number of wiretaps, cell site tower dumps, and other invasive surveillance practices."
  • Snowden, the National Security Agency whistleblower, joined Google, Reddit, Mozilla and other tech firms and privacy groups on Thursday to call for a strengthening of privacy rights online in a "Reset the net" campaign. Twelve months after revelations about the scale of the US government's surveillance programs were first published in the Guardian and the Washington Post, Snowden said: "One year ago, we learned that the internet is under surveillance, and our activities are being monitored to create permanent records of our private lives – no matter how innocent or ordinary those lives might be. Today, we can begin the work of effectively shutting down the collection of our online communications, even if the US Congress fails to do the same."
  •  
    The Vodafone disclosures will undoubtedly have a very large ripple effect. Note carefully that this is the first major telephone service in the world to break ranks with the others and come out swinging at secret government voyeur agencies. Will others follow? If you follow the links to the Vodafone report, you'll find a very handy big PDF providing an overview of the relevant laws in each of the customer nations. There's a cute Guardian table that shows the aggregate number of warrants for interception of content via Vodafone for each of those nations, broken down by content type. That table has white-on-black cells noting where disclosure of those types of surveillance statistics are prohibited by law. So it is far from a complete picture, but it's a heck of a good start.  But several of those customer nations are members of the E.U., where digital privacy rights are enshrined as human rights under an EU-wide treaty. So expect some heat to roll downhill on those nations from the European treaty organizations, particularly the European Court of Human Rights, staffed with civil libertarian judges, from which there is no appeal.