
Future of the Web: Group items tagged "reveals"


Paul Merrell

Exclusive: Inside America's Plan to Kill Online Privacy Rights Everywhere | The Cable - 0 views

  • The United States and its key intelligence allies are quietly working behind the scenes to kneecap a mounting movement in the United Nations to promote a universal human right to online privacy, according to diplomatic sources and an internal American government document obtained by The Cable. The diplomatic battle is playing out in an obscure U.N. General Assembly committee that is considering a proposal by Brazil and Germany to place constraints on unchecked internet surveillance by the National Security Agency and other foreign intelligence services. American representatives have made it clear that they won't tolerate such checks on their global surveillance network. The stakes are high, particularly in Washington -- which is seeking to contain an international backlash against NSA spying -- and in Brasilia, where Brazilian President Dilma Roussef is personally involved in monitoring the U.N. negotiations.
  • The Brazilian and German initiative seeks to apply the right to privacy, which is enshrined in the International Covenant on Civil and Political Rights (ICCPR), to online communications. Their proposal, first revealed by The Cable, affirms a "right to privacy that is not to be subjected to arbitrary or unlawful interference with their privacy, family, home, or correspondence." It notes that while public safety may "justify the gathering and protection of certain sensitive information," nations "must ensure full compliance" with international human rights laws. A final version of the text is scheduled to be presented to U.N. members on Wednesday evening and the resolution is expected to be adopted next week. A draft of the resolution, which was obtained by The Cable, calls on states "to respect and protect the right to privacy," asserting that the "same rights that people have offline must also be protected online, including the right to privacy." It also requests that the U.N. high commissioner for human rights, Navi Pillay, present the U.N. General Assembly next year with a report on the protection and promotion of the right to privacy, a provision that will ensure the issue remains on the front burner.
  • Publicly, U.S. representatives say they're open to an affirmation of privacy rights. "The United States takes very seriously our international legal obligations, including those under the International Covenant on Civil and Political Rights," Kurtis Cooper, a spokesman for the U.S. mission to the United Nations, said in an email. "We have been actively and constructively negotiating to ensure that the resolution promotes human rights and is consistent with those obligations." But privately, American diplomats are pushing hard to kill a provision of the Brazilian and German draft which states that "extraterritorial surveillance" and mass interception of communications, personal information, and metadata may constitute a violation of human rights. The United States and its allies, according to diplomats, outside observers, and documents, contend that the Covenant on Civil and Political Rights does not apply to foreign espionage.
  • In recent days, the United States circulated to its allies a confidential paper highlighting American objectives in the negotiations, "Right to Privacy in the Digital Age -- U.S. Redlines." It calls for changing the Brazilian and German text so "that references to privacy rights are referring explicitly to States' obligations under ICCPR and remove suggestion that such obligations apply extraterritorially." In other words: America wants to make sure it preserves the right to spy overseas. The U.S. paper also calls on governments to promote amendments that would weaken Brazil's and Germany's contention that some "highly intrusive" acts of online espionage may constitute a violation of freedom of expression. Instead, the United States wants to limit the focus to illegal surveillance -- which the American government claims it never, ever does. Collecting information on tens of millions of people around the world is perfectly acceptable, the Obama administration has repeatedly said. It's authorized by U.S. statute, overseen by Congress, and approved by American courts.
  • "Recall that the USG's [U.S. government's] collection activities that have been disclosed are lawful collections done in a manner protective of privacy rights," the paper states. "So a paragraph expressing concern about illegal surveillance is one with which we would agree." The privacy resolution, like most General Assembly decisions, is neither legally binding nor enforceable by any international court. But international lawyers say it is important because it creates the basis for an international consensus -- referred to as "soft law" -- that over time will make it harder and harder for the United States to argue that its mass collection of foreigners' data is lawful and in conformity with human rights norms. "They want to be able to say ‘we haven't broken the law, we're not breaking the law, and we won't break the law,'" said Dinah PoKempner, the general counsel for Human Rights Watch, who has been tracking the negotiations. The United States, she added, wants to be able to maintain that "we have the freedom to scoop up anything we want through the massive surveillance of foreigners because we have no legal obligations."
  • The United States negotiators have been pressing their case behind the scenes, raising concerns that the assertion of extraterritorial human rights could constrain America's effort to go after international terrorists. But Washington has remained relatively muted about their concerns in the U.N. negotiating sessions. According to one diplomat, "the United States has been very much in the backseat," leaving it to its allies, Australia, Britain, and Canada, to take the lead. There is no extraterritorial obligation on states "to comply with human rights," explained one diplomat who supports the U.S. position. "The obligation is on states to uphold the human rights of citizens within their territory and areas of their jurisdictions."
  • The position, according to Jamil Dakwar, the director of the American Civil Liberties Union's Human Rights Program, has little international backing. The International Court of Justice, the U.N. Human Rights Committee, and the European Court have all asserted that states do have an obligation to comply with human rights laws beyond their own borders, he noted. "Governments do have obligation beyond their territories," said Dakwar, particularly in situations, like the Guantanamo Bay detention center, where the United States exercises "effective control" over the lives of the detainees. Both PoKempner and Dakwar suggested that courts may also judge that the U.S. dominance of the Internet places special legal obligations on it to ensure the protection of users' human rights.
  • "It's clear that when the United States is conducting surveillance, these decisions and operations start in the United States, the servers are at NSA headquarters, and the capabilities are mainly in the United States," he said. "To argue that they have no human rights obligations overseas is dangerous because it sends a message that there is void in terms of human rights protection outside countries territory. It's going back to the idea that you can create a legal black hole where there is no applicable law." There were signs emerging on Wednesday that America may have been making ground in pressing the Brazilians and Germans to back on one of its toughest provisions. In an effort to address the concerns of the U.S. and its allies, Brazil and Germany agreed to soften the language suggesting that mass surveillance may constitute a violation of human rights. Instead, it simply deep "concern at the negative impact" that extraterritorial surveillance "may have on the exercise of and enjoyment of human rights." The U.S., however, has not yet indicated it would support the revised proposal.
  • The concession "is regrettable. But it’s not the end of the battle by any means," said Human Rights Watch’s PoKempner. She added that there will soon be another opportunity to corral America's spies: a U.N. discussion on possible human rights violations as a result of extraterritorial surveillance will soon be taken up by the U.N. High commissioner.
  •  
    Woo-hoo! Go get'em, U.N.
Paul Merrell

The Government's Secret Plan to Shut Off Cellphones and the Internet, Explained | Conne... - 1 views

  • This month, the United States District Court for the District of Columbia ruled that the Department of Homeland Security must make its plan to shut off the Internet and cellphone communications available to the American public. You, of course, may now be thinking: What plan?! Though President Barack Obama swiftly disapproved of ousted Egyptian President Hosni Mubarak turning off the Internet in his country (to quell widespread civil disobedience) in 2011, the US government has the authority to do the same sort of thing, under a plan that was devised during the George W. Bush administration. Many details of the government’s controversial “kill switch” authority have been classified, such as the conditions under which it can be implemented and how the switch can be used. But thanks to a Freedom of Information Act lawsuit filed by the Electronic Privacy Information Center (EPIC), DHS has to reveal those details by December 12 — or mount an appeal. (The smart betting is on an appeal, since DHS has fought the release of this information so far.) Yet here’s what we do know about the government’s “kill switch” plan:
  • What are the constitutional problems? Civil liberties advocates argue that kill switches violate the First Amendment and pose a problem because they aren’t subject to rigorous judicial and congressional oversight. “There is no court in the loop at all, at any stage in the SOP 303 process,” according to the Center for Democracy and Technology. ”The executive branch, untethered by the checks and balances of court oversight, clear instruction from Congress, or transparency to the public, is free to act as it will and in secret.” David Jacobs of EPIC says, “Cutting off communications imposes a prior restraint on speech, so the First Amendment imposes the strictest of limitations…We don’t know how DHS thinks [the kill switch] is consistent with the First Amendment.” He adds, “Such a policy, unbounded by clear rules and oversight, just invites abuse.”
Paul Merrell

Gmail blows up e-mail marketing by caching all images on Google servers | Ars Technica - 1 views

  • Ever wonder why most e-mail clients hide images by default? The reason for the "display images" button is because images in an e-mail must be loaded from a third-party server. For promotional e-mails and spam, usually this server is operated by the entity that sent the e-mail. So when you load these images, you aren't just receiving an image—you're also sending a ton of data about yourself to the e-mail marketer. Loading images from these promotional e-mails reveals a lot about you. Marketers get a rough idea of your location via your IP address. They can see the HTTP referrer, meaning the URL of the page that requested the image. With the referral data, marketers can see not only what client you are using (desktop app, Web, mobile, etc.) but also what folder you were viewing the e-mail in. For instance, if you had a Gmail folder named "Ars Technica" and loaded e-mail images, the referral URL would be "https://mail.google.com/mail/u/0/#label/Ars+Technica"—the folder is right there in the URL. The same goes for the inbox, spam, and any other location. It's even possible to uniquely identify each e-mail, so marketers can tell which e-mail address requested the images—they know that you've read the e-mail. And if it was spam, this will often earn you more spam since the spammers can tell you've read their last e-mail.
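    To make the tracking mechanism concrete: the sender embeds a tiny image whose URL carries a token unique to each recipient, so the request for that image tells the marketer who opened the message, from what IP address, and with what client. A minimal sketch of such a tracking pixel (the domain and parameter names here are hypothetical, not taken from the article):

        <!-- Hypothetical tracking pixel: "uid" is unique per recipient, so fetching
             this 1x1 image tells the sender that this particular address opened the
             message, along with the requester's IP address and referrer. -->
        <img src="https://metrics.example-marketer.com/open.gif?uid=3f9a7c21&amp;c=spring-sale"
             width="1" height="1" alt="" />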
  • But Google has just announced a move that will shut most of these tactics down: it will cache all images for Gmail users. Embedded images will now be saved by Google, and the e-mail content will be modified to display those images from Google's cache, instead of from a third-party server. E-mail marketers will no longer be able to get any information from images—they will see a single request from Google, which will then be used to send the image out to all Gmail users. Unless you click on a link, marketers will have no idea the e-mail has been seen. While this means improved privacy from e-mail marketers, Google will now be digging deeper than ever into your e-mails and literally modifying the contents. If you were worried about e-mail scanning, this may take things a step further. However, if you don't like the idea of cached images, you can turn it off in the settings. This move will allow Google to automatically display images, killing the "display all images" button in Gmail. Google servers should also be faster than the usual third-party image host. Hosting all images sent to all Gmail users sounds like a huge bandwidth and storage undertaking, but if anyone can do it, it's Google. The new image handling will rollout to desktop users today, and it should hit mobile apps sometime in early 2014. There's also a bonus side effect for Google: e-mail marketing is advertising. Google exists because of advertising dollars, but they don't do e-mail marketing. They've just made a competitive form of advertising much less appealing and informative to advertisers. No doubt Google hopes this move pushes marketers to spend less on e-mail and more on Adsense.
  •  
    There's an antitrust angle to this; it could be viewed by a court as anti-competitive. But given the prevailing winds on digital privacy, my guess would be that Google would slide by.
Paul Merrell

This project aims to make '404 not found' pages a thing of the past - 0 views

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, considered a serious loss surrounding the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if both the date of reference and the location of a copy of the targeted content are known by an author, the new code would look like the sketch below. The 404-No-More project’s goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
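    A purely hypothetical sketch of the idea, assuming mset pairs a date of reference with the URI of an archived copy (the attribute syntax is illustrative, not quoted from the actual proposal):

        <!-- Hypothetical "mset" usage: the link records when the page was cited and
             where an archived copy lives, so a reader or browser can fall back to the
             archive if the live URL rots. -->
        <a href="http://example.com/report.html"
           mset="2014-06-15 https://web.archive.org/web/20140615000000/http://example.com/report.html">
          the original report
        </a>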
Paul Merrell

Another judge upholds NSA call tracking - POLITICO.com - 0 views

  • A federal judge in Idaho has upheld the constitutionality of the National Security Agency's program that gathers massive quantities of data on the telephone calls of Americans. The ruling Tuesday from U.S. District Court Judge B. Lynn Winmill leaves the federal government with two wins in lawsuits decided since the program was revealed about a year ago by ex-NSA contractor Edward Snowden. In addition, one judge handling a criminal case ruled that the surveillance did not violate the Constitution. Opponents of the program have only one win: U.S. District Court Judge Richard Leon's ruling in December that the program likely violates the Fourth Amendment. In the new decision, Winmill said binding precedent in the Ninth Circuit holds that call and email metadata are not protected by the Constitution and no warrant is needed to obtain it.
  • "The weight of the authority favors the NSA," wrote Winmill, an appointee of President Bill Clinton. Winmill took note of Leon's contrary decision and called it eloquent, but concluded it departs from current Supreme Court precedent — though perhaps not for long. "Judge Leon’s decision should serve as a template for a Supreme Court opinion. And it might yet," Winmill wrote as he threw out the lawsuit brought by an Idaho registered nurse who objected to the gathering of data on her phone calls. Winmill's opinion (posted here) does not address an argument put forward by some critics of the program, including some lawmakers: that the metadata program violates federal law because it does not fit squarely within the language of the statute used to authorize it.
  •  
    A partial win for the public. The judge makes plain that he disagrees with pre-Snowden disclosure precedent and recommends that the Supreme Court adopt the reasoning of Judge Richard Leon's ruling that finds the NSA call-metadata violative of the Fourth Amendment. The judge says his hands are tied by prior decisions in the Ninth Circuit Court of Appeals that gave an expansive reading to Smith v. Maryland.
Paul Merrell

The Government Can No Longer Track Your Cell Phone Without a Warrant | Motherboard - 0 views

  • The government and police regularly use location data pulled off of cell phone towers to put criminals at the scenes of crimes—often without a warrant. Well, an appeals court ruled today that the practice is unconstitutional, in one of the strongest judicial defenses of technology privacy rights we've seen in a while.  The United States Court of Appeals for the Eleventh Circuit ruled that the government illegally obtained and used Quartavious Davis's cell phone location data to help convict him in a string of armed robberies in Miami and unequivocally stated that cell phone location information is protected by the Fourth Amendment. "In short, we hold that cell site location information is within the subscriber’s reasonable expectation of privacy," the court ruled in an opinion written by Judge David Sentelle. "The obtaining of that data without a warrant is a Fourth Amendment violation."
  • In Davis's case, police used his cell phone's call history against him to put him at the scene of several armed robberies. They obtained a court order—which does not require the government to show probable cause—not a warrant, to do so. From now on, that'll be illegal. The decision applies only in the Eleventh Circuit, but sets a strong precedent for future cases.
  • Indeed, the decision alone is a huge privacy win, but Sentelle's strong language supporting cell phone users' privacy rights is perhaps the most important part of the opinion. Sentelle pushed back against several of the federal government's arguments, including one that suggested that, because cell phone location data based on a caller's closest cell tower isn't precise, it should be readily collectable.  "The United States further argues that cell site location information is less protected than GPS data because it is less precise. We are not sure why this should be significant. We do not doubt that there may be a difference in precision, but that is not to say that the difference in precision has constitutional significance," Sentelle wrote. "That information obtained by an invasion of privacy may not be entirely precise does not change the calculus as to whether obtaining it was in fact an invasion of privacy." The court also cited the infamous US v. Jones Supreme Court decision that held that attaching a GPS to a suspect's car is a "search" under the Fourth Amendment. Sentelle suggested a cell phone user has an even greater expectation of location privacy with his or her cell phone use than a driver does with his or her car. A car, Sentelle wrote, isn't always with a person, while a cell phone, these days, usually is.
  • "One’s cell phone, unlike an automobile, can accompany its owner anywhere. Thus, the exposure of the cell site location information can convert what would otherwise be a private event into a public one," he wrote. "In that sense, cell site data is more like communications data than it is like GPS information. That is, it is private in nature rather than being public data that warrants privacy protection only when its collection creates a sufficient mosaic to expose that which would otherwise be private." Finally, the government argued that, because Davis made outgoing calls, he "voluntarily" gave up his location data. Sentelle rejected that, too, citing a prior decision by a Third Circuit Court. "The Third Circuit went on to observe that 'a cell phone customer has not ‘voluntarily’ shared his location information with a cellular provider in any meaningful way.' That circuit further noted that 'it is unlikely that cell phone customers are aware that their cell phone providers collect and store historical location information,'” Sentelle wrote.
  • "Therefore, as the Third Circuit concluded, 'when a cell phone user makes a call, the only information that is voluntarily and knowingly conveyed to the phone company is the number that is dialed, and there is no indication to the user that making that call will also locate the caller,'" he continued.
  •  
    Another victory for civil libertarians against the surveillance state. Note that this is another decision drawing guidance from the Supreme Court's decision in U.S. v. Jones, handed down shortly before the Edward Snowden leaks came to light, which called for re-examination of the Third Party Doctrine, an older doctrine holding that data given to or generated by third parties is not protected by the Fourth Amendment.
Paul Merrell

Exclusive: How FBI Informant Sabu Helped Anonymous Hack Brazil | Motherboard - 0 views

  • In early 2012, members of the hacking collective Anonymous carried out a series of cyber attacks on government and corporate websites in Brazil. They did so under the direction of a hacker who, unbeknownst to them, was wearing another hat: helping the Federal Bureau of Investigation carry out one of its biggest cybercrime investigations to date. A year after leaked files exposed the National Security Agency's efforts to spy on citizens and companies in Brazil, previously unpublished chat logs obtained by Motherboard reveal that while under the FBI's supervision, Hector Xavier Monsegur, widely known by his online persona, "Sabu," facilitated attacks that affected Brazilian websites. The operation raises questions about how the FBI uses global internet vulnerabilities during cybercrime investigations, how it works with informants, and how it shares information with other police and intelligence agencies. 
  • After his arrest in mid-2011, Monsegur continued to organize cyber attacks while working for the FBI. According to documents and interviews, Monsegur passed targets and exploits to hackers to disrupt government and corporate servers in Brazil and several other countries. Details about his work as a federal informant have been kept mostly secret, aired only in closed-door hearings and in redacted documents that include chat logs between Monsegur and other hackers. The chat logs remain under seal due to a protective order upheld in court, but in April, they and other court documents were obtained by journalists at Motherboard and the Daily Dot. 
Paul Merrell

How an FBI informant orchestrated the Stratfor hack - 0 views

  • Sitting inside a medium-security federal prison in Kentucky, Jeremy Hammond looks defiant and frustrated.  “[The FBI] could've stopped me,” he told the Daily Dot last month at the Federal Correctional Institution, Manchester. “They could've. They knew about it. They could’ve stopped dozens of sites I was breaking into.” Hammond is currently serving the remainder of a 10-year prison sentence in part for his role in one of the most high-profile cyberattacks of the early 21st century. His 2011 breach of Strategic Forecasting, Inc. (Stratfor) left tens of thousands of Americans vulnerable to identity theft and irrevocably damaged the Texas-based intelligence firm's global reputation. He was also indicted for his role in the June 2011 hack of an Arizona state law enforcement agency's computer servers.
  • There's no question of his guilt: Hammond, 29, admittedly hacked into Stratfor’s network and exfiltrated an estimated 60,000 credit card numbers and associated data and millions of emails, information that was later shared with the whistleblower organization WikiLeaks and the hacker collective Anonymous.   Sealed court documents obtained by the Daily Dot and Motherboard, however, reveal that the attack was instigated and orchestrated not by Hammond, but by an informant, with the full knowledge of the Federal Bureau of Investigation (FBI).  In addition to directly facilitating the breach, the FBI left Stratfor and its customers—which included defense contractors, police chiefs, and National Security Agency employees—vulnerable to future attacks and fraud, and it requested knowledge of the data theft to be withheld from affected customers. This decision would ultimately allow for millions of dollars in damages.
Paul Merrell

Marriott fined $600,000 for jamming guest hotspots - SlashGear - 0 views

  • Marriott will cough up $600,000 in penalties after being caught blocking mobile hotspots so that guests would have to pay for its own WiFi services, the FCC has confirmed today. The fine comes after staff at the Gaylord Opryland Hotel and Convention Center in Nashville, Tennessee were found to be jamming individual hotspots and then charging people up to $1,000 per device to get online. Marriott has been operating the center since 2012, and is believed to have been running its interruption scheme since then. The first complaint to the FCC, however, wasn't until March 2013, when one guest warned the Commission that they suspected their hardware had been jammed. An investigation by the FCC's Enforcement Bureau revealed that was, in fact, the case. A WiFi monitoring system installed at the Gaylord Opryland would target access points with de-authentication packets, disconnecting users so that their browsing was interrupted.
  • The FCC deemed Marriott's behaviors as contravening Section 333 of the Communications Act, which states that "no person shall willfully or maliciously interfere with or cause interference to any radio communications of any station licensed or authorized by or under this chapter or operated by the United States Government." In addition to the $600,000 civil penalty, Marriott will have to cease blocking guests, hand over details of any access point containment features to the FCC across its entire portfolio of owned or managed properties, and finally file compliance and usage reports each quarter for the next three years.
  • Update: Marriott has issued the following statement on the FCC ruling: "Marriott has a strong interest in ensuring that when our guests use our Wi-Fi service, they will be protected from rogue wireless hotspots that can cause degraded service, insidious cyber-attacks and identity theft. Like many other institutions and companies in a wide variety of industries, including hospitals and universities, the Gaylord Opryland protected its Wi-Fi network by using FCC-authorized equipment provided by well-known, reputable manufacturers. We believe that the Gaylord Opryland's actions were lawful. We will continue to encourage the FCC to pursue a rulemaking in order to eliminate the ongoing confusion resulting from today's action and to assess the merits of its underlying policy."
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com - 0 views

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly, email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-; - )”
Paul Merrell

PATRIOT Act spying programs on death watch - Seung Min Kim and Kate Tummarello - POLITICO - 0 views

  • With only days left to act and Rand Paul threatening a filibuster, Senate Republicans remain deeply divided over the future of the PATRIOT Act and have no clear path to keep key government spying authorities from expiring at the end of the month. Crucial parts of the PATRIOT Act, including a provision authorizing the government’s controversial bulk collection of American phone records, first revealed by Edward Snowden, are due to lapse May 31. That means Congress has barely a week to figure out a fix before lawmakers leave town for Memorial Day recess at the end of next week. The prospects of a deal look grim: Senate Majority Leader Mitch McConnell on Thursday night proposed just a two-month extension of expiring PATRIOT Act provisions to give the two sides more time to negotiate, but even that was immediately dismissed by critics of the program.
  •  
    A must-read. The major danger is that the Senate could pass the USA Freedom Act, which has already been passed by the House. Passage of that Act, despite its name, would be bad news for civil liberties.  Now is the time to let your Congress critters know that you want them to fight to let the Patriot Act provisions expire on May 31, without any replacement legislation.  Keep in mind that Section 502 does not apply just to telephone metadata. It authorizes the FBI to gather, without notice to its victims, "any tangible thing", specifically including as examples "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The breadth of the section is illustrated by telephone metadata not even being mentioned in the section.  NSA going after your medical records sound far-fetched? Former NSA technical director William Binney says they're already doing it: "Binney alludes to even more extreme intelligence practices that are not yet public knowledge, including the collection of Americans' medical data, the collection and use of client-attorney conversations, and law enforcement agencies' "direct access," without oversight, to NSA databases." https://consortiumnews.com/2015/03/05/seeing-the-stasi-through-nsa-eyes/ So please, contact your Congress critters right now and tell them to sunset the Patriot Act NOW. This will be decided in the next few days, so the sooner you contact them the better.
Paul Merrell

Protocols of the Hackers of Zion? « LobeLog - 0 views

  • When Israeli Prime Minister Benjamin Netanyahu met with Google chairman Eric Schmidt on Tuesday afternoon, he boasted about Israel’s “robust hi-tech and cyber industries.” According to The Jerusalem Post, “Netanyahu also noted that ‘Israel was making great efforts to diversify the markets with which it is trading in the technological field.'” Just how diversified and developed Israeli hi-tech innovation has become was revealed the very next morning, when the Russian cyber-security firm Kaspersky Labs, which claims more than 400 million users internationally, announced that sophisticated spyware with the hallmarks of Israeli origin (although no country was explicitly identified) had targeted three European hotels that had been venues for negotiations over Iran’s nuclear program.
  • Wednesday’s Wall Street Journal, one of the first news sources to break the story, reported that Kaspersky itself had been hacked by malware whose code was remarkably similar to that of a virus attributed to Israel. Code-named “Duqu” because it used the letters DQ in the names of the files it created, the malware had first been detected in 2011. On Thursday, Symantec, another cyber-security firm, announced it too had discovered Duqu 2 on its global network, striking undisclosed telecommunication sites in Europe, North Africa, Hong Kong, and Southeast Asia. It said that Duqu 2 is much more difficult to detect than its predecessor because it lives exclusively in the memory of the computers it infects, rather than writing files to a drive or disk. The original Duqu shared coding with — and was written on the same platform as — Stuxnet, the computer worm that partially disabled enrichment centrifuges in Iranian nuclear power plants, according to a 2012 report in The New York Times. Intelligence and military experts said that Stuxnet was first tested at Dimona, a nuclear-reactor complex in the Negev desert that houses Israel’s own clandestine nuclear weapons program. While Stuxnet is widely believed to have been a joint Israeli-U.S. operation, Israel seems to have developed and implemented Duqu on its own.
  • Coding of the spyware that targeted two Swiss hotels and one in Vienna—both sites where talks were held between the P5+1 and Iran—so closely resembled that of Duqu that Kaspersky has dubbed it “Duqu 2.” A Kaspersky report contends that the new and improved Duqu would have been almost impossible to create without access to the original Duqu code. Duqu 2’s one hundred “modules” enabled the cyber attackers to commandeer infected computers, compress video feeds  (including those from hotel surveillance cameras), monitor and disrupt telephone service and Wi-Fi, and steal electronic files. The hackers’ penetration of computers used by the front desk would have allowed them to determine the room numbers of negotiators and delegation members. Duqu 2 also gave the hackers the ability to operate two-way microphones in the hotels’ elevators and control their alarm systems.
Paul Merrell

Hackers Prove Fingerprints Are Not Secure, Now What? | nsnbc international - 0 views

  • The Office of Personnel Management (OPM) recently revealed that an estimated 5.6 million government employees were affected by the hack, not 1.1 million as previously assumed.
  • Samuel Schumach, spokesman for the OPM, said: “As part of the government’s ongoing work to notify individuals affected by the theft of background investigation records, the Office of Personnel Management and the Department of Defense have been analyzing impacted data to verify its quality and completeness. Of the 21.5 million individuals whose Social Security Numbers and other sensitive information were impacted by the breach, the subset of individuals whose fingerprints have been stolen has increased from a total of approximately 1.1 million to approximately 5.6 million.” This endeavor drew on the Department of Defense (DoD), the Department of Homeland Security (DHS), the National Security Agency (NSA), and the Pentagon. Schumach added that “if, in the future, new means are developed to misuse the fingerprint data, the government will provide additional information to individuals whose fingerprints may have been stolen in this breach.” However, we do not need to wait for the future for fingerprint data to be misused and coveted by hackers.
  • Look no further than the security flaws in Samsung’s new Galaxy S5 smartphone, as demonstrated by researchers at Security Research Labs (SRL), who showed how fingerprints, iris scans and other biometric identifiers could be fabricated and yet authenticated by the Apple Touch ID fingerprint scanner. The shocking part of this demonstration is that this hack was achieved less than 2 days after the technology was released to the public by Apple. Ben Schlabs, a researcher for SRL, explained: “We expected we’d be able to spoof the S5’s Finger Scanner, but I hoped it would at least be a challenge. The S5 Finger Scanner feature offers nothing new except—because of the way it is implemented in this Android device—slightly higher risk than that already posed by previous devices.” Schlabs and other researchers discovered that “the S5 has no mechanism requiring a password when encountering a large number of incorrect finger swipes.” By rebooting the smartphone, Schlabs could force “the handset to accept an unlimited number of incorrect swipes without requiring users to enter a password [and] the S5 fingerprint authenticator [could] be associated with sensitive banking or payment apps such as PayPal.”
  • Schlabs said: “Perhaps most concerning is that Samsung does not seem to have learned from what others have done less poorly. Not only is it possible to spoof the fingerprint authentication even after the device has been turned off, but the implementation also allows for seemingly unlimited authentication attempts without ever requiring a password. Incorporation of fingerprint authentication into highly sensitive apps such as PayPal gives a would-be attacker an even greater incentive to learn the simple skill of fingerprint spoofing.” Last year, hackers from the Chaos Computer Club (CCC) proved Apple wrong when the corporation insisted that its new iPhone 5S fingerprint sensor is “a convenient and highly secure way to access your phone.” CCC stated that it is as easy as stealing a fingerprint from a drinking glass – and anyone can do it.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
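    A minimal sketch of that kind of class-based extension (the class names are illustrative, not drawn from any particular microformat or publisher schema):

        <!-- Plain XHTML, with "class" carrying the extra semantics a publisher cares about -->
        <div class="chapter">
          <p class="epigraph">An opening quotation, distinguished from body text only by its class.</p>
          <p class="summary">A one-paragraph abstract of the chapter.</p>
          <p>Ordinary body text, tagged as a plain paragraph.</p>
        </div>

    A stylesheet, a search index, or a conversion script can key on those class values much as a DocBook toolchain keys on element names.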
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
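    Cracking one open bears this out: a typical archive holds a mimetype file, a META-INF/container.xml pointing at the package document, and the XHTML chapters themselves. As a sketch, the container file looks roughly like this (the OEBPS/content.opf path is a common convention, not a requirement):

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- META-INF/container.xml: the fixed entry point inside the zip, pointing at
             the package file that lists the book's metadata and XHTML content documents. -->
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>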
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
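    The same "unzip and look" approach works on an IDML file. Below is a minimal sketch, with a hypothetical file name, that simply groups the component XML files by their top-level folder rather than asserting a particular internal layout.
        # Minimal sketch: an IDML document is likewise a zip archive of XML files.
        # "layout.idml" is a hypothetical file name.
        import zipfile
        from collections import defaultdict

        groups = defaultdict(list)
        with zipfile.ZipFile("layout.idml") as idml:
            for name in idml.namelist():
                top = name.split("/", 1)[0]   # e.g. Spreads, MasterSpreads, Stories
                groups[top].append(name)

        for folder, files in sorted(groups.items()):
            print(folder, "-", len(files), "file(s)")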
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
  • To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
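    Clean-up of this kind can also be scripted rather than done entirely by hand. The sketch below, in Python with lxml, folds consecutive paragraphs carrying a hypothetical "list-item" class into a proper XHTML unordered list; the class value is illustrative, since the export uses whatever style names the InDesign document defines.
        # Minimal sketch: fold consecutive <p class="list-item"> elements into <ul>/<li>.
        # The "list-item" class value is illustrative, not InDesign's actual export value.
        from lxml import etree

        doc = etree.fromstring("""
        <body>
          <p>Intro paragraph.</p>
          <p class="list-item">First point</p>
          <p class="list-item">Second point</p>
          <p>Closing paragraph.</p>
        </body>""")

        for p in doc.xpath('//p[@class="list-item"]'):
            prev = p.getprevious()
            if prev is not None and prev.tag == "ul":
                ul = prev                      # extend the list we already started
            else:
                ul = etree.Element("ul")       # start a new list where the paragraph was
                p.addprevious(ul)
            li = etree.SubElement(ul, "li")
            li.text = p.text
            p.getparent().remove(p)            # drop the presentation-oriented <p>

        print(etree.tostring(doc, pretty_print=True).decode())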
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
  • Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of W3C specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also include it as a standard tool), though any XSLT processor would work.
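    To show the shape of such a transformation without reproducing the real script, here is a minimal, hypothetical example run from Python with lxml (the same stylesheet could equally be run from the command line with xsltproc); the ICML-flavoured output element and style names are purely illustrative, not the actual IDML/ICML schema.
        # Minimal sketch: an XSLT transform run from Python via lxml.
        # The stylesheet and its ICML-flavoured output element names are illustrative only.
        from lxml import etree

        xslt = etree.XML("""
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:x="http://www.w3.org/1999/xhtml">
          <xsl:template match="/">
            <Story>
              <xsl:apply-templates select="//x:p"/>
            </Story>
          </xsl:template>
          <xsl:template match="x:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <Content><xsl:value-of select="."/></Content>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>""")

        transform = etree.XSLT(xslt)
        xhtml = etree.XML("""
        <html xmlns="http://www.w3.org/1999/xhtml">
          <body><p>First paragraph.</p><p>Second paragraph.</p></body>
        </html>""")

        print(str(transform(xhtml)))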
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
  • Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
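    For the curious, the sketch below shows roughly what a tool like eCub does when it wraps XHTML as ePub: the archive needs an uncompressed mimetype entry first, a META-INF/container.xml pointing at the package file, and then the content itself. This is a bare-bones, hypothetical illustration; a real ePub also needs a complete OPF package document and a table-of-contents file, which eCub generates for you.
        # Minimal sketch of the ePub wrapper: a zip whose first entry is an
        # uncompressed "mimetype", plus container.xml and the XHTML content.
        # The OPF package document and table of contents are omitted here.
        import zipfile

        chapter = """<html xmlns="http://www.w3.org/1999/xhtml">
        <head><title>Chapter 1</title></head>
        <body><h1>Chapter 1</h1><p>Hello, ePub.</p></body>
        </html>"""

        container = """<?xml version="1.0"?>
        <container version="1.0"
                   xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf"
                      media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>"""

        with zipfile.ZipFile("book.epub", "w") as epub:
            # The mimetype entry must come first and must be stored, not compressed.
            epub.writestr("mimetype", "application/epub+zip",
                          compress_type=zipfile.ZIP_STORED)
            epub.writestr("META-INF/container.xml", container,
                          compress_type=zipfile.ZIP_DEFLATED)
            epub.writestr("OEBPS/chapter1.xhtml", chapter,
                          compress_type=zipfile.ZIP_DEFLATED)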
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is an XML vocabulary that browsers render natively, and so is compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" after Sun purchased the company in 1999 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of its application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

NSA Spying Relies on AT&T's 'Extreme Willingness to Help' - ProPublica - 0 views

  • The National Security Agency’s ability to spy on vast quantities of Internet traffic passing through the United States has relied on its extraordinary, decades-long partnership with a single company: the telecom giant AT&T. While it has long been known that American telecommunications companies worked closely with the spy agency, newly disclosed NSA documents show that the relationship with AT&T has been considered unique and especially productive. One document described it as “highly collaborative,” while another lauded the company’s “extreme willingness to help.”
  • AT&T’s cooperation has involved a broad range of classified activities, according to the documents, which date from 2003 to 2013. AT&T has given the NSA access, through several methods covered under different legal rules, to billions of emails as they have flowed across its domestic networks. It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T. The NSA’s top-secret budget in 2013 for the AT&T partnership was more than twice that of the next-largest such program, according to the documents. The company installed surveillance equipment in at least 17 of its Internet hubs on American soil, far more than its similarly sized competitor, Verizon. And its engineers were the first to try out new surveillance technologies invented by the eavesdropping agency. One document reminds NSA officials to be polite when visiting AT&T facilities, noting: “This is a partnership, not a contractual relationship.” The documents, provided by the former agency contractor Edward Snowden, were jointly reviewed by The New York Times and ProPublica.
  • It is not clear if the programs still operate in the same way today. Since the Snowden revelations set off a global debate over surveillance two years ago, some Silicon Valley technology companies have expressed anger at what they characterize as NSA intrusions and have rolled out new encryption to thwart them. The telecommunications companies have been quieter, though Verizon unsuccessfully challenged a court order for bulk phone records in 2014. At the same time, the government has been fighting in court to keep the identities of its telecom partners hidden. In a recent case, a group of AT&T customers claimed that the NSA’s tapping of the Internet violated the Fourth Amendment protection against unreasonable searches. This year, a federal judge dismissed key portions of the lawsuit after the Obama administration argued that public discussion of its telecom surveillance efforts would reveal state secrets, damaging national security.
Paul Merrell

U.K. Police Confirm Ongoing Criminal Probe of Snowden Leak Journalists - 0 views

  • A secretive British police investigation focusing on journalists working with Edward Snowden’s leaked documents remains ongoing two years after it was quietly launched, The Intercept can reveal. London’s Metropolitan Police Service has admitted it is still carrying out the probe, which is being led by its counterterrorism department, after previously refusing to confirm or deny its existence on the grounds that doing so could be “detrimental to national security.” The disclosure was made by police in a letter sent to this reporter Tuesday, concluding a seven-month freedom of information battle that saw the London force repeatedly attempt to withhold basic details about the status of the case. It reversed its position this week only after an intervention from the Information Commissioner’s Office, the public body that enforces the U.K.’s freedom of information laws.
Paul Merrell

XKeyscore Exposé Reaffirms the Need to Rid the Web of Tracking Cookies | Elec... - 0 views

  • The Intercept published an exposé on the NSA's XKeyscore program. Along with information on the breadth and scale of the NSA's metadata collection, The Intercept revealed how the NSA relies on unencrypted cookie data to identify users. As The Intercept says: "The NSA’s ability to piggyback off of private companies’ tracking of their own users is a vital instrument that allows the agency to trace the data it collects to individual users. It makes no difference if visitors switch to public Wi-Fi networks or connect to VPNs to change their IP addresses: the tracking cookie will follow them around as long as they are using the same web browser and fail to clear their cookies." The NSA slides released by The Intercept give detailed guides to understanding the data transmitted by these cookies, as well as how to find unique machine identifiers that analysts can use to differentiate between multiple machines using the same IP address. We've written before about how spy agencies piggyback on social media account data to find Internet users' names or other identifying info, and these slides drive home the point that HTTP cookies leave users vulnerable to government surveillance, since any intermediary (or spy agency) can read the sensitive data they contain.
  • Worse yet, most of the time these identifying cookies come from third-party sources on webpages, and users have no meaningful way to opt out of receiving them (short of blocking all third party cookies) since advertisers (the main server of these types of cookies) refuse to honor the Do Not Track header.  Browser makers could help address this sort of non-consensual tracking by both advertisers and the NSA with some simple technical changes—changes that have been shown to reduce the number of third party cookies received by 67%. So far, though, they've been unwilling to build privacy protecting features in by default. Until they do, the best way for users to protect themselves is by installing a privacy protecting app like Privacy Badger, which is designed to block these types of uniquely identifying tracking cookies, or HTTPS Everywhere to block the transmission of HTTP cookies.
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them ... - 0 views

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • ...2 more annotations...
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept - 0 views

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • ...9 more annotations...
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under “More Targets!” Those companies include Check Point software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up their implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated by a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article: Kaspersky User-Agent Strings — NSA; Project CAMBERDADA — NSA; NDIST — GCHQ’s Developing Cyber Defence Mission; GCHQ Application for Renewal of Warrant GPW/1160; Software Reverse Engineering — GCHQ; Reverse Engineering — GCHQ Wiki; Malware Analysis & Reverse Engineering — ACNO Skill Levels — GCHQ
Paul Merrell

Google Chrome Listening In To Your Room Shows The Importance Of Privacy Defense In Depth - 0 views

  • Yesterday, news broke that Google has been stealth downloading audio listeners onto every computer that runs Chrome, and transmits audio data back to Google. Effectively, this means that Google had taken for itself the right to listen to every conversation in every room that runs Chrome somewhere, without any kind of consent from the people eavesdropped on. In official statements, Google shrugged off the practice with what amounts to “we can do that”. It looked like just another bug report. "When I start Chromium, it downloads something." Followed by strange status information that notably included the lines "Microphone: Yes" and "Audio Capture Allowed: Yes".
  • Without consent, Google’s code had downloaded a black box of code that – according to itself – had turned on the microphone and was actively listening to your room. A brief explanation of the Open-source / Free-software philosophy is needed here. When you’re installing a version of GNU/Linux like Debian or Ubuntu onto a fresh computer, thousands of really smart people have analyzed every line of human-readable source code before that operating system was built into computer-executable binary code, to make it common and open knowledge what the machine actually does instead of trusting corporate statements on what it’s supposed to be doing. Therefore, you don’t install black boxes onto a Debian or Ubuntu system; you use software repositories that have gone through this source-code audit-then-build process. Maintainers of operating systems like Debian and Ubuntu use many so-called “upstreams” of source code to build the final product. Chromium, the open-source version of Google Chrome, had abused its position as trusted upstream to insert lines of source code that bypassed this audit-then-build process, and which downloaded and installed a black box of unverifiable executable code directly onto computers, essentially rendering them compromised. We don’t know and can’t know what this black box does. But we see reports that the microphone has been activated, and that Chromium considers audio capture permitted.
  • This was supposedly to enable the “Ok, Google” behavior – that when you say certain words, a search function is activated. Certainly a useful feature. Certainly something that enables eavesdropping of every conversation in the entire room, too. Obviously, your own computer isn’t the one to analyze the actual search command. Google’s servers do. Which means that your computer had been stealth configured to send what was being said in your room to somebody else, to a private company in another country, without your consent or knowledge, an audio transmission triggered by… an unknown and unverifiable set of conditions. Google had two responses to this. The first was to introduce a practically-undocumented switch to opt out of this behavior, which is not a fix: the default install will still wiretap your room without your consent, unless you opt out, and more importantly, know that you need to opt out, which is nowhere a reasonable requirement. But the second was more of an official statement following technical discussions on Hacker News and other places. That official statement amounted to three parts (paraphrased, of course):
  • ...4 more annotations...
  • 1) Yes, we’re downloading and installing a wiretapping black-box to your computer. But we’re not actually activating it. We did take advantage of our position as trusted upstream to stealth-insert code into open-source software that installed this black box onto millions of computers, but we would never abuse the same trust in the same way to insert code that activates the eavesdropping-blackbox we already downloaded and installed onto your computer without your consent or knowledge. You can look at the code as it looks right now to see that the code doesn’t do this right now.
    2) Yes, Chromium is bypassing the entire source code auditing process by downloading a pre-built black box onto people’s computers. But that’s not something we care about, really. We’re concerned with building Google Chrome, the product from Google. As part of that, we provide the source code for others to package if they like. Anybody who uses our code for their own purpose takes responsibility for it. When this happens in a Debian installation, it is not Google Chrome’s behavior, this is Debian Chromium’s behavior. It’s Debian’s responsibility entirely.
    3) Yes, we deliberately hid this listening module from the users, but that’s because we consider this behavior to be part of the basic Google Chrome experience. We don’t want to show all modules that we install ourselves.
  • If you think this is an excusable and responsible statement, raise your hand now. Now, it should be noted that this was Chromium, the open-source version of Chrome. If somebody downloads the Google product Google Chrome, as in the prepackaged binary, you don’t even get a theoretical choice. You’re already downloading a black box from a vendor. In Google Chrome, this is all included from the start. This episode highlights the need for hard, not soft, switches to all devices – webcams, microphones – that can be used for surveillance. A software on/off switch for a webcam is no longer enough, a hard shield in front of the lens is required. A software on/off switch for a microphone is no longer enough, a physical switch that breaks its electrical connection is required. That’s how you defend against this in depth.
  • Of course, people were quick to downplay the alarm. “It only listens when you say ‘Ok, Google’.” (Ok, so how does it know to start listening just before I’m about to say ‘Ok, Google?’) “It’s no big deal.” (A company stealth installs an audio listener that listens to every room in the world it can, and transmits audio data to the mothership when it encounters an unknown, possibly individually tailored, list of keywords – and it’s no big deal!?) “You can opt out. It’s in the Terms of Service.” (No. Just no. This is not something that is the slightest amount of permissible just because it’s hidden in legalese.) “It’s opt-in. It won’t really listen unless you check that box.” (Perhaps. We don’t know, Google just downloaded a black box onto my computer. And it may not be the same black box as was downloaded onto yours.) Early last decade, privacy activists practically yelled and screamed that the NSA’s taps of various points of the Internet and telecom networks had the technical potential for enormous abuse against privacy. Everybody else dismissed those points as basically tinfoilhattery – until the Snowden files came out, and it was revealed that precisely everybody involved had abused their technical capability for invasion of privacy as far as was possible. Perhaps it would be wise to not repeat that exact mistake. Nobody, and I really mean nobody, is to be trusted with a technical capability to listen to every room in the world, with listening profiles customizable at the identified-individual level, on the mere basis of “trust us”.
  • Privacy remains your own responsibility.
  •  
    And of course, Google would never succumb to a subpoena requiring it to turn over the audio stream to the NSA. The Tor Browser just keeps looking better and better. https://www.torproject.org/projects/torbrowser.html.en