Future of the Web: Group items tagged "Human Right"

Paul Merrell

'Manhunting Timeline' Further Suggests US Pressured Countries to Prosecute WikiLeaks Ed... - 0 views

  • An entry in something the government calls a “Manhunting Timeline” suggests that the United States pressured officials of countries around the world to prosecute WikiLeaks editor-in-chief, Julian Assange, in 2010. The file—marked unclassified, revealed by National Security Agency whistleblower Edward Snowden and published by The Intercept—is dated August 2010. Under the headline, “United States, Australia, Great Britain, Germany, Iceland” – it states: The United States on 10 August urged other nations with forces in Afghanistan, including Australia, United Kingdom and Germany, to consider filing criminal charges against Julian Assange, founder of the rogue WikiLeaks Internet website and responsible for the unauthorized publication of over 70,000 classified documents covering the war in Afghanistan. The documents may have been provided to WikiLeaks by Army Private First Class Bradley Manning. The appeal exemplifies the start of an international effort to focus the legal element of national power upon non-state actor Assange and the human network that supports WikiLeaks. Another document—a top-secret page from an internal wiki—indicates there has been discussion in the NSA with the Threat Operations Center Oversight and Compliance (NOC) and Office of General Counsel (OGC) on the legality of designating WikiLeaks a “malicious foreign actor” and whether this would make it permissible to conduct surveillance on Americans accessing the website. “Can we treat a foreign server who stores or potentially disseminates leaked or stolen data on its server as a ‘malicious foreign actor’ for the purpose of targeting with no defeats?” Examples: WikiLeaks, thepiratebay.org). The NOC/OGC answered, “Let me get back to you.” (The page does not indicate if anyone ever got back to the NSA. And “defeats” essentially means protections.)
  • GCHQ, the NSA’s counterpart in the UK, had a program called “ANTICRISIS GIRL,” which could engage in “targeted website monitoring.” This means data of hundreds of users accessing a website, like WikiLeaks, could be collected. The IP addresses of readers and supporters could be monitored. The agency could even target the publisher if it had a public dropbox or submission system. NSA and GCHQ could also target the foreign “branches” of the hacktivist group, Anonymous. Another question from the wiki entry asks, “Is it okay to query against a foreign server known to be malicious even if there is a possibility that US persons could be using it as well? Example: thepiratebay.org.” The NOC/OGC responded, “Okay to go after foreign servers which US people use also (with no defeats). But try to minimize to ‘post’ only for example to filter out non-pertinent information.” WikiLeaks is not an example in this question; however, if it were designated a “malicious foreign actor,” then the NSA could run such queries against American users as well.
  • Michael Ratner, a lawyer from the Center for Constitutional Rights (CCR) who represents WikiLeaks, said on “Democracy Now!” that this shows Assange has every reason to fear what would happen if he set foot outside of the embassy. The files show some of the extent to which the US and UK have tried to destroy WikiLeaks. CCR added in a statement, “These NSA documents should make people understand why Julian Assange was granted diplomatic asylum, why he must be given safe passage to Ecuador, and why he must keep himself out of the hands of the United States and apparently other countries as well. These revelations only corroborate the expectation that Julian Assange is on a US target list for prosecution under the archaic “Espionage Act,” for what is nothing more than publishing evidence of government misconduct.” “These documents demonstrate that the political persecution of WikiLeaks is very much alive,” Baltasar Garzón, the Spanish former judge who now represents the group, told The Intercept. “The paradox is that Julian Assange and the WikiLeaks organization are being treated as a threat instead of what they are: a journalist and a media organization that are exercising their fundamental right to receive and impart information in its original form, free from omission and censorship, free from partisan interests, free from economic or political pressure.”
Paul Merrell

China Pressures U.S. Companies to Buckle on Strong Encryption and Surveillance - 0 views

  • Before Chinese President Xi Jinping visits President Obama, he and Chinese executives have some business in Seattle: pressing U.S. tech companies, hungry for the Chinese market, to comply with the country’s new stringent and suppressive Internet policies. The New York Times reported last week that Chinese authorities sent a letter to some U.S. tech firms seeking a promise they would not harm China’s national security. That might require such things as forcing users to register with their real names, storing Chinese citizens’ data locally where the government can access it, and building government “back doors” into encrypted communication products for better surveillance. China’s new national security law calls for systems that are “secure and controllable”, which industry groups told the Times in July means companies will have to hand over encryption keys or even source code to their products. Among the big names joining Xi at Wednesday’s U.S.-China Internet Industry Forum: Apple, Google, Facebook, IBM, and Microsoft.
  • The meeting comes as U.S. law enforcement officials have been pressuring companies to give them a way to access encrypted communications. The technology community has responded by pointing out that any sort of hole for law enforcement weakens the entire system to attack from outside bad actors—such as China, which has been tied to many instances of state-sponsored hacking into U.S systems. In fact, one argument privacy advocates have repeatedly made is that back doors for law enforcement would set a dangerous precedent when countries like China want the same kind of access to pursue their own domestic political goals. But here, potentially, the situation has been reversed, with China using its massive economic leverage to demand that sort of access right now. Human rights groups are urging U.S. companies not to give in.
Paul Merrell

'UK surveillance is worse than 1984' says UN privacy chief (Wired UK) - 0 views

  • The UN's newly appointed special rapporteur on privacy, Joseph Cannataci, has described digital surveillance in the UK as "worse" than anything imagined in George Orwell's totalitarian dystopia 1984. Speaking to the Guardian, Cannataci -- who doesn't own a Facebook account or use Twitter -- lambasted the oversight of British digital surveillance as "a rather bad joke at its citizens' expense". Warning against the steady erosion of privacy and increasing levels of government intrusion, he also drew sinister parallels with Orwell's vision of a mass-surveilled society, adding that today's reality was far worse than the fiction: "At least Winston [a character in Orwell's 1984] was able to go out in the countryside and go under a tree and expect there wouldn't be any screen, as it was called. Whereas today there are many parts of the English countryside where there are more cameras than George Orwell could ever have imagined."
  • Cannataci, who holds posts as a professor of technology of law at the University of Groningen, and as head of the department of Information Policy and Governance at the University of Malta, also called for a "Geneva convention-style law" for the internet. "Some people may not want to buy into it. But you know, if one takes the attitude that some countries will not play ball, then, for example, the chemical weapons agreement would never have come about."
  • As part of his new role -- which elevates digital privacy to the same level of importance as other human rights -- Cannataci has vowed to begin systematically reviewing government policies and the business models of large corporations, which he accuses of "very often taking the data that you never even knew they were taking". Although the privacy chief admits that his mandate is more than likely "impossible to achieve in the next three years", he stressed the importance of a "longer-term view" in an effort to help protect people's data and safeguard their digital rights.
Paul Merrell

Here Are All the Sketchy Government Agencies Buying Hacking Team's Spy Tech | Motherboard - 0 views

  • They say what goes around comes around, and there's perhaps nowhere that rings more true than in the world of government surveillance. Such was the case on Monday morning when Hacking Team, the Italian company known for selling electronic intrusion tools to police and federal agencies around the world, awoke to find that it had been hacked itself—big time—apparently exposing its complete client list, email spools, invoices, contracts, source code, and more. Those documents show that not only has the company been selling hacking tools to a long list of foreign governments with dubious human rights records, but it’s also establishing a nice customer base right here in the good old US of A. The cache, which sources told Motherboard is legitimate, contains more than 400 gigabytes of files, many of which confirm previous reports that the company has been selling industrial-grade surveillance software to authoritarian governments. Hacking Team is known in the surveillance world for its flagship hacking suite, Remote Control System (RCS) or Galileo, which allows its government and law enforcement clients to secretly install “implants” on remote machines that can steal private emails, record Skype calls, and even monitor targets through their computer's webcam. Hacking Team in North America
  • According to leaked contracts, invoices and an up-to-date list of customer subscriptions, Hacking Team’s clients—which the company has consistently refused to name—also include Kazakhstan, Azerbaijan, Oman, Saudi Arabia, Uzbekistan, Bahrain, Ethiopia, Nigeria, Sudan and many others. The list of names matches the findings of Citizen Lab, a research lab at the University of Toronto's Munk School of Global Affairs that previously found traces of Hacking Team on the computers of journalists and activists around the world. Last year, the Lab's researchers mapped out the worldwide collection infrastructure used by Hacking Team's customers to covertly transport stolen data, unveiling a massive network comprised of servers based in 21 countries. Reporters Without Borders later named the company one of the “Enemies of the Internet” in its annual report on government surveillance and censorship.
  • we’ve only scratched the surface of this massive leak, and it’s unclear how Hacking Team will recover from having its secrets spilling across the internet for all to see. In the meantime, the company is asking all customers to stop using its spyware—and likely preparing for the worst.
Paul Merrell

ExposeFacts - For Whistleblowers, Journalism and Democracy - 0 views

  • Launched by the Institute for Public Accuracy in June 2014, ExposeFacts.org represents a new approach for encouraging whistleblowers to disclose information that citizens need to make truly informed decisions in a democracy. From the outset, our message is clear: “Whistleblowers Welcome at ExposeFacts.org.” ExposeFacts aims to shed light on concealed activities that are relevant to human rights, corporate malfeasance, the environment, civil liberties and war. At a time when key provisions of the First, Fourth and Fifth Amendments are under assault, we are standing up for a free press, privacy, transparency and due process as we seek to reveal official information—whether governmental or corporate—that the public has a right to know. While no software can provide an ironclad guarantee of confidentiality, ExposeFacts—assisted by the Freedom of the Press Foundation and its “SecureDrop” whistleblower submission system—is utilizing the latest technology on behalf of anonymity for anyone submitting materials via the ExposeFacts.org website. As journalists we are committed to the goal of protecting the identity of every source who wishes to remain anonymous.
  • The seasoned editorial board of ExposeFacts will be assessing all the submitted material and, when deemed appropriate, will arrange for journalistic release of information. In exercising its judgment, the editorial board is able to call on the expertise of the ExposeFacts advisory board, which includes more than 40 journalists, whistleblowers, former U.S. government officials and others with wide-ranging expertise. We are proud that Pentagon Papers whistleblower Daniel Ellsberg was the first person to become a member of the ExposeFacts advisory board. The icon below links to a SecureDrop implementation for ExposeFacts overseen by the Freedom of the Press Foundation and is only accessible using the Tor browser. As the Freedom of the Press Foundation notes, no one can guarantee 100 percent security, but this provides a “significantly more secure environment for sources to get information than exists through normal digital channels, but there are always risks.” ExposeFacts follows all guidelines as recommended by Freedom of the Press Foundation, and whistleblowers should too; the SecureDrop onion URL should only be accessed with the Tor browser — and, for added security, be running the Tails operating system. Whistleblowers should not log-in to SecureDrop from a home or office Internet connection, but rather from public wifi, preferably one you do not frequent. Whistleblowers should keep to a minimum interacting with whistleblowing-related websites unless they are using such secure software.
  •  
    A new resource site for whistle-blowers, somewhat in the tradition of WikiLeaks, but designed for encrypted communications between whistleblowers and journalists.  This one has an impressive board of advisors that includes several names I know and tend to trust, among them former whistle-blowers Daniel Ellsberg, Ray McGovern, Thomas Drake, William Binney, and Ann Wright. Leaked records can only be dropped from a web browser running the Tor anonymizer software; the site uses the SecureDrop system originally developed by Aaron Swartz. They strongly recommend using the Tails secure operating system that can be installed to a thumb drive and leaves no tracks on the host machine. https://tails.boum.org/index.en.html Curious, I downloaded Tails and installed it to a virtual machine. It's a heavily customized version of Debian. It has a very nice Gnome desktop and blocks any attempt to connect to an external network by means other than installed software that demands encrypted communications. For example, web sites can only be viewed via the Tor anonymizing proxy network. It does take longer for web pages to load because they are moving over a chain of proxies, but even so it's faster than pages loaded in the dial-up modem days, even for web pages that are loaded with graphics, javascript, and other cruft. E.g., about 2 seconds for New York Times pages. All cookies are treated by default as session cookies, so they disappear when you close the page or the browser. I love my Linux Mint desktop, but I am thinking hard about switching that box to Tails. I've been looking for methods to send a lot more encrypted stuff down the pipe for NSA to store. Tails looks to make that not only easy, but unavoidable. From what I've gathered so far, if you want to install more software on Tails, it takes about an hour to create a customized version and then update your Tails installation from a new ISO file. Tails has a wonderful odor of having been designed for secure computing.
Paul Merrell

European Parliament Urges Protection for Edward Snowden - The New York Times - 0 views

  • The European Parliament narrowly adopted a nonbinding but nonetheless forceful resolution on Thursday urging the 28 nations of the European Union to recognize Edward J. Snowden as a “whistle-blower and international human rights defender” and shield him from prosecution. On Twitter, Mr. Snowden, the former National Security Agency contractor who leaked millions of documents about electronic surveillance by the United States government, called the vote a “game-changer.” But the resolution has no legal force and limited practical effect for Mr. Snowden, who is living in Russia on a three-year residency permit. Whether to grant Mr. Snowden asylum remains a decision for the individual European governments, and none have done so thus far. Still, the resolution was the strongest statement of support seen for Mr. Snowden from the European Parliament. At the same time, the close vote — 285 to 281 — suggested the extent to which some European lawmakers are wary of alienating the United States.
  • The resolution calls on European Union members to “drop any criminal charges against Edward Snowden, grant him protection and consequently prevent extradition or rendition by third parties.” In June 2013, shortly after Mr. Snowden’s leaks became public, the United States charged him with theft of government property and violations of the Espionage Act of 1917. By then, he had flown to Moscow, where he spent weeks in legal limbo before he was granted temporary asylum and, later, a residency permit. Four Latin American nations have offered him permanent asylum, but he does not believe he could travel from Russia to those countries without running the risk of arrest and extradition to the United States along the way.
  • The White House, which has used diplomatic efforts to discourage even symbolic resolutions of support for Mr. Snowden, immediately criticized the resolution. “Our position has not changed,” said Ned Price, a spokesman for the National Security Council in Washington. “Mr. Snowden is accused of leaking classified information and faces felony charges here in the United States. As such, he should be returned to the U.S. as soon as possible, where he will be accorded full due process.” Jan Philipp Albrecht, one of the lawmakers who sponsored the resolution in Europe, said it should increase pressure on national governments.
  • “It’s the first time a Parliament votes to ask for this to be done — and it’s the European Parliament,” Mr. Albrecht, a German lawmaker with the Greens political bloc, said in a phone interview shortly after the vote, which was held in Strasbourg, France. “So this has an impact surely on the debate in the member states.” The resolution “is asking or demanding the member states’ governments to end all the charges and to prevent any extradition to a third party,” Mr. Albrecht said. “That’s a very clear call, and that can’t be just ignored by the governments,” he said.
Paul Merrell

Sick Of Facebook? Read This. - 2 views

  • In 2012, The Guardian reported on Facebook’s arbitrary and ridiculous nudity and violence guidelines which allow images of crushed limbs but – dear god spare us the image of a woman breastfeeding. Still, people stayed – and Facebook grew. In 2014, Facebook admitted to mind control games via positive or negative emotional content tests on unknowing and unwilling platform users. Still, people stayed – and Facebook grew. Following the 2016 election, Facebook responded to the Harpie shrieks from the corporate Democrats by setting up a so-called “fake news” task force to weed out those dastardly commies (or socialists or anarchists or leftists or libertarians or dissidents or…). And since then, I’ve watched my reach on Facebook drain like water in a bathtub – hard to notice at first and then a spastic swirl while people bicker about how to plug the drain. And still, we stayed – and the censorship tightened. Roughly a year ago, my show Act Out! reported on both the censorship we were experiencing but also the cramped filter bubbling that Facebook employs in order to keep the undesirables out of everyone’s news feed. Still, I stayed – and the censorship tightened. 2017 into 2018 saw more and more activist organizers, particularly black and brown, thrown into Facebook jail for questioning systemic violence and demanding better. In August, puss bag ass hat in a human suit Alex Jones was banned from Facebook – YouTube, Apple and Twitter followed suit shortly thereafter. Some folks celebrated. Some others of us skipped the party because we could feel what was coming.
  • On Thursday, October 11th of this year, Facebook purged more than 800 pages including The Anti-Media, Police the Police, Free Thought Project and many other social justice and alternative media pages. Their explanation rested on the painfully flimsy foundation of “inauthentic behavior.” Meanwhile, their fake-news checking team is stacked with the likes of the Atlantic Council and the Weekly Standard, neocon junk organizations that peddle such drivel as “The Character Assassination of Brett Kavanaugh.” Soon after, on the Monday before the Midterm elections, Facebook blocked another 115 accounts citing once again, “inauthentic behavior.” Then, in mid November, a massive New York Times piece chronicled Facebook’s long road to not only save its image amid rising authoritarian behavior, but “to discredit activist protesters, in part by linking them to the liberal financier George Soros.” (I consistently find myself waiting for those Soros and Putin checks in the mail that just never appear.)
  • What we need is an open source, non-surveillance platform. And right now, that platform is Minds. Before you ask, I’m not being paid to write that.
  • Fashioned as an alternative to the closed and creepy Facebook behemoth, Minds advertises itself as “an open source and decentralized social network for Internet freedom.” Minds prides itself on being hands-off with regards to any content that falls in line with what’s permitted by law, which has elicited critiques from some on the left who say Minds is a safe haven for fascists and right-wing extremists. Yet, Ottman has himself stated openly that he wants ideas on content moderation and ways to make Minds a better place for social network users as well as radical content creators. What a few fellow journos and I are calling #MindsShift is an important step in not only moving away from our gagged existence on Facebook but in building a social network that can serve up the real news folks are now aching for.
  • To be clear, we aren’t advocating that you delete your Facebook account – unless you want to. For many, Facebook is still an important tool and our goal is to add to the outreach toolkit, not suppress it. We have set January 1st, 2019 as the ultimate date for this #MindsShift. Several outlets with a combined reach of millions of users will be making the move – and asking their readerships/viewerships to move with them. Along with fellow journalists, I am working with Minds to brainstorm new user-friendly functions and ways to make this #MindsShift a loud and powerful move. We ask that you, the reader, add to the conversation by joining the #MindsShift and spreading the word to your friends and family. (Join Minds via this link) We have created the #MindsShift open group on Minds.com so that you can join and offer up suggestions and ideas to make this platform a new home for radical and progressive media.
Paul Merrell

Google starts watching what you do off the Internet too - RT - 1 views

  • The most powerful company on the Internet just got a whole lot creepier: a new service from Google merges offline consumer info with online intelligence, allowing advertisers to target users based on what they do at the keyboard and at the mall. Without much fanfare, Google announced news this week of a new advertising project, Conversions API, that will let businesses build all-encompassing user profiles based off of not just what users search for on the Web, but what they purchase outside of the home. In a blog post this week on Google’s DoubleClick Search site, the Silicon Valley giant says that targeting consumers based off online information only allows advertisers to learn so much. “Conversions,” tech-speak for the digital metric made by every action a user makes online, are incomplete until coupled with real life data, Google says.
  • Of course, there is always the possibility that all of this information can be unencrypted and, in some cases, obtained by third-parties that you might not want prying into your personal business. Edwards notes in his report that Google does not explicitly note that intelligence used in Conversions API will be anonymized, but the blowback from not doing as much would sure be enough to start a colossal uproar. Meanwhile, however, all of the information being collected by Google — estimated to be on millions of servers around the globe — is being handed over to more than just advertising companies. Last month Google reported that the US government requested personal information from roughly 8,000 individual users during just the first few months of 2012. “This is the sixth time we’ve released this data, and one trend has become clear: Government surveillance is on the rise,” Google admitted with their report.
Gonzalo San Gil, PhD.

Keep the FBI out of my Computer - Access Now - 0 views

  •  
    "The U.S. Federal Bureau of Investigation (FBI) wants the power to hack into computers anywhere in the world, and even millions of computers at once. Instead of asking U.S. Congress for permission, they're sneaking a procedural rule change through the bureaucracy. It's called Rule 41 and it's part of the U.S.Federal Rules of Criminal Procedure. Read more about Rule 41 and government hacking here and here. "
Gonzalo San Gil, PhD.

U.S. military official: Internet shutdowns don't help during conflict - Access Now - 0 views

  •  
    "22 June 2016 | 4:49 pm Today the U.S. military gave a resounding rebuke to U.S. presidential candidate Donald Trump's call to shut down the internet in the fight against the Islamic State of Iraq and the Levant (ISIL). Politico reports from a House Armed Services Committee hearing that Acting Assistant Defense Secretary Thomas Atkins testified:"
Gonzalo San Gil, PhD.

#KeepItOn - Access Now - 1 views

  •  
    "in·ter·net shut·down noun | /ˈin(t)ərˌnet ˈSHətˌdoun/ An intentional disruption of internet or electronic communications, rendering them inaccessible or effectively unusable, for a specific population or within a location, often to exert control over the flow of information. Internet shutdowns happen when governments order companies to cut off access to communications tools - like Twitter, SMS, or Facebook. Unfortunately, companies often comply with these government orders, even when they happen during elections."
Gonzalo San Gil, PhD.

'Failed' Piracy Letters Should Escalate to Fines & Jail, MP Says | TorrentFreak - 0 views

  •  
    " Andy on June 26, 2014 C: 13 Breaking UK ISPs have agreed to send their customers warning letters when they pirate movies, music and TV shows, but before the scheme starts thoughts are turning to its potential failure. The Prime Minister's IP advisor says 'VCAP' needs to be followed by something more enforceable, including disconnections, fines and jail sentences."
Paul Merrell

Court gave NSA broad leeway in surveillance, documents show - The Washington Post - 0 views

  • Virtually no foreign government is off-limits for the National Security Agency, which has been authorized to intercept information “concerning” all but four countries, according to top-secret documents. The United States has long had broad no-spying arrangements with those four countries — Britain, Canada, Australia and New Zealand — in a group known collectively with the United States as the Five Eyes. But a classified 2010 legal certification and other documents indicate the NSA has been given a far more elastic authority than previously known, one that allows it to intercept through U.S. companies not just the communications of its overseas targets but any communications about its targets as well.
  • The certification — approved by the Foreign Intelligence Surveillance Court and included among a set of documents leaked by former NSA contractor Edward Snowden — lists 193 countries that would be of valid interest for U.S. intelligence. The certification also permitted the agency to gather intelligence about entities including the World Bank, the International Monetary Fund, the European Union and the International Atomic Energy Agency. The NSA is not necessarily targeting all the countries or organizations identified in the certification, the affidavits and an accompanying exhibit; it has only been given authority to do so. Still, the privacy implications are far-reaching, civil liberties advocates say, because of the wide spectrum of people who might be engaged in communication about foreign governments and entities and whose communications might be of interest to the United States.
  • That language could allow for surveillance of academics, journalists and human rights researchers. A Swiss academic who has information on the German government’s position in the run-up to an international trade negotiation, for instance, could be targeted if the government has determined there is a foreign-intelligence need for that information. If a U.S. college professor e-mails the Swiss professor’s e-mail address or phone number to a colleague, the American’s e-mail could be collected as well, under the program’s court-approved rules
  • On Friday, the Office of the Director of National Intelligence released a transparency report stating that in 2013 the government targeted nearly 90,000 foreign individuals or organizations for foreign surveillance under the program. Some tech-industry lawyers say the number is relatively low, considering that several billion people use U.S. e-mail services.
  • Still, some lawmakers are concerned that the potential for intrusions on Americans’ privacy has grown under the 2008 law because the government is intercepting not just communications of its targets but communications about its targets as well. The expansiveness of the foreign-powers certification increases that concern.
  • In a 2011 FISA court opinion, a judge using an NSA-provided sample estimated that the agency could be collecting as many as 46,000 wholly domestic e-mails a year that mentioned a particular target’s e-mail address or phone number, in what is referred to as “about” collection. “When Congress passed Section 702 back in 2008, most members of Congress had no idea that the government was collecting Americans’ communications simply because they contained a particular individual’s contact information,” Sen. Ron Wyden (D-Ore.), who has co-sponsored legislation to narrow “about” collection authority, said in an e-mail to The Washington Post. “If ‘about the target’ collection were limited to genuine national security threats, there would be very little privacy impact. In fact, this collection is much broader than that, and it is scooping up huge amounts of Americans’ wholly domestic communications.”
  • The only reason the court has oversight of the NSA program is that Congress in 2008 gave the government a new authority to gather intelligence from U.S. companies that own the Internet cables running through the United States, former officials noted. Edgar, the former privacy officer at the Office of the Director of National Intelligence, said ultimately he believes the authority should be narrowed. “There are valid privacy concerns with leaving these collection decisions entirely in the executive branch,” he said. “There shouldn’t be broad collection, using this authority, of foreign government information without any meaningful judicial role that defines the limits of what can be collected.”
Gonzalo San Gil, PhD.

Stop TTIP | Campaign - 0 views

  •  
    "... However TTIP is not about trade. It is in fact a corporate coup that will take us to a 'corporatocracy', a corporate-run world. ..."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
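As a minimal sketch of the class-attribute idea above (the class names are hypothetical, and the DocBook fragment is shown only for comparison; epigraph and para are genuine DocBook elements):
    <!-- XHTML: generic structure, extended with publisher-defined classes -->
    <p class="epigraph">It was a dark and stormy night.</p>
    <p class="chapter-summary">This chapter surveys XML workflow options.</p>

    <!-- Roughly what a richer vocabulary such as DocBook would state explicitly -->
    <epigraph>
      <para>It was a dark and stormy night.</para>
    </epigraph>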
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
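Unpacking an ePub archive makes that anatomy concrete. The layout below is a rough sketch (the OEBPS directory name is a common convention, not a requirement); the container.xml shown is the standard OCF pointer to the package file:
    mimetype                  (the literal string application/epub+zip)
    META-INF/container.xml    (points at the package document)
    OEBPS/content.opf         (metadata, manifest, spine)
    OEBPS/toc.ncx             (navigation)
    OEBPS/chapter01.xhtml     (the content itself, one XHTML file per chapter)

    <!-- META-INF/container.xml -->
    <?xml version="1.0"?>
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>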
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
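For reference, unzipping an IDML package typically reveals a structure along these lines (a rough, non-exhaustive sketch, not quoted from the article):
    mimetype
    designmap.xml                      (ties all the parts together)
    MasterSpreads/MasterSpread_*.xml
    Spreads/Spread_*.xml               (page geometry and frames)
    Stories/Story_*.xml                (the text content)
    Resources/Styles.xml               (paragraph and character styles)
    Resources/Fonts.xml, Graphic.xml, Preferences.xml
    XML/Tags.xml, XML/BackingStory.xml
    META-INF/container.xml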
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
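The cleanup described above, in sketch form; the exported class name is invented for illustration, since the article does not quote InDesign's actual output:
    <!-- As exported by "Digital Editions": presentation-oriented tagging -->
    <p class="bullet-item">Prefer existing and ubiquitous tools.</p>
    <p class="bullet-item">Prefer free software where possible.</p>

    <!-- After search-and-replace: structural XHTML, independent of any particular software -->
    <ul>
      <li>Prefer existing and ubiquitous tools.</li>
      <li>Prefer free software where possible.</li>
    </ul>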
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
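The authors' 500-line script is not reproduced in the article, so what follows is only a minimal sketch of the XHTML-to-ICML mapping it describes. The ICML element names (ParagraphStyleRange, CharacterStyleRange, Content) come from Adobe's ICML format, but the style names and the xsltproc invocation are assumptions:
    <!-- Run with, for example: xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:h="http://www.w3.org/1999/xhtml">
      <xsl:output method="xml" indent="yes"/>

      <!-- Each XHTML paragraph becomes an ICML paragraph range that references a
           paragraph style defined in the InDesign template. -->
      <xsl:template match="h:p">
        <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
          <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
            <Content><xsl:value-of select="."/></Content>
          </CharacterStyleRange>
          <Br/>
        </ParagraphStyleRange>
      </xsl:template>

      <!-- Headings map to their own named styles; lists, emphasis, and so on follow the same pattern. -->
      <xsl:template match="h:h1">
        <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Chapter Title">
          <CharacterStyleRange>
            <Content><xsl:value-of select="."/></Content>
          </CharacterStyleRange>
          <Br/>
        </ParagraphStyleRange>
      </xsl:template>
    </xsl:stylesheet>
A complete ICML file also carries an InCopy processing instruction and a Document root element around ranges like these, which a full script would add before the result could be "placed" in InDesign.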
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML, InDesign Markup Language. The important point, though, is that XHTML is an XML vocabulary that browsers render natively, and it is compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is likewise an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

EU doubles down on TTIP secrecy as public resistance grows | Ars Technica UK [# ! Note...] - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! What kind of #Democracy is this where citizens # ! are kept out of the process of elaborating # ! essential legislation...? # ! :/
  •  
    "Top national politicians must visit a special room in Brussels to read key TTIP documents. by Glyn Moody - Aug 14, 2015 9:45 am UTC"
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them ... - 0 views

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against government surveillance of the data transferred to that country.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Paul Merrell

AT&T ups the ante in speech recognition | Signal Strength - CNET News - 2 views

  • It's developed a core technology platform, known as Watson, which is a cloud-based system of services that not only identifies words but interprets meaning and context to deliver more accurate results. The system itself is built on servers that model and compare speech to recorded voices. Watson is an evolving platform that with more data is able to adapt and learn so that it continues to improve accuracy and also cross reference data to use speech as input for getting to all kinds of communication and data. "We are really on the cusp of a technology revolution in speech and language technology," said Mazin Gilbert, executive director of speech and language technology at AT&T Labs. "It's no longer about simply trying to get the words right. It's about adding intelligence to interpret what is being said and then using that to apply to other modes of communication, such as text or video."
  • The system is designed to get more accurate over time as it learns the speech patterns of large numbers of users.
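    The pattern the article describes is the familiar one for any cloud speech service: the client uploads audio, the server returns a transcription plus an interpretation of what was meant. A minimal client sketch follows, assuming a purely hypothetical REST endpoint, API key, and response shape; it is not AT&T's actual Watson interface.

    # Illustrative client for a cloud speech service of the kind described above.
    # The endpoint URL, API key, and response fields are hypothetical stand-ins.
    import requests

    SPEECH_ENDPOINT = "https://api.example.com/speech/v1/recognize"  # hypothetical
    API_KEY = "YOUR_API_KEY"  # placeholder credential

    def transcribe(audio_path):
        """Send an audio file to the (hypothetical) service and return its reply."""
        with open(audio_path, "rb") as audio:
            response = requests.post(
                SPEECH_ENDPOINT,
                headers={"Authorization": "Bearer " + API_KEY,
                         "Content-Type": "audio/wav"},
                data=audio,
            )
        response.raise_for_status()
        # A service like the one described would return both the raw words and an
        # interpreted intent, e.g. {"transcript": "call mom", "intent": "place_call"}.
        return response.json()

    if __name__ == "__main__":
        print(transcribe("sample.wav"))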
Paul Merrell

NSA contractors use LinkedIn profiles to cash in on national security | Al Jazeera America - 0 views

  • NSA spies need jobs, too. And that is why many covert programs could be hiding in plain sight. Job websites such as LinkedIn and Indeed.com contain hundreds of profiles that reference classified NSA efforts, posted by everyone from career government employees to low-level IT workers who served in Iraq or Afghanistan. They offer a rare glimpse into the intelligence community's projects and how they operate. Now some researchers are using the same kinds of big-data tools employed by the NSA to scrape public LinkedIn profiles for classified programs. But the presence of so much classified information in public view raises serious concerns about security — and about the intelligence industry as a whole. “I’ve spent the past couple of years searching LinkedIn profiles for NSA programs,” said Christopher Soghoian, the principal technologist with the American Civil Liberties Union’s Speech, Privacy and Technology Project.
  • On Aug. 3, The Wall Street Journal published a story about the FBI’s growing use of hacking to monitor suspects, based on information Soghoian provided. The next day, Soghoian spoke at the Defcon hacking conference about how he uncovered the existence of the FBI’s hacking team, known as the Remote Operations Unit (ROU), using the LinkedIn profiles of two employees at James Bimen Associates, with which the FBI contracts for hacking operations. “Had it not been for the sloppy actions of a few contractors updating their LinkedIn profiles, we would have never known about this,” Soghoian said in his Defcon talk. Those two contractors were not the only ones being sloppy.
  • And there are many more. A quick search of Indeed.com using three code names unlikely to return false positives — Dishfire, XKeyscore and Pinwale — turned up 323 résumés. The same search on LinkedIn turned up 48 profiles mentioning Dishfire, 18 mentioning XKeyscore and 74 mentioning Pinwale. Almost all these people appear to work in the intelligence industry. Network-mapping the data Fabio Pietrosanti of the Hermes Center for Transparency and Digital Human Rights noticed all the code names on LinkedIn last December. While sitting with M.C. McGrath at the Chaos Communication Congress in Hamburg, Germany, Pietrosanti began searching the website for classified program names — and getting serious results. McGrath was already developing Transparency Toolkit, a Web application for investigative research, and knew he could improve on Pietrosanti’s off-the-cuff methods.
  • “I was, like, huh, maybe there’s more we can do with this — actually get a list of all these profiles that have these results and use that to analyze the structure of which companies are helping with which programs, which people are helping with which programs, try to figure out in what capacity, and learn more about things that we might not know about,” McGrath said. He set up a computer program called a scraper to search LinkedIn for public profiles that mention known NSA programs, contractors or jargon — such as SIGINT, the agency’s term for “signals intelligence” gleaned from intercepted communications. Once the scraper found the name of an NSA program, it searched nearby for other words in all caps. That allowed McGrath to find the names of unknown programs, too. Once McGrath had the raw data — thousands of profiles in all, with 70 to 80 different program names — he created a network graph that showed the relationships between specific government agencies, contractors and intelligence programs. Of course, the data are limited to what people are posting on their LinkedIn profiles. Still, the network graph gives a sense of which contractors work on several NSA programs, which ones work on just one or two, and even which programs military units in Iraq and Afghanistan are using. And that is just the beginning.
  • Click on the image to view an interactive network illustration of the relationships between specific national security surveillance programs in red, and government organizations or private contractors in blue.
  •  
    What a giggle, public spying on NSA and its contractors using Big Data. The interactive network graph with its sidebar display of relevant data derived from LinkedIn profiles is just too delightful. 
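    The method the article attributes to McGrath - search profile text for known code names, harvest nearby all-caps tokens as candidate program names, and link programs to employers - is simple enough to sketch. The following is a rough illustration operating on already-collected profile text with invented sample data; it is not Transparency Toolkit's actual scraper.

    # Rough sketch of the profile-mining idea described above: find known code
    # names, collect nearby ALL-CAPS tokens as candidate program names, and
    # record which employers mention which programs. Sample data is invented.
    import re
    from collections import defaultdict

    KNOWN_PROGRAMS = {"DISHFIRE", "XKEYSCORE", "PINWALE"}

    def extract_candidates(text, window=6):
        """Return known programs found plus ALL-CAPS tokens appearing near them."""
        tokens = re.findall(r"[A-Za-z][A-Za-z0-9/-]+", text)
        hits = set()
        for i, tok in enumerate(tokens):
            if tok.upper() in KNOWN_PROGRAMS:
                hits.add(tok.upper())
                # Look at surrounding tokens for other all-caps candidate names.
                for near in tokens[max(0, i - window): i + window + 1]:
                    if near.isupper() and len(near) > 3:
                        hits.add(near)
        return hits

    def build_graph(profiles):
        """Map employer -> set of program names mentioned by its employees."""
        graph = defaultdict(set)
        for employer, text in profiles:
            graph[employer] |= extract_candidates(text)
        return graph

    if __name__ == "__main__":
        sample = [("Acme Analytics", "SIGINT reporting and analysis with DISHFIRE."),
                  ("Example Corp", "Maintained XKEYSCORE ingest pipelines.")]
        for employer, programs in build_graph(sample).items():
            print(employer, "->", sorted(programs))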
Paul Merrell

Transparency Toolkit - 0 views

  • About Transparency Toolkit We need information about governments, companies, and other institutions to uncover corruption, human rights abuses, and civil liberties violations. Unfortunately, the information provided by most transparency initiatives today is difficult to understand and incomplete. Transparency Toolkit is an open source web application where journalists, activists, or anyone can chain together tools to rapidly collect, combine, visualize, and analyze documents and data. For example, Transparency Toolkit can be used to get data on all of a legislator’s actions in congress (votes, bills sponsored, etc.), get data on the fundraising parties a legislator attends, combine that data, and show it on a timeline to find correlations between actions in congress and parties attended. It could also be used to extract all locations from a document and plot them on a map where each point is linked to where the location was mentioned in the document.
  • Analysis Platform On the analysis platform, users can add steps to the analysis process. These steps chain together the tools, so someone could scrape data, upload a document, crossreference that with the scraped data, and then visualize the result all in less than a minute with little technical knowledge. Some of the tools allow users to specify input, but when this is not the case the output of the last step is the input of the next. Tools Existing and planned Transparency Toolkit tools include scrapers and APIs for accessing data, format converters, extraction tools (for dates, names, locations, numbers), tools for cross-referencing and merging data, visualizations (maps, timelines, network graphs), and pattern- and trend-detecting tools. These tools are designed to work in many cases rather than a single specific situation. The tools can be linked together on Transparency Toolkit, but they are also available individually. Where possible, we build our tools off of existing open source software. Road Map You can see the plans for future development of Transparency Toolkit here.
  •  
    If you think this isn't a tool for some very serious research, check the short descriptions of the modules here. https://github.com/transparencytoolkit I'll be installing this and doing some test-driving soon. From the source files, the glue for the tools seems to be Ruby on Rails. The development roadmap linked from the last word on this About page is also highly instructive. It ranks among the most detailed dev roadmaps I have ever seen. Notice that it is classified by milestones with scheduled work periods, giving specific date ranges for achievement. Even given the inevitable need to alter the schedule for unforeseen problems, this is a very aggressive (not quite the word I want) development plan and schedule. And the planned changes look to be super-useful, including a lot of "make it easier for the user" changes.   
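    The "chain together tools" design in the About text - each step's output becoming the next step's input - is essentially a pipeline. Here is a minimal, generic sketch of that pattern in Python, with made-up step functions; the real project, as noted above, appears to be glued together with Ruby on Rails, so this is only an illustration of the idea.

    # Minimal illustration of the chained-steps pattern the About page describes:
    # each step takes the previous step's output as its input. The step functions
    # and data are invented placeholders, not Transparency Toolkit's real tools.
    import re

    def scrape(_):
        # Stand-in for a scraper step; a real tool would fetch remote data.
        return [{"name": "Jane Doe", "text": "Attended fundraiser on 2015-03-02"}]

    def extract_dates(records):
        # Stand-in for an extraction step: pull ISO-style dates out of free text.
        for r in records:
            r["dates"] = re.findall(r"\d{4}-\d{2}-\d{2}", r["text"])
        return records

    def visualize(records):
        # Stand-in for a visualization step: print a crude timeline.
        for r in records:
            for d in r["dates"]:
                print(d + ": " + r["name"])
        return records

    def run_pipeline(steps, data=None):
        """Feed each step's output into the next, as the toolkit's UI chains tools."""
        for step in steps:
            data = step(data)
        return data

    if __name__ == "__main__":
        run_pipeline([scrape, extract_dates, visualize])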