
Future of the Web: Group items tagged "account"


Paul Merrell

Hey ITU Member States: No More Secrecy, Release the Treaty Proposals | Electronic Front...

  • The International Telecommunication Union (ITU) will hold the World Conference on International Telecommunications (WCIT-12) in December in Dubai, an all-important treaty-writing event where ITU Member States will discuss the proposed revisions to the International Telecommunication Regulations (ITR). The ITU is a United Nations agency responsible for international telecom regulation, a bureaucratic, slow-moving, closed regulatory organization that issues treaty-level provisions for international telecommunication networks and services. The ITR, a legally binding international treaty signed by 178 countries, defines the boundaries of ITU’s regulatory authority and provides "general principles" on international telecommunications. However, media reports indicate that some proposed amendments to the ITR—a negotiation that is already well underway—could potentially expand the ITU’s mandate to encompass the Internet. In similar fashion to the secrecy surrounding ACTA and TPP, the ITR proposals are being negotiated in secret, with high barriers preventing access to any negotiating document. While aspiring to be a venue for Internet policy-making, the ITU Member States do not appear to be very open to the idea of allowing all stakeholders (including civil society) to participate. The framework under which the ITU operates does not allow for any form of open participation. Mere access to documents and decision-makers is sold by the ITU to corporate “associate” members at prohibitively high rates. Indeed, the ITU’s business model appears to depend on revenue generation from those seeking to ‘participate’ in its policy-making processes. This revenue-based principle of policy-making is deeply troubling in and of itself, as the objective of policy making should be to reach the best possible outcome.
  • EFF, European Digital Rights, CIPPIC and CDT and a coalition of civil society organizations from around the world are demanding that the ITU Secretary General, the  WCIT-12 Council Working Group, and ITU Member States open up the WCIT-12 and the Council working group negotiations, by immediately releasing all the preparatory materials and Treaty proposals. If it affects the digital rights of citizens across the globe, the public needs to know what is going on and deserves to have a say. The Council Working Group is responsible for the preparatory work towards WCIT-12, setting the agenda for and consolidating input from participating governments and Sector Members. We demand full and meaningful participation for civil society in its own right, and without cost, at the Council Working Group meetings and the WCIT on equal footing with all other stakeholders, including participating governments. A transparent, open process that is inclusive of civil society at every stage is crucial to creating sound policy.
  • Civil society has good reason to be concerned regarding an expanded ITU policy-making role. To begin with, the institution does not appear to have high regard for the distributed multi-stakeholder decision making model that has been integral to the development of an innovative, successful and open Internet. In spite of commitments at WSIS to ensure Internet policy is based on input from all relevant stakeholders, the ITU has consistently put the interests of one stakeholder—Governments—above all others. This is discouraging, as some government interests are inconsistent with an open, innovative network. Indeed, the conditions which have made the Internet the powerful tool it is today emerged in an environment where the interests of all stakeholders are given equal footing, and existing Internet policy-making institutions at least aspire, with varying success, to emulate this equal footing. This formula is enshrined in the Tunis Agenda, which was committed to at WSIS in 2005:
  • 83. Building an inclusive development-oriented Information Society will require unremitting multi-stakeholder effort. We thus commit ourselves to remain fully engaged—nationally, regionally and internationally—to ensure sustainable implementation and follow-up of the outcomes and commitments reached during the WSIS process and its Geneva and Tunis phases of the Summit. Taking into account the multifaceted nature of building the Information Society, effective cooperation among governments, private sector, civil society and the United Nations and other international organizations, according to their different roles and responsibilities and leveraging on their expertise, is essential. 84. Governments and other stakeholders should identify those areas where further effort and resources are required, and jointly identify, and where appropriate develop, implementation strategies, mechanisms and processes for WSIS outcomes at international, regional, national and local levels, paying particular attention to people and groups that are still marginalized in their access to, and utilization of, ICTs.
  • Indeed, the ITU’s current vision of Internet policy-making is less one of distributed decision-making, and more one of ‘taking control.’ For example, in an interview conducted last June with ITU Secretary General Hamadoun Touré, Russian Prime Minister Vladimir Putin raised the suggestion that the union might take control of the Internet: “We are thankful to you for the ideas that you have proposed for discussion,” Putin told Touré in that conversation. “One of them is establishing international control over the Internet using the monitoring and supervisory capabilities of the International Telecommunication Union (ITU).” Perhaps of greater concern are views espoused by the ITU regarding the nature of the Internet. Yesterday, at the World Summit of Information Society Forum, Mr. Alexander Ntoko, head of the Corporate Strategy Division of the ITU, explained the proposals made during the preparatory process for the WCIT, outlining a broad set of topics that can seriously impact people's rights. The categories include "security," "interoperability" and "quality of services," and the possibility that ITU recommendations and regulations will be not only binding on the world’s nations, but enforced.
  • Rights to online expression are unlikely to fare much better than privacy under an ITU model. During last year’s IGF in Kenya, a voluntary code of conduct was issued to further restrict free expression online. A group of nations (including China, the Russian Federation, Tajikistan and Uzbekistan) released a Resolution for the UN General Assembly titled, “International Code of Conduct for Information Security.”  The Code seems to be designed to preserve and protect national powers in information and communication. In it, governments pledge to curb “the dissemination of information that incites terrorism, secessionism or extremism or that undermines other countries’ political, economic and social stability, as well as their spiritual and cultural environment.” This overly broad provision accords any state the right to censor or block international communications, for almost any reason.
  • Related EFF posts: EFF Joins Coalition Denouncing Secretive WCIT Planning Process (June 2012); Congressional Witnesses Agree: Multistakeholder Processes Are Right for Internet Regulation (June 2012); Widespread Participation Is Key in Internet Governance (July 2012); Blogging ITU: Internet Users Will Be Ignored Again if Flawed ITU Proposals Gain Traction (June 2012); Global Telecom Governance Debated at European Parliament Workshop
Paul Merrell

The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters | Motherboard

  • Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
  • So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumptively—China in 2015. On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location. With the advent of the Internet of Things and cyber-physical systems in general, we've given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete. Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
  •  
    Bruce Schneier on the insecurity of the Internet of Things, and possible consequences.
Paul Merrell

In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data C...

  • The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans. What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
  • The lack of information makes rigorous oversight of the programs all but impossible. As Senator Franken put it in the hearing today, “When the public lacks even a rough sense of the scope of the government’s surveillance program, they have no way of knowing if the government is striking the right balance, whether we are safeguarding our national security without trampling on our citizens’ fundamental privacy rights. But the public can’t know if we succeed in striking that balance if they don’t even have the most basic information about our major surveillance programs."  Senator Patrick Leahy also questioned the panel about the “minimization procedures” associated with this type of surveillance, the privacy safeguard that is intended to ensure that irrelevant data and data on American citizens is swiftly deleted. Senator Leahy asked the panel: “Do you believe the current minimization procedures ensure that data about innocent Americans is deleted? Is that enough?”  David Medine, who recently announced his pending retirement from the Privacy and Civil Liberties Oversight Board, answered unequivocally:
  • Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most if not all of the government’s surveillance programs. None of the other panel participants—which included David Medine and Rachel Brand of the Privacy and Civil Liberties Oversight Board as well as Matthew Olsen of IronNet Cybersecurity and attorney Kenneth Wainstein—offered an estimate. Today’s hearing reaffirmed that it is not only the American people who are left in the dark about how many people or accounts are impacted by the NSA’s dragnet surveillance of the Internet. Even vital oversight committees in Congress like the Senate Judiciary Committee are left to speculate about just how far-reaching this surveillance is. It's part of the reason why we urged the House Judiciary Committee to demand that the Intelligence Community provide the public with a number. 
  • Senator Leahy, they don’t. The minimization procedures call for the deletion of innocent Americans’ information upon discovery to determine whether it has any foreign intelligence value. But what the board’s report found is that in fact information is never deleted. It sits in the databases for 5 years, or sometimes longer. And so the minimization doesn’t really address the privacy concerns of incidentally collected communications—again, where there’s been no warrant at all in the process… In the United States, we simply can’t read people’s emails and listen to their phone calls without court approval, and the same should be true when the government shifts its attention to Americans under this program. One of the most startling exchanges from the hearing today came toward the end of the session, when Senator Dianne Feinstein—who also sits on the Intelligence Committee—seemed taken aback by Ms. Goitein’s mention of “backdoor searches.” 
  • Feinstein: Wow, wow. What do you call it? What’s a backdoor search? Goitein: Backdoor search is when the FBI or any other agency targets a U.S. person for a search of data that was collected under Section 702, which is supposed to be targeted against foreigners overseas. Feinstein: Regardless of the minimization that was properly carried out. Goitein: Well the data is searched in its unminimized form. So the FBI gets raw data, the NSA, the CIA get raw data. And they search that raw data using U.S. person identifiers. That’s what I’m referring to as backdoor searches. It’s deeply concerning that any member of Congress, much less a member of the Senate Judiciary Committee and the Senate Intelligence Committee, might not be aware of the problem surrounding backdoor searches. In April 2014, the Director of National Intelligence acknowledged the searches of this data, which Senators Ron Wyden and Mark Udall termed “the ‘back-door search’ loophole in section 702.” The public was so incensed that the House of Representatives passed an amendment to that year's defense appropriations bill effectively banning the warrantless backdoor searches. Nonetheless, in the hearing today it seemed like Senator Feinstein might not recognize or appreciate the serious implications of allowing U.S. law enforcement agencies to query the raw data collected through these Internet surveillance programs. Hopefully today’s testimony helped convince the Senator that there is more to this topic than what she’s hearing in jargon-filled classified security briefings.
  •  
    The 4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and *particularly describing the place to be searched, and the persons or things to be seized.*" So much for the particularized description of the place to be searched and the things to be seized. Fah! Who needs a Constitution, anyway ....
Paul Merrell

Lawmakers warn of 'radical' move by NSA to share information | TheHill

  • A bipartisan pair of lawmakers is expressing alarm at reported changes at the National Security Agency that would allow the intelligence service’s information to be used for policing efforts in the United States. “If media accounts are true, this radical policy shift by the NSA would be unconstitutional, and dangerous,” Reps. Ted Lieu (D-Calif.) and Blake Farenthold (R-Texas) wrote in a letter to the spy agency this week. “The proposed shift in the relationship between our intelligence agencies and the American people should not be done in secret. “NSA’s mission has never been, and should never be, domestic policing or domestic spying.” The NSA has yet to publicly announce the change, but The New York Times reported last month that the administration was poised to expand the agency's ability to share information that it picks up about people’s communications with other intelligence agencies. The modification would open the door for the NSA to give the FBI and other federal agencies uncensored communications of foreigners and Americans picked up incidentally — but without a warrant — during sweeps.
  • Robert Litt, the general counsel at the Office of the Director of National Intelligence, told the Times that it was finalizing a 21-page draft of procedures to allow the expanded sharing. Separately, the Guardian reported earlier this month that the FBI had quietly changed its internal privacy rules to allow direct access to the NSA’s massive storehouse of communication data picked up from Internet service providers and websites. The revelations unnerved civil liberties advocates, who encouraged lawmakers to demand answers of the spy agency. “Under a policy like this, information collected by the NSA would be available to a host of federal agencies that may use it to investigate and prosecute domestic crimes,” said Neema Singh Guliani, legislative counsel at the American Civil Liberties Union. “Making such a change without authorization from Congress or the opportunity for debate would ignore public demands for greater transparency and oversight over intelligence activities.” In their letter this week, Lieu and Farenthold warned that the NSA’s changes would undermine Congress and unconstitutionally violate people’s privacy rights.
  • “The executive branch would be violating the separation of powers by unilaterally transferring warrantless data collected under the NSA’s extraordinary authority to domestic agencies, which do not have such authority,” they wrote.“Domestic law enforcement agencies — which need a warrant supported by probable cause to search or seize — cannot do an end run around the Fourth Amendment by searching warrantless information collected by the NSA.”
Paul Merrell

Race to Introduce Fascist Internet Regulations in Russia Continues - Now under the Bann...

  • Russian lawmaker Vitaly Milonov on Monday proposed a bill that would ban children under the age of 14 from social media. Although the bill is touted under the banner of child protection, it also aims to introduce the mandatory submission of passport data. In January, Russia introduced semi-fascist regulations to severely curb the rights of bloggers and independent media.
  • Vitaly Milonov, generally known for being ultra-conservative, introduced the controversial bill on Monday. Although touted under the banner of protecting children and limiting their access to social media, the bill has far deeper implications. Parents could very well self-regulate their children’s access to social media. The bill, however, implies that it would become mandatory for social media users to submit their passport data. Moreover, the bill also proposes that the use of pseudonyms be banned. The proposed legislation also aims to introduce strict rules requiring two-party consent before the publication of screenshots of online correspondence. The bill reads, in part: “Social networks create a special virtual world where a person spends significant part of their life, contacting other people and essentially doing everything that they would do in real world. This world can’t be left unregulated by law. Especially now, when growing number of users are falling victim to different types of fraud.” Even though Milonov is generally viewed as ultra-conservative, about 62 percent of Russians support banning social networks for children, and 39 percent support requiring passport data to create an online account, according to a poll by the state-funded pollster VTsIOM released Monday.
  • Social media has come under intense scrutiny in Russia in recent months. Disturbingly, very few Russians have received independent information about the not so overtly advertised implications of this scrutiny, of the proposed bill, and of plans to create a “Russian internet” to filter “unwanted foreign content.” Russia also cracks down on independent bloggers and journalists. On January 1, 2016, the Russian Federation implemented amendments to laws that further censor the internet and potentially independent media. These laws are being sold under the guise of empowering internet users and the right to protect personal information. The amendments follow legislation from 2014 that infringed on the rights of bloggers.
Paul Merrell

Common Crawl Founder Gil Elbaz Speaks About New Relationship With Amazon, Semantic Web ...

  • The Common Crawl Foundation’s repository of openly and freely accessible web crawl data is about to go live as a Public Data Set on Amazon Web Services.
  • Elbaz’ goal in developing the repository: “You can’t access, let alone download, the Google or the Bing crawl data. So certainly we’re differentiated in being very open and transparent about what we’re crawling and actually making it available to developers,” he says. “You might ask why is it going to be revolutionary to allow many more engineers and researchers and developers and students access to this data, whereas historically you have to work for one of the big search engines…. The question is, the world has the largest-ever corpus of knowledge out there on the web, and is there more that one can do with it than Google and Microsoft and a handful of other search engines are already doing? And the answer is unquestionably yes. ”
  • Common Crawl’s data already is stored on Amazon’s S3 service, but now Amazon will be providing the storage space for free through the Public Data Set program. Not only does that remove from Common Crawl the storage burden and costs for hosting its crawl of 5 billion web pages – some 50 or 60 terabytes large – but it should make it easier for users to access the data, and remove the bandwidth-related costs they might incur for downloads. Users won’t have to deal with setting up accounts, being responsible for bandwidth bills incurred, and more complex authentication processes.
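As a rough illustration of what the Public Data Set arrangement means in practice, here is a minimal sketch that lists a few objects from the crawl corpus over anonymous S3 access. It assumes the boto3 library and that the public bucket is named commoncrawl; the prefix shown is a placeholder, since actual paths vary by crawl release.

```python
# Minimal sketch: listing a slice of the Common Crawl public dataset on S3.
# Assumes boto3 and a public bucket named "commoncrawl"; the prefix below is
# illustrative -- real crawl paths vary by release.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous access is enough for a public data set; no AWS credentials are needed to read.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

response = s3.list_objects_v2(
    Bucket="commoncrawl",
    Prefix="crawl-data/",  # placeholder prefix; check the Common Crawl site for real paths
    MaxKeys=10,
)

for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```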
Gary Edwards

Meet OX Text, a collaborative, non-destructive alternative to Google Docs - Tech News a...

  • The German software-as-a-service firm Open-Xchange, which provides apps that telcos and other service providers can bundle with their connectivity or hosting products, is adding a cloud-based office productivity toolset called OX Documents to its OX App Suite lineup. Open-Xchange has around 70 million users through its contracts with roughly 80 providers such as 1&1 Internet and Strato. Its OX App Suite takes the form of a virtual desktop of sorts, that lets users centralize their email and file storage accounts and view all sorts of documents through a unified portal. However, as of an early April release it will also include OX Text, a non-destructive, collaborative document editor that rivals Google Docs, and that has an interesting heritage of its own.
  • The team that created the HTML5- and JavaScript-based OX Text includes some of the core developers behind OpenOffice, the free alternative to Microsoft Office that passed from Sun Microsystems to Oracle before morphing into LibreOffice. The German developers we’re talking about hived off the project before LibreOffice happened, and ended up getting hired by Open-Xchange. “To them it was a once in a lifetime event, because we allowed them to start from scratch,” Open-Xchange CEO Rafael Laguna told me. “We said we wanted a fresh office productivity suite that runs inside the browser. In terms of the architecture and principles for the product, we wanted to make it fully round-trip capable, meaning whatever file format we run into needs to be retained.”
  • This is an extremely handy formatting and version control feature. Changes made to a document in OX Text get pushed through to Open-Xchange’s backend, where a changelog is maintained. “Power” Word features such as Smart Art or Charts, which are not necessarily supported by other productivity suites, are replaced with placeholders during editing and are there, as before, when the edited document is eventually downloaded. As the OX Text blurb says, “OX Text never damages your valuable work even if it does not understand it”.
  • “[This avoids] the big disadvantage of anything other than Microsoft Office,” Laguna said. “If you use OpenOffice with a .docx file, the whole document is converted, creating artefacts, then you convert it back. That’s one of the major reasons not everyone is using OpenOffice, and the same is true for Google Apps.” OX Text will be available as an extension to OX App Suite, which also includes calendaring and other productivity tools. However, it will also come out as a standalone product under both commercial licenses – effectively support-based subscriptions for Open-Xchange’s service provider customers – and open-source licenses, namely the GNU General Public License 2 and Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License, which will allow free personal, non-commercial use. You can find a demo of App Suite, including the OX Text functionality, here, and there’s a video too:
Paul Merrell

We finally gave Congress email addresses - Sunlight Foundation Blog

  • On OpenCongress, you can now email your representatives and senators just as easily as you would a friend or colleague. We've added a new feature to OpenCongress. It's not flashy. It doesn't use D3 or integrate with social media. But we still think it's pretty cool. You might've already heard of it. Email. This may not sound like a big deal, but it's been a long time coming. A lot of people are surprised to learn that Congress doesn't have publicly available email addresses. It's the number one feature request that we hear from users of our APIs. Until recently, we didn't have a good response. That's because members of Congress typically put their feedback mechanisms behind captchas and zip code requirements. Sometimes these forms break; sometimes their requirements improperly lock out actual constituents. And they always make it harder to email your congressional delegation than it should be.
  • This is a real problem. According to the Congressional Management Foundation, 88% of Capitol Hill staffers agree that electronic messages from constituents influence their bosses' decisions. We think that it's inappropriate to erect technical barriers around such an essential democratic mechanism. Congress itself is addressing the problem. That effort has just entered its second decade, and people are feeling optimistic that a launch to a closed set of partners might be coming soon. But we weren't content to wait. So when the Electronic Frontier Foundation (EFF) approached us about this problem, we were excited to really make some progress. Building on groundwork first done by the Participatory Politics Foundation and more recent work within Sunlight, a network of 150 volunteers collected the data we needed from congressional websites in just two days. That information is now on Github, available to all who want to build the next generation of constituent communication tools. The EFF is already working on some exciting things to that end.
  • But we just wanted to be able to email our representatives like normal people. So now, if you visit a legislator's page on OpenCongress, you'll see an email address in the right-hand sidebar that looks like Sen.Reid@opencongress.org or Rep.Boehner@opencongress.org. You can also email myreps@opencongress.org to email both of your senators and your House representatives at once. The first time we get an email from you, we'll send one back asking for some additional details. This is necessary because our code submits your message by navigating those aforementioned congressional webforms, and we don't want to enter incorrect information. But for emails after the first one, all you'll have to do is click a link that says, "Yes, I meant to send that email."
  • One more thing: For now, our system will only let you email your own representatives. A lot of people dislike this. We do, too. In an age of increasing polarization, party discipline means that congressional leaders must be accountable to citizens outside their districts. But the unfortunate truth is that Congress typically won't bother reading messages from non-constituents — that's why those zip code requirements exist in the first place. Until that changes, we don't want our users to waste their time. So that's it. If it seems simple, it's because it is. But we think that unbreaking how Congress connects to the Internet is important. You should be able to send a call to action in a tweet, easily forward a listserv message to your representative and interact with your government using the tools you use to interact with everyone else.
Paul Merrell

EFF to Court: U.S. Warrants Don't Apply to Overseas Emails | Electronic Frontier Founda...

  • The Electronic Frontier Foundation (EFF) has urged a federal court to block a U.S. search warrant ordering Microsoft to turn over a customer's emails held in an overseas server, arguing that the case has dangerous privacy implications for Internet users everywhere. The case started in December of last year, when a magistrate judge in New York signed a search warrant seeking records and emails from a Microsoft account in connection with a criminal investigation. However, Microsoft determined that the emails the government sought were on a Microsoft server in Dublin, Ireland. Because a U.S. judge has no authority to issue warrants to search and seize property or data abroad, Microsoft refused to turn over the emails and asked the magistrate to quash the warrant. But the magistrate denied Microsoft's request, ruling there was no foreign search because the data would be reviewed by law enforcement agents in the U.S.
  • Microsoft appealed the decision. In an amicus brief in support of Microsoft, EFF argues the magistrate's rationale ignores the fact that copying the emails is a "seizure" that takes place in Ireland. "The Fourth Amendment protects from unreasonable search and seizure. You can't ignore the 'seizure' part just because the property is digital and not physical," said EFF Staff Attorney Hanni Fakhoury. "Ignoring this basic point has dangerous implications – it could open the door to unfounded law enforcement access to and collection of data stored around the world."
  • For the full brief in this case: https://www.eff.org/document/eff-amicus-brief-support-microsoft
Paul Merrell

Hacking Online Polls and Other Ways British Spies Seek to Control the Internet - The In...

  • The secretive British spy agency GCHQ has developed covert tools to seed the internet with false information, including the ability to manipulate the results of online polls, artificially inflate pageview counts on web sites, “amplif[y]” sanctioned messages on YouTube, and censor video content judged to be “extremist.” The capabilities, detailed in documents provided by NSA whistleblower Edward Snowden, even include an old standby for pre-adolescent prank callers everywhere: A way to connect two unsuspecting phone users together in a call.
  • The “tools” have been assigned boastful code names. They include invasive methods for online surveillance, as well as some of the very techniques that the U.S. and U.K. have harshly prosecuted young online activists for employing, including “distributed denial of service” attacks and “call bombing.” But they also describe previously unknown tactics for manipulating and distorting online political discourse and disseminating state propaganda, as well as the apparent ability to actively monitor Skype users in real-time—raising further questions about the extent of Microsoft’s cooperation with spy agencies or potential vulnerabilities in Skype’s encryption. Here’s a list of how JTRIG describes its capabilities: • “Change outcome of online polls” (UNDERPASS) • “Mass delivery of email messaging to support an Information Operations campaign” (BADGER) and “mass delivery of SMS messages to support an Information Operations campaign” (WARPARTH) • “Disruption of video-based websites hosting extremist content through concerted target discovery and content removal.” (SILVERLORD)
  • • “Active skype capability. Provision of real time call records (SkypeOut and SkypetoSkype) and bidirectional instant messaging. Also contact lists.” (MINIATURE HERO) • “Find private photographs of targets on Facebook” (SPRING BISHOP) • “A tool that will permanently disable a target’s account on their computer” (ANGRY PIRATE) • “Ability to artificially increase traffic to a website” (GATEWAY) and “ability to inflate page views on websites” (SLIPSTREAM) • “Amplification of a given message, normally video, on popular multimedia websites (Youtube)” (GESTATOR) • “Targeted Denial Of Service against Web Servers” (PREDATORS FACE) and “Distributed denial of service using P2P. Built by ICTR, deployed by JTRIG” (ROLLING THUNDER)
  • • “A suite of tools for monitoring target use of the UK auction site eBay (www.ebay.co.uk)” (ELATE) • “Ability to spoof any email address and send email under that identity” (CHANGELING) • “For connecting two target phone together in a call” (IMPERIAL BARGE) While some of the tactics are described as “in development,” JTRIG touts “most” of them as “fully operational, tested and reliable.” It adds: “We only advertise tools here that are either ready to fire or very close to being ready.”
Paul Merrell

POGO Adds its Voice to Calls for Secret Law Oversight

  • April 21, 2015 Dear Chairman Goodlatte, Ranking Member Conyers, Chairman Grassley, and Ranking Member Leahy: We urge you to end mass surveillance of Americans. Among us are civil liberties organizations from across the political spectrum that speak for millions of people, businesses, whistleblowers, and experts. The impending expiration of three USA PATRIOT Act provisions on June 1 is a golden opportunity to end mass surveillance and enact additional reforms. Current surveillance practices are virtually limitless. They are unnecessary, counterproductive, and costly. They undermine our economy and the public’s trust in government. And they undercut the proper functioning of government. Meaningful surveillance reform entails congressional repeal of laws and protocols the Executive secretly interprets to permit current mass surveillance practices. Additionally, it requires Congress to appreciably increase transparency, oversight, and accountability of intelligence agencies, especially those that have acted unconstitutionally.
  • A majority of the House of Representatives already has voted against mass surveillance. The Massie-Lofgren amendment to the National Defense Authorization Act [i] garnered 293 votes in support of defunding “backdoor searches.” Unfortunately, that amendment was not included in the “CRomnibus”[ii] despite overwhelming support. We urge you to act once again to vindicate our fundamental liberties.
  •  
    Finally! A proposal for mass-surveillance reform that goes far beyond prior overly-modest proposals backed by ACLU, Electronic Frontier Foundation, etc., that were based on negotiation with members of Congress. This proposal is backed by a wide range of other organizations. A must-read.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
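To make the class-attribute point concrete, here is a minimal sketch (using Python's lxml, with hypothetical class names) of how publisher-defined semantics layered onto plain XHTML paragraphs remain machine-readable:

```python
# Minimal sketch of XHTML's "class"-based extensibility: plain paragraphs carry
# publisher-defined semantics (epigraph, summary) without leaving XHTML.
# The class names here are hypothetical, not a published microformat.
from lxml import etree

xhtml = """<div>
  <p class="epigraph">Quoted opening passage.</p>
  <p>Ordinary body paragraph.</p>
  <p class="summary">Chapter summary for the back matter.</p>
</div>"""

root = etree.fromstring(xhtml)

# Any XML-aware tool can recover the extra semantics by querying the class attribute.
epigraphs = root.xpath('//p[@class="epigraph"]')
summaries = root.xpath('//p[@class="summary"]')

print([p.text for p in epigraphs])   # ['Quoted opening passage.']
print([p.text for p in summaries])   # ['Chapter summary for the back matter.']
```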
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
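A quick way to see that anatomy for yourself is to open an ePub with an ordinary zip library. The sketch below uses Python's zipfile; "book.epub" is a placeholder path, and the member name read at the end is typical rather than guaranteed.

```python
# A quick look inside an ePub, which is just a zip archive wrapping XHTML content
# plus a few metadata files.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)
    # The container file points at the OPF package, which in turn lists the
    # XHTML chapters and the book's metadata.
    print(epub.read("META-INF/container.xml").decode("utf-8"))
```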
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
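Because IDML is likewise a disguised zip archive, the same kind of inspection works there too. This sketch (again with a placeholder file name) simply groups the archive's members by their leading path component to show the document's anatomy:

```python
# Sketch: peeking inside an IDML file, which -- like ePub -- is a zip archive of
# XML parts (spreads, master pages, styles, stories). "layout.idml" is a placeholder.
import zipfile
from collections import Counter

with zipfile.ZipFile("layout.idml") as idml:
    # Group members by their leading path component (top-level folders and files).
    parts = Counter(name.split("/")[0] for name in idml.namelist())
    for part, count in sorted(parts.items()):
        print(f"{part}: {count} member(s)")
```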
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
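The cleanup described above can be scripted rather than done by hand. The following sketch uses lxml, with "list-item" as a stand-in for whatever class the export actually emits, and folds runs of exported list paragraphs into proper ul/li structures:

```python
# Hedged sketch of the cleanup step: the export marks list items as <p class="...">
# paragraphs, and we fold runs of them into real <ul>/<li> structures.
from lxml import etree

xhtml = """<body>
  <p>Intro paragraph.</p>
  <p class="list-item">First bullet</p>
  <p class="list-item">Second bullet</p>
  <p>Closing paragraph.</p>
</body>"""

root = etree.fromstring(xhtml)

current_list = None
for p in list(root.iter("p")):
    if p.get("class") == "list-item":
        if current_list is None:
            # Start a <ul> where the first list paragraph sat.
            current_list = etree.Element("ul")
            p.addprevious(current_list)
        li = etree.SubElement(current_list, "li")
        li.text = p.text
        p.getparent().remove(p)
    else:
        current_list = None  # a non-list paragraph ends the run

print(etree.tostring(root, pretty_print=True).decode())
```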
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
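Generating the table of contents from the chapter files is equally mechanical. Here is a hedged sketch, assuming one XHTML file per chapter in a chapters/ directory; the layout and file names are hypothetical, and a wiki-based CMS would do the same from its own page list.

```python
# Sketch: build a simple table-of-contents page by reading each chapter's <title>.
from pathlib import Path
from lxml import etree

chapters = sorted(Path("chapters").glob("chapter*.xhtml"))

items = []
for path in chapters:
    doc = etree.parse(str(path))
    # Look for the <title>, with or without the XHTML namespace declared.
    title = doc.findtext(".//{http://www.w3.org/1999/xhtml}title") or doc.findtext(".//title")
    items.append(f'<li><a href="{path.name}">{title}</a></li>')

toc = "<ul>\n" + "\n".join(items) + "\n</ul>"
Path("toc.xhtml").write_text(
    f"<html><head><title>Contents</title></head><body>{toc}</body></html>",
    encoding="utf-8",
)
```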
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
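The shape of such a transformation is easy to see in miniature. The sketch below runs a tiny XSLT stylesheet through lxml; the output element names only mimic the general form of ICML and are placeholders here, not the full schema a real InDesign template expects.

```python
# Heavily simplified sketch of the XHTML-to-ICML step using XSLT via lxml.
# The output elements (Document, ParagraphStyleRange, Content) are placeholders
# that echo ICML's general shape; a real script must emit the complete structure.
from lxml import etree

xslt_doc = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <Document>
      <xsl:apply-templates select="//p"/>
    </Document>
  </xsl:template>
  <xsl:template match="p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <Content><xsl:value-of select="."/></Content>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)

xhtml = etree.XML(b"<body><p>First paragraph.</p><p>Second paragraph.</p></body>")
icml_like = transform(xhtml)

print(str(icml_like))
```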
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
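For readers unfamiliar with the format: an ePub file is a ZIP archive with a small amount of required scaffolding around the XHTML, which is why a tool like eCub can generate it automatically. A minimal sketch of that layout follows; the OEBPS/ directory name and the file names under it are conventional choices for illustration, not requirements.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- An ePub package is a ZIP archive containing, at minimum:
       mimetype               - the literal string "application/epub+zip"
       META-INF/container.xml - the fixed entry point, shown below
       OEBPS/content.opf      - metadata, manifest, and reading order
       OEBPS/toc.ncx          - the navigation table of contents
       OEBPS/*.xhtml          - the chapter files themselves
       OEBPS/stylesheet.css   - the custom CSS mentioned above
     container.xml simply points the reading system at the OPF file: -->
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```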
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Our system relies on the Web as a content management platform rather than as a public-facing website; of course, a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT transformation to a universal pivot format like TEI-XML; from there, the TEI-XML community would provide the XSLT routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.
    My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is the XML formulation of HTML that browsers render natively, which makes it compatible with the WebKit layout engine Miro wants to move NCP to.
    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML (HTML is likewise an SGML application). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base; the OpenOffice developer team released an XML encoding of its existing document formats in 2000, and that application-specific encoding became an OASIS document format standard proposal in 2002, now known as ODF. Microsoft followed OpenOffice with an XML encoding of its own application-specific binary document formats, known as OOXML.
    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

'UK surveillance is worse than 1984' says UN privacy chief (Wired UK) - 0 views

  • The UN's newly appointed special rapporteur on privacy, Joseph Cannataci, has described digital surveillance in the UK as "worse" than anything imagined in George Orwell's totalitarian dystopia 1984. Speaking to the Guardian, Cannataci -- who doesn't own a Facebook account or use Twitter -- lambasted the oversight of British digital surveillance as "a rather bad joke at its citizens' expense". Warning against the steady erosion of privacy and increasing levels of government intrusion, he also drew sinister parallels with Orwell's vision of a mass-surveilled society, adding that today's reality was far worse than the fiction: "At least Winston [a character in Orwell's 1984] was able to go out in the countryside and go under a tree and expect there wouldn't be any screen, as it was called. Whereas today there are many parts of the English countryside where there are more cameras than George Orwell could ever have imagined."
  • Cannataci, who holds posts as a professor of technology of law at the University of Groningen, and as head of the department of Information Policy and Governance at the University of Malta, also called for a "Geneva convention-style law" for the internet. "Some people may not want to buy into it. But you know, if one takes the attitude that some countries will not play ball, then, for example, the chemical weapons agreement would never have come about."
  • As part of his new role -- which elevates digital privacy to the same level of importance as other human rights -- Cannataci has vowed to begin systematically reviewing government policies and the business models of large corporations, which he accuses of "very often taking the data that you never even knew they were taking". Although the privacy chief admits that his mandate is more than likely "impossible to achieve in the next three years", he stressed the importance of a "longer-term view" in an effort to help protect people's data and safeguard their digital rights.
Paul Merrell

Wiretap Numbers Don't Add Up | Just Security - 0 views

  • Last week, the Administrative Office (AO) of the US Courts published the 2014 Wiretap Report, an annual report to Congress concerning intercepted wire, oral, or electronic communications as required by Title III of the Omnibus Crime Control and Safe Streets Act of 1968. News headlines touted that the number of federal and state wiretaps for 2014 was down 1% for a total of 3,554. Of these, there were few involving encrypted communications; and for those, law enforcement agencies were in most cases able to overcome the encryption. But there is a bigger story that calls into question the accuracy of all of the prior reports submitted to the AO and the overall data provided to Congress and the public in the Wiretap Reports. Since the Snowden revelations, more and more companies have started publishing “transparency reports” about the number and nature of government demands to access their users’ data. AT&T, Verizon, and Sprint published data for 2014 earlier this year and T-Mobile published its first transparency report on the same day the AO released the Wiretap Report. In aggregate, the four companies state that they implemented 10,712 wiretaps, a threefold difference over the total number reported by the AO. Note that the 10,712 number is only for the four companies listed above and does not reflect wiretap orders received by other telephone carriers or online providers, so the discrepancy actually is larger.
  • So what accounts for the huge gap in reporting? That is a question Congress and the AO should be asking prosecutors and judges who are required by law to make complete and accurate reports of the number of wiretaps conducted each year. Are wiretaps being consistently underreported to Congress and the public? Based on the data reported by the four major carriers for 2013 and 2014, it certainly would appear to be the case.
Paul Merrell

Hacking Team Asks Customers to Stop Using Its Software After Hack | Motherboard - 1 views

  • But the hack hasn’t just ruined the day for Hacking Team’s employees. The company, which sells surveillance software to government customers all over the world, from Morocco and Ethiopia to the US Drug Enforcement Administration and the FBI, has told all its customers to shut down all operations and suspend all use of the company’s spyware, Motherboard has learned. “They’re in full on emergency mode,” a source who has inside knowledge of Hacking Team’s operations told Motherboard.
  • Hacking Team notified all its customers on Monday morning with a “blast email,” requesting them to shut down all deployments of its Remote Control System software, also known as Galileo, according to multiple sources. The company also doesn’t have access to its email system as of Monday afternoon, a source said. On Sunday night, an unnamed hacker, who claimed to be the same person who breached Hacking Team’s competitor FinFisher last year, hijacked its Twitter account and posted links to 400GB of internal data. Hacking Team woke up to a massive breach of its systems.
  • A source told Motherboard that the hacker appears to have gotten “everything,” likely more than what has been posted online, perhaps more than one terabyte of data. “The hacker seems to have downloaded everything that there was in the company’s servers,” the source, who could only speak on condition of anonymity, told Motherboard. “There’s pretty much everything here.” It’s unclear how the hackers got their hands on the stash, but judging from the leaked files, they broke into the computers of Hacking Team’s two systems administrators, Christian Pozzi and Mauro Romeo, who had access to all the company’s files, according to the source. “I did not expect a breach to be this big, but I’m not surprised they got hacked because they don’t take security seriously,” the source told me. “You can see in the files how much they royally fucked up.”
  • For example, the source noted, none of the sensitive files in the data dump, from employee passports to lists of customers, appear to be encrypted. “How can you give all the keys to your infrastructure to a 20-something who just joined the company?” he added, referring to Pozzi, whose LinkedIn shows he’s been at Hacking Team for just over a year. “Nobody noticed that someone stole a terabyte of data? You gotta be a fuckwad,” the source said. “It means nobody was taking care of security.”
  • The future of the company, at this point, is uncertain. Employees fear this might be the beginning of the end, according to sources. One current employee, for example, started working on his resume, a source told Motherboard. It’s also unclear how customers will react to this, but a source said that it’s likely that customers from countries such as the US will pull the plug on their contracts. Hacking Team asked its customers to shut down operations, but according to one of the leaked files, as part of Hacking Team’s “crisis procedure,” it could have killed their operations remotely. The company, in fact, has “a backdoor” into every customer’s software, giving it the ability to suspend it or shut it down—something that even customers aren’t told about. To make matters worse, every copy of Hacking Team’s Galileo software is watermarked, according to the source, which means Hacking Team, and now everyone with access to this data dump, can find out who operates it and who they’re targeting with it.
Paul Merrell

XKeyscore Exposé Reaffirms the Need to Rid the Web of Tracking Cookies | Elec... - 0 views

  • The Intercept published an exposé on the NSA's XKeyscore program. Along with information on the breadth and scale of the NSA's metadata collection, The Intercept revealed how the NSA relies on unencrypted cookie data to identify users. As The Intercept says: "The NSA’s ability to piggyback off of private companies’ tracking of their own users is a vital instrument that allows the agency to trace the data it collects to individual users. It makes no difference if visitors switch to public Wi-Fi networks or connect to VPNs to change their IP addresses: the tracking cookie will follow them around as long as they are using the same web browser and fail to clear their cookies." The NSA slides released by The Intercept give detailed guides to understanding the data transmitted by these cookies, as well as how to find unique machine identifiers that analysts can use to differentiate between multiple machines using the same IP address. We've written before about how spy agencies piggyback on social media account data to find Internet users' names or other identifying info, and these slides drive home the point that HTTP cookies leave users vulnerable to government surveillance, since any intermediary (or spy agency) can read the sensitive data they contain.
  • Worse yet, most of the time these identifying cookies come from third-party sources on webpages, and users have no meaningful way to opt out of receiving them (short of blocking all third party cookies) since advertisers (the main server of these types of cookies) refuse to honor the Do Not Track header.  Browser makers could help address this sort of non-consensual tracking by both advertisers and the NSA with some simple technical changes—changes that have been shown to reduce the number of third party cookies received by 67%. So far, though, they've been unwilling to build privacy protecting features in by default. Until they do, the best way for users to protect themselves is by installing a privacy protecting app like Privacy Badger, which is designed to block these types of uniquely identifying tracking cookies, or HTTPS Everywhere to block the transmission of HTTP cookies.
Paul Merrell

Obama administration opts not to force firms to decrypt data - for now - The Washington... - 1 views

  • After months of deliberation, the Obama administration has made a long-awaited decision on the thorny issue of how to deal with encrypted communications: It will not — for now — call for legislation requiring companies to decode messages for law enforcement. Rather, the administration will continue trying to persuade companies that have moved to encrypt their customers’ data to create a way for the government to still peer into people’s data when needed for criminal or terrorism investigations. “The administration has decided not to seek a legislative remedy now, but it makes sense to continue the conversations with industry,” FBI Director James B. Comey said at a Senate hearing Thursday of the Homeland Security and Governmental Affairs Committee.
  • The decision, which essentially maintains the status quo, underscores the bind the administration is in — balancing competing pressures to help law enforcement and protect consumer privacy. The FBI says it is facing an increasing challenge posed by the encryption of communications of criminals, terrorists and spies. A growing number of companies have begun to offer encryption in which the only people who can read a message, for instance, are the person who sent it and the person who received it. Or, in the case of a device, only the device owner has access to the data. In such cases, the companies themselves lack “backdoors” or keys to decrypt the data for government investigators, even when served with search warrants or intercept orders.
  • The decision was made at a Cabinet meeting Oct. 1. “As the president has said, the United States will work to ensure that malicious actors can be held to account — without weakening our commitment to strong encryption,” National Security Council spokesman Mark Stroh said. “As part of those efforts, we are actively engaged with private companies to ensure they understand the public safety and national security risks that result from malicious actors’ use of their encrypted products and services.” But privacy advocates are concerned that the administration’s definition of strong encryption also could include a system in which a company holds a decryption key or can retrieve unencrypted communications from its servers for law enforcement. “The government should not erode the security of our devices or applications, pressure companies to keep and allow government access to our data, mandate implementation of vulnerabilities or backdoors into products, or have disproportionate access to the keys to private data,” said Savecrypto.org, a coalition of industry and privacy groups that has launched a campaign to petition the Obama administration.
  • To Amie Stepanovich, the U.S. policy manager for Access, one of the groups signing the petition, the status quo isn’t good enough. “It’s really crucial that even if the government is not pursuing legislation, it’s also not pursuing policies that will weaken security through other methods,” she said. The FBI and Justice Department have been talking with tech companies for months. On Thursday, Comey said the conversations have been “increasingly productive.” He added: “People have stripped out a lot of the venom.” He said the tech executives “are all people who care about the safety of America and also care about privacy and civil liberties.” Comey said the issue afflicts not just federal law enforcement but also state and local agencies investigating child kidnappings and car crashes — “cops and sheriffs . . . [who are] increasingly encountering devices they can’t open with a search warrant.”
  • One senior administration official said the administration thinks it’s making enough progress with companies that seeking legislation now is unnecessary. “We feel optimistic,” said the official, who spoke on the condition of anonymity to describe internal discussions. “We don’t think it’s a lost cause at this point.” Legislation, said Rep. Adam Schiff (D-Calif.), is not a realistic option given the current political climate. He said he made a recent trip to Silicon Valley to talk to Twitter, Facebook and Google. “They quite uniformly are opposed to any mandate or pressure — and more than that, they don’t want to be asked to come up with a solution,” Schiff said. Law enforcement officials know that legislation is a tough sell now. But, one senior official stressed, “it’s still going to be in the mix.” On the other side of the debate, technology, diplomatic and commerce agencies were pressing for an outright statement by Obama to disavow a legislative mandate on companies. But their position did not prevail.
  • Daniel Castro, vice president of the Information Technology & Innovation Foundation, said absent any new laws, either in the United States or abroad, “companies are in the driver’s seat.” He said that if another country tried to require companies to retain an ability to decrypt communications, “I suspect many tech companies would try to pull out.”
  •  
    # ! upcoming Elections...
Paul Merrell

Belgium sues Facebook over illegal Privacy Violations of Users and Non-Users | nsnbc in... - 0 views

  • The Belgian government will be suing Facebook. The Commission for the Protection of Privacy states that Facebook violates Belgian and EU law with tracking systems that target both Facebook users and non-Facebook users. Facebook is known for cooperating with the U.S. National Security Agency.
  • The Belgian privacy watchdog’s case against the internet giant Facebook will be heard at a court in Brussels on Thursday. The Commission has repeatedly requested that Facebook comply with Belgian and EU law. Facebook failed to comply, and the Commission has no power to enforce the law; hence the decision to sue Facebook to obtain a court ruling. The President of the Commission for the Protection of Privacy, Willem Debeuckelaere, told the press: “Facebook treats its users’ private lives without respect and that needs tackling. It’s not because we want to start a lawsuit over this, but we cannot continue to negotiate through other means... We want a judge to impose our recommendations. These recommendations are chiefly aimed at protecting internet users who are not Facebook members.”
  • The Belgian privacy watchdog alleges that Facebook tracks the web browsing of all visitors, including those who have specifically turned the tracking function off. This gathering of private information allegedly also includes those who do not have a Facebook account. Moreover, the Commission claims that Facebook has the capability to surveil computers without consent, even when users are logged out, and that Facebook can monitor every PC of users who visit websites with Facebook plugins. The capability to monitor both Facebook users and non-Facebook users allegedly functions via cookies that store information about users’ internet activities, including website preference settings and which websites internet users have visited. The Commission claims that Facebook installs these cookies on all computers that visit websites that, for example, have a Facebook plugin to share internet content. That includes the computers of persons who do not make use of Facebook’s “share” or “like” button.
  • In other words, Facebook has the capacity to monitor your browser settings as well as which websites you have visited if you have read this article or any other article on any website that contains a Facebook “share” button, whether you “like” it or not. The Commission’s lawsuit against Facebook is of particular importance because the corporation is known for its cooperation with the United States’ National Security Agency (NSA). While the lawsuit is of particular interest for Belgian and EU citizens, it also sheds light on Facebook’s monitoring of U.S. citizens.
Paul Merrell

Android phones outsell iPhone 2-to-1, says research firm - Computerworld - 2 views

  • Android-powered smartphones outsold iPhones in the U.S. by almost 2-to-1 in the third quarter, a research firm said today.
  • "We started to see Android take off in 2009 when Verizon added the [Motorola] Droid," said Ross Rubin, the executive director of industry analysis for the NPD Group. "A big part of Android success is its carrier distribution. Once it got to the Verizon and Sprint customer bases, with their mature 3G networks, that's when we started to see it take off." According to NPD's surveys of U.S. retailers, Android phones accounted for 44% of all consumer smartphone sales in the third quarter, an increase of 11 percentage points over 2010's second quarter. Meanwhile, Apple's iOS, which powers the iPhone, was up one point to 23%.
Gonzalo San Gil, PhD.

Facebook closes the account of the cyberactivist group defending Wikileak... - 0 views

  •  
    [ The announcement of a special program about the woman sentenced to death led people to think she had been released from prison. - The network clarifies that it is a recording in which she collaborated to reconstruct her husband's death ... ]
  •  
    Speaks for itself... :(