
Home/ Open Web/ Group items tagged Wikipedia


Paul Merrell

In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data C...

  • The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans. What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
  • Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most if not all of the government’s surveillance programs. None of the other panel participants—which included David Medine and Rachel Brand of the Privacy and Civil Liberties Oversight Board as well as Matthew Olsen of IronNet Cybersecurity and attorney Kenneth Wainstein—offered an estimate. Today’s hearing reaffirmed that it is not only the American people who are left in the dark about how many people or accounts are impacted by the NSA’s dragnet surveillance of the Internet. Even vital oversight committees in Congress like the Senate Judiciary Committee are left to speculate about just how far-reaching this surveillance is. It's part of the reason why we urged the House Judiciary Committee to demand that the Intelligence Community provide the public with a number. 
  • The lack of information makes rigorous oversight of the programs all but impossible. As Senator Franken put it in the hearing today, “When the public lacks even a rough sense of the scope of the government’s surveillance program, they have no way of knowing if the government is striking the right balance, whether we are safeguarding our national security without trampling on our citizens’ fundamental privacy rights. But the public can’t know if we succeed in striking that balance if they don’t even have the most basic information about our major surveillance programs."  Senator Patrick Leahy also questioned the panel about the “minimization procedures” associated with this type of surveillance, the privacy safeguard that is intended to ensure that irrelevant data and data on American citizens is swiftly deleted. Senator Leahy asked the panel: “Do you believe the current minimization procedures ensure that data about innocent Americans is deleted? Is that enough?”  David Medine, who recently announced his pending retirement from the Privacy and Civil Liberties Oversight Board, answered unequivocally:
  • Senator Leahy, they don’t. The minimization procedures call for the deletion of innocent Americans’ information upon discovery to determine whether it has any foreign intelligence value. But what the board’s report found is that in fact information is never deleted. It sits in the databases for 5 years, or sometimes longer. And so the minimization doesn’t really address the privacy concerns of incidentally collected communications—again, where there’s been no warrant at all in the process… In the United States, we simply can’t read people’s emails and listen to their phone calls without court approval, and the same should be true when the government shifts its attention to Americans under this program. One of the most startling exchanges from the hearing today came toward the end of the session, when Senator Dianne Feinstein—who also sits on the Intelligence Committee—seemed taken aback by Ms. Goitein’s mention of “backdoor searches.” 
  • Feinstein: Wow, wow. What do you call it? What’s a backdoor search? Goitein: Backdoor search is when the FBI or any other agency targets a U.S. person for a search of data that was collected under Section 702, which is supposed to be targeted against foreigners overseas. Feinstein: Regardless of the minimization that was properly carried out. Goitein: Well the data is searched in its unminimized form. So the FBI gets raw data, the NSA, the CIA get raw data. And they search that raw data using U.S. person identifiers. That’s what I’m referring to as backdoor searches. It’s deeply concerning that any member of Congress, much less a member of the Senate Judiciary Committee and the Senate Intelligence Committee, might not be aware of the problem surrounding backdoor searches. In April 2014, the Director of National Intelligence acknowledged the searches of this data, which Senators Ron Wyden and Mark Udall termed “the ‘back-door search’ loophole in section 702.” The public was so incensed that the House of Representatives passed an amendment to that year's defense appropriations bill effectively banning the warrantless backdoor searches. Nonetheless, in the hearing today it seemed like Senator Feinstein might not recognize or appreciate the serious implications of allowing U.S. law enforcement agencies to query the raw data collected through these Internet surveillance programs. Hopefully today’s testimony helped convince the Senator that there is more to this topic than what she’s hearing in jargon-filled classified security briefings.
  •  
    The 4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and *particularly describing the place to be searched, and the* persons or *things to be seized."* So much for the particularized description of the place to be searched and the things to be seized.  Fah! Who needs a Constitution, anyway .... 
Paul Merrell

What to Do About Lawless Government Hacking and the Weakening of Digital Security | Ele...

  • In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security.
  • The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones. Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects.  That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  
  • Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions. This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.   
  •  
    It's not often that I disagree with EFF's positions, but on this one I do. The government should be prohibited from exploiting computer vulnerabilities and should be required to immediately report all vulnerabilities discovered to the relevant developers of hardware or software. It's been one long slippery slope since the Supreme Court first approved wiretapping in Olmstead v. United States, 277 US 438 (1928), https://goo.gl/NJevsr. Left undecided to this day is whether we have a right to whisper privately, a right that is undeniable. All communications intercept cases since Olmstead fly directly in the face of that right.
Paul Merrell

This project aims to make '404 not found' pages a thing of the past

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers otherwise have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, a serious loss for the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if both the date of reference and the location of a copy of targeted content is known by an author, the new code would look like this: The 404-No-More project’s goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
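The article's example markup did not survive the page capture, but based on the description (a reference date paired with the location of an archived copy), the proposed markup might look something like the sketch below. The attribute value format here is invented for illustration; the project defines the real syntax:

```html
<!-- Hypothetical sketch of the proposed mset attribute: a reference date
     paired with the URL of an archived copy of the linked content.
     The value format is illustrative, not the project's final syntax. -->
<a href="http://example.com/report"
   mset="2014-04-18 http://archive.example.org/2014/report">
  the report as originally cited
</a>
```

A browser or crawler that understood the attribute could fall back to the archived copy whenever the primary href returns a 404.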
Paul Merrell

The FCC is about to kill the free Internet | PandoDaily

  • The Federal Communications Commission is poised to ruin the free Internet on a technicality. The group is expected to introduce new net neutrality laws that would allow companies to pay for better access to consumers through deals similar to the one struck by Netflix and Comcast earlier this year. The argument is that those deals don’t technically fall under the net neutrality umbrella, so these new rules won’t apply to them even though they directly affect the Internet. At least the commission is being upfront about its disinterest in protecting the free Internet.
  • The Verge notes that the proposed rules will offer some protections to consumers: The Federal Communication Commission’s proposal for new net neutrality rules will allow internet service providers to charge companies for preferential treatment, effectively undermining the concept of net neutrality, according to The Wall Street Journal. The rules will reportedly allow providers to charge for preferential treatment so long as they offer that treatment to all interested parties on “commercially reasonable” terms, with the FCC deciding whether the terms are reasonable on a case-by-case basis. Providers will not be able to block individual websites, however. The goal of net neutrality rules is to prevent service providers from discriminating between different content, allowing all types of data and all companies’ data to be treated equally. While it appears that outright blocking of individual services won’t be allowed, the Journal reports that some forms of discrimination will be allowed, though that will apparently not include slowing down websites.
  • Re/code summarizes the discontent with these proposed rules: Consumer groups have complained about that plan because they’re worried that Wheeler’s rules may not hold up in court either. A federal appeals court rejected two previous versions of net neutrality rules after finding fault in the FCC’s legal reasoning. During the latest smackdown, however, the court suggested that the FCC had some authority to impose net neutrality rules under a section of the law that gives the agency the ability to regulate the deployment of broadband lines. Internet activists would prefer that the FCC just re-regulate Internet lines under old rules designed for telephone networks, which they say would give the agency clear authority to police Internet lines. Wheeler has rejected that approach for now. Phone and cable companies, including Comcast, AT&T and Verizon, have vociferously fought that idea over the past few years.
  • The Chicago Tribune reports on the process directing these rules: The five-member regulatory commission may vote as soon as May to formally propose the rules and collect public comment on them. Virtually all large Internet service providers, such as Verizon Communications Inc. and Time Warner Cable Inc., have pledged to abide by the principles of open Internet reinforced by these rules. But critics have raised concerns that, without a formal rule, the voluntary pledges could be pulled back over time and also leave the door open for deals that would give unequal treatment to websites or services.
  • I wrote about the European Union’s attempts to defend the free Internet: The legislation is meant to provide access to online services ‘without discrimination, restriction or interference, independent of the sender, receiver, type, content, device, service or application.’ For example, ISPs would be barred from slowing down or ‘throttling’ the speed at which one service’s videos are delivered while allowing other services to stream at normal rates. To bastardize Gertrude Stein: a byte is a byte is a byte. Such restrictions would prevent deals like the one Comcast recently made with Netflix, which will allow the service’s videos to reach consumers faster than before. Comcast is also said to be in talks with Apple for a deal that would allow videos from its new streaming video service to reach consumers faster than videos from competitors. The Federal Communications Commission’s net neutrality laws don’t apply to those deals, according to FCC Chairman Tom Wheeler, so they are allowed to continue despite the threat they pose to the free Internet.
  •  
    Cute. Deliberately not using the authority the court of appeals said it could use to impose net neutrality. So Europe can have net neutrality but not in the U.S.
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them ...

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Paul Merrell

RDFa API

  • RDFa API: An API for extracting structured data from Web documents. W3C Working Draft, 08 June 2010
  • RDFa [RDFA-CORE] enables authors to publish structured information that is both human- and machine-readable. Concepts that have traditionally been difficult for machines to detect, like people, places, events, music, movies, and recipes, are now easily marked up in Web documents. While publishing this data is vital to the growth of Linked Data, using the information to improve the collective utility of the Web for humankind is the true goal. To accomplish this goal, it must be simple for Web developers to extract and utilize structured information from a Web document. This document details such a mechanism; an RDFa Document Object Model Application Programming Interface (RDFa DOM API) that allows simple extraction and usage of structured information from a Web document.
  • This document is a detailed specification for an RDFa DOM API. The document is primarily intended for the following audiences: User Agent developers that are providing a mechanism to programmatically extract RDF Triples from RDFa in a host language such as XHTML+RDFa [XHTML-RDFA], HTML+RDFa [HTML-RDFA] or SVG Tiny 1.2 [SVGTINY12], DOM tool developers that want to provide a mechanism for extracting RDFa content via programming languages such as JavaScript, Python, Ruby, or Perl, and Developers that want to understand the inner workings and design criteria for the RDFa DOM API.
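The draft's goal is easier to see with a concrete sketch. The code below is not the RDFa DOM API itself (whose method names the specification defines); it is a toy Python extractor, handling only a small about/property/content subset of RDFa, that shows the kind of triple extraction the API is meant to make trivial for Web developers:

```python
# Toy RDFa triple extractor: walks an XHTML fragment and emits
# (subject, property, object) triples. Handles only the about/property/
# content subset of RDFa, unlike a conforming processor.
import xml.etree.ElementTree as ET

doc = """
<div xmlns="http://www.w3.org/1999/xhtml">
  <p about="http://example.org/people#alice">
    <span property="http://xmlns.com/foaf/0.1/name">Alice</span>
    <span property="http://xmlns.com/foaf/0.1/mbox" content="alice@example.org"/>
  </p>
</div>
"""

def extract_triples(xhtml):
    root = ET.fromstring(xhtml)
    triples = []
    def walk(node, subject):
        subject = node.get("about", subject)   # nearest 'about' sets the subject
        prop = node.get("property")
        if prop and subject:
            obj = node.get("content")          # explicit literal wins...
            if obj is None:
                obj = "".join(node.itertext()).strip()  # ...else element text
            triples.append((subject, prop, obj))
        for child in node:
            walk(child, subject)
    walk(root, None)
    return triples

for triple in extract_triples(doc):
    print(triple)
```

Running it yields two triples about the same subject, one literal taken from element text and one from a content attribute, which is exactly the "people, places, events" style of structured data the draft describes.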
Paul Merrell

Flash CS5 will export to HTML5 Canvas | 9 to 5 Mac

  • In a previous post, I'd wondered why Adobe didn't spend their time building HTML5 Authoring tools rather than putting so much time/energy/money into their  Flash->iPhone Apps exporter tool for Flash CS5.   As it turns out, Adobe does have some, albeit rudimentary, HTML5 Canvas exporting tools as demonstrated in the video above.   Taking a simple animation, which is the beginning, middle and end of most Flash banner ads that we love, is an export/paste operation.
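For context, a "simple animation" of the banner-ad variety boils down to very little Canvas code once exported. The hand-written sketch below (element id, colors, and timing invented for illustration) shows the general shape of the output such an exporter targets, not Adobe's actual generated code:

```html
<!-- A hand-written equivalent of a simple exported animation:
     one shape tweened across a <canvas>. Ids and values are illustrative. -->
<canvas id="stage" width="300" height="100"></canvas>
<script>
  var ctx = document.getElementById("stage").getContext("2d");
  var x = 0;
  function tick() {
    ctx.clearRect(0, 0, 300, 100);   // wipe the previous frame
    ctx.fillStyle = "#c00";
    ctx.fillRect(x, 40, 20, 20);     // draw the "movie clip"
    x = (x + 2) % 300;               // loop, banner-ad style
    requestAnimationFrame(tick);
  }
  tick();
</script>
```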
Paul Merrell

InfoQ: Google Wave's Architecture

  • Operational Transformation is the crucial part of Wave’s technology. Google Wave makes extensive use of Operational Transformations (OT), which are executed on the server. When a user edits a collaborative document opened by several users, the client program provides an optimistic UI by immediately displaying what they type, but it also sends the editing operation to the server to be ratified, hoping that it will be accepted. The client waits for the server to evaluate the operation and caches any other operations until the server replies. After the server replies, all cached operations are sent from client to server in bulk. The server, considering operations received from other clients, will transform the operation accordingly and will inform all clients about the transformation, and the clients will update their UI accordingly. Operations are sent to the server and propagated to each client on a character-by-character basis, unless it is a bulk operation. The server is the keeper of the document, and its version is considered the “correct” version. In the end, each client will be updated with the final version received from the server, which is the result of possibly many operational transformations. There are recovery mechanisms for communication failure or server/client crashes. All XML documents exchanged between the client and the server carry a checksum for rapid detection of miscommunication.
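The transform step at the heart of this description can be sketched for the simplest possible case: two concurrent single-character insertions into the same string. This illustrates the general OT idea only; Wave's actual algorithm also handles deletions, annotations, and XML document trees:

```python
# Minimal OT sketch: the server transforms a late-arriving insert against
# one it has already accepted, so both clients converge on the same text.

def transform_insert(ours, theirs):
    """Adjust our insert (pos, char) against a concurrent insert the server
    accepted first. Inserts at or before our position shift us right."""
    our_pos, our_char = ours
    their_pos, _ = theirs
    if their_pos <= our_pos:
        our_pos += 1
    return (our_pos, our_char)

def apply_insert(text, op):
    pos, ch = op
    return text[:pos] + ch + text[pos:]

base = "wave"
server_op = (0, "G")   # accepted first by the server: "wave" -> "Gwave"
client_op = (4, "!")   # composed by a client against "wave", wanting "wave!"

# The server ratifies the client op only after transforming it.
converged = apply_insert(apply_insert(base, server_op),
                         transform_insert(client_op, server_op))
print(converged)       # both sides end with "Gwave!"
```

Without the transform, the client's insert at position 4 would land one character too early in the server's newer version of the document; the transform is what lets "optimistic UI" edits be reconciled after the fact.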
Paul Merrell

American Surveillance Now Threatens American Business - The Atlantic

  • What does it look like when a society loses its sense of privacy? In the almost 18 months since the Snowden files first received coverage, writers and critics have had to guess at the answer. Does a certain trend, consumer complaint, or popular product epitomize some larger shift? Is trust in tech companies eroding—or is a subset just especially vocal about it? Polling would make those answers clear, but polling so far has been… confused. A new study, conducted by the Pew Internet Project last January and released last week, helps make the average American’s view of his or her privacy a little clearer. And their confidence in their own privacy is ... low. The study's findings—and the statistics it reports—stagger. Vast majorities of Americans are uncomfortable with how the government uses their data, how private companies use and distribute their data, and what the government does to regulate those companies. No summary can equal a recounting of the findings. Americans are displeased with government surveillance en masse:
  • A new study finds that a vast majority of Americans trust neither the government nor tech companies with their personal data.
  • According to the study, 70 percent of Americans are “at least somewhat concerned” with the government secretly obtaining information they post to social networking sites. Eighty percent of respondents agreed that “Americans should be concerned” with government surveillance of telephones and the web. They are also uncomfortable with how private corporations use their data: Ninety-one percent of Americans believe that “consumers have lost control over how personal information is collected and used by companies,” according to the study. Eighty percent of Americans who use social networks “say they are concerned about third parties like advertisers or businesses accessing the data they share on these sites.” And even though they’re squeamish about the government’s use of data, they want it to regulate tech companies and data brokers more strictly: 64 percent wanted the government to do more to regulate private data collection. Since June 2013, American politicians and corporate leaders have fretted over how much the leaks would cost U.S. businesses abroad.
  • “It’s clear the global community of Internet users doesn’t like to be caught up in the American surveillance dragnet,” Senator Ron Wyden said last month. At the same event, Google chairman Eric Schmidt agreed with him. “What occurred was a loss of trust between America and other countries,” he said, according to the Los Angeles Times. “It's making it very difficult for American firms to do business.” But never mind the world. Americans don’t trust American social networks. More than half of the poll’s respondents said that social networks were “not at all secure.” Only 40 percent of Americans believe email or texting is at least “somewhat” secure. Indeed, Americans most trusted the communication technologies where some protections have been enshrined in law (though the report didn’t ask about snail mail). That is: Talking on the telephone, whether on a landline or cell phone, is the only kind of communication that a majority of adults believe to be “very secure” or “somewhat secure.”
  • (That may seem a bit incongruous, because making a telephone call is one area where you can be almost sure you are being surveilled: The government has requisitioned mass call records from phone companies since 2001. But Americans appear, when discussing security, to differentiate between the contents of a call and the data about it.) Last month, Ramsey Homsany, the general counsel of Dropbox, said that one big thing could take down the California tech scene. “We have built this incredible economic engine in this region of the country,” said Homsany in the Los Angeles Times, “and [mistrust] is the one thing that starts to rot it from the inside out.” According to this poll, the mistrust has already begun corroding—and is, in fact, well advanced. We’ve always assumed that the great hurt to American business would come globally—that citizens of other nations would stop using tech companies’ services. But the new Pew data shows that Americans suspect American businesses just as much. And while, unlike citizens of other nations, they may not have other places to turn, they may stop putting sensitive or delicate information online.
Paul Merrell

Help:CirrusSearch - MediaWiki - 0 views

  • CirrusSearch is a new search engine for MediaWiki. The Wikimedia Foundation is migrating to CirrusSearch since it features key improvements over the previously used search engine, LuceneSearch. This page describes the features that are new or different compared to the past solutions.
  • Contents: 1 Frequently asked questions (1.1 What's improved?) · 2 Updates · 3 Search suggestions · 4 Full text search (4.1 Stemming · 4.2 Filters (intitle:, incategory: and linksto:) · 4.3 prefix: · 4.4 Special prefixes · 4.5 Did you mean · 4.6 Prefer phrase matches · 4.7 Fuzzy search · 4.8 Phrase search and proximity · 4.9 Quotes and exact matches · 4.10 prefer-recent: · 4.11 hastemplate: · 4.12 boost-templates: · 4.13 insource: · 4.14 Auxiliary Text · 4.15 Lead Text · 4.16 Commons Search) · 5 See also
  • Stemming In search terminology, support for "stemming" means that a search for "swim" will also include "swimming" and "swimmed", but not "swam". There is support for dozens of languages, but all languages are wanted. There is a list of currently supported languages at elasticsearch.org; see their documentation on contributing to submit requests or patches.
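  • To make the stemming behavior above concrete (regular inflections such as "swimming" conflate to the stem "swim", while the irregular past tense "swam" does not), here is a toy Python sketch. This is not CirrusSearch's actual algorithm; the Wikimedia setup relies on Elasticsearch's per-language analyzers, and the tiny suffix list and the `toy_stem`/`stem_match` helpers below are invented purely for illustration.

```python
def toy_stem(word):
    """Strip a few common English suffixes.

    A deliberately tiny rule set for illustration only, nothing like a
    real Porter or Snowball stemmer.
    """
    for suffix in ("ming", "med", "ing", "ed", "s"):
        # Require enough leading characters so we don't strip whole words.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


def stem_match(query_term, document_term):
    """A document term matches when both sides reduce to the same stem."""
    return toy_stem(query_term) == toy_stem(document_term)


# Regular inflections conflate to the stem "swim"...
print(stem_match("swim", "swimming"))  # True
print(stem_match("swim", "swimmed"))   # True
# ...but the irregular form "swam" keeps its own stem, so it is missed.
print(stem_match("swim", "swam"))      # False
```

    The miss on "swam" is exactly why production stemmers pair suffix rules with per-language exception lists; the sketch shows only the rule-based half.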
  • See also Full specifications in the browser tests
  •  
    Lots of new tricks to learn on sites using MediaWiki as folks update their installations. I'm not a big fan of programs written in PHP and JavaScript, but they're impossible to avoid on the Web. So is MediaWiki, so any real improvements help.
Paul Merrell

Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy? - Lawfare - 0 views

  • “We are truly fucked.” That was Motherboard’s spot-on reaction to deep fake sex videos (realistic-looking videos that swap a person’s face into sex scenes actually involving other people). And that sleazy application is just the tip of the iceberg. As Julian Sanchez tweeted, “The prospect of any Internet rando being able to swap anyone’s face into porn is incredibly creepy. But my first thought is that we have not even scratched the surface of how bad ‘fake news’ is going to get.” Indeed. Recent events amply demonstrate that false claims—even preposterous ones—can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization. The resulting harms are significant for individuals, businesses, and democracy. Belated recognition of the problem has spurred a variety of efforts to address this most recent illustration of truth decay, and at first blush there seems to be reason for optimism. Alas, the problem may soon take a significant turn for the worse thanks to deep fakes. Get used to hearing that phrase. It refers to digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake. Think of it as a destructive variation of the Turing test: imitation designed to mislead and deceive rather than to emulate and iterate.
  • Fueled by artificial intelligence, digital impersonation is on the rise. Machine-learning algorithms (often neural networks) combined with facial-mapping software enable the cheap and easy fabrication of content that hijacks one’s identity—voice, face, body. Deep fake technology inserts individuals’ faces into videos without their permission. The result is “believable videos of people doing and saying things they never did.” Not surprisingly, this concept has been quickly leveraged to sleazy ends. The latest craze is fake sex videos featuring celebrities like Gal Gadot and Emma Watson. Although the sex scenes look realistic, they are not consensual cyber porn. Conscripting individuals (more often women) into fake porn undermines their agency, reduces them to sexual objects, engenders feelings of embarrassment and shame, and inflicts reputational harm that can devastate careers (especially for everyday people). Regrettably, cyber stalkers are sure to use fake sex videos to torment victims. What comes next? We can expect to see deep fakes used in other abusive, individually targeted ways, such as undermining a rival’s relationship with fake evidence of an affair or an enemy’s career with fake evidence of a racist comment.
Paul Merrell

U.S. vs. Facebook: A Playbook for SEC, DOJ and EDNY - 0 views

  • Six4Three recently published a playbook for the FTC to get to the bottom of Facebook’s secretive deals selling user data without privacy controls. In light of The New York Times article reporting multiple criminal investigations into Facebook surrounding these secretive deals, we’re publishing the playbook for criminal investigators.
    Perhaps the most important recognition at the outset is that the secretive deals that have been reported, whether those with a handful of device manufacturers or with 150 large technology companies, are just the tip of the iceberg. Those secretive deals handing over user data in exchange for gobs of cash were merely part and parcel of a much broader illegal scheme that begins with Facebook’s transition to mobile in 2012 and continues to this very day. We believe this illegal scheme amounts to a clear RICO violation. The United Kingdom Parliament agrees. Here’s how criminal investigators can overcome Facebook’s incredibly effective concealment campaign and bring a viable RICO case.
    Facebook’s pattern of racketeering activity is a play in three acts from at least 2012 to present. The first act is all about the desperation resulting from the collapse of Facebook’s desktop advertising business right around its IPO and the various securities violations that resulted. The second act is about covering up those securities violations by illegally building its mobile advertising business via extortion and wire fraud in order to close the gap in Facebook’s revenue projections before the world took notice, which likely resulted in additional securities violations. The third act is about covering up the extortion and wire fraud by lying to government officials investigating Facebook while continuing to effectuate the scheme.
    We are still in the third act. For almost a decade now Facebook has been covering up one illegal act with another in order to hide how it managed to ramp up its mobile advertising business faster than any other business in the history of capitalism. The abuses of Facebook’s data, from Russian interference in the 2016 election to Cambridge Analytica and Brexit, all stem in substantial part from the decisions Facebook knowingly, willfully and maliciously made to facilitate this criminal conspiracy. Put simply, Facebook’s transition to mobile destabilized the world.
  •  
    This is so reminiscent of Microsoft's tactics at the point when antitrust regulators stepped in.