Future of the Web: Group items matching "people,rules" in title, tags, annotations or URL

Gonzalo San Gil, PhD.

Dangerous Ruling: EU Says Google Must Help People Disappear Stuff They Don't Like From The Internet | Techdirt - 0 views

  •  
    "from the right-to-be-forgotten dept For years now we've explained why Europe's concept of a "right to be forgotten" is a terrible, dangerous and impossible idea. The basic idea is that if you were involved in something that you're not happy about later, you can demand that the incident be stricken from the record... everywhere. It's a clear attack on free speech -- allowing people to censor others from saying truthful and accurate things about someone. "
Paul Merrell

USA Freedom Act Passes House, Codifying Bulk Collection For First Time, Critics Say - The Intercept - 0 views

  • After only one hour of floor debate, and no allowed amendments, the House of Representatives today passed legislation that opponents believe may give brand new authorization to the U.S. government to conduct domestic dragnets. The USA Freedom Act was approved in a 338-88 vote, with approximately equal numbers of Democrats and Republicans voting against. The bill’s supporters say it will disallow bulk collection of domestic telephone metadata, a practice in which the Foreign Intelligence Surveillance Court has regularly ordered phone companies to turn over such data. The Obama administration claims such collection is authorized by Section 215 of the USA Patriot Act, which is set to expire June 1. However, the U.S. Court of Appeals for the Second Circuit recently held that Section 215 does not provide such authorization. Today’s legislation would prevent the government from issuing such orders for bulk collection and instead rely on telephone companies to store all their metadata — some of which the government could then demand using a “specific selection term” related to foreign terrorism. Bill supporters maintain this would prevent indiscriminate collection.
  • However, the legislation may not end bulk surveillance and in fact could codify the ability of the government to conduct dragnet data collection. “We’re taking something that was not permitted under regular section 215 … and now we’re creating a whole apparatus to provide for it,” Rep. Justin Amash, R-Mich., said on Tuesday night during a House Rules Committee proceeding. “The language does limit the amount of bulk collection, it doesn’t end bulk collection,” Rep. Amash said, arguing that the problematic “specific selection term” allows for “very large data collection, potentially in the hundreds of thousands of people, maybe even millions.” In a statement posted to Facebook ahead of the vote, Rep. Amash said the legislation “falls woefully short of reining in the mass collection of Americans’ data, and it takes us a step in the wrong direction by specifically authorizing such collection in violation of the Fourth Amendment to the Constitution.”
  • “While I appreciate a number of the reforms in the bill and understand the need for secure counter-espionage and terrorism investigations, I believe our nation is better served by allowing Section 215 to expire completely and replacing it with a measure that finds a better balance between national security interests and protecting the civil liberties of Americans,” Congressman Ted Lieu, D-Calif., said in a statement explaining his vote against the bill.
  • ...2 more annotations...
  • Not addressed in the bill, however, are a slew of other spying authorities in use by the NSA that either directly or inadvertently target the communications of American citizens. Lawmakers offered several amendments in the days leading up to the vote that would have tackled surveillance activities laid out in Section 702 of the Foreign Intelligence Surveillance Act and Executive Order 12333 — two authorities intended for foreign surveillance that have been used to collect Americans’ internet data, including online address books and buddy lists. The House Rules Committee, however, prohibited consideration of any amendment to the USA Freedom Act, claiming that any changes to the legislation would have weakened its chances of passage.
  • The measure now goes to the Senate where its future is uncertain. Majority Leader Mitch McConnell has declined to schedule the bill for consideration, and is instead pushing for a clean reauthorization of expiring Patriot Act provisions that includes no surveillance reforms. Senators Ron Wyden, D-Ore., and Rand Paul, R-Ky., have threatened to filibuster any bill that extends the Patriot Act without also reforming the NSA.
  •  
    Surprise, surprise. U.S. "progressive" groups are waging an all-out email lobbying effort to sunset the Patriot Act. https://www.sunsetthepatriotact.com/ Same with civil liberties groups. e.g., https://action.aclu.org/secure/Section215 And a coalition of libertarian organizations. http://docs.techfreedom.org/Coalition_Letter_McConnell_215Reauth_4.27.15.pdf
Paul Merrell

The Newest Reforms on SIGINT Collection Still Leave Loopholes | Just Security - 0 views

  • Director of National Intelligence James Clapper this morning released a report detailing new rules aimed at reforming the way signals intelligence is collected and stored by certain members of the United States Intelligence Community (IC). The long-awaited changes follow up on an order announced by President Obama one year ago that laid out the White House’s principles governing the collection of signals intelligence. That order, commonly known as PPD-28, purports to place limits on the use of data collected in bulk and to increase privacy protections related to the data collected, regardless of nationality. Accordingly, most of the changes presented as “new” by Clapper’s office  (ODNI) stem directly from the guidance provided in PPD-28, and so aren’t truly new. And of the biggest changes outlined in the report, there are still large exceptions that appear to allow the government to escape the restrictions with relative ease. Here’s a quick rundown.
  • National security letters (NSLs). The report also states that the FBI’s gag orders related to NSLs expire three years after the opening of a full-blown investigation or three years after an investigation’s close, whichever is earlier. However, these expiration dates can be easily overridden by an FBI Special Agent in Charge or a Deputy Assistant FBI Director who finds that the statutory standards for secrecy about the NSL continue to be satisfied (which at least one court has said isn’t a very high bar). This exception also doesn’t address concerns that NSL gag orders lack adequate due process protections, lack basic judicial oversight, and may violate the First Amendment.
  • Retention policy for non-U.S. persons. The new rules say that the IC must now delete information about “non-U.S. persons” that’s been gathered via signals intelligence after five years. However, there is a loophole that will let spies hold onto that information indefinitely whenever the Director of National Intelligence determines (after considering the views of the ODNI’s Civil Liberties Protection Officer) that retaining information is in the interest of national security. The new rules don’t say whether the exceptions will be directed at entire groups of people or individual surveillance targets.
  • Section 215 metadata. Updates to the rules concerning the use of data collected under Section 215 of the Patriot Act include the requirement that the Foreign Intelligence Surveillance Court (rather than authorized NSA officials) must determine that spies have “reasonable, articulable suspicion” prior to querying Section 215 data, outside of emergency circumstances. What qualifies as an emergency for these purposes? We don’t know. Additionally, the IC is now limited to two “hops” in querying the database. This means that spies can only play two degrees of Kevin Bacon, instead of the previously allowed three degrees, with the contacts of anyone targeted under Section 215. The report doesn’t explain what would prevent the NSA (or other agency using the 215 databases) from getting around this limit by redesignating a phone number found in the first or second hop as a new “target,” thereby allowing the agency to continue the contact chain.
  • ...1 more annotation...
  • The report also details the ODNI’s and IC’s plans for the future, including: (1) Working with Congress to reauthorize bulk collection under Section 215. (2) Updating agency guidelines under Executive Order 12333 “to protect the privacy and civil liberties of U.S. persons.” (3) Producing another annual report in January 2016 on the IC’s progress in implementing signals intelligence reforms. These plans raise more questions than they answer. Given the considerable doubts about Section 215’s effectiveness, why is the ODNI pushing for its reauthorization? And what will the ODNI consider appropriate privacy protections under Executive Order 12333?
Gonzalo San Gil, PhD.

'No Suicide' Ruled In Grooveshark Founder's Death | Digital Music News - 0 views

  •  
    "Despite incredibly suspicious circumstances, the death of Grooveshark co-founder Josh Greenberg has been ruled a 'no suicide' by coroners. In an autopsy report leaked Tuesday afternoon to Digital Music News, Greenberg's abrupt death was mysteriously identified as 'Undetermined,' with no abnormal concentrations of toxins or other unusual post-mortem conditions found."
Paul Merrell

Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Illegally Using Stingrays to Disrupt Cellular Communications | Electronic Frontier Foundation - 0 views

  • This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color. Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area—not just the suspect’s phone—this means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally-protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications. Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.
  • The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public. Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”
  • Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report:
  • ...2 more annotations...
  • The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is utilizing without permission is public property leased to private companies for the purpose of providing them next generation wireless communications, it goes without saying that the FCC has a duty to act.
  • But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement has been secretly using stingrays for years and across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rule-making proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around acquisition and use of this powerful wireless surveillance technology.  Anyone can support the complaint by tweeting at FCC Commissioners or by signing the petitions hosted by Color of Change or MAG-Net.
  •  
    An important test case on the constitutionality of stingray mobile device surveillance.
Paul Merrell

FCC's Wheeler Promises Net Neutrality Action 'Shortly' | Adweek - 0 views

  • The pressure is mounting on the Federal Communications Commission to revisit how it will regulate net neutrality in the wake of the DC Circuit Court of Appeals decision that tossed the rules back in the regulator's lap.
  • More than 1 million people signed the petition urging the FCC to "reassert its clear authority over our nation's communications infrastructure" and classify the transmission component of broadband Internet as a telecommunications service. While the court struck down the non-discrimination and no-blocking rules, it also ruled the FCC had the authority to regulate the Internet. That decision leaves the FCC with a thorny legal choice about whether it regulates by classifying the Internet as a telecommunications service or as an information service. In seeking to reassure the petitioners, Wheeler affirmed the commission's commitment to preserve and protect the open Internet. "We interpret the court decision as an invitation and we will accept that invitation," Wheeler said in a press conference following Thursday's meeting. "One of the great things about what the Internet does and why it needs to stay open, it enables people to organize and express themselves. A million people? That's boffo."
  •  
    Over a million signed the petition. Wow! But note that the battle is not over. The FCC could reimplement net neutrality now if it reclassified broadband internet as a telecommunications service. That the FCC has not already set this in motion raises danger flags. All it takes is for a few contracts to be signed to give the ISPs 5th Amendment takings clause claims for damages against the government for reimplementing net neutrality the right way. A "reasonable investment-backed expectation" is the relevant 5th Amendment trigger.
Gonzalo San Gil, PhD.

New Year's Message: Change, Innovation And Optimism, Despite Challenges | Techdirt - 0 views

  •  
    "so happy and optimistic despite constantly writing about negative things that were happening -- people trying to block innovation, politicians passing crazy laws, judges making bad rulings, etc. As I pointed out then, I actually found it rather easy to stay happy because I had seen how far we've come over the years since Techdirt began, way back in 1997. I had seen how much innovation had happened in spite of attempts to stop it. "
Paul Merrell

U.S. Says It Spied on 89,000 Targets Last Year, But the Number Is Deceptive | Threat Level | WIRED - 0 views

  • About 89,000 foreigners or organizations were targeted for spying under a U.S. surveillance order last year, according to a new transparency report. The report was released for the first time Friday by the Office of the Director of National Intelligence, upon order of the president, in the wake of surveillance leaks by NSA whistleblower Edward Snowden. But the report, which covers only surveillance orders issued in 2013, doesn’t tell the whole story about how many individuals the spying targeted or how many Americans were caught in the surveillance that targeted foreigners. Civil liberties groups say the real number is likely “orders of magnitude” larger than this. “Even if it was an honest definition of ‘target’—that is, an individual instead of a group—that also is not encompassing those who are ancillary to a target and are caught up in the dragnet,” says Kurt Opsahl, deputy general counsel of the Electronic Frontier Foundation.
  • The report, remarkably, shows that the government obtained just one order last year under Section 702 of FISA—which allows for bulk collection of data on foreigners—and that this one order covered 89,138 targets. But, as the report notes, “target” can refer to “an individual person, a group, an organization composed of multiple individuals or a foreign power that possesses or is likely to communicate foreign intelligence information.” Furthermore, Section 702 orders are actually certificates issued by the FISA Court that can cover surveillance of an entire facility. And since, as the government points out in its report, the government cannot know how many people use a facility, the figure only “reflects an estimate of the number of known users of particular facilities (sometimes referred to as selectors) subject to intelligence collection under those Certifications,” the report notes.
  • “If you’re actually trying to get a sense of the number of human beings affected or the number of Americans affected, the number of people affected is vastly, vastly larger,” says Julian Sanchez, senior fellow at the Cato Institute. “And how many of those are Americans is impossible to say. But [although] you may not think you are routinely communicating with foreign persons, [this] is not any kind of assurance that your communications are not part of the traffic subject to interception.” Sanchez points out that each individual targeted is likely communicating with dozens or hundreds of others, whose communications will be picked up in the surveillance. “And probably a lot of these targets are not individuals but entire web sites or companies. While [a company like the Chinese firm] Huawei might be a target, thousands of emails used by thousands of employees will be swept up.” How many of those employees might be American or communicating with Americans is unknown.
  • ...5 more annotations...
  • Also revealed in today’s report is the number of times the government has queried the controversial phone records database it created by collecting the phone records of every subscriber from U.S. providers. According to the report, the government used 423 “selectors” to search its massive phone records database, which includes records going back to at least 2006 when the program began. A search involves querying a specific phone number or device ID that appears in the database. The government has long maintained that its collection of phone records isn’t a violation of its authority, since it only views the records of specific individuals targeted in an investigation. But such searches, even if targeted at phone numbers used by foreigners, would include calls made to and from Americans as well as calls exchanged with people two or three hops out from the targeted number.
  • In its report, the government indicated that the 423 selectors involved just 248 “known or presumed” Americans whose information was collected by the agency in the database. But Opsahl says that both of these numbers are deceptive given what we know about the database and how it’s been used. “We know it’s affecting millions of people,” he points out. But “then we have estimated numbers of affected people [that are just] in the three digits. That requires some effort [on the government's part] to find a way to do the definition of the number [in such a way] to make it as small as possible.”
  • One additional figure today’s report covers is the number of National Security Letters the government issued last year to businesses to obtain data on accountholders and users—19,212. NSLs are written demands from the FBI that compel internet service providers, credit companies, financial institutions and others to hand over confidential records about their customers, such as subscriber information, phone numbers and e-mail addresses, websites visited, and more. These letters are a powerful tool because they do not require court approval, and they come with a built-in gag order, preventing recipients from disclosing to anyone that they have received an NSL. An FBI agent looking into a possible anti-terrorism case can self-issue an NSL to a credit bureau, ISP, or phone company with only the sign-off of the Special Agent in Charge of their office. The FBI has merely to assert that the information is “relevant” to an investigation into international terrorism or clandestine intelligence activities.
  • The FBI has issued hundreds of thousands of NSLs over the years and has been reprimanded for abusing them. Last year a federal judge ruled that the use of NSLs is unconstitutional, due to the gag order that accompanies them, and ordered the government to stop using them. Her ruling, however, was stayed pending the government’s appeal.
  • According to the government’s report today, the 19,000 NSLs issued last year involved more than 38,000 requests for information.
Paul Merrell

Victory for Users: Librarian of Congress Renews and Expands Protections for Fair Uses | Electronic Frontier Foundation - 0 views

  • The new rules for exemptions to copyright's DRM-circumvention laws were issued today, and the Librarian of Congress has granted much of what EFF asked for over the course of months of extensive briefs and hearings. The exemptions we requested—ripping DVDs and Blu-rays for making fair use remixes and analysis; preserving video games and running multiplayer servers after publishers have abandoned them; jailbreaking cell phones, tablets, and other portable computing devices to run third party software; and security research and modification and repairs on cars—have each been accepted, subject to some important caveats.
  • The exemptions are needed thanks to a fundamentally flawed law that forbids users from breaking DRM, even if the purpose is a clearly lawful fair use. As software has become ubiquitous, so has DRM.  Users often have to circumvent that DRM to make full use of their devices, from DVDs to games to smartphones and cars. The law allows users to request exemptions for such lawful uses—but it doesn’t make it easy. Exemptions are granted through an elaborate rulemaking process that takes place every three years and places a heavy burden on EFF and the many other requesters who take part. Every exemption must be argued anew, even if it was previously granted, and even if there is no opposition. The exemptions that emerge are limited in scope. What is worse, they only apply to end users—the people who are actually doing the ripping, tinkering, jailbreaking, or research—and not to the people who make the tools that facilitate those lawful activities. The section of the law that creates these restrictions—the Digital Millennium Copyright Act's Section 1201—is fundamentally flawed, has resulted in myriad unintended consequences, and is long past due for reform or removal altogether from the statute books. Still, as long as its rulemaking process exists, we're pleased to have secured the following exemptions.
  • The new rules are long and complicated, and we'll be posting more details about each as we get a chance to analyze them. In the meantime, we hope each of these exemptions enables more exciting fair uses that educate, entertain, improve the underlying technology, and keep us safer. A better long-term solution, though, is to eliminate the need for this onerous rulemaking process. We encourage lawmakers to support efforts like the Unlocking Technology Act, which would limit the scope of Section 1201 to copyright infringements—not fair uses. And as the White House looks for the next Librarian of Congress, who is ultimately responsible for issuing the exemptions, we hope to get a candidate who acts—as a librarian should—in the interest of the public's access to information.
Paul Merrell

How Edward Snowden Changed Everything | The Nation - 0 views

  • Ben Wizner, who is perhaps best known as Edward Snowden’s lawyer, directs the American Civil Liberties Union’s Speech, Privacy & Technology Project. Wizner, who joined the ACLU in August 2001, one month before the 9/11 attacks, has been a force in the legal battles against torture, watch lists, and extraordinary rendition since the beginning of the global “war on terror.” On October 15, we met with Wizner in an upstate New York pub to discuss the state of privacy advocacy today. In sometimes sardonic tones, he talked about the transition from litigating on issues of torture to privacy advocacy, differences between corporate and state-sponsored surveillance, recent developments in state legislatures and the federal government, and some of the obstacles impeding civil liberties litigation. The interview has been edited and abridged for publication.
  • Many of the technologies, both military technologies and surveillance technologies, that are developed for purposes of policing the empire find their way back home and get repurposed. You saw this in Ferguson, where we had military equipment in the streets to police nonviolent civil unrest, and we’re seeing this with surveillance technologies, where things that are deployed for use in war zones are now commonly in the arsenals of local police departments. For example, a cellphone surveillance tool that we call the StingRay—which mimics a cellphone tower and communicates with all the phones around—was really developed as a military technology to help identify targets. Now, because it’s so inexpensive, and because there is a surplus of these things that are being developed, it ends up getting pushed down into local communities without local democratic consent or control.
  • ...4 more annotations...
  • SG & TP: How do you see the current state of the right to privacy? BW: I joked when I took this job that I was relieved that I was going to be working on the Fourth Amendment, because finally I’d have a chance to win. That was intended as gallows humor; the Fourth Amendment had been a dishrag for the last several decades, largely because of the war on drugs. The joke in civil liberties circles was, “What amendment?” But I was able to make this joke because I was coming to Fourth Amendment litigation from something even worse, which was trying to sue the CIA for torture, or targeted killings, or various things where the invariable outcome was some kind of non-justiciability ruling. We weren’t even reaching the merits at all. It turns out that my gallows humor joke was prescient.
  • The truth is that over the last few years, we’ve seen some of the most important Fourth Amendment decisions from the Supreme Court in perhaps half a century. Certainly, I think the Jones decision in 2012 [U.S. v. Jones], which held that GPS tracking was a Fourth Amendment search, was the most important Fourth Amendment decision since Katz in 1967 [Katz v. United States], in terms of starting a revolution in Fourth Amendment jurisprudence signifying that changes in technology were not just differences in degree, but they were differences in kind, and require the Court to grapple with it in a different way. Just two years later, you saw the Court holding that police can’t search your phone incident to an arrest without getting a warrant [Riley v. California]. Since 2012, at the level of Supreme Court jurisprudence, we’re seeing a recognition that technology has required a rethinking of the Fourth Amendment. At the state and local level, we’re seeing a wave of privacy legislation that’s really passing beneath the radar for people who are not paying close attention. It’s not just happening in liberal states like California; it’s happening in red states like Montana, Utah, and Wyoming. And purple states like Colorado and Maine. You see as many libertarians and conservatives pushing these new rules as you see liberals. It really has cut across at least party lines, if not ideologies. My overall point here is that with respect to constraints on government surveillance—I should be more specific—law-enforcement government surveillance—momentum has been on our side in a way that has surprised even me.
  • Do you think that increased privacy protections will happen on the state level before they happen on the federal level? BW: I think so. For example, look at what occurred with the death penalty and the Supreme Court’s recent Eighth Amendment jurisprudence. The question under the Eighth Amendment is, “Is the practice cruel and unusual?” The Court has looked at what it calls “evolving standards of decency” [Trop v. Dulles, 1958]. It matters to the Court, when it’s deciding whether a juvenile can be executed or if a juvenile can get life without parole, what’s going on in the states. It was important to the litigants in those cases to be able to show that even if most states allowed the bad practice, the momentum was in the other direction. The states that were legislating on this most recently were liberalizing their rules, were making it harder to execute people under 18 or to lock them up without the possibility of parole. I think you’re going to see the same thing with Fourth Amendment and privacy jurisprudence, even though the Court doesn’t have a specific doctrine like “evolving standards of decency.” The Court uses this much-maligned test, “Do individuals have a reasonable expectation of privacy?” We’ll advance the argument, I think successfully, that part of what the Court should look at in considering whether an expectation of privacy is reasonable is showing what’s going on in the states. If we can show that a dozen or eighteen state legislatures have enacted a constitutional protection that doesn’t exist in federal constitutional law, I think that that will influence the Supreme Court.
  • The question is will it also influence Congress. I think there the answer is also “yes.” If you’re a member of the House or the Senate from Montana, and you see that your state legislature and your Republican governor have enacted privacy legislation, you’re not going to be worried about voting in that direction. I think this is one of those places where, unlike civil rights, where you saw most of the action at the federal level and then getting forced down to the states, we’re going to see more action at the state level getting funneled up to the federal government.
  •  
    A must-read. Ben Wizner discusses the current climate in the courts in government surveillance cases and how Edward Snowden's disclosures have affected that, and much more. Wizner is not only Edward Snowden's lawyer, he is also the coordinator of all ACLU litigation on electronic surveillance matters.
Paul Merrell

Net neutrality comment fraud will be investigated by government | Ars Technica - 0 views

  • The US Government Accountability Office (GAO) will investigate the use of impersonation in public comments on the Federal Communications Commission's net neutrality repeal. Congressional Democrats requested the investigation last month, and the GAO has granted the request. While the investigation request was spurred by widespread fraud in the FCC's net neutrality repeal docket, Democrats asked the GAO to also "examine whether this shady practice extends to other agency rulemaking processes." The GAO will do just that, having told Democrats in a letter that it will "review the extent and pervasiveness of fraud and the misuse of American identities during federal rulemaking processes."
  • The GAO provides independent, nonpartisan audits and investigations for Congress. The GAO previously agreed to investigate DDoS attacks that allegedly targeted the FCC comment system, also in response to a request by Democratic lawmakers. The Democrats charged that Chairman Ajit Pai's FCC did not provide enough evidence that the attacks actually happened, and they asked the GAO to find out what evidence the FCC used to make its determination. Democrats also asked the GAO to examine whether the FCC is prepared to prevent future attacks. The DDoS investigation should happen sooner than the new one on comment fraud because the GAO accepted that request in October.
  • The FCC's net neutrality repeal received more than 22 million comments, but millions were apparently submitted by bots and falsely attributed to real Americans (including some dead ones) who didn't actually submit comments. Various analyses confirmed the widespread spam and fraud; one analysis found that 98.5 percent of unique comments opposed the repeal plan.
  • ...1 more annotation...
  • The FCC's comment system makes no attempt to verify submitters' identities, and allows bulk uploads so that groups collecting signatures for letters and petitions can get them on the docket easily. It was like that even before Pai took over as chair, but the fraud became far more pervasive in the proceeding that led to the repeal of net neutrality rules. Pai's FCC did not remove any fraudulent comments from the record. Democratic FCC Commissioner Jessica Rosenworcel called for a delay in the net neutrality repeal vote because of the fraud, but the Republican majority pushed the vote through as scheduled last month. New York Attorney General Eric Schneiderman has been investigating the comment fraud and says the FCC has stonewalled the investigation by refusing to provide evidence. Schneiderman is also leading a lawsuit to reverse the FCC's net neutrality repeal, and the comment fraud could play a role in the case. "We understand that the FCC's rulemaking process requires it to address all comments it receives, regardless of who submits them," Congressional Democrats said in their letter requesting a GAO investigation. "However, we do not believe any outside parties should be permitted to generate any comments to any federal governmental entity using information it knows to be false, such as the identities of those submitting the comments."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
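
    To make the class-attribute approach concrete, here is a minimal hypothetical snippet; the class names are invented for illustration and are not drawn from the article or from any particular microformat:

        <div class="chapter">
          <p class="epigraph">An epigraph, distinguished from ordinary prose only by its class.</p>
          <p class="summary">A chapter summary, likewise.</p>
          <p>Ordinary body text needs no class at all.</p>
        </div>

    A downstream stylesheet or XSLT transform can treat class="epigraph" much as a richer schema would treat a dedicated epigraph element.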
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
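
    Unzipped, a typical ePub (version 2) package shows exactly this anatomy; the names under OEBPS/ are conventional rather than mandated, so treat the layout as illustrative:

        mimetype                   (the literal string "application/epub+zip")
        META-INF/container.xml     (points to the package file below)
        OEBPS/content.opf          (package metadata and manifest)
        OEBPS/toc.ncx              (table-of-contents navigation)
        OEBPS/chapter01.xhtml      (the book content itself, as XHTML)
        OEBPS/stylesheet.css       (presentation)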
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
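
    Unpacking an IDML file reveals that cluster directly. The listing below is illustrative rather than exhaustive, since the names of individual spread and story files vary from document to document:

        designmap.xml                      (top-level map tying the parts together)
        Resources/Styles.xml               (paragraph and character style definitions)
        Resources/Fonts.xml                (font usage)
        MasterSpreads/MasterSpread_*.xml   (master pages)
        Spreads/Spread_*.xml               (layout geometry)
        Stories/Story_*.xml                (the actual text content)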
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
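
    As a hypothetical before-and-after for that cleanup (the class name is invented; InDesign's actual export naming varies):

        <!-- roughly what the export produces -->
        <p class="bullet-item">First point</p>
        <p class="bullet-item">Second point</p>

        <!-- what search-and-replace turns it into -->
        <ul>
          <li>First point</li>
          <li>Second point</li>
        </ul>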
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
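
    The invocation is a one-liner; the file names here are illustrative, not the authors' actual ones:

        xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml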
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
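
    The script itself is not reproduced in the article, but a minimal sketch of the kind of template rule it would contain might look like the following. The style names (Body, Heading 1) are assumptions standing in for whatever the InDesign template defines, and a real ICML file also needs document-level wrapper elements that are omitted here for brevity:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">
          <xsl:output method="xml" indent="yes"/>

          <!-- Each XHTML paragraph becomes an ICML paragraph range whose
               style name matches a paragraph style in the InDesign template. -->
          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>

          <!-- Headings map to their own styles; h2, h3, and so on follow the same pattern. -->
          <xsl:template match="h:h1">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading 1">
              <CharacterStyleRange>
                <Content><xsl:value-of select="."/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>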
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
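    To show how little the “wrapper” involves, here is a hedged sketch of rolling a minimal ePub by hand with Python's standard library. The file names and metadata are placeholders, and a real book would also include the table-of-contents file that eCub generates for us:

        import zipfile

        CONTAINER = """<?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>"""

        OPF = """<?xml version="1.0" encoding="UTF-8"?>
        <package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid">
          <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Book Publishing 1</dc:title>
            <dc:language>en</dc:language>
            <dc:identifier id="bookid">urn:isbn:0000000000</dc:identifier>
          </metadata>
          <manifest>
            <item id="ch01" href="chapter01.html" media-type="application/xhtml+xml"/>
            <item id="css" href="ebook.css" media-type="text/css"/>
          </manifest>
          <spine>
            <itemref idref="ch01"/>
          </spine>
        </package>"""

        with zipfile.ZipFile("book.epub", "w") as epub:
            # The mimetype entry must come first and must be stored uncompressed.
            epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
            epub.writestr("META-INF/container.xml", CONTAINER)
            epub.writestr("OEBPS/content.opf", OPF)
            epub.write("chapter01.html", "OEBPS/chapter01.html")  # exported XHTML chapter
            epub.write("ebook.css", "OEBPS/ebook.css")            # the custom ebook stylesheet

    Run against real chapter files, the result opens in any ePub reader; the packaging, not the content, is the only new work.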
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course, a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT transformation to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS.
    Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is an XML vocabulary that browsers render natively, and it is compatible with the WebKit layout engine Miro wants to move NCP to.
    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, now known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.
    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
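    As a rough illustration of the pivot idea, here is a hypothetical sketch that walks an invented outline structure (a stand-in for the real NCP format, whose internals are not documented here) and emits XHTML that a WebKit engine can render and that XSLT can then carry onward to ODF, ePub, or IDML. The Node class and every name in it are assumptions for the sake of the example:

        from xml.etree import ElementTree as ET

        class Node:
            """Stand-in for an NCP outline node: a title, note text, children."""
            def __init__(self, title, text="", children=None):
                self.title, self.text = title, text
                self.children = children or []

        def node_to_xhtml(node, parent, depth=1):
            # Outline titles become headings, note text becomes paragraphs,
            # and child nodes nest inside <div> sections.
            section = ET.SubElement(parent, "div", {"class": "level-%d" % depth})
            ET.SubElement(section, "h%d" % min(depth, 6)).text = node.title
            if node.text:
                ET.SubElement(section, "p").text = node.text
            for child in node.children:
                node_to_xhtml(child, section, depth + 1)

        root = Node("My Notebook", children=[Node("First note", "Some note text...")])
        html = ET.Element("html", {"xmlns": "http://www.w3.org/1999/xhtml"})
        node_to_xhtml(root, ET.SubElement(html, "body"))
        ET.ElementTree(html).write("outline.xhtml", encoding="unicode")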
Gary Edwards

Meteor: The NeXT Web - 0 views

  •  
    "Writing software is too hard and it takes too long. It's time for a new way to write software - especially application software, the user-facing software we use every day to talk to people and keep track of things. This new way should be radically simple. It should make it possible to build a prototype in a day or two, and a real production app in a few weeks. It should make everyday things easy, even when those everyday things involve hundreds of servers, millions of users, and integration with dozens of other systems. It should be built on collaboration, specialization, and division of labor, and it should be accessible to the maximum number of people. Today, there's a chance to create this new way - to build a new platform for cloud applications that will become as ubiquitous as previous platforms such as Unix, HTTP, and the relational database. It is not a small project. There are many big problems to tackle, such as: How do we transition the web from a "dumb terminal" model that is based on serving HTML, to a client/server model that is based on exchanging data? How do we design software to run in a radically distributed environment, where even everyday database apps are spread over multiple data centers and hundreds of intelligent client devices, and must integrate with other software at dozens of other organizations? How do we prepare for a world where most web APIs will be push-based (realtime), rather than polling-driven? In the face of escalating complexity, how can we simplify software engineering so that more people can do it? How will software developers collaborate and share components in this new world? Meteor is our audacious attempt to solve all of these big problems, at least for a certain large class of everyday applications. We think that success will come from hard work, respect for history and "classically beautiful" engineering patterns, and a philosophy of generally open and collaborative development. " .............. "It is not a
  •  
    "How do we transition the web from a "dumb terminal" model that is based on serving HTML, to a client/server model that is based on exchanging data?" From a litigation aspect, the best bet I know of is antitrust litigation against the W3C and the WHATWG Working Group for implementing a non-interoperable specification. See e.g., Commission v. Microsoft, No. T-167/08, European Community Court of First Instance (Grand Chamber Judgment of 17 September, 2007), para. 230, 374, 421, http://preview.tinyurl.com/chsdb4w (rejecting Microsoft's argument that "interoperability" has a 1-way rather than 2-way meaning; information technology specifications must be disclosed with sufficient specificity to place competitors on an "equal footing" in regard to interoperability; "the 12th recital to Directive 91/250 defines interoperability as 'the ability to exchange information and mutually to use the information which has been exchanged'"). Note that the Microsoft case was prosecuted on the E.U.'s "abuse of market power" law that corresponds to the U.S. Sherman Act § 2 (monopolies). But undoubtedly the E.U. courts would apply the same standard to "agreements among undertakings" in restraint of trade, counterpart to the Sherman Act's § 1 (conspiracies in restraint of trade), the branch that applies to development of voluntary standards by competitors. But better to innovate and obsolete HTML, I think. DG Competition and the DoJ won't prosecute such cases soon. For example, Obama ran for office promising to "reinvigorate antitrust enforcement" but his DoJ has yet to file its first antitrust case against a big company. Nb., virtually the same definition of interoperability announced by the Court of First Instance is provided by ISO/IEC JTC-1 Directives, annex I ("eye"), which is applicable to all international standards in the IT sector: "... interoperability is understood to be the ability of two or more IT systems to exchange information at one or more standardised interfaces
Gonzalo San Gil, PhD.

Internet providers will soon need permission to share your web browsing history - The Verge - 1 views

  •  
    "New privacy rules require opting-in to sensitive sharing by Jacob Kastrenakes Oct 27, 2016, 10:41a "
Paul Merrell

EU okays 'renewed' data transfer deal, lets US firms move Europeans' private info overseas - RT News - 0 views

  • The EU has accepted a new version of the so-called Privacy Shield framework that would allow US companies to transfer Europeans’ private data to servers across the ocean. The EU's highest court had struck down the previous agreement over US surveillance concerns.
  • The majority of EU members voted in support of the Privacy Shield pact with the US that had been designed to replace its predecessor, the Safe Harbor system, which the highest EU court ruled “invalid” in October 2015 following Edward Snowden’s revelations about mass US surveillance.
  • The newly-adopted agreement will come into force starting Tuesday. The deal, which is said to be aimed at protecting European citizens’ private data, defines the rules of how the sharing of information should be handled. It gives legal ground for tech companies such as Google, Facebook and MasterCard to move Europeans’ personal data to US servers, bypassing an EU ban on moving personal information out of the 28-nation bloc. The agreement covers everything from private data about employees to detailed records of what people do online. “For the first time, the US has given the EU written assurance that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms and has ruled out indiscriminate mass surveillance of European citizens' data,” the statement said.
  • The new deal now grants greater guarantees to European customers and provides “accessible and affordable redress mechanisms” in case any disputes concerning US spying arise. An ombudsman will also be created within the US State Department to review complaints filed by EU citizens.
  • Privacy Shield, however, has also faced sharp criticism. Concerns about extensive US spying activity were raised in Europe after whistleblower Edward Snowden released a trove of controversial material on Washington’s surveillance practices.Digital rights group Privacy International (PI) said the newly-adopted pact had been drawn up on a "flawed premise" and “remains full of holes and hence offers limited protection to personal data”. 
Paul Merrell

NSA Doesn't Want Court That Found Phone Dragnet Illegal to Actually Do Anything About It - 1 views

  • The National Security Agency doesn’t think it’s relevant that its dragnet of American telephone data — information on who’s calling who, when, and for how long — was ruled illegal back in May. An American Civil Liberties Union lawsuit is asking the Second Circuit Court of Appeals, which reached that conclusion, to immediately enjoin the program. But the U.S. government responded on Monday evening, saying that Congressional passage of the USA Freedom Act trumped the earlier ruling. The Freedom Act ordered an end to the program — but with a six-month wind-down period.
  • The ACLU still maintains that even temporary revival is a blatant infringement on Americans’ legal rights. “We strongly disagree with the government’s claim that recent reform legislation was meant to give the NSA’s phone-records dragnet a new lease on life,” said Jameel Jaffer, the ACLU’s deputy legal director, in a statement. “The appeals court should order the NSA to end this surveillance now. It’s unlawful and it’s an entirely unnecessary intrusion into the privacy of millions of people.” On Monday, the Obama administration announced that at the same time the National Security Agency ends the dragnet, it will also stop perusing the vast archive of data collected by the program. Read the U.S. government brief responding to the ACLU below:
  •  
    Go ACLU!
Paul Merrell

Trump administration pulls back curtain on secretive cybersecurity process - The Washington Post - 0 views

  • The White House on Wednesday made public for the first time the rules by which the government decides to disclose or keep secret software flaws that can be turned into cyberweapons — whether by U.S. agencies hacking for foreign intelligence, money-hungry criminals or foreign spies seeking to penetrate American computers. The move to publish an unclassified charter responds to years of criticism that the process was unnecessarily opaque, fueling suspicion that it cloaked a stockpile of software flaws that the National Security Agency was hoarding to go after foreign targets but that put Americans’ cybersecurity at risk.
  • The rules are part of the “Vulnerabilities Equities Process,” which the Obama administration revamped in 2014 as a multiagency forum to debate whether and when to inform companies such as Microsoft and Juniper that the government has discovered or bought a software flaw that, if weaponized, could affect the security of their product. The Trump administration has mostly not altered the rules under which the government reaches a decision but is disclosing its process.
    Under the VEP, an “equities review board” of at least a dozen national security and civilian agencies will meet monthly — or more often, if a need arises — to discuss newly discovered vulnerabilities. Besides the NSA, the CIA and the FBI, the list includes the Treasury, Commerce and State departments, and the Office of Management and Budget. The priority is on disclosure, the policy states, to protect core Internet systems, the U.S. economy and critical infrastructure, unless there is “a demonstrable, overriding interest” in using the flaw for intelligence or law enforcement purposes.
    The government has long said that it discloses the vast majority — more than 90 percent — of the vulnerabilities it discovers or buys in products from defense contractors or other sellers. In recent years, that has amounted to more than 100 a year, according to people familiar with the process. But because the process was classified, the National Security Council, which runs the discussion, was never able to reveal any numbers. Now, [White House cybersecurity coordinator Rob] Joyce said, the number of flaws disclosed and the number retained will be made public in an annual report. A classified version will be sent to Congress, he said.
Paul Merrell

Microsoft Says U.S. Is Abusing Secret Warrants - 0 views

  • “WE APPRECIATE THAT there are times when secrecy around a government warrant is needed,” Microsoft President Brad Smith wrote in a blog post on Thursday. “But based on the many secrecy orders we have received, we question whether these orders are grounded in specific facts that truly demand secrecy. To the contrary, it appears that the issuance of secrecy orders has become too routine.” With those words, Smith announced that Microsoft was suing the Department of Justice for the right to inform its customers when the government is reading their emails. The last big fight between the Justice Department and Silicon Valley was started by law enforcement, when the FBI demanded that Apple unlock a phone used by San Bernardino killer Syed Rizwan Farook. This time, Microsoft is going on the offensive. The move is welcomed by privacy activists as a step forward for transparency — though it’s also for business reasons.
  • Secret government searches are eroding people’s trust in the cloud, Smith wrote — including large and small businesses now keeping massive amounts of records online. “The transition to the cloud does not alter people’s expectations of privacy and should not alter the fundamental constitutional requirement that the government must — with few exceptions — give notice when it searches and seizes private information or communications,” he wrote. According to the complaint, Microsoft received 5,624 federal demands for customer information or data in the past 18 months. Almost half — 2,576 — came with gag orders, and almost half of those — 1,752 — had “no fixed end date” by which Microsoft would no longer be sworn to secrecy. These requests, though signed off on by a judge, qualify as unconstitutional searches, the attorneys argue. It “violates both the Fourth Amendment, which affords people and businesses the right to know if the government searches or seizes their property, and the First Amendment, which enshrines Microsoft’s rights to talk to its customers and to discuss how the government conducts its investigations — subject only to restraints narrowly tailored to serve compelling government interests,” they wrote.
  •  
    The Fourth Amendment argument that people have a right to know when their property has been searched or seized is particularly interesting to me. If adopted by the Courts, that could spell the end of surveillance gag orders. 
Gonzalo San Gil, PhD.

5 Open Source Mobile OS Alternatives To Android | tsfoss.com/ [# ! Note...] - 0 views

  •  
    "If I say that Open Source mobile OS are ruling the world of mobile devices, it won't be an exaggeration. Though many don't consider, Android is still an open source project. This is another thing that the devices you use come with a bundle of proprietary software along with Android and hence many people don't consider it open source"
Gonzalo San Gil, PhD.

The death of patents and what comes after: Alicia Gibb at TEDxStockholm - YouTube - 0 views

  •  
    "Published on Dec 18, 2012 Alicia Gibb got her start as a technologist from her combination in backgrounds including: informatics and library science, a belief system of freedom of information, inspiration from art and design, and a passion for hardware hacking. Alicia has worked between the crossroads of art and electronics for the past nine years, and has worked for the open source hardware community for the past three. She currently founded and is running the Open Source Hardware Association, an organization to educate and promote building and using open source hardware of all types. In her spare time, Alicia is starting an open source hardware company specific to education. Previous to becoming an advocate and an entrepreneur, Alicia was a researcher and prototyper at Bug Labs where she ran the academic research program and the Test Kitchen, an open R&D Lab. Her projects centered around developing lightweight additions to the BUG platform, as well as a sensor-based data collection modules. She is a member of NYCResistor, co-chair of the Open Hardware Summit, and a member of the advisory board for Linux Journal. She holds a degree in art education, a M.S. in Art History and a M.L.I.S. in Information Science from Pratt Institute. She is self-taught in electronics. Her electronics work has appeared in Wired magazine, IEEE Spectrum, Hackaday and the New York Times. When Alicia is not researching at the crossroads of open technology and innovation she is prototyping artwork that twitches, blinks, and might even be tasty to eat. In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual