
Future of the Web: Group items tagged "community manager"


Gonzalo San Gil, PhD.

EUROPA - Press Releases - Digital Agenda: Commission opens public consultation on prese... - 0 views

  •  
    [The European Commission is today launching a public consultation seeking answers to questions on transparency, switching and certain aspects of internet traffic management, in view of its commitment to preserve the open and neutral character of the Internet. These questions have emerged as key issues in the "net neutrality" debate that has taken place in Europe over the past years, including the recent findings of the Body of European Regulators for Electronic Communications (BEREC). ...]
Gonzalo San Gil, PhD.

The value of open source is the open development process: Scott Wilson OSS Watch | Open... - 0 views

  •  
    "Scott Wilson agrees that open source matters because of open code, but just as important is the process in which the code is made. Open development of code is in the social nature of many programmers, hackers, documentors, and project managers. So, what is it about open development? "
Paul Merrell

Mozilla Sets New Plans for Do Not Track Browser | Adweek - 0 views

  • Much to the disappointment of the digital advertising establishment, Mozilla is going ahead with plans to automatically block third-party cookie tracking in its Firefox browser. Mozilla first announced its Do Not Track browser in February, only to back off in May saying it needed to do more testing. But that didn't stop a growing chorus of loud protests from the advertising community, which argued that the browser would choke off the ad-supported Internet. The Interactive Advertising Bureau's general counsel Mike Zaneis called Mozilla's browser nothing less than a "nuclear first strike" against the ad community. No date has been set for when Firefox will turn on the feature, but advertisers, which have been regularly meeting with Mozilla and were hopeful for a compromise, are already lashing back at Mozilla.
  • "It's troubling," said Lou Mastria, the managing director for the Digital Advertising Alliance, which manages an online self-regulatory program called Ad Choices that provides consumers with the choice to opt-out of targeted ads. "They're putting this under the cloak of privacy, but it's disrupting a business model," Mastria said. Advertisers are worried that Mozilla's plans could be the death knell to thousands of small Web publishers that depend on third-party targeted ads to stay in business. Nearly 1,000 signed a petition urging Mozilla to change its plans.  "One publisher said that 20 percent of their business would go away. That's huge," said Mastria. "Mozilla is really picking business model winners and losers."
  • Not all cookies will be blocked under Mozilla's latest plans for its proposed browser; there will be exceptions. Through a partnership with the Center for Internet and Society at Stanford Law School, the two are launching a Cookie Clearinghouse. Overseen by a six-person panel, it will determine a list of undesirable cookies and then block those. "The Cookie Clearinghouse will create, maintain and publish objective information," Aleecia McDonald, director of privacy at CIS, said in a statement. "Web browser companies will be able to choose to adopt the lists we publish to provide new privacy options to their users." But others say the approach is far from objective. "What these organizations and the privacy groups that back them are really saying is 'let us choose for you because we know best,' " said Daniel Castro, a senior analyst with the Information Technology and Innovation Foundation. "The proponents of this model have claimed they are empowering users. ... This is basically Sarah Palin's 'Death Panels' but for the Internet."
  • ...1 more annotation...
  • Advertisers have so far resisted some of the Do Not Track proposals advocated by privacy groups arguing they are technological solutions that could quickly be rendered obsolete by the fast-moving Internet economy. When Microsoft launched its Do Not Track default browser, advertisers said they would not honor it. Meanwhile, members of the World Wide Web Consortium's tracking group, represented by advertisers, privacy groups and other stakeholders, have been unable to reach consensus about a universal Do Not Track browser solution. In Congress, where baseline privacy legislation has moved at a glacial pace, Mozilla's news gave Sen. Jay Rockefeller (D-W.Va.) more ammunition for his Do Not Track Online Act. Introduced earlier this year, the bill hasn't gotten much traction and only has one co-sponsor, Sen. Richard Blumenthal (D-Conn.). "With major Web browsers now starting to provide privacy protections by default, it's even more important to give businesses the regulatory certainty they need and consumers the privacy protections they deserve," Rockefeller said in a statement. "I hope this will end the emerging back and forth so we can act quickly to pass new legislation."
Gonzalo San Gil, PhD.

The People Prevail: FCC Calls Off Closed-Door Meetings on Net Neutrality | Save the Int... - 0 views

  •  
    By Megan Tady, August 5, 2010
    You called, you emailed and you signaled your outrage as the Federal Communications Commission continued to meet behind closed doors with Internet companies, and Google and Verizon hatched a side plan on how to manage the Internet. And then, you prevailed. Amidst a tidal wave of public pressure, FCC Chief of Staff Edward Lazarus called off closed-door negotiations with major ISPs and Internet companies, pledging "to seek broad input on this vital issue."
Gary Edwards

Digg - Intel and TSMC: What are they thinking? - CNET News - 0 views

  •  
    I posted a digg on Peter Glaskowsky's CNET article discussing the Intel - TSMC deal. In 1995, I somehow managed to get between Intel and TSMC regarding funding for Virtual Realty, a video-conferencing-based loan origination / real estate transaction processing company that used Intel ProShare. TSMC wanted to invest a ton of money in VRi, with the idea of providing a full graphical listing, brokerage and transaction service for all of Asia. Intel needed a business model proving the value of ProShare, and capable of putting down the basics of a wide-bandwidth video conferencing communications-data network they could grow into a platform.

    At first this seemed to me like a win-win for everyone. Then I found out how seriously pissed Intel was about TSMC's deal with VIA and the resulting "WinBook". Although this is not the time or place to tell the story, I was truly stunned and shocked when I saw the Intel-TSMC deal announcement. Wow!

    My response to Peter focuses on his comments about how this deal will impact Nvidia. And then, how the Nvidia vision of an ION-Atom motherboard impacts WebKit and the future of the Open Web.
Gary Edwards

Collaboration Is At The Heart Of Open Source Content Management -- Open Source Content ... - 0 views

  •  
    As the economy tanks, open source proponents reflexively point to the low capital costs of acquiring open source software. But big customers want more than a bargain. They also want better. Thus, collaboration is more than just staying true to the open source credo of community and cooperation. It's also a smart business move. Drupal and Alfresco show us why.
Maluvia Haseltine

Kensho » XenServer - Citrix Community - 0 views

  •  
    Citrix Project Kensho provides administrators with highly usable tools that facilitate the export and import of virtual machines and virtual machine based workloads (virtual appliances) using the Open Virtual Machine Format (OVF) and Common Information Model (CIM) industry standards developed by the Distributed Management Task Force (DMTF).
Gary Edwards

Sun pitches new cloud as 'Open Platform' - 0 views

  •  
    Sun takes on the problem of interoperability and portability of applications in a world where there will be many, many clouds. At the roll-out of the Sun Cloud, key executives explain Sun's implementation of Open Cloud APIs and what they see as a pressing need for management tools that will allow some standardization across clouds.

    Sun's Open Cloud API plan is a clean reuse of existing Open Web APIs.

    "..... The underpinning of the Open Cloud Platform that Sun will be pitching to developers is a set of cloud APIs, the creation of which is focused under Project Kenai and which has been released under a Community Commons open source license. Sun wants lots of feedback on the APIs and wants these APIs to become a standard too, hence the open license. These APIs describes how virtual elements in a cloud are created, started, stopped, and hibernated using HTTP commands such as GET, PUT, and POST...."

    "...... The upshot is that these APIs will allow programmatic access to virtual infrastructure from Java, PHP, Python, and Ruby and that means system admins can script how virtual resources are deployed. The APIs, as co-creator Tim Bray explains in his blog, are written in JavaScript Object Notation (JSON), not XML. The Q-Layer software is a graphical representation of what is going on down in the APIs, and you can moving virtual resources into the cloud with a click of a mouse using the dashboard or programmatically using the APIs from those four programming languages listed above. (PHP support is not yet available, but will be)....."
  •  
    I can see why Sun picked those four languages first. Can I assume that with a bit of work, this API will be usable from any language with a C "foreign function interface", such as Perl, Common Lisp, Bourne shell, Squeak Smalltalk, and others that your server application might be written in?
  •  
    I read this comment that largely answers my question at: http://www.tbray.org/ongoing/When/200x/2009/03/16/Sun-Cloud "So right now JSON out of a shell tool is not so good. More things like this will create pressure for development of tools to change that, but years of widespread XML/HTML deployment have only produced a few oddly maintained tools. Perhaps that's because you can scrape quite a bit of the web with a couple sed passes, and if I were to have to deal with the mentioned tools, that's probably the route I'd take." (seth w. klein) In other words, with a bit of work, _anything_ that can talk text over HTTP can use these APIs, but an object-oriented language is likely to be more at home with JSON (JavaScript Object Notation).
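    To make the REST pattern described above concrete, here is a minimal sketch of driving such an API from Python. It is a sketch only: the endpoint, resource paths, and JSON fields are hypothetical stand-ins, not the actual Project Kenai definitions.

        # Hypothetical sketch of a Sun-Cloud-style REST workflow over HTTP.
        # The base URL, paths, and payload fields are invented for
        # illustration; the real Kenai API drafts defined their own.
        import json
        import urllib.request

        BASE = "https://cloud.example.com/api"  # placeholder endpoint

        def get_json(path):
            # GET a resource description as JSON
            with urllib.request.urlopen(BASE + path) as resp:
                return json.load(resp)

        def post_json(path, payload):
            # POST a JSON command, e.g. to start or hibernate a virtual machine
            req = urllib.request.Request(
                BASE + path,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

        vm = get_json("/vms/web01")                           # inspect a VM
        post_json("/vms/web01/actions", {"action": "start"})  # start it

    Because everything rides on plain HTTP and JSON, the same calls can be issued from a shell with curl, which is exactly the point raised in the comments above.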
Gary Edwards

RDFa, Drupal and a Practical Semantic Web - 1 views

  •  
    CMSWire has a brief explanation of RDFa and why it's important. RDFa is also finding its way into the Drupal CMS, which could be a game changer. Tim Berners-Lee's vision of a "Semantic Web", where the meaning of content is understood by both humans and machines, depends on the emergence of capable information systems that make it transparently easy to add semantic markup. I'm not surprised that Drupal is jumping in with both feet.

    "... In the march toward creating the semantic web, web content management systems such as Drupal (news, site) and many proprietary vendors struggle with the goal of emitting structured information that other sites and tools can usefully consume. There's a balance to be struck between human and machine utility, not to mention simplicity of instrumentation.

    With RDFa (see W3C proposal), software and web developers have the specification they need to know how to structure data in order to lend meaning both to machines and to humans, all in a single file. And from what we've seen recently, the Drupal community is making the best of it. ..."
Paul Merrell

FBI Flouts Obama Directive to Limit Gag Orders on National Security Letters - The Inter... - 0 views

  • Despite the post-Snowden spotlight on mass surveillance, the intelligence community’s easiest end-run around the Fourth Amendment since 2001 has been something called a National Security Letter. FBI agents can demand that an Internet service provider, telephone company or financial institution turn over its records on any number of people — without any judicial review whatsoever — simply by writing a letter that says the information is needed for national security purposes. The FBI at one point was cranking out over 50,000 such letters a year; by the latest count, it still issues about 60 a day. The letters look like this:
  • Recipients are legally required to comply — but it doesn’t stop there. They also aren’t allowed to mention the order to anyone, least of all the person whose data is being searched. Ever. That’s because National Security Letters almost always come with eternal gag orders. Here’s that part:
  • That means the NSL process utterly disregards the First Amendment as well. More than a year ago, President Obama announced that he was ordering the Justice Department to terminate gag orders “within a fixed time unless the government demonstrates a real need for further secrecy.” And on Feb. 3, when the Office of the Director of National Intelligence announced a handful of baby steps resulting from its “comprehensive effort to examine and enhance [its] privacy and civil liberty protections,” one of the most concrete was — finally — to cap the gag orders: In response to the President’s new direction, the FBI will now presumptively terminate National Security Letter nondisclosure orders at the earlier of three years after the opening of a fully predicated investigation or the investigation’s close. Continued nondisclosure orders beyond this period are permitted only if a Special Agent in Charge or a Deputy Assistant Director determines that the statutory standards for nondisclosure continue to be satisfied and that the case agent has justified, in writing, why continued nondisclosure is appropriate.
  • ...6 more annotations...
  • Despite the use of the word “now” in that first sentence, however, the FBI has yet to do any such thing. It has not announced any such change, nor explained how it will implement it, or when. Media inquiries were greeted with stalling and, finally, a no comment — ostensibly on advice of legal counsel. “There is pending litigation that deals with a lot of the same questions you’re asking, out of the Ninth Circuit,” FBI spokesman Chris Allen told me. “So for now, we’ll just have to decline to comment.” FBI lawyers are working on a court filing for that case, and “it will address” the new policy, he said. He would not say when to expect it.
  • There is indeed a significant case currently before the federal appeals court in San Francisco. Oral arguments were in October. A decision could come any time. But in that case, the Electronic Frontier Foundation (EFF), which is representing two unnamed communications companies that received NSLs, is calling for the entire NSL statute to be thrown out as unconstitutional — not for a tweak to the gag. And it has a March 2013 district court ruling in its favor. “The gag is a prior restraint under the First Amendment, and prior restraints have to meet an extremely high burden,” said Andrew Crocker, a legal fellow at EFF. That means going to court and meeting the burden of proof — not just signing a letter. Or as the Cato Institute’s Julian Sanchez put it, “To have such a low bar for denying persons or companies the right to speak about government orders they have been served with is anathema. And it is not very good for accountability.”
  • In a separate case, a wide range of media companies (including First Look Media, the non-profit digital media venture that produces The Intercept) are supporting a lawsuit filed by Twitter, demanding the right to say specifically how many NSLs it has received. But simply releasing companies from a gag doesn’t assure the kind of accountability that privacy advocates are saying is required by the Constitution. “What the public has to remember is a NSL is asking for your information, but it’s not asking it from you,” said Michael German, a former FBI agent who is now a fellow with the Brennan Center for Justice. “The vast majority of these things go to the very large telecommunications and financial companies who have a large stake in maintaining a good relationship with the government because they’re heavily regulated entities.”
  • So, German said, “the number of NSLs that would be exposed as a result of the release of the gag order is probably very few. The person whose records are being obtained is the one who should receive some notification.” A time limit on gags going forward also raises the question of whether past gag orders will now be withdrawn. “Obviously there are at this point literally hundreds of thousands of National Security Letters that are more than three years old,” said Sanchez. Individual review is therefore unlikely, but there ought to be some recourse, he said. And the further back you go, “it becomes increasingly implausible that a significant percentage of those are going to entail some dire national security risk.” The NSL program has a troubled history. The absolute secrecy of the program and resulting lack of accountability led to systemic abuse as documented by repeated inspector-general investigations, including improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, requests for information based on First Amendment protected activity, “after-the-fact” blanket NSLs to “cover” illegal requests, and hundreds of NSLs for “community of interest” or “calling circle” information without any determination that the telephone numbers were relevant to authorized national security investigations.
  • Obama’s own hand-selected “Review Group on Intelligence and Communications Technologies” recommended in December 2013 that NSLs should only be issued after judicial review — just like warrants — and that any gag should end within 180 days barring judicial re-approval. But FBI director James Comey objected to the idea, calling NSLs “a very important tool that is essential to the work we do.” His argument evidently prevailed with Obama.
  • NSLs have managed to stay largely under the American public’s radar. But, Crocker says, “pretty much every time I bring it up and give the thumbnail, people are shocked. Then you go into how many are issued every year, and they go crazy.” Want to send me your old NSL and see if we can set a new precedent? Here’s how to reach me. And here’s how to leak to me.
Paul Merrell

Tripling Its Collection, NSA Sucked Up Over 530 Million US Phone Records in 2017 - 0 views

  • The National Security Agency (NSA) collected over 530 million phone records of Americans in 2017—that's three times the amount the spy agency sucked up in 2016. The figures were released Friday in an annual report from the Office of the Director of National Intelligence (ODNI). It shows that the number of "call detail records" the agency collected from telecommunications providers during Trump's first year in office was 534 million, compared to 151 million the year prior. "The intelligence community's transparency has yet to extend to explaining dramatic increases in their collection," said Robyn Greene, policy counsel at the Open Technology Institute. The content of the calls itself is not collected, only so-called "metadata," which, as Gizmodo notes, "is supposedly anonymous, but it can easily be used to identify an individual. The information can also be paired with other publicly available information from social media and other sources to paint a surprisingly detailed picture of a person's life." The report also revealed that the agency, using its controversial Section 702 authority, increased the number of foreign targets of warrantless surveillance. It was 129,080 in 2017 compared to 106,469 in 2016. As digital rights group EFF noted earlier this year: "Under Section 702, the NSA collects billions of communications, including those belonging to innocent Americans who are not actually targeted. These communications are then placed in databases that other intelligence and law enforcement agencies can access—for purposes unrelated to national security—without a warrant or any judicial review." "Overall," Jake Laperruque, senior counsel at the Project On Government Oversight, said to ZDNet, "the numbers show that the scale of warrantless surveillance is growing at a significant rate, but ODNI still won't tell Americans how much it affects them."
Paul Merrell

He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen. - 2 views

  • The message arrived at night and consisted of three words: “Good evening sir!” The sender was a hacker who had written a series of provocative memos at the National Security Agency. His secret memos had explained — with an earthy use of slang and emojis that was unusual for an operative of the largest eavesdropping organization in the world — how the NSA breaks into the digital accounts of people who manage computer networks, and how it tries to unmask people who use Tor to browse the web anonymously. Outlining some of the NSA’s most sensitive activities, the memos were leaked by Edward Snowden, and I had written about a few of them for The Intercept. There is no Miss Manners for exchanging pleasantries with a man the government has trained to be the digital equivalent of a Navy SEAL. Though I had initiated the contact, I was wary of how he might respond. The hacker had publicly expressed a visceral dislike for Snowden and had accused The Intercept of jeopardizing lives by publishing classified information. One of his memos outlined the ways the NSA reroutes (or “shapes”) the internet traffic of entire countries, and another memo was titled “I Hunt Sysadmins.” I felt sure he could hack anyone’s computer, including mine. Good evening sir!
  • I got lucky with the hacker, because he recently left the agency for the cybersecurity industry; it would be his choice to talk, not the NSA’s. Fortunately, speaking out is his second nature.
  • ...7 more annotations...
  • He agreed to a video chat that turned into a three-hour discussion sprawling from the ethics of surveillance to the downsides of home improvements and the difficulty of securing your laptop.
  • In recent years, two developments have helped make hacking for the government a lot more attractive than hacking for yourself. First, the Department of Justice has cracked down on freelance hacking, whether it be altruistic or malignant. If the DOJ doesn’t like the way you hack, you are going to jail. Meanwhile, hackers have been warmly invited to deploy their transgressive impulses in service to the homeland, because the NSA and other federal agencies have turned themselves into licensed hives of breaking into other people’s computers. For many, it’s a techno sandbox of irresistible delights, according to Gabriella Coleman, a professor at McGill University who studies hackers. “The NSA is a very exciting place for hackers because you have unlimited resources, you have some of the best talent in the world, whether it’s cryptographers or mathematicians or hackers,” she said. “It is just too intellectually exciting not to go there.”
  • The Lamb’s memos on cool ways to hunt sysadmins triggered a strong reaction when I wrote about them in 2014 with my colleague Ryan Gallagher. The memos explained how the NSA tracks down the email and Facebook accounts of systems administrators who oversee computer networks. After plundering their accounts, the NSA can impersonate the admins to get into their computer networks and pilfer the data flowing through them. As the Lamb wrote, “sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network … who better to target than the person that already has the ‘keys to the kingdom’?” Another of his NSA memos, “Network Shaping 101,” used Yemen as a theoretical case study for secretly redirecting the entirety of a country’s internet traffic to NSA servers.
  • “If I turn the tables on you,” I asked the Lamb, “and say, OK, you’re a target for all kinds of people for all kinds of reasons. How do you feel about being a target and that kind of justification being used to justify getting all of your credentials and the keys to your kingdom?” The Lamb smiled. “There is no real safe, sacred ground on the internet,” he replied. “Whatever you do on the internet is an attack surface of some sort and is just something that you live with. Any time that I do something on the internet, yeah, that is on the back of my mind. Anyone from a script kiddie to some random hacker to some other foreign intelligence service, each with their different capabilities — what could they be doing to me?”
  • “You know, the situation is what it is,” he said. “There are protocols that were designed years ago before anybody had any care about security, because when they were developed, nobody was foreseeing that they would be taken advantage of. … A lot of people on the internet seem to approach the problem [with the attitude of] ‘I’m just going to walk naked outside of my house and hope that nobody looks at me.’ From a security perspective, is that a good way to go about thinking? No, horrible … There are good ways to be more secure on the internet. But do most people use Tor? No. Do most people use Signal? No. Do most people use insecure things that most people can hack? Yes. Is that a bash against the intelligence community that people use stuff that’s easily exploitable? That’s a hard argument for me to make.”
  • I mentioned that lots of people, including Snowden, are now working on the problem of how to make the internet more secure, yet he seemed to do the opposite at the NSA by trying to find ways to track and identify people who use Tor and other anonymizers. Would he consider working on the other side of things? He wouldn’t rule it out, he said, but dismally suggested the game was over as far as having a liberating and safe internet, because our laptops and smartphones will betray us no matter what we do with them. “There’s the old adage that the only secure computer is one that is turned off, buried in a box ten feet underground, and never turned on,” he said. “From a user perspective, someone trying to find holes by day and then just live on the internet by night, there’s the expectation [that] if somebody wants to have access to your computer bad enough, they’re going to get it. Whether that’s an intelligence agency or a cybercrimes syndicate, whoever that is, it’s probably going to happen.”
  • There are precautions one can take, and I did that with the Lamb. When we had our video chat, I used a computer that had been wiped clean of everything except its operating system and essential applications. Afterward, it was wiped clean again. My concern was that the Lamb might use the session to obtain data from or about the computer I was using; there are a lot of things he might have tried, if he was in a scheming mood. At the end of our three hours together, I mentioned to him that I had taken these precautions—and he approved. “That’s fair,” he said. “I’m glad you have that appreciation. … From a perspective of a journalist who has access to classified information, it would be remiss to think you’re not a target of foreign intelligence services.” He was telling me the U.S. government should be the least of my worries. He was trying to help me. Documents published with this article:
    Tracking Targets Through Proxies & Anonymizers
    Network Shaping 101
    Shaping Diagram
    I Hunt Sys Admins (first published in 2014)
Paul Merrell

YouTube gets the yuck out in comments cleanup | Internet & Media - CNET News - 0 views

  • Laugh all you want, fuzzball, but Google is changing how YouTube uploaders manage comments on their videos. The new system, which began rolling out to a limited number of uploaders on Tuesday, favors relevancy over recency and introduces enhanced moderation tools. The new commenting system, which is powered by Google+ and was developed in collaboration between the YouTube and Google+ teams, provides several new tools for moderation, said Nundu Janakiram, product manager at YouTube. It will default to showing YouTube viewers the most relevant comments first, such as those by the video uploader or channel owner. "Currently, you see comments from the last random person to stop by," Janakiram said. "The new system tries to surface the most meaningful conversation to you. We're trying to shift from comments to meaningful conversations," he said.
  • He explained that three main factors determine which comments are more relevant: community engagement by the commenter, up-votes for a particular comment, and commenter reputation. If you've been flagged for spam or abuse, don't be surprised to find your comments buried, but that also means that celebrities who have strong Google+ reputations will be boosted above others. There's more to the system than just relevancy, though. Because the system is powered by Google+, comments made on posts with YouTube links in the social network will show up on YouTube itself. So, you'll see comments from people in your Google+ Circles higher up, too. Just because it's powered by Google+ doesn't mean that you'll lose your YouTube identity, though. "You are still allowed to use pseudonyms," said Janakiram, whether you're "a Syrian dissident or SoulPancake". Another feature, and one that speaks directly to YouTube's goal of fostering conversations, is that you'll be able to comment publicly or privately to people in your Circles. Replies will be threaded like Gmail. The hope is that new moderation tools will make it easier for video owners to guide the conversation, Janakiram explained. "There have been challenges in the past with certain comments and what's been shown there."
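    As a toy illustration of how the three signals Janakiram describes (community engagement, up-votes, and commenter reputation) could combine into a single relevance score, here is a hypothetical Python sketch. The weights and field names are invented for illustration and are not YouTube's actual ranking code.

        # Toy relevance score combining the three signals described above.
        # All weights and field names are invented for illustration only.
        def relevance(comment):
            engagement = comment.get("replies", 0) + comment.get("threads_started", 0)
            upvotes = comment.get("upvotes", 0)
            reputation = comment.get("author_reputation", 0.0)  # assume 0.0 to 1.0
            # Commenters flagged for spam or abuse sink toward the bottom.
            penalty = 0.1 if comment.get("flagged_spam") else 1.0
            return penalty * (2.0 * upvotes + engagement + 5.0 * reputation)

        comments = [
            {"author": "uploader", "upvotes": 12, "replies": 3, "author_reputation": 0.9},
            {"author": "driveby", "upvotes": 40, "replies": 0, "author_reputation": 0.1,
             "flagged_spam": True},
        ]
        # Most relevant first, mirroring the new default ordering.
        for c in sorted(comments, key=relevance, reverse=True):
            print(round(relevance(c), 1), c["author"])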
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commo... - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” Those broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • ...17 more annotations...
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grows, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation by virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property-the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable-airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Paul Merrell

We finally gave Congress email addresses - Sunlight Foundation Blog - 0 views

  • On OpenCongress, you can now email your representatives and senators just as easily as you would a friend or colleague. We've added a new feature to OpenCongress. It's not flashy. It doesn't use D3 or integrate with social media. But we still think it's pretty cool. You might've already heard of it. Email. This may not sound like a big deal, but it's been a long time coming. A lot of people are surprised to learn that Congress doesn't have publicly available email addresses. It's the number one feature request that we hear from users of our APIs. Until recently, we didn't have a good response. That's because members of Congress typically put their feedback mechanisms behind captchas and zip code requirements. Sometimes these forms break; sometimes their requirements improperly lock out actual constituents. And they always make it harder to email your congressional delegation than it should be.
  • This is a real problem. According to the Congressional Management Foundation, 88% of Capitol Hill staffers agree that electronic messages from constituents influence their bosses' decisions. We think that it's inappropriate to erect technical barriers around such an essential democratic mechanism. Congress itself is addressing the problem. That effort has just entered its second decade, and people are feeling optimistic that a launch to a closed set of partners might be coming soon. But we weren't content to wait. So when the Electronic Frontier Foundation (EFF) approached us about this problem, we were excited to really make some progress. Building on groundwork first done by the Participatory Politics Foundation and more recent work within Sunlight, a network of 150 volunteers collected the data we needed from congressional websites in just two days. That information is now on Github, available to all who want to build the next generation of constituent communication tools. The EFF is already working on some exciting things to that end.
  • But we just wanted to be able to email our representatives like normal people. So now, if you visit a legislator's page on OpenCongress, you'll see an email address in the right-hand sidebar that looks like Sen.Reid@opencongress.org or Rep.Boehner@opencongress.org. You can also email myreps@opencongress.org to email both of your senators and your House representatives at once. The first time we get an email from you, we'll send one back asking for some additional details. This is necessary because our code submits your message by navigating those aforementioned congressional webforms, and we don't want to enter incorrect information. But for emails after the first one, all you'll have to do is click a link that says, "Yes, I meant to send that email."
  • ...1 more annotation...
  • One more thing: For now, our system will only let you email your own representatives. A lot of people dislike this. We do, too. In an age of increasing polarization, party discipline means that congressional leaders must be accountable to citizens outside their districts. But the unfortunate truth is that Congress typically won't bother reading messages from non-constituents — that's why those zip code requirements exist in the first place. Until that changes, we don't want our users to waste their time. So that's it. If it seems simple, it's because it is. But we think that unbreaking how Congress connects to the Internet is important. You should be able to send a call to action in a tweet, easily forward a listserv message to your representative and interact with your government using the tools you use to interact with everyone else.
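    Because the gateway is ordinary email, any standard mail tooling can drive it. Here is a minimal sketch using Python's standard smtplib; the sender address, relay host, and message text are assumptions, and only the opencongress.org addresses come from the post above.

        # Minimal sketch: send a message to the OpenCongress email gateway.
        # The sender, relay host, and body text are placeholders.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "constituent@example.com"   # placeholder sender
        msg["To"] = "myreps@opencongress.org"     # both senators + House rep
        msg["Subject"] = "Regarding upcoming legislation"
        msg.set_content("Dear legislators, ...")

        with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
            smtp.send_message(msg)

    Remember that, per the post above, the first message triggers a confirmation email asking for constituent details before anything is delivered.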
Paul Merrell

Marriott fined $600,000 for jamming guest hotspots - SlashGear - 0 views

  • Marriott will cough up $600,000 in penalties after being caught blocking mobile hotspots so that guests would have to pay for its own WiFi services, the FCC has confirmed today. The fine comes after staff at the Gaylord Opryland Hotel and Convention Center in Nashville, Tennessee were found to be jamming individual hotspots and then charging people up to $1,000 per device to get online. Marriott has been operating the center since 2012, and is believed to have been running its interruption scheme since then. The first complaint to the FCC, however, wasn't until March 2013, when one guest warned the Commission that they suspected their hardware had been jammed. An investigation by the FCC's Enforcement Bureau revealed that was, in fact, the case. A WiFi monitoring system installed at the Gaylord Opryland would target access points with de-authentication packets, disconnecting users so that their browsing was interrupted. (A rough sketch of how such de-authentication frames can be observed appears after this item.)
  • The FCC deemed Marriott's behaviors as contravening Section 333 of the Communications Act, which states that "no person shall willfully or maliciously interfere with or cause interference to any radio communications of any station licensed or authorized by or under this chapter or operated by the United States Government." In addition to the $600,000 civil penalty, Marriott will have to cease blocking guests, hand over details of any access point containment features to the FCC across its entire portfolio of owned or managed properties, and finally file compliance and usage reports each quarter for the next three years.
  • Update: Marriott has issued the following statement on the FCC ruling: "Marriott has a strong interest in ensuring that when our guests use our Wi-Fi service, they will be protected from rogue wireless hotspots that can cause degraded service, insidious cyber-attacks and identity theft. Like many other institutions and companies in a wide variety of industries, including hospitals and universities, the Gaylord Opryland protected its Wi-Fi network by using FCC-authorized equipment provided by well-known, reputable manufacturers. We believe that the Gaylord Opryland's actions were lawful. We will continue to encourage the FCC to pursue a rulemaking in order to eliminate the ongoing confusion resulting from today's action and to assess the merits of its underlying policy."
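    A technical aside on the containment method described above: Wi-Fi de-authentication frames are unencrypted 802.11 management frames, which is why third-party equipment can forge them to knock clients off their own hotspots. The rough, defense-oriented sketch below counts such frames per transmitter using the scapy library; the interface name and alert threshold are assumptions, and the wireless card must be in monitor mode.

        # Rough sketch: count 802.11 de-authentication frames per transmitter
        # to spot abnormal deauth floods. "wlan0mon" and the threshold of 50
        # are assumptions; requires a card in monitor mode and root privileges.
        from collections import Counter
        from scapy.all import sniff, Dot11, Dot11Deauth

        counts = Counter()

        def handle(pkt):
            if pkt.haslayer(Dot11Deauth):
                sender = pkt[Dot11].addr2  # MAC address that sent the deauth
                counts[sender] += 1
                if counts[sender] % 50 == 0:  # arbitrary alert threshold
                    print(f"{sender} has sent {counts[sender]} deauth frames")

        sniff(iface="wlan0mon", prn=handle, store=0)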
Paul Merrell

XHTML Modularization 1.1 Released as W3C Recommendation - 0 views

  • The modularization of XHTML refers to the task of specifying well-defined sets of XHTML elements that can be combined and extended by document authors, document type architects, other XML standards specifications, and application and product designers to make it economically feasible for content developers to deliver content on a greater number and diversity of platforms. Over the last couple of years, many specialized markets have begun looking to HTML as a content language. There is a great movement toward using HTML across increasingly diverse computing platforms. Currently there is activity to move HTML onto mobile devices (hand held computers, portable phones, etc.), television devices (digital televisions, TV-based Web browsers, etc.), and appliances (fixed function devices). Each of these devices has different requirements and constraints.
  • XHTML Modularization is a decomposition of XHTML 1.0, and by reference HTML 4, into a collection of abstract modules that provide specific types of functionality. These abstract modules are implemented in this specification using the XML Schema and XML Document Type Definition languages. The rules for defining the abstract modules, and for implementing them using XML Schemas and XML DTDs, are also defined in this document. These modules may be combined with each other and with other modules to create XHTML subset and extension document types that qualify as members of the XHTML-family of document types.
  • ...1 more annotation...
  • Modularizing XHTML provides a means for product designers to specify which elements are supported by a device using standard building blocks and standard methods for specifying which building blocks are used. These modules serve as "points of conformance" for the content community. The content community can now target the installed base that supports a certain collection of modules, rather than worry about the installed base that supports this or that permutation of XHTML elements. The use of standards is critical for modularized XHTML to be successful on a large scale. It is not economically feasible for content developers to tailor content to each and every permutation of XHTML elements. By specifying a standard, either software processes can autonomously tailor content to a device, or the device can automatically load the software required to process a module. Modularization also allows for the extension of XHTML's layout and presentation capabilities, using the extensibility of XML, without breaking the XHTML standard. This development path provides a stable, useful, and implementable framework for content developers and publishers to manage the rapid pace of technological change on the Web.
Paul Merrell

EU looks into telecoms blocking Internet calls - International Herald Tribune - 0 views

  • European Union regulators are looking into whether mobile phone operators who block customers from making inexpensive wireless calls over the Internet are breaking competition rules. The European Commission, the EU antitrust authority, has sent questionnaires to phone companies asking what "tools" they use to "control, manage, block, slow down or otherwise restrict or filter" Internet-based voice calls. The EU deadline for responding to the survey was Tuesday. The questionnaire, obtained by Bloomberg News, does not identify any companies. Some mobile carriers have blocked services that use voice-over-Internet protocol, or VoIP, which allows users to make calls over the Web. Companies may be seeking to stop customers from accessing applications, like eBay's Skype, to defend voice revenue from the less expensive Internet services, Carolina Milanesi, research director for mobile devices at Gartner, the research company, said.
    • Paul Merrell
      Building a Connected World --- The Role of Antitrust Law and Lawyers.
  •  
    Superficially, this sounds like an application of the principles won by DG Competition in the Court of First Instance's Commission v. Microsoft interoperability decision. But note that here we deal with an investigation into deliberately-created interop barriers rather than those maintained by withholding full communication protocol specifications from competitors. Notice that the investigation encompasses throttling of internet connections for particular uses, an increasingly common practice by Comcast and other ISPs in the U.S., where both VOIP and P2P file-sharing are targeted uses. E.U. and U.S. antitrust law are similar, as efforts to harmonize antitrust law on both sides of The Pond are now decades old; this move does not bode well for bandwidth throttling in the U.S., particularly when aimed at throttling competition. It takes no giant mental leap to apply such principles to big vendor-dominated IT standards bodies that deliberately create or maintain interop barriers in data format standards. Indeed, DG Competition Commissioner Neelie Kroes has already served notice that interop barriers in standards-setting is an item of interest.
Paul Merrell

LocalOrg: Decentralizing Telecom - 0 views

  • SOPA, ACTA, the criminalization of sharing, and a myriad of other measures taken to perpetuate antiquated business models propping up enduring monopolies - all have become increasingly taxing on the tech community and informed citizens alike. When the storm clouds gather and torrential rain begins to fall, the people have managed to stave off the flood waters through collective effort and well organized activism - stopping, or at least delaying SOPA and ACTA. However, is it really sustainable to mobilize each and every time multi-billion dollar corporations combine their resources and attempt to pass another series of draconian rules and regulations? Instead of manning the sandbags during each storm, wouldn't it suit us all better to transform the surrounding landscape in such a way as to harmlessly divert the floods, or better yet, harness them to our advantage? In many ways the transformation has already begun.
  • While open source software and hardware, as well as innovative business models built around collaboration and crowd-sourcing have done much to build a paradigm independent of current centralized proprietary business models, large centralized corporations and the governments that do their bidding, still guard all the doors and carry all the keys. The Internet, the phone networks, radio waves, and satellite systems still remain firmly in the hands of big business. As long as they do, they retain the ability to not only reassert themselves in areas where gains have been made, but can impose preemptive measures to prevent any future progress. With the advent of hackerspaces, increasingly we see projects that hold the potential of replacing, at least on a local level, much of the centralized infrastructure we take for granted until disasters or greed-driven rules and regulations upset the balance. It is with the further developing of our local infrastructure that we can leave behind the sandbags of perpetual activism and enjoy a permanently altered landscape that favors our peace and prosperity. Decentralizing Telecom
  • As impressive as a hydroelectric dam may be and as overwhelming as it may seem as a project to undertake, it will always start with but a single shovelful of dirt. The work required becomes in its own way part of the payoff - with experienced gained and with a magnificent accomplishment to aspire toward. In the same way, a communication network that runs parallel to existing networks, with global coverage, but locally controlled, may seem an impossible, overwhelming objective - and for one individual, or even a small group of individuals, it is. However, the paradigm has shifted. In the age of digital collaboration made possible by existing networks, the building of such a network can be done in parallel. In an act of digital-judo, we can use the system's infrastructure as a means of supplanting and replacing it with something superior in both function and in form. 
Paul Merrell

Why the Sony hack is unlikely to be the work of North Korea. | Marc's Security Ramblings - 0 views

  • Everyone seems to be eager to pin the blame for the Sony hack on North Korea. However, I think it’s unlikely. Here’s why: 1. The broken English looks deliberately bad and doesn’t exhibit any of the classic comprehension mistakes you actually expect to see in “Konglish”, i.e. it reads to me like an English speaker pretending to be bad at writing English. 2. The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don’t speak traditional “Korean” in North Korea, they speak their own dialect and traditional Korean is forbidden. This is one of the key things that has made communication with North Korean refugees difficult. I would find the presence of Chinese far more plausible.
  • 3. It’s clear from the hard-coded paths and passwords in the malware that whoever wrote it had extensive knowledge of Sony’s internal architecture and access to key passwords. While it’s plausible that an attacker could have built up this knowledge over time and then used it to make the malware, Occam’s razor suggests the simpler explanation of an insider. It also fits with the pure revenge tack that this started out as. 4. Whoever did this is in it for revenge. The info and access they had could have easily been used to cash out, yet, instead, they are making every effort to burn Sony down. Just think what they could have done with passwords to all of Sony’s financial accounts? With the competitive intelligence in their business documents? From simple theft, to the sale of intellectual property, or even extortion – the attackers had many ways to become rich. Yet, instead, they chose to dump the data, rendering it useless. Likewise, I find it hard to believe that a “Nation State” which lives by propaganda would be so willing to just throw away such an unprecedented level of access to the beating heart of Hollywood itself.
  • 5. The attackers only latched onto “The Interview” after the media did – the film was never mentioned by GOP right at the start of their campaign. It was only after a few people started speculating in the media that this and the communication from DPRK “might be linked” that suddenly it became linked. I think the attackers both saw this as an opportunity for “lulz” and as a way to misdirect everyone into thinking it was a nation state. After all, if everyone believes it’s a nation state, then the criminal investigation will likely die.
  • ...4 more annotations...
  • 6. Whoever is doing this is VERY net and social media savvy. That, and the sophistication of the operation, do not match with the profile of DPRK up until now. Grugq did an excellent analysis of this aspect; his findings are here – http://0paste.com/6875#md 7. Finally, blaming North Korea is the easy way out for a number of folks, including the security vendors and Sony management who are under the microscope for this. Let’s face it – most of today’s so-called “cutting edge” security defenses are either so specific, or so brittle, that they really don’t offer much meaningful protection against a sophisticated attacker or group of attackers.
  • 8. It probably also suits a number of political agendas to have something that justifies sabre-rattling at North Korea, which is why I’m not that surprised to see politicians starting to point their fingers at the DPRK also. 9. It’s clear from the leaked data that Sony has a culture which doesn’t take security very seriously. From plaintext password files, to using “password” as the password in business critical certificates, through to just the sheer volume of aging unclassified yet highly sensitive data left out in the open. This isn’t a simple slip-up or a “weak link in the chain” – this is a serious organization-wide failure to implement anything like a reasonable security architecture.
  • The reality is, as things stand, Sony has little choice but to burn everything down and start again. Every password, every key, every certificate is tainted now and that’s a terrifying place for an organization to find itself. This hack should be used as the definitive lesson in why security matters and just how bad things can get if you don’t take it seriously. 10. Who do I think is behind this? My money is on a disgruntled (possibly ex) employee of Sony.
  • EDIT: This appears (at least in part) to be substantiated by a conversation the Verge had with one of the alleged hackers – http://www.theverge.com/2014/11/25/7281097/sony-pictures-hackers-say-they-want-equality-worked-with-staff-to-break-in Finally for an EXCELLENT blow by blow analysis of the breach and the events that followed, read the following post by my friends from Risk Based Security – https://www.riskbasedsecurity.com/2014/12/a-breakdown-and-analysis-of-the-december-2014-sony-hack EDIT: Also make sure you read my good friend Krypt3ia’s post on the hack – http://krypt3ia.wordpress.com/2014/12/18/sony-hack-winners-and-losers/
  •  
    Seems that the FBI overlooked a few clues before it told Obama to go ahead and declare war against North Korea. 