
Future of the Web / Group items tagged identification


Paul Merrell

Gmail blows up e-mail marketing by caching all images on Google servers | Ars Technica - 1 views

  • Ever wonder why most e-mail clients hide images by default? The reason for the "display images" button is because images in an e-mail must be loaded from a third-party server. For promotional e-mails and spam, usually this server is operated by the entity that sent the e-mail. So when you load these images, you aren't just receiving an image—you're also sending a ton of data about yourself to the e-mail marketer. Loading images from these promotional e-mails reveals a lot about you. Marketers get a rough idea of your location via your IP address. They can see the HTTP referrer, meaning the URL of the page that requested the image. With the referral data, marketers can see not only what client you are using (desktop app, Web, mobile, etc.) but also what folder you were viewing the e-mail in. For instance, if you had a Gmail folder named "Ars Technica" and loaded e-mail images, the referral URL would be "https://mail.google.com/mail/u/0/#label/Ars+Technica"—the folder is right there in the URL. The same goes for the inbox, spam, and any other location. It's even possible to uniquely identify each e-mail, so marketers can tell which e-mail address requested the images—they know that you've read the e-mail. And if it was spam, this will often earn you more spam since the spammers can tell you've read their last e-mail.
  • But Google has just announced a move that will shut most of these tactics down: it will cache all images for Gmail users. Embedded images will now be saved by Google, and the e-mail content will be modified to display those images from Google's cache, instead of from a third-party server. E-mail marketers will no longer be able to get any information from images—they will see a single request from Google, which will then be used to send the image out to all Gmail users. Unless you click on a link, marketers will have no idea the e-mail has been seen. While this means improved privacy from e-mail marketers, Google will now be digging deeper than ever into your e-mails and literally modifying the contents. If you were worried about e-mail scanning, this may take things a step further. However, if you don't like the idea of cached images, you can turn it off in the settings. This move will allow Google to automatically display images, killing the "display all images" button in Gmail. Google servers should also be faster than the usual third-party image host. Hosting all images sent to all Gmail users sounds like a huge bandwidth and storage undertaking, but if anyone can do it, it's Google. The new image handling will roll out to desktop users today, and it should hit mobile apps sometime in early 2014. There's also a bonus side effect for Google: e-mail marketing is advertising. Google exists because of advertising dollars, but they don't do e-mail marketing. They've just made a competitive form of advertising much less appealing and informative to advertisers. No doubt Google hopes this move pushes marketers to spend less on e-mail and more on AdSense. (A sketch of the kind of per-recipient tracking image this change defeats appears at the end of this item.)
  •  
    There's an antitrust angle to this; it could be viewed by a court as anti-competitive. But given the prevailing winds on digital privacy, my guess would be that Google would slide by.
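
A minimal sketch of the per-recipient tracking image described above. The domain, file name, and token parameter are hypothetical, but the mechanism is the one the article describes: each recipient gets a unique image URL, and the sender's server logs who fetched it, from which IP address, and with which HTTP referrer.

    <!-- Hypothetical promotional e-mail body fragment. The token in the
         image URL is unique per recipient, so a single request for this
         one-pixel image tells the sender which address opened the
         message, when, and from roughly where. -->
    <p>Big spring sale, this week only!</p>
    <img src="https://track.example-marketer.com/open.gif?recipient=a1b2c3d4e5"
         width="1" height="1" alt="" />

Gmail's caching defeats this because every such request now arrives once, from Google's servers, rather than from each reader's own connection.
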
Paul Merrell

Exclusive: Google mulling Wi-Fi for cities with Google Fiber - Network World - 0 views

  • Google is considering deploying Wi-Fi networks in towns and cities covered by its Google Fiber high-speed Internet service. The disclosure is made in a document Google is circulating to 34 cities that are the next candidates to receive Google Fiber in 2015.
  • Google Fiber is already available in Provo, Utah, and Kansas City, and is promised soon in Austin, Texas. It delivers a "basic speed" service for no charge, a gigabit-per-second service for US$70 per month and a $120 package that includes a bundle of more than 200 TV channels. Installation costs between nothing and $300. Google has sent the 34 cities that are next in line for Google Fiber a detailed request for information and they have until May 1 to reply.
  • Specific details of the Wi-Fi plan are not included in the document, which was seen by IDG News Service, but Google says it will be "discussing our Wi-Fi plans and related requirements with your city as we move forward with your city during this planning process."
  • ...1 more annotation...
  • Google is also asking cities to identify locations it would be able to install utility huts. Each 12-foot-by-30-foot (3.6-meter-by-9.1-meter) windowless hut needs to allow 24-hour access and be on land Google could lease for about 20 years. The huts, of which there will be between one and a handful in each city, would house the main networking equipment. From the hut, fiber cables would run along utility poles -- or in underground fiber ducts if they exist -- and terminate at neighborhood boxes, each serving up to 288 or 587 homes. The neighborhood boxes are around the same size or smaller than current utility cabinets often found on city streets.
Paul Merrell

How Secret Partners Expand NSA's Surveillance Dragnet - The Intercept - 0 views

  • Huge volumes of private emails, phone calls, and internet chats are being intercepted by the National Security Agency with the secret cooperation of more foreign governments than previously known, according to newly disclosed documents from whistleblower Edward Snowden. The classified files, revealed today by the Danish newspaper Dagbladet Information in a reporting collaboration with The Intercept, shed light on how the NSA’s surveillance of global communications has expanded under a clandestine program, known as RAMPART-A, that depends on the participation of a growing network of intelligence agencies.
  • It has already been widely reported that the NSA works closely with eavesdropping agencies in the United Kingdom, Canada, New Zealand, and Australia as part of the so-called Five Eyes surveillance alliance. But the latest Snowden documents show that a number of other countries, described by the NSA as “third-party partners,” are playing an increasingly important role – by secretly allowing the NSA to install surveillance equipment on their fiber-optic cables. The NSA documents state that under RAMPART-A, foreign partners “provide access to cables and host U.S. equipment.” This allows the agency to covertly tap into “congestion points around the world” where it says it can intercept the content of phone calls, faxes, e-mails, internet chats, data from virtual private networks, and calls made using Voice over IP software like Skype.
  • The secret documents reveal that the NSA has set up at least 13 RAMPART-A sites, nine of which were active in 2013. Three of the largest – codenamed AZUREPHOENIX, SPINNERET and MOONLIGHTPATH – mine data from some 70 different cables or networks. The precise geographic locations of the sites and the countries cooperating with the program are among the most carefully guarded of the NSA’s secrets, and these details are not contained in the Snowden files. However, the documents point towards some of the countries involved – Denmark and Germany among them. An NSA memo prepared for a 2012 meeting between the then-NSA director, Gen. Keith Alexander, and his Danish counterpart noted that the NSA had a longstanding partnership with the country’s intelligence service on a special “cable access” program. Another document, dated from 2013 and first published by Der Spiegel on Wednesday, describes a German cable access point under a program that was operated by the NSA, the German intelligence service BND, and an unnamed third partner.
  • ...2 more annotations...
  • The program, which the secret files show cost U.S. taxpayers about $170 million between 2011 and 2013, sweeps up a vast amount of communications at lightning speed. According to the intelligence community’s classified “Black Budget” for 2013, RAMPART-A enables the NSA to tap into three terabits of data every second as the data flows across the compromised cables – the equivalent of being able to download about 5,400 uncompressed high-definition movies every minute. In an emailed statement, the NSA declined to comment on the RAMPART-A program. “The fact that the U.S. government works with other nations, under specific and regulated conditions, mutually strengthens the security of all,” said NSA spokeswoman Vanee’ Vines. “NSA’s efforts are focused on ensuring the protection of the national security of the United States, its citizens, and our allies through the pursuit of valid foreign intelligence targets only.”
  • The Danish and German operations appear to be associated with RAMPART-A because it is the only NSA cable-access initiative that depends on the cooperation of third-party partners. Other NSA operations tap cables without the consent or knowledge of the countries that host the cables, or are operated from within the United States with the assistance of American telecommunications companies that have international links. One secret NSA document notes that most of the RAMPART-A projects are operated by the partners “under the cover of an overt comsat effort,” suggesting that the tapping of the fiber-optic cables takes place at Cold War-era eavesdropping stations in the host countries, usually identifiable by their large white satellite dishes and radomes. A shortlist of other countries potentially involved in the RAMPART-A operation is contained in the Snowden archive. A classified presentation dated 2013, published recently in Intercept editor Glenn Greenwald’s book No Place To Hide, revealed that the NSA had top-secret spying agreements with 33 third-party countries, including Denmark, Germany, and 15 other European Union member states:
  •  
    Don't miss the slide with the names of the NSA-partner nations. Lots of E.U. member nations.
Paul Merrell

Court gave NSA broad leeway in surveillance, documents show - The Washington Post - 0 views

  • Virtually no foreign government is off-limits for the National Security Agency, which has been authorized to intercept information “concerning” all but four countries, according to top-secret documents. The United States has long had broad no-spying arrangements with those four countries — Britain, Canada, Australia and New Zealand — in a group known collectively with the United States as the Five Eyes. But a classified 2010 legal certification and other documents indicate the NSA has been given a far more elastic authority than previously known, one that allows it to intercept through U.S. companies not just the communications of its overseas targets but any communications about its targets as well.
  • The certification — approved by the Foreign Intelligence Surveillance Court and included among a set of documents leaked by former NSA contractor Edward Snowden — lists 193 countries that would be of valid interest for U.S. intelligence. The certification also permitted the agency to gather intelligence about entities including the World Bank, the International Monetary Fund, the European Union and the International Atomic Energy Agency. The NSA is not necessarily targeting all the countries or organizations identified in the certification, the affidavits and an accompanying exhibit; it has only been given authority to do so. Still, the privacy implications are far-reaching, civil liberties advocates say, because of the wide spectrum of people who might be engaged in communication about foreign governments and entities and whose communications might be of interest to the United States.
  • That language could allow for surveillance of academics, journalists and human rights researchers. A Swiss academic who has information on the German government’s position in the run-up to an international trade negotiation, for instance, could be targeted if the government has determined there is a foreign-intelligence need for that information. If a U.S. college professor e-mails the Swiss professor’s e-mail address or phone number to a colleague, the American’s e-mail could be collected as well, under the program’s court-approved rules.
  • ...4 more annotations...
  • On Friday, the Office of the Director of National Intelligence released a transparency report stating that in 2013 the government targeted nearly 90,000 foreign individuals or organizations for foreign surveillance under the program. Some tech-industry lawyers say the number is relatively low, considering that several billion people use U.S. e-mail services.
  • Still, some lawmakers are concerned that the potential for intrusions on Americans’ privacy has grown under the 2008 law because the government is intercepting not just communications of its targets but communications about its targets as well. The expansiveness of the foreign-powers certification increases that concern.
  • In a 2011 FISA court opinion, a judge using an NSA-provided sample estimated that the agency could be collecting as many as 46,000 wholly domestic e-mails a year that mentioned a particular target’s e-mail address or phone number, in what is referred to as “about” collection. “When Congress passed Section 702 back in 2008, most members of Congress had no idea that the government was collecting Americans’ communications simply because they contained a particular individual’s contact information,” Sen. Ron Wyden (D-Ore.), who has co-sponsored legislation to narrow “about” collection authority, said in an e-mail to The Washington Post. “If ‘about the target’ collection were limited to genuine national security threats, there would be very little privacy impact. In fact, this collection is much broader than that, and it is scooping up huge amounts of Americans’ wholly domestic communications.”
  • The only reason the court has oversight of the NSA program is that Congress in 2008 gave the government a new authority to gather intelligence from U.S. companies that own the Internet cables running through the United States, former officials noted. Edgar, the former privacy officer at the Office of the Director of National Intelligence, said ultimately he believes the authority should be narrowed. “There are valid privacy concerns with leaving these collection decisions entirely in the executive branch,” he said. “There shouldn’t be broad collection, using this authority, of foreign government information without any meaningful judicial role that defines the limits of what can be collected.”
Gary Edwards

The True Story of How the Patent Bar Captured a Court and Shrank the Intellectual Commo... - 1 views

  • The change in the law wrought by the Federal Circuit can also be viewed substantively through the controversy over software patents. Throughout the 1960s, the USPTO refused to award patents for software innovations. However, several of the USPTO’s decisions were overruled by the patent-friendly U.S. Court of Customs and Patent Appeals, which ordered that software patents be granted. In Gottschalk v. Benson (1972) and Parker v. Flook (1978), the U.S. Supreme Court reversed the Court of Customs and Patent Appeals, holding that mathematical algorithms (and therefore software) were not patentable subject matter. In 1981, in Diamond v. Diehr, the Supreme Court upheld a software patent on the grounds that the patent in question involved a physical process—the patent was issued for software used in the molding of rubber. While affirming their prior ruling that mathematical formulas are not patentable in the abstract, the Court held that an otherwise patentable invention did not become unpatentable simply because it utilized a computer.
  • In the hands of the newly established Federal Circuit, however, this small scope for software patents in precedent was sufficient to open the floodgates. In a series of decisions culminating in State Street Bank v. Signature Financial Group (1998), the Federal Circuit broadened the criteria for patentability of software and business methods substantially, allowing protection as long as the innovation “produces a useful, concrete and tangible result.” Those broadened criteria led to an explosion of low-quality software patents, from Amazon’s 1-Click checkout system to Twitter’s pull-to-refresh feature on smartphones. The GAO estimates that more than half of all patents granted in recent years are software-related. Meanwhile, the Supreme Court continues to hold, as in Parker v. Flook, that computer software algorithms are not patentable, and has begun to push back against the Federal Circuit. In Bilski v. Kappos (2010), the Supreme Court once again held that abstract ideas are not patentable, and in Alice v. CLS (2014), it ruled that simply applying an abstract idea on a computer does not suffice to make the idea patent-eligible. It still is not clear what portion of existing software patents Alice invalidates, but it could be a significant one.
  • Supreme Court justices also recognize the Federal Circuit’s insubordination. In oral arguments in Carlsbad Technology v. HIF Bio (2009), Chief Justice John Roberts joked openly about it:
  • ...17 more annotations...
  • The Opportunity of the Commons
  • As a result of the Federal Circuit’s pro-patent jurisprudence, our economy has been flooded with patents that would otherwise not have been granted. If more patents meant more innovation, then we would now be witnessing a spectacular economic boom. Instead, we have been living through what Tyler Cowen has called a Great Stagnation. The fact that patents have increased while growth has not is known in the literature as the “patent puzzle.” As Michele Boldrin and David Levine put it, “there is no empirical evidence that [patents] serve to increase innovation and productivity, unless productivity is identified with the number of patents awarded—which, as evidence shows, has no correlation with measured productivity.”
  • While more patents have not resulted in faster economic growth, they have resulted in more patent lawsuits.
  • Software patents have characteristics that make them particularly susceptible to litigation. Unlike, say, chemical patents, software patents are plagued by a problem of description. How does one describe a software innovation in such a way that anyone searching for it will easily find it? As Christina Mulligan and Tim Lee demonstrate, chemical formulas are indexable, meaning that as the number of chemical patents grows, it will still be easy to determine if a molecule has been patented. Since software innovations are not indexable, they estimate that “patent clearance by all firms would require many times more hours of legal research than all patent lawyers in the United States can bill in a year. The result has been an explosion of patent litigation.” Software and business method patents, estimate James Bessen and Michael Meurer, are 2 and 7 times more likely to be litigated than other patents, respectively (4 and 13 times more likely than chemical patents).
  • Software patents make excellent material for predatory litigation brought by what are often called “patent trolls.”
  • Trolls use asymmetries in the rules of litigation to legally extort millions of dollars from innocent parties. For example, one patent troll, Innovatio IP Ventures, LLP, acquired patents that implicated Wi-Fi. In 2011, it started sending demand letters to coffee shops and hotels that offered wireless Internet access, offering to settle for $2,500 per location. This amount was far in excess of the 9.56 cents per device that Innovatio was entitled to under the “Fair, Reasonable, and Non-Discriminatory” licensing promises attached to their portfolio, but it was also much less than the cost of trial, and therefore it was rational for firms to pay. Cisco stepped in and spent $13 million in legal fees on the case, and settled on behalf of their customers for 3.2 cents per device. Other manufacturers had already licensed Innovatio’s portfolio, but that didn’t stop their customers from being targeted by demand letters.
  • Litigation cost asymmetries are magnified by the fact that most patent trolls are nonpracticing entities. This means that when patent infringement trials get to the discovery phase, they will cost the troll very little—a firm that does not operate a business has very few records to produce.
  • But discovery can cost a medium or large company millions of dollars. Using an event study methodology, James Bessen and coauthors find that infringement lawsuits by nonpracticing entities cost publicly traded companies $83 billion per year in stock market capitalization, while plaintiffs gain less than 10 percent of that amount.
  • Software patents also reduce innovation in virtue of their cumulative nature and the fact that many of them are frequently inputs into a single product. Law professor Michael Heller coined the phrase “tragedy of the anticommons” to refer to a situation that mirrors the well-understood “tragedy of the commons.” Whereas in a commons, multiple parties have the right to use a resource but not to exclude others, in an anticommons, multiple parties have the right to exclude others, and no one is therefore able to make effective use of the resource. The tragedy of the commons results in overuse of the resource; the tragedy of the anticommons results in underuse.
  • In order to cope with the tragedy of the anticommons, we should carefully investigate the opportunity of the commons. The late Nobelist Elinor Ostrom made a career of studying how communities manage shared resources without property rights. With appropriate self-governance institutions, Ostrom found again and again that a commons does not inevitably lead to tragedy—indeed, open access to shared resources can provide collective benefits that are not available under other forms of property management.
  • This suggests that—litigation costs aside—patent law could be reducing the stock of ideas rather than expanding it at current margins.
  • Advocates of extensive patent protection frequently treat the commons as a kind of wasteland. But considering the problems in our patent system, it is worth looking again at the role of well-tailored limits to property rights in some contexts. Just as we all benefit from real property rights that no longer extend to the highest heavens, we would also benefit if the scope of patent protection were more narrowly drawn.
  • Reforming the Patent System
  • This analysis raises some obvious possibilities for reforming the patent system. Diane Wood, Chief Judge of the 7th Circuit, has proposed ending the Federal Circuit’s exclusive jurisdiction over patent appeals—instead, the Federal Circuit could share jurisdiction with the other circuit courts. While this is a constructive suggestion, it still leaves the door open to the Federal Circuit playing “a leading role in shaping patent law,” which is the reason for its capture by patent interests. It would be better instead simply to abolish the Federal Circuit and return to the pre-1982 system, in which patents received no special treatment in appeals. This leaves open the possibility of circuit splits, which the creation of the Federal Circuit was designed to mitigate, but there are worse problems than circuit splits, and we now have them.
  • Another helpful reform would be for Congress to limit the scope of patentable subject matter via statute. New Zealand has done just that, declaring that software is “not an invention” to get around WTO obligations to respect intellectual property. Congress should do the same with respect to both software and business methods.
  • Finally, even if the above reforms were adopted, there would still be a need to address the asymmetries in patent litigation that result in predatory “troll” lawsuits. While the holding in Alice v. CLS arguably makes a wide swath of patents invalid, those patents could still be used in troll lawsuits because a ruling of invalidity for each individual patent might not occur until late in a trial. Current legislation in Congress addresses this class of problem by mandating disclosures, shifting fees in the case of spurious lawsuits, and enabling a review of the patent’s validity before a trial commences.
  • What matters for prosperity is not just property rights in the abstract, but good property-defining institutions. Without reform, our patent system will continue to favor special interests and forestall economic growth.
  •  
    "Libertarians intuitively understand the case for patents: just as other property rights internalize the social benefits of improvements to land, automobile maintenance, or business investment, patents incentivize the creation of new inventions, which might otherwise be undersupplied. So far, so good. But it is important to recognize that the laws that govern property, intellectual or otherwise, do not arise out of thin air. Rather, our political institutions, with all their virtues and foibles, determine the contours of property-the exact bundle of rights that property holders possess, their extent, and their limitations. Outlining efficient property laws is not a trivial problem. The optimal contours of property are neither immutable nor knowable a priori. For example, in 1946, the U.S. Supreme Court reversed the age-old common law doctrine that extended real property rights to the heavens without limit. The advent of air travel made such extensive property rights no longer practicable-airlines would have had to cobble together a patchwork of easements, acre by acre, for every corridor through which they flew, and they would have opened themselves up to lawsuits every time their planes deviated from the expected path. The Court rightly abridged property rights in light of these empirical realities. In defining the limits of patent rights, our political institutions have gotten an analogous question badly wrong. A single, politically captured circuit court with exclusive jurisdiction over patent appeals has consistently expanded the scope of patentable subject matter. This expansion has resulted in an explosion of both patents and patent litigation, with destructive consequences. "
  •  
    I added a comment to the page's article. Patents are antithetical to the precepts of Libertarianism and do not involve Natural Law rights. But I agree with the author that the Court of Appeals for the Federal Circuit should be abolished. It's a failed experiment.
Paul Merrell

Comcast Plans to Drop Time Warner Cable Deal - Bloomberg Business - 0 views

  • Fourteen months after unveiling a $45.2 billion merger that would create a new Internet and cable giant, Comcast Corp. is planning to walk away from its proposed takeover of Time Warner Cable Inc., people with knowledge of the matter said. The decision marks a swift unraveling of a deal that awaited federal approval for more than a year. Opposition from the U.S. Justice Department and Federal Communications Commission took shape over the past week, leaving officials of the two companies to conclude the deal wouldn’t pass muster.
  • Comcast’s board will meet to finalize the decision on Thursday, and an announcement may come as soon as Friday, said one of the people, who asked not to be identified because the information is private. Time Warner Cable executives plan to tell shareholders on an earnings conference call next Thursday how the company can survive independently, the person said.
  • On Wednesday, FCC staff joined lawyers at the Justice Department opposing the transaction. That day, FCC officials told representatives of the two companies they are leaning toward concluding the merger doesn’t help consumers, a person with knowledge of the matter said. The FCC’s plan to call a hearing effectively killed the deal’s chances of success. An FCC hearing can take months to complete and drag out the approval process beyond the companies’ time frame for completion. Bloomberg News reported last week that Justice Department staff was leaning against the deal. Senators including Al Franken, a Democrat from Minnesota, also voiced opposition. “Comcast’s withdrawal of its proposed merger with Time Warner Cable would be spectacularly good news for consumers,” Michael Copps, a Democratic former FCC commissioner working with Common Cause to oppose the deal, said in a statement.
  •  
    Looks like all that online lobbying from the internet community worked. 
Paul Merrell

Protocols of the Hackers of Zion? « LobeLog - 0 views

  • When Israeli Prime Minister Benjamin Netanyahu met with Google chairman Eric Schmidt on Tuesday afternoon, he boasted about Israel’s “robust hi-tech and cyber industries.” According to The Jerusalem Post, “Netanyahu also noted that ‘Israel was making great efforts to diversify the markets with which it is trading in the technological field.'” Just how diversified and developed Israeli hi-tech innovation has become was revealed the very next morning, when the Russian cyber-security firm Kaspersky Labs, which claims more than 400 million users internationally, announced that sophisticated spyware with the hallmarks of Israeli origin (although no country was explicitly identified) had targeted three European hotels that had been venues for negotiations over Iran’s nuclear program.
  • Wednesday’s Wall Street Journal, one of the first news sources to break the story, reported that Kaspersky itself had been hacked by malware whose code was remarkably similar to that of a virus attributed to Israel. Code-named “Duqu” because it used the letters DQ in the names of the files it created, the malware had first been detected in 2011. On Thursday, Symantec, another cyber-security firm, announced it too had discovered Duqu 2 on its global network, striking undisclosed telecommunication sites in Europe, North Africa, Hong Kong, and Southeast Asia. It said that Duqu 2 is much more difficult to detect than its predecessor because it lives exclusively in the memory of the computers it infects, rather than writing files to a drive or disk. The original Duqu shared coding with — and was written on the same platform as — Stuxnet, the computer worm that partially disabled enrichment centrifuges in Iranian nuclear power plants, according to a 2012 report in The New York Times. Intelligence and military experts said that Stuxnet was first tested at Dimona, a nuclear-reactor complex in the Negev desert that houses Israel’s own clandestine nuclear weapons program. While Stuxnet is widely believed to have been a joint Israeli-U.S. operation, Israel seems to have developed and implemented Duqu on its own.
  • Coding of the spyware that targeted two Swiss hotels and one in Vienna—both sites where talks were held between the P5+1 and Iran—so closely resembled that of Duqu that Kaspersky has dubbed it “Duqu 2.” A Kaspersky report contends that the new and improved Duqu would have been almost impossible to create without access to the original Duqu code. Duqu 2’s one hundred “modules” enabled the cyber attackers to commandeer infected computers, compress video feeds (including those from hotel surveillance cameras), monitor and disrupt telephone service and Wi-Fi, and steal electronic files. The hackers’ penetration of computers used by the front desk would have allowed them to determine the room numbers of negotiators and delegation members. Duqu 2 also gave the hackers the ability to operate two-way microphones in the hotels’ elevators and control their alarm systems.
Paul Merrell

Hackers Prove Fingerprints Are Not Secure, Now What? | nsnbc international - 0 views

  • The Office of Personnel Management (OPM) recently revealed that an estimated 5.6 million government employees were affected by the hack, not 1.1 million as previously assumed.
  • Samuel Schumach, spokesman for the OPM, said: “As part of the government’s ongoing work to notify individuals affected by the theft of background investigation records, the Office of Personnel Management and the Department of Defense have been analyzing impacted data to verify its quality and completeness. Of the 21.5 million individuals whose Social Security Numbers and other sensitive information were impacted by the breach, the subset of individuals whose fingerprints have been stolen has increased from a total of approximately 1.1 million to approximately 5.6 million.” This analysis involved the Department of Defense (DoD), the Department of Homeland Security (DHS), the National Security Agency (NSA), and the Pentagon. Schumach added that “if, in the future, new means are developed to misuse the fingerprint data, the government will provide additional information to individuals whose fingerprints may have been stolen in this breach.” However, we do not need to wait for the future for fingerprint data to be misused and coveted by hackers.
  • Look no further than the security flaws in Samsung’s new Galaxy S5 smartphone as was demonstrated by researchers at Security Research Labs (SRL) showing how fingerprints, iris scans and other biometric identifiers could be fabricated and yet authenticated by the Apple Touch ID fingerprint scanner. The shocking part of this demonstration is that this hack was achieved less than 2 days after the technology was released to the public by Apple. Ben Schlabs, a researcher for SRL, explained: “We expected we’d be able to spoof the S5’s Finger Scanner, but I hoped it would at least be a challenge. The S5 Finger Scanner feature offers nothing new except—because of the way it is implemented in this Android device—slightly higher risk than that already posed by previous devices.” Schlabs and other researchers discovered that “the S5 has no mechanism requiring a password when encountering a large number of incorrect finger swipes.” By rebooting the smartphone, Schlabs could force “the handset to accept an unlimited number of incorrect swipes without requiring users to enter a password [and] the S5 fingerprint authenticator [could] be associated with sensitive banking or payment apps such as PayPal.”
  • ...1 more annotation...
  • Schlabs said: “Perhaps most concerning is that Samsung does not seem to have learned from what others have done less poorly. Not only is it possible to spoof the fingerprint authentication even after the device has been turned off, but the implementation also allows for seemingly unlimited authentication attempts without ever requiring a password. Incorporation of fingerprint authentication into highly sensitive apps such as PayPal gives a would-be attacker an even greater incentive to learn the simple skill of fingerprint spoofing.” Last year, hackers from the Chaos Computer Club (CCC) proved Apple wrong when the corporation insisted that their new iPhone 5S fingerprint sensor is “a convenient and highly secure way to access your phone.” CCC stated that it is as easy as stealing a fingerprint from a drinking glass – and anyone can do it.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends. (A small illustration of this kind of class-based extension appears at the end of this item.)
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper. (A sketch of a typical ePub package appears at the end of this item.)
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content. (A sketch of a typical IDML package likewise appears at the end of this item.)
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it. (An illustration of this clean-up step appears at the end of this item.)
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges. (A skeletal sketch of such a transform appears at the end of this item.)
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
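One reason the well-documented IDML format matters is that it can be inspected with ordinary tools: an .idml file is a ZIP package of XML parts. The sketch below assumes a hypothetical layout.idml exported from InDesign and only lists what is inside; it is meant to show that nothing proprietary is needed to look at the layout data.

```python
# Sketch: peek inside an InDesign IDML package (a ZIP of XML parts).
# "layout.idml" is a hypothetical file exported from InDesign.
import zipfile

with zipfile.ZipFile("layout.idml") as idml:
    for name in idml.namelist():              # e.g. designmap.xml, Stories/, Spreads/
        print(name)
    designmap = idml.read("designmap.xml")    # the package's top-level map
    print(designmap[:200])
```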
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
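A rough sketch of the pivot idea in the note above, under heavy assumptions: the legacy NCP document has already been serialized as XML (here a hypothetical note.ncpx), and two hypothetical stylesheets exist, ncp-to-xhtml.xsl into the XHTML pivot and xhtml-to-target.xsl out of it. Neither stylesheet exists today; the point is only that the chain is two ordinary XSLT passes.

```python
# Hypothetical two-stage pivot: legacy XML -> XHTML pivot -> downstream format.
# Both stylesheets and the input file are assumptions; only the chaining matters.
from lxml import etree

def pivot(source_xml, to_pivot_xsl, to_target_xsl, out_path):
    to_pivot = etree.XSLT(etree.parse(to_pivot_xsl))
    to_target = etree.XSLT(etree.parse(to_target_xsl))
    pivot_doc = to_pivot(etree.parse(source_xml))   # legacy XML -> XHTML pivot
    result = to_target(pivot_doc)                   # XHTML pivot -> target markup
    result.write(out_path, xml_declaration=True, encoding="UTF-8")

pivot("note.ncpx", "ncp-to-xhtml.xsl", "xhtml-to-target.xsl", "note.xhtml")
```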
Gonzalo San Gil, PhD.

How the way you type can shatter anonymity-even on Tor | Ars Technica - 0 views

  •  
    "Researchers perfect technique that profiles people based on unique keystroke traits. by Dan Goodin"
Paul Merrell

Google Chrome Listening In To Your Room Shows The Importance Of Privacy Defense In Depth - 0 views

  • Yesterday, news broke that Google has been stealth downloading audio listeners onto every computer that runs Chrome, and transmits audio data back to Google. Effectively, this means that Google had taken itself the right to listen to every conversation in every room that runs Chrome somewhere, without any kind of consent from the people eavesdropped on. In official statements, Google shrugged off the practice with what amounts to “we can do that”. It looked like just another bug report. "When I start Chromium, it downloads something." Followed by strange status information that notably included the lines "Microphone: Yes" and "Audio Capture Allowed: Yes".
  • Without consent, Google’s code had downloaded a black box of code that – according to itself – had turned on the microphone and was actively listening to your room. A brief explanation of the Open-source / Free-software philosophy is needed here. When you’re installing a version of GNU/Linux like Debian or Ubuntu onto a fresh computer, thousands of really smart people have analyzed every line of human-readable source code before that operating system was built into computer-executable binary code, to make it common and open knowledge what the machine actually does instead of trusting corporate statements on what it’s supposed to be doing. Therefore, you don’t install black boxes onto a Debian or Ubuntu system; you use software repositories that have gone through this source-code audit-then-build process. Maintainers of operating systems like Debian and Ubuntu use many so-called “upstreams” of source code to build the final product. Chromium, the open-source version of Google Chrome, had abused its position as trusted upstream to insert lines of source code that bypassed this audit-then-build process, and which downloaded and installed a black box of unverifiable executable code directly onto computers, essentially rendering them compromised. We don’t know and can’t know what this black box does. But we see reports that the microphone has been activated, and that Chromium considers audio capture permitted.
  • This was supposedly to enable the “Ok, Google” behavior – that when you say certain words, a search function is activated. Certainly a useful feature. Certainly something that enables eavesdropping of every conversation in the entire room, too. Obviously, your own computer isn’t the one to analyze the actual search command. Google’s servers do. Which means that your computer had been stealth configured to send what was being said in your room to somebody else, to a private company in another country, without your consent or knowledge, an audio transmission triggered by… an unknown and unverifiable set of conditions. Google had two responses to this. The first was to introduce a practically-undocumented switch to opt out of this behavior, which is not a fix: the default install will still wiretap your room without your consent, unless you opt out, and more importantly, know that you need to opt out, which is nowhere a reasonable requirement. But the second was more of an official statement following technical discussions on Hacker News and other places. That official statement amounted to three parts (paraphrased, of course):
  • 1) Yes, we’re downloading and installing a wiretapping black-box to your computer. But we’re not actually activating it. We did take advantage of our position as trusted upstream to stealth-insert code into open-source software that installed this black box onto millions of computers, but we would never abuse the same trust in the same way to insert code that activates the eavesdropping-blackbox we already downloaded and installed onto your computer without your consent or knowledge. You can look at the code as it looks right now to see that the code doesn’t do this right now. 2) Yes, Chromium is bypassing the entire source code auditing process by downloading a pre-built black box onto people’s computers. But that’s not something we care about, really. We’re concerned with building Google Chrome, the product from Google. As part of that, we provide the source code for others to package if they like. Anybody who uses our code for their own purpose takes responsibility for it. When this happens in a Debian installation, it is not Google Chrome’s behavior, this is Debian Chromium’s behavior. It’s Debian’s responsibility entirely. 3) Yes, we deliberately hid this listening module from the users, but that’s because we consider this behavior to be part of the basic Google Chrome experience. We don’t want to show all modules that we install ourselves.
  • If you think this is an excusable and responsible statement, raise your hand now. Now, it should be noted that this was Chromium, the open-source version of Chrome. If somebody downloads the Google product Google Chrome, as in the prepackaged binary, you don’t even get a theoretical choice. You’re already downloading a black box from a vendor. In Google Chrome, this is all included from the start. This episode highlights the need for hard, not soft, switches to all devices – webcams, microphones – that can be used for surveillance. A software on/off switch for a webcam is no longer enough, a hard shield in front of the lens is required. A software on/off switch for a microphone is no longer enough, a physical switch that breaks its electrical connection is required. That’s how you defend against this in depth.
  • Of course, people were quick to downplay the alarm. “It only listens when you say ‘Ok, Google’.” (Ok, so how does it know to start listening just before I’m about to say ‘Ok, Google?’) “It’s no big deal.” (A company stealth installs an audio listener that listens to every room in the world it can, and transmits audio data to the mothership when it encounters an unknown, possibly individually tailored, list of keywords – and it’s no big deal!?) “You can opt out. It’s in the Terms of Service.” (No. Just no. This is not something that is the slightest amount of permissible just because it’s hidden in legalese.) “It’s opt-in. It won’t really listen unless you check that box.” (Perhaps. We don’t know, Google just downloaded a black box onto my computer. And it may not be the same black box as was downloaded onto yours.) Early last decade, privacy activists practically yelled and screamed that the NSA’s taps of various points of the Internet and telecom networks had the technical potential for enormous abuse against privacy. Everybody else dismissed those points as basically tinfoilhattery – until the Snowden files came out, and it was revealed that precisely everybody involved had abused their technical capability for invasion of privacy as far as was possible. Perhaps it would be wise to not repeat that exact mistake. Nobody, and I really mean nobody, is to be trusted with a technical capability to listen to every room in the world, with listening profiles customizable at the identified-individual level, on the mere basis of “trust us”.
  • Privacy remains your own responsibility.
  •  
    And of course, Google would never succumb to a subpoena requiring it to turn over the audio stream to the NSA. The Tor Browser just keeps looking better and better. https://www.torproject.org/projects/torbrowser.html.en
Paul Merrell

AT&T ups the ante in speech recognition | Signal Strength - CNET News - 2 views

  • It's developed a core technology platform, known as Watson, which is a cloud-based system of services that not only identifies words but interprets meaning and context to deliver more accurate results. The system itself is built on servers that model and compare speech to recorded voices. Watson is an evolving platform that with more data is able to adapt and learn so that it continues to improve accuracy and also cross reference data to use speech as input for getting to all kinds of communication and data. "We are really on the cusp of a technology revolution in speech and language technology," said Mazin Gilbert, executive director of speech and language technology at AT&T Labs. "It's no longer about simply trying to get the words right. It's about adding intelligence to interpret what is being said and then using that to apply to other modes of communication, such as text or video."
  • The system is designed to get more accurate over time as it learns the speech patterns of large numbers of users.
Paul Merrell

Conformance Testing and Certification Model for W3C Specifications - 0 views

  • The use of conformity assessment as a means by which buyers and sellers can communicate requirements will increase as information technology systems and applications grow more complex. Models for conformance testing and certification programs are necessary to understand principles and issues that are essential for successful conformity assessment programs. This paper presents one such model by identifying key roles, activities and products involved in any conformance testing and certification program. This model has been successfully used by NIST in helping private-sector organizations establish their certification programs.
Paul Merrell

EU looks into telecoms blocking Internet calls - International Herald Tribune - 0 views

  • European Union regulators are looking into whether mobile phone operators who block customers from making inexpensive wireless calls over the Internet are breaking competition rules. The European Commission, the EU antitrust authority, has sent questionnaires to phone companies asking what "tools" they use to "control, manage, block, slow down or otherwise restrict or filter" Internet-based voice calls. The EU deadline for responding to the survey was Tuesday. The questionnaire, obtained by Bloomberg News, does not identify any companies. Some mobile carriers have blocked services that use voice-over-Internet protocol, or VoIP, which allows users to make calls over the Web. Companies may be seeking to stop customers from accessing applications, like eBay's Skype, to defend voice revenue from the less expensive Internet services, Carolina Milanesi, research director for mobile devices at Gartner, the research company, said.
    • Paul Merrell
       
      Building a Connected World --- The Role of Antitrust Law and Lawyers.
  •  
    Superficially, this sounds like an application of the principles won by DG Competition in the Court of First Instance's Commission v. Microsoft interoperability decision. But note that here we deal with an investigation into deliberately-created interop barriers rather than those maintained by withholding full communication protocol specifications from competitors. Notice that the investigation encompasses throttling of internet connections for particular uses, an increasingly common practice by Comcast and other ISPs in the U.S., where both VOIP and P2P file-sharing are targeted uses. E.U. and U.S. antitrust law are similar, as efforts to harmonize antitrust law on both sides of The Pond are now decades old; this move does not bode well for bandwidth throttling in the U.S., particularly when aimed at throttling competition. It takes no giant mental leap to apply such principles to big vendor-dominated IT standards bodies that deliberately create or maintain interop barriers in data format standards. Indeed, DG Competition Commissioner Neelie Kroes has already served notice that interop barriers in standards-setting is an item of interest.
Paul Merrell

The Business Of IT: Gartner Reveals Top 10 Technologies - 0 views

  • The good folks over at the Gartner Group have revealed the top 10 technologies that they believe will change the world over the next four years: multicore and hybrid processors; virtualization and fabric computing; social networks and social software; cloud computing and cloud/Web platforms; Web mashups; user interface; ubiquitous computing; contextual computing; augmented reality; and semantics.
Paul Merrell

WG Review: Internet Wideband Audio Codec (codec) - 0 views

  • According to reports from developers of Internet audio applications and operators of Internet audio services, there are no standardized, high-quality audio codecs that meet all of the following three conditions: 1. Are optimized for use in interactive Internet applications. 2. Are published by a recognized standards development organization (SDO) and therefore subject to clear change control. 3. Can be widely implemented and easily distributed among application developers, service operators, and end users. There exist codecs that provide high quality encoding of audio information, but that are not optimized for the actual conditions of the Internet; according to reports, this mismatch between design and deployment has hindered adoption of such codecs in interactive Internet applications.
  • The goal of this working group is to develop a single high-quality audio codec that is optimized for use over the Internet and that can be widely implemented and easily distributed among application developers, service operators, and end users. Core technical considerations include, but are not necessarily limited to, the following: 1. Designing for use in interactive applications (examples include, but are not limited to, point-to-point voice calls, multi-party voice conferencing, telepresence, teleoperation, in-game voice chat, and live music performance) 2. Addressing the real transport conditions of the Internet as identified and prioritized by the working group 3. Ensuring interoperability with the Real-time Transport Protocol (RTP), including secure transport via SRTP 4. Ensuring interoperability with Internet signaling technologies such as Session Initiation Protocol (SIP), Session Description Protocol (SDP), and Extensible Messaging and Presence Protocol (XMPP); however, the result should not depend on the details of any particular signaling technology
Paul Merrell

Goggles turns Android into pocket translator - Google 24/7 - Fortune Tech - 2 views

  • Google's Goggles mobile application has always been a fun tool. The idea is that if you snap a picture and upload it to Google (as well as your location/time), Google could present more about that object, and by extension, your surroundings. It isn't always terribly accurate in identifying what is in the picture, but the results are sometimes helpful, if not amusing. Today, Goggles got a very specific use feature that will help travelers and readers of foreign language texts immensely. Now you can point your Android camera at a sign, book, or any sort of foreign word, snap a picture, and get a translation. Google uses optical character recognition, or OCR, to turn the image into words, and then uses its translation services to turn those words into a language you recognize.
Paul Merrell

W3C Issues Report on Web and Television Convergence - 0 views

  • 28 March 2011 -- The Web and television convergence story was the focus of W3C's Second Web and TV Workshop, which took place in Berlin in February. Today, W3C publishes a report that summarizes the discussion among the 77 organizations that participated, including broadcasters, telecom companies, cable operators, OTT (over the top) companies, content providers, device vendors, software vendors, Web application providers, researchers, governments, and standardization organizations active in the TV space. Convergence priorities identified in the report include: adaptive streaming over HTTP; home networking and second-screen scenarios; the role of metadata and relation to Semantic Web technology; ensuring that convergent solutions are accessible; profiling and testing; and possible extensions to HTML5 for Television.
Gonzalo San Gil, PhD.

Privacy Badger | Electronic Frontier Foundation - 0 views

  •  
    [Privacy Badger blocks spying ads and invisible trackers.]
  •  
    I've been using it for about a month as a Chrome extension, which at least at the time was still in beta. It hasn't caused any problems on either the Linux or Windows boxes, and it appears to be working as intended on both systems. The sliders discussed in the article only appear when you are viewing a page with identified or candidate tracking cookies. Some trackers it blocks on its own; for others, you use a slider to set whether they are blocked, or wait until the program has gathered enough data about that site to make the decision itself. The program does not use a blacklist of sites, although it ships with a built-in whitelist of sites that honor the Do Not Track browser setting. Once a tracking cookie is blocked, it's blocked for all sites you visit. So this isn't instant, complete protection against tracking cookies; it's designed to reduce the number of sites whose tracking cookies follow you around the Web. But this is not a mature program, and its effectiveness will improve with each update.
Paul Merrell

A Short Guide to the Internet's Biggest Enemies | Electronic Frontier Foundation - 1 views

  • Reporters Without Borders (RSF) released its annual “Enemies of the Internet” index this week—a ranking first launched in 2006 intended to track countries that repress online speech, intimidate and arrest bloggers, and conduct surveillance of their citizens. Some countries have been mainstays on the annual index, while others have been able to work their way off the list. Two countries particularly deserving of praise in this area are Tunisia and Myanmar (Burma), both of which have stopped censoring the Internet in recent years and are headed in the right direction toward Internet freedom. In the former category are some of the world’s worst offenders: Cuba, North Korea, China, Iran, Saudi Arabia, Vietnam, Belarus, Bahrain, Turkmenistan, Syria. Nearly every one of these countries has amped up their online repression in recent years, from implementing sophisticated surveillance (Syria) to utilizing targeted surveillance tools (Vietnam) to increasing crackdowns on online speech (Saudi Arabia). These are countries where, despite advocacy efforts by local and international groups, no progress has been made. The newcomers: A third, perhaps even more disheartening category, is the list of countries new to this year's index. A motley crew, these nations have all taken new, harsh approaches to restricting speech or monitoring citizens:
  • United States: This is the first time the US has made it onto RSF’s list. While the US government doesn’t censor online content, and pours money into promoting Internet freedom worldwide, the National Security Agency’s unapologetic dragnet surveillance and the government’s treatment of whistleblowers have earned it a spot on the index. United Kingdom: The European nation has been dubbed by RSF as the “world champion of surveillance” for its recently-revealed depraved strategies for spying on individuals worldwide. The UK also joins countries like Ethiopia and Morocco in using terrorism laws to go after journalists. Not noted by RSF, but also important, is the fact that the UK is also cracking down on legal pornography, forcing Internet users to opt-in with their ISP if they wish to view it and creating a slippery slope toward overblocking. This is in addition to the government’s use of an opaque, shadowy NGO to identify child sexual abuse images, sometimes resulting instead in censorship of legitimate speech.