
Future of the Web: group items matching "credit" in title, tags, annotations, or URL

Gonzalo San Gil, PhD.

Not Alone: Cooperative and Trade Union Solutions for Freelancers - Shareable - 0 views

  •  
    " By Pat Conaty April 6, 2016 Photo credit: The Blue Diamond Gallery / CC BY. A proliferation of atypical forms of work in Europe has become known as "The Gig Economy." For many, a permanent state of social economic uncertainty is the new normal. Casual work, temping, zero hour contracts, and diverse forms of self-employment are characteristic of this brave new world of "precarious work.""
Paul Merrell

Venezuelan Intelligence Services Arrest Credicard Directors - nsnbc international - 0 views

  • Venezuelan President Nicolas Maduro confirmed Saturday that the state intelligence service SEBIN arrested several directors from the Credicard financial transaction company on Friday night. 
  • The financial consortium is accused of having deliberately taken advantage of a series of cyber attacks on state internet provider CANTV Friday to paralyse its online payment platform–responsible for the majority of the country’s accredited financial transactions, according to its website. “We have proof that it was a deliberate act what Credicard did yesterday. Right now the main people responsible for Credicard are under arrest,” confirmed the president. The government says that millions of attempted purchases using in-store credit and debit card payment machines provided by the company were interrupted after its platform went down for the most part of the day. Authorities also maintain that the company waited longer than the established protocol of one hour before responding to the issues.
  • According to CANTV President Manuel Fernandez, Venezuela’s internet platform suffered at least three attacks from an external source on Friday, one of which was aimed at state oil company PDVSA. CANTV was notified of the attacks by international provider LANautilus, which belongs to Telecom Italia. Nonetheless, Fernandez denied that Credicard’s platform was affected by the interferences to CANTV’s service, underscoring that other financial transaction companies that rely on the state enterprise continued to be operative.
  • ...1 more annotation...
  • On Friday SEBIN Director Gustavo Gonzalez Lopez also openly accused members of the rightwing coalition, the Democratic Unity Roundtable (MUD), of being implicated in the incident. “Members of the MUD involved in the attack on electronic banking service,” he tweeted. “The financial war continues inside and outside the country, internally they are damaging banking operability,” he added. Venezuelan news source La Iguana has reported that the server administrator of Credicard is the company Dayco Host, which belongs to the D’Agostino family. Diana D’Agostino is married to veteran opposition politician Henry Ramos Allup, president of the National Assembly. On Saturday, the government-promoted Productive Economy Council held an extraordinary meeting of political and business representatives to reject the attack on the country’s financial system.
Paul Merrell

We're Halfway to Encrypting the Entire Web | Electronic Frontier Foundation - 0 views

  • The movement to encrypt the web has reached a milestone. As of earlier this month, approximately half of Internet traffic is now protected by HTTPS. In other words, we are halfway to a web safer from the eavesdropping, content hijacking, cookie stealing, and censorship that HTTPS can protect against. Mozilla recently reported that the average volume of encrypted web traffic on Firefox now surpasses the average unencrypted volume
  • Google Chrome’s figures on HTTPS usage are consistent with that finding, showing that over 50% of all pages loaded are protected by HTTPS across different operating systems.
  • This milestone is a combination of HTTPS implementation victories: from tech giants and large content providers, from small websites, and from users themselves.
  • ...4 more annotations...
  • Starting in 2010, EFF members have pushed tech companies to follow crypto best practices. We applauded when Facebook and Twitter implemented HTTPS by default, and when Wikipedia and several other popular sites later followed suit. Google has also put pressure on the tech community by using HTTPS as a signal in search ranking algorithms and, starting this year, showing security warnings in Chrome when users load HTTP sites that request passwords or credit card numbers. EFF’s Encrypt the Web Report also played a big role in tracking and encouraging specific practices. Recently other organizations have followed suit with more sophisticated tracking projects. For example, Secure the News and Pulse track HTTPS progress among news media sites and U.S. government sites, respectively.
  • But securing large, popular websites is only one part of a much bigger battle. Encrypting the entire web requires HTTPS implementation to be accessible to independent, smaller websites. Let’s Encrypt and Certbot have changed the game here, making what was once an expensive, technically demanding process into an easy and affordable task for webmasters across a range of resource and skill levels. Let’s Encrypt is a Certificate Authority (CA) run by the Internet Security Research Group (ISRG) and founded by EFF, Mozilla, and the University of Michigan, with Cisco and Akamai as founding sponsors. As a CA, Let’s Encrypt issues and maintains digital certificates that help web users and their browsers know they’re actually talking to the site they intended to. CAs are crucial to secure, HTTPS-encrypted communication, as these certificates verify the association between an HTTPS site and a cryptographic public key. Through EFF’s Certbot tool, webmasters can get a free certificate from Let’s Encrypt and automatically configure their server to use it. Since we announced that Let’s Encrypt was the web’s largest certificate authority last October, it has exploded from 12 million certs to over 28 million. Most of Let’s Encrypt’s growth has come from giving previously unencrypted sites their first-ever certificates. A large share of these leaps in HTTPS adoption are also thanks to major hosting companies and platforms--like WordPress.com, Squarespace, and dozens of others--integrating Let’s Encrypt and providing HTTPS to their users and customers.
  • Unfortunately, you can only use HTTPS on websites that support it--and about half of all web traffic is still with sites that don’t. However, when sites partially support HTTPS, users can step in with the HTTPS Everywhere browser extension. A collaboration between EFF and the Tor Project, HTTPS Everywhere makes your browser use HTTPS wherever possible. Some websites offer inconsistent support for HTTPS, use unencrypted HTTP as a default, or link from secure HTTPS pages to unencrypted HTTP pages. HTTPS Everywhere fixes these problems by rewriting requests to these sites to HTTPS, automatically activating encryption and HTTPS protection that might otherwise slip through the cracks.
  • Our goal is a universally encrypted web that makes a tool like HTTPS Everywhere redundant. Until then, we have more work to do. Protect your own browsing and websites with HTTPS Everywhere and Certbot, and spread the word to your friends, family, and colleagues to do the same. Together, we can encrypt the entire web.
  •  
    HTTPS connections don't work for you if you don't use them. If you're not using HTTPS Everywhere in your browser, you should be; it's your privacy that is at stake. And every encrypted communication you make adds to the mass of encrypted traffic that the NSA and other internet voyeurs must process; because cracking encrypted messages is computer-resource intensive, the voyeurs do not have the resources to crack more than a tiny fraction. HTTPS Everywhere is a free extension for Firefox, Chrome, and Opera. You can get it here: https://www.eff.org/HTTPS-everywhere
Paul Merrell

US judge slams surveillance requests as "repugnant to the Fourth Amendment" - World Socialist Web Site - 0 views

  • Federal Magistrate Judge John M. Facciola denied a US government request earlier this month for a search and seizure warrant, targeting electronic data stored on Apple Inc. property. Facciola’s order, issued on March 7, 2014, rejected what it described as only the latest in a series of “overbroad search and seizure requests,” and “unconstitutional warrant applications” submitted by the US government to the US District Court for the District of Columbia. Facciola referred to the virtually unlimited warrant request submitted by the Justice Department as “repugnant to the Fourth Amendment.” The surveillance request sought information in relation to a “kickback investigation” of a defense contractor, details about which remain secret. It is significant, however, that the surveillance request denied by Facciola relates to a criminal investigation, unrelated to terrorism. This demonstrates that the use by the Obama administration of blanket warrants enabling them to seize all information on a person's Internet accounts is not limited to terrorism, as is frequently claimed, but is part of a program of general mass illegal spying on the American people.
  • Facciola’s ruling states in no uncertain terms that the Obama administration has aggressively and repeatedly sought expansive, unconstitutional warrants, ignoring the court’s insistence on specific, narrowly targeted surveillance requests. “The government continues to submit overly broad warrants and makes no effort to balance the law enforcement interest against the obvious expectation of privacy email account holders have in their communications… The government continues to ask for all electronically stored information in email accounts, irrespective of the relevance to the investigation,” wrote Judge Facciola. As stated in the ruling, the surveillance requests submitted to the court by the US government sought the following comprehensive, virtually limitless list of information about the target: “All records or other information stored by an individual using each account, including address books, contact and buddy lists, pictures, and files… All records or other information regarding the identification of the accounts, to include full name, physical address, telephone numbers and other identifiers, records of session times and durations, the date on which each account was created, the length of service, the types of service utilized, the Internet Protocol (IP) address used to register each account, log-in IP addresses associated with session times and dates, account status, alternative email addresses provided during registration, methods of connecting, log files, and means of payment (including any credit or bank account number).”
  • Responding to these all-encompassing warrant requests, Judge Facciola ruled that evidence of probable cause was necessary for each specific item sought by the government. “This Court is increasingly concerned about the government’s applications for search warrants for electronic data. In essence, its applications ask for the entire universe of information tied to a particular account, even if it has established probable cause only for certain information,” Facciola wrote. “It is the Court’s duty to reject any applications for search warrants where the standard of probable cause has not been met… To follow the dictates of the Fourth Amendment and to avoid issuing a general warrant, a court must be careful to ensure that probable cause exists to seize each item specified in the warrant application… Any search of an electronic source has the potential to unearth tens or hundreds of thousands of individual documents, pictures, movies, or other constitutionally protected content.” Facciola also noted in the ruling that the government never reported the length of time it would keep the data, or whether it planned to destroy the data at any point.
  • ...2 more annotations...
  • Facciola’s ruling represents a reversal from a previous ruling, in which a Kansas judge allowed the government to conduct such unlimited searches of Yahoo accounts.
  • In testimony, De and his deputy Brad Wiegmann rejected the privacy board’s advice that the agency limit its data mining to specific targets approved by specific warrants. “If you have to go back to court every time you look at the information in your custody, you can imagine that would be quite burdensome,” said Wiegmann. De further said on the topic, “That information is at the government’s disposal to review in the first instance.” As these statements indicate, the intelligence establishment rejects any restrictions on its prerogative to spy on every aspect of citizens’ lives at will, even the entirely cosmetic regulations proposed by the Obama administration-appointed PCLOB.
Paul Merrell

Surveillance scandal rips through hacker community | Security & Privacy - CNET News - 0 views

  • One security start-up that had an encounter with the FBI was Wickr, a privacy-forward text messaging app for the iPhone with an Android version in private beta. Wickr's co-founder Nico Sell told CNET at Defcon, "Wickr has been approached by the FBI and asked for a backdoor. We said, 'No.'" The mistrust runs deep. "Even if [the NSA] stood up tomorrow and said that [they] have eliminated these programs," said Marlinspike, "How could we believe them? How can we believe that anything they say is true?" Where does security innovation go next? The immediate future of information security innovation most likely lies in software that provides an existing service but with heightened privacy protections, such as webmail that doesn't mine you for personal data.
  • Wickr's Sell thinks that her company has hit upon a privacy innovation that a few others are also doing, but many will soon follow: the company itself doesn't store user data. "[The FBI] would have to force us to build a new app. With the current app there's no way," she said, that they could incorporate backdoor access to Wickr users' texts or metadata. "Even if you trust the NSA 100 percent that they're going to use [your data] correctly," Sell said, "Do you trust that they're going to be able to keep it safe from hackers? What if somebody gets that database and posts it online?" To that end, she said, people will start seeing privacy innovation for services that don't currently provide it. Calling it "social networks 2.0," she said that social network competitors will arise that do a better job of protecting their customer's privacy and predicted that some that succeed will do so because of their emphasis on privacy. Abine's recent MaskMe browser add-on and mobile app for creating disposable e-mail addresses, phone numbers, and credit cards is another example of a service that doesn't have access to its own users' data.
  • Stamos predicted changes in services that companies with cloud storage offer, including offering customers the ability to store their data outside of the U.S. "If they want to stay competitive, they're going to have to," he said. But, he cautioned, "It's impossible to do a cloud-based ad supported service." Soghoian added, "The only way to keep a service running is to pay them money." This, he said, is going to give rise to a new wave of ad-free, privacy protective subscription services.
  • ...2 more annotations...
  • The issue with balancing privacy and surveillance is that the wireless carriers are not interested in privacy, he said. "They've been providing wiretapping for 100 years. Apple may in the next year protect voice calls," he said, and said that the best hope for ending widespread government surveillance will be the makers of mobile operating systems like Apple and Google. Not all upcoming security innovation will be focused on that kind of privacy protection. Security researcher Brandon Wiley showed off at Defcon a protocol he calls Dust that can obfuscate different kinds of network traffic, with the end goal of preventing censorship. "I only make products about letting you say what you want to say anywhere in the world," such as content critical of governments, he said. Encryption can hide the specifics of the traffic, but some governments have figured out that they can simply block all encrypted traffic, he said. The Dust protocol would change that, he said, making it hard to tell the difference between encrypted and unencrypted traffic. It's hard to build encryption into pre-existing products, Wiley said. "I think people are going to make easy-to-use, encrypted apps, and that's going to be the future."
  • Companies could face severe consequences from their security experts, said Stamos, if the in-house experts find out that they've been lied to about providing government access to customer data. You could see "lots of resignations and maybe publicly," he said. "It wouldn't hurt their reputations to go out in a blaze of glory." Perhaps not surprisingly, Marlinspike sounded a hopeful call for non-destructive activism on Defcon's 21st anniversary. "As hackers, we don't have a lot of influence on policy. I hope that's something that we can focus our energy on," he said.
  •  
    NSA as the cause of the next major disruption in the social networking service industry?  Grief ahead for Google? Note the point made that: "It's impossible to do a cloud-based ad supported service" where the encryption/decryption takes place on the client side. 
Paul Merrell

How an FBI informant orchestrated the Stratfor hack - 0 views

  • Sitting inside a medium-security federal prison in Kentucky, Jeremy Hammond looks defiant and frustrated.  “[The FBI] could've stopped me,” he told the Daily Dot last month at the Federal Correctional Institution, Manchester. “They could've. They knew about it. They could’ve stopped dozens of sites I was breaking into.” Hammond is currently serving the remainder of a 10-year prison sentence in part for his role in one of the most high-profile cyberattacks of the early 21st century. His 2011 breach of Strategic Forecasting, Inc. (Stratfor) left tens of thousands of Americans vulnerable to identity theft and irrevocably damaged the Texas-based intelligence firm's global reputation. He was also indicted for his role in the June 2011 hack of an Arizona state law enforcement agency's computer servers.
  • There's no question of his guilt: Hammond, 29, admittedly hacked into Stratfor’s network and exfiltrated an estimated 60,000 credit card numbers and associated data and millions of emails, information that was later shared with the whistleblower organization WikiLeaks and the hacker collective Anonymous.   Sealed court documents obtained by the Daily Dot and Motherboard, however, reveal that the attack was instigated and orchestrated not by Hammond, but by an informant, with the full knowledge of the Federal Bureau of Investigation (FBI).  In addition to directly facilitating the breach, the FBI left Stratfor and its customers—which included defense contractors, police chiefs, and National Security Agency employees—vulnerable to future attacks and fraud, and it requested knowledge of the data theft to be withheld from affected customers. This decision would ultimately allow for millions of dollars in damages.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
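To make the XSLT step discussed in the annotations and comment above concrete, here is a minimal sketch of an XHTML-to-ICML stylesheet of the kind the article describes. It is not the SFU team's actual script: the ICML element and attribute names (Document, Story, ParagraphStyleRange, CharacterStyleRange, Content, Br, AppliedParagraphStyle) and the paragraph style names are assumptions drawn from Adobe's published IDML/ICML documentation, and a real ICML story carries additional required metadata not shown here. It could be run with any XSLT 1.0 processor, for example xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml, where the file names are placeholders.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: maps clean XHTML headings and paragraphs onto named
     InDesign paragraph styles. Element and style names are illustrative;
     lists, images, and inline markup would need templates of their own. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xh="http://www.w3.org/1999/xhtml"
    exclude-result-prefixes="xh">

  <xsl:output method="xml" indent="yes"/>

  <!-- Wrap the whole chapter in a single story that InDesign can place. -->
  <xsl:template match="/">
    <Document>
      <Story>
        <xsl:apply-templates select="//xh:body/*"/>
      </Story>
    </Document>
  </xsl:template>

  <!-- Each XHTML block element becomes a paragraph range with an applied style. -->
  <xsl:template match="xh:h1 | xh:h2 | xh:p">
    <ParagraphStyleRange>
      <xsl:attribute name="AppliedParagraphStyle">
        <xsl:text>ParagraphStyle/</xsl:text>
        <xsl:choose>
          <xsl:when test="self::xh:h1">Chapter Title</xsl:when>
          <xsl:when test="self::xh:h2">Heading</xsl:when>
          <xsl:otherwise>Body Text</xsl:otherwise>
        </xsl:choose>
      </xsl:attribute>
      <CharacterStyleRange>
        <Content><xsl:value-of select="normalize-space(.)"/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>

</xsl:stylesheet>

The InDesign template would then supply the actual definitions for "Chapter Title", "Heading", and "Body Text", which is exactly the division of labour the annotations describe: structure travels in the XML, house styling stays in the layout file.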
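The annotations above also describe an ePub file as a "simply disguised zip archive" of XHTML plus a few declaration files. A minimal sketch of that anatomy follows; the chapter and package file names are placeholders, while the META-INF/container.xml shown follows the standard OCF layout that points a reading system at the package document.

mimetype                  contains application/epub+zip, stored first and uncompressed
META-INF/container.xml    points to the package document
OEBPS/content.opf         metadata, manifest, and spine (reading order)
OEBPS/chapter01.xhtml     the book content itself, as clean XHTML
OEBPS/styles.css          the ebook stylesheet

<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>

Tools like the eCub utility mentioned in the annotations generate the wrapper files automatically; the piece a publisher normally customizes is the CSS for headings and paragraph indents.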
Gary Edwards

Desktop Web Applications using Sproutcore | rapid apps group - low cost, ethical web development & e-commerce websites for tight budgets in the credit crunch - 0 views

  •  
    Good article discussing the rapid advance of a WebOS for Web Applications based on the WebKit JavaScript model. Author focuses on Apple's SproutCore - Object C framework, but provides a very broad scope of discussion. Interesting stuff concerning the relationship between JavaScript, the SproutCore Framework, and Ruby. I found the link to this at the ReadWriteWeb story, "The Future of the Desktop" ........ "Desktop web applications offer the convenience of desktop applications and the interconnected power of web applications. This article looks at what they are, how they may evolve and focuses on Sproutcore, an open source framework for building them: The Internet is still evolving and the familiar struggle over who will control the platform of future web applications is still ongoing. Companies like Microsoft and Adobe provide platforms that build slick web applications but their aim is to dominate with proprietary systems that will effectively replace the browser. On the other side you have Google and Apple who have developed or support open web standards for developing web applications. If the proprietary companies win, future web applications could be locked into their systems and the incredible innovation that has driven the web to date may begin to falter.
Paul Merrell

Sir Tim Berners-Lee on 'Reinventing HTML' - 0 views

    • Paul Merrell
       
      Berners-Lee gives the obligatory lip service to participation of "other stakeholders," but the stark reality is that W3C is the captive of the major browser developers. One may still credit W3C staff and Berners-Lee for what they have accomplished despite that reality, but in an organization that sells votes, the needs of "other stakeholders" will always be neglected.
  • Some things are clearer with hindsight of several years. It is necessary to evolve HTML incrementally. The attempt to get the world to switch to XML, including quotes around attribute values and slashes in empty tags and namespaces all at once didn't work. The large HTML-generating public did not move, largely because the browsers didn't complain. Some large communities did shift and are enjoying the fruits of well-formed systems, but not all. It is important to maintain HTML incrementally, as well as continuing a transition to well-formed world, and developing more power in that world.
  • The plan is, informed by Webforms, to extend HTML forms. At the same time, there is a work item to look at how HTML forms (existing and extended) can be thought of as XForms equivalents, to allow an easy escalation path. A goal would be to have an HTML forms language which is a superset of the existing HTML language, and a subset of an XForms language with added HTML compatibility.
  • ...7 more annotations...
  • There will be no dependency of HTML work on the XHTML2 work.
    • Paul Merrell
       
      He just confirms that the incremental migration from HTML forms to XForms is entirely a pie-in-the-sky aspiration, not a plan.
  • This is going to be a very major collaboration on a very important spec, one of the crown jewels of web technology. Even though hundreds of people will be involved, we are evolving the technology which millions going on billions will use in the future. There won't seem like enough thankyous to go around some days.
    • Paul Merrell
       
      This is the precise reason the major browser developers must be brought to heel rather than being catered to with a standard that serves only the needs of the browser developers and not the needs of users for interoperable web applications. CSS is in the web app page templates, not in the markup that can be exchanged by web apps. Why can't MediaWiki exchange page content with Drupal? It's because HTML really sucks big time as a data exchange format. All the power is in the CSS site templates, not in what users can stick in HTML forms.
    • Paul Merrell
       
      Bye-bye XForms.
    • Paul Merrell
       
      Perhaps a political reality. But I am 62 years old, have had three major heart attacks, and am still smoking cigarettes. I would like to experience interoperable web apps before I die. What does the incremental strategy do for me? I would much prefer to see Berners-Lee raising his considerable voice and stature against the dominance of the browser developers at W3C.
  • The perceived accountability of the HTML group has been an issue. Sometimes this was a departure from the W3C process, sometimes a sticking to it in principle, but not actually providing assurances to commenters. An issue was the formation of the breakaway WHAT WG, which attracted reviewers though it did not have a process or specific accountability measures itself.
  • Some things are very clear. It is really important to have real developers on the ground involved with the development of HTML. It is also really important to have browser makers intimately involved and committed. And also all the other stakeholders, including users and user companies and makers of related products.
Paul Merrell

Prepare to Hang Up the Phone, Forever - WSJ.com - 0 views

  • At decade's end, the trusty landline telephone could be nothing more than a memory. Telecom giants AT&T and Verizon Communications …
  • The two providers want to lay the crumbling POTS to rest and replace it with Internet Protocol-based systems that use the same wired and wireless broadband networks that bring Web access, cable programming and, yes, even your telephone service, into your homes. You may think you have a traditional landline because your home phone plugs into a jack, but if you have bundled your phone with Internet and cable services, you're making calls over an IP network, not twisted copper wires. California, Florida, Texas, Georgia, North Carolina, Wisconsin and Ohio are among states that agree telecom resources would be better redirected into modern telephone technologies and innovations, and will kill copper-based technologies in the next three years or so. Kentucky and Colorado are weighing similar laws, which force people to go wireless whether they want to or not. In Mantoloking, N.J., Verizon wants to replace the landline system, which Hurricane Sandy wiped out, with its wireless Voice Link. That would make it the first entire town to go landline-less, a move that isn't sitting well with all residents.
  • New Jersey's legislature, worried about losing data applications such as credit-card processing and alarm systems that wireless systems can't handle, wants a one-year moratorium to block that switch. It will vote on the measure this month. (Verizon tried a similar change in Fire Island, N.Y., when its copper lines were destroyed, but public opposition persuaded Verizon to install fiber-optic cable.) It's no surprise that landlines are unfashionable, considering many of us already have or are preparing to ditch them. More than 38% of adults and 45.5% of children live in households without a landline telephone, says the Centers for Disease Control and Prevention. That means two in every five U.S. homes, or 39%, are wireless, up from 26.6% three years ago. Moreover, a scant 8.5% of households relied only on a landline, while 2% were phoneless in 2013. Metropolitan residents have few worries about the end of landlines. High-speed wire and wireless services are abundant and work well, despite occasional dropped calls. Those living in rural areas, where cell towers are few and 4G capability limited, face different issues.
  • ...2 more annotations...
  • Safety is one of them. Call 911 from a landline and the emergency operator pinpoints your exact address, down to the apartment number. Wireless phones lack those specifics, and even with GPS navigation aren't as precise. Matters are worse in rural and even suburban areas that signals don't reach, sometimes because they're blocked by buildings or the landscape. That's of concern to the Federal Communications Commission, which oversees all forms of U.S. communications services. Universal access is a tenet of its mission, and, despite the state-by-state degradation of the mandate, it's unwilling to let telecom companies simply drop geographically undesirable customers. Telecom firms need FCC approval to ax services completely, and can't do so unless there is a viable competitor to pick up the slack. Last year AT&T asked to turn off its legacy network, which could create gaps in universal coverage and will force people off the grid to get a wireless provider.
  • AT&T and the FCC will soon begin trials to explore life without copper-wired landlines. Consumers will voluntarily test IP-connected networks and their impact on towns like Carbon Hills, Ala., population 2,071. They want to know how households will reach 911, how small businesses will connect to customers, how people with medical-monitoring devices or home alarms know they will always be connected to a reliable network, and what the costs are. "We cannot be a nation of opportunity without networks of opportunity," said FCC Chairman Tom Wheeler in unveiling the plan. "This pilot program will help us learn how fiber might be deployed where it is not now deployed…and how new forms of wireless can reach deep into the interior of rural America."
Paul Merrell

FBI Now Holding Up Michael Horowitz' Investigation into the DEA | emptywheel - 0 views

  • Man, at some point Congress is going to have to declare the FBI legally contemptuous and throw them in jail. They continue to refuse to cooperate with DOJ’s Inspector General, as they have been for basically 5 years. But in Michael Horowitz’ latest complaint to Congress, he adds a new spin: FBI is not only obstructing his investigation of the FBI’s management-impaired surveillance, now FBI is obstructing his investigation of DEA’s management-impaired surveillance. I first reported on DOJ IG’s investigation into DEA’s dragnet databases last April. At that point, the only dragnet we knew about was Hemisphere, which DEA uses to obtain years of phone records as well as location data and other details, before it then parallel constructs that data out of a defendant’s reach.
  • But since then, we’ve learned of what the government claims to be another database — that used to identify Shantia Hassanshahi in an Iranian sanctions case. After some delay, the government revealed that this was another dragnet, including just international calls. It claims that this database was suspended in September 2013 (around the time Hemisphere became public) and that it is no longer obtaining bulk records for it. According to the latest installment of Michael Horowitz’ complaints about FBI obstruction, he tried to obtain records on the DEA databases on November 20, 2014 (of note, during the period when the government was still refusing to tell even Judge Rudolph Contreras what the database implicating Hassanshahi was). FBI slow-walked production, but promised to provide everything to Horowitz by February 13, 2015. FBI has decided it has to keep reviewing the emails in question to see if there is grand jury, Title III electronic surveillance, and Fair Credit Reporting Act materials, which are the same categories of stuff FBI has refused in the past. So Horowitz is pointing to the language tied to DOJ’s appropriations for FY 2015 which (basically) defunded FBI obstruction. Only FBI continues to obstruct.
  • There’s one more question about this. As noted, this investigation is supposed to be about DEA’s databases. We’ve already seen that FBI uses Hemisphere (when I asked FBI for comment in advance of this February 4, 2014 article on FBI obstinance, Hemisphere was the one thing they refused all comment on). And obviously, FBI accessed another DEA database to go after Hassanshahi. So that may be the only reason why Horowitz needs the FBI’s cooperation to investigate the DEA’s dragnets. Plus, assuming FBI is parallel constructing these dragnets just like DEA is, I can understand why they’d want to withhold grand jury information, which would make that clear. Still, I can’t help but wonder — as I have in the past — whether these dragnets are all connected, a constantly moving shell game. That might explain why FBI is so intent on obstructing Horowitz again.
  •  
    Marcy Wheeler's speculation that various government databases simply move to another agency when they're brought to light is not without precedent. When Congress shut down DARPA's Total Information Awareness program, most of its software programs and databases were just moved to NSA.
Joint Plan UK

Hackers claim attacks on World Cup-related websites - 0 views

  •  
    Credit: Reuters/Damir Sagolj. The official match ball for the 2014 World Cup, the "Brazuca," is displayed on the table before a news conference at the Corinthians arena in Sao Paulo, June 11, 2014.
Paul Merrell

WASHINGTON: CIA admits it broke into Senate computers; senators call for spy chief's ouster | National Security & Defense | McClatchy DC - 0 views

  • An internal CIA investigation confirmed allegations that agency personnel improperly intruded into a protected database used by Senate Intelligence Committee staff to compile a scathing report on the agency’s detention and interrogation program, prompting bipartisan outrage and at least two calls for spy chief John Brennan to resign. “This is very, very serious, and I will tell you, as a member of the committee, someone who has great respect for the CIA, I am extremely disappointed in the actions of the agents of the CIA who carried out this breach of the committee’s computers,” said Sen. Saxby Chambliss, R-Ga., the committee’s vice chairman.
  • The rare display of bipartisan fury followed a three-hour private briefing by Inspector General David Buckley. His investigation revealed that five CIA employees (two lawyers and three information technology specialists) improperly accessed or “caused access” to a database that only committee staff were permitted to use. Buckley’s inquiry also determined that a CIA crimes report to the Justice Department alleging that the panel staff removed classified documents from a top-secret facility without authorization was based on “inaccurate information,” according to a summary of the findings prepared for the Senate and House intelligence committees and released by the CIA. In other conclusions, Buckley found that CIA security officers conducted keyword searches of the emails of staffers of the committee’s Democratic majority, and reviewed some of them, and that the three CIA information technology specialists showed “a lack of candor” in interviews with Buckley’s office.
  • The inspector general’s summary did not say who may have ordered the intrusion or when senior CIA officials learned of it. Following the briefing, some senators struggled to maintain their composure over what they saw as a violation of the constitutional separation of powers between an executive branch agency and its congressional overseers. “We’re the only people watching these organizations, and if we can’t rely on the information that we’re given as being accurate, then it makes a mockery of the entire oversight function,” said Sen. Angus King, an independent from Maine who caucuses with the Democrats. The findings confirmed charges by the committee chairwoman, Sen. Dianne Feinstein, D-Calif., that the CIA intruded into the database that by agreement was to be used by her staffers compiling the report on the harsh interrogation methods used by the agency on suspected terrorists held in secret overseas prisons under the George W. Bush administration. The findings also contradicted Brennan’s denials of Feinstein’s allegations, prompting two panel members, Sens. Mark Udall, D-Colo., and Martin Heinrich, D-N.M., to demand that the spy chief resign.
  • ...7 more annotations...
  • Another committee member, Sen. Ron Wyden, D-Ore., and some civil rights groups called for a fuller investigation. The demands clashed with a desire by President Barack Obama, other lawmakers and the CIA to move beyond the controversy over the “enhanced interrogation program” after Feinstein releases her committee’s report, which could come as soon as next week. Many members demanded that Brennan explain his earlier denial that the CIA had accessed the Senate committee database. “Director Brennan should make a very public explanation and correction of what he said,” said Sen. Carl Levin, D-Mich. He all but accused the Justice Department of a coverup by deciding not to pursue a criminal investigation into the CIA’s intrusion.
  • “I thought there might have been information that was produced after the department reached their conclusion,” he said. “What I understand, they have all of the information which the IG has.” He hinted that the scandal goes further than the individuals cited in Buckley’s report. “I think it’s very clear that CIA people knew exactly what they were doing and either knew or should’ve known,” said Levin, adding that he thought that Buckley’s findings should be referred to the Justice Department. A person with knowledge of the issue insisted that the CIA personnel who improperly accessed the database “acted in good faith,” believing that they were empowered to do so because they believed there had been a security violation. “There was no malicious intent. They acted in good faith believing they had the legal standing to do so,” said the knowledgeable person, who asked not to be further identified because they weren’t authorized to discuss the issue publicly. “But it did not conform with the legal agreement reached with the Senate committee.”
  • Feinstein called Brennan’s apology and his decision to submit Buckley’s findings to the accountability board “positive first steps.” “This IG report corrects the record and it is my understanding that a declassified report will be made available to the public shortly,” she said in a statement. “The investigation confirmed what I said on the Senate floor in March: CIA personnel inappropriately searched Senate Intelligence Committee computers in violation of an agreement we had reached, and I believe in violation of the constitutional separation of powers,” she said. It was not clear why Feinstein didn’t repeat her charges from March that the agency also may have broken the law and had sought to “thwart” her investigation into the CIA’s use of waterboarding (which simulates drowning), sleep deprivation and other harsh interrogation methods, tactics denounced by many experts as torture.
  • Buckley’s findings clashed with denials that Brennan issued only hours after Feinstein’s blistering Senate speech. “As far as the allegations of, you know, CIA hacking into, you know, Senate computers, nothing could be further from the truth. I mean, we wouldn’t do that. I mean, that’s - that’s just beyond the - you know, the scope of reason in terms of what we would do,” he said in an appearance at the Council on Foreign Relations. White House Press Secretary Josh Earnest issued a strong defense of Brennan, crediting him with playing an “instrumental role” in the administration’s fight against terrorism, in launching Buckley’s investigation and in looking for ways to prevent such occurrences in the future. Earnest was asked at a news briefing whether there was a credibility issue for Brennan, given his forceful denial in March. “Not at all,” he replied, adding that Brennan had suggested the inspector general’s investigation in the first place. And, he added, Brennan had taken the further step of appointing the accountability board to review the situation and the conduct of those accused of acting improperly to “ensure that they are properly held accountable for that conduct.”
  • The allegations and the separate CIA charge that the committee staff removed classified documents from the secret CIA facility in Northern Virginia without authorization were referred to the Justice Department for investigation. The department earlier this month announced that it had found insufficient evidence on which to proceed with criminal probes into either matter “at this time.” Thursday, Justice Department officials declined comment.
  • In her speech, Feinstein asserted that her staff found the material (known as the Panetta review, after former CIA Director Leon Panetta, who ordered it) in the protected database and that the CIA discovered the staff had it by monitoring its computers in violation of the user agreement. The inspector general’s summary, which was prepared for the Senate and the House intelligence committees, didn’t identify the CIA personnel who had accessed the Senate’s protected database. Furthermore, it said, the CIA crimes report to the Justice Department alleging that panel staffers had removed classified materials without permission was grounded on inaccurate information. The report is believed to have been sent by the CIA’s then acting general counsel, Robert Eatinger, who was a legal adviser to the interrogation program. “The factual basis for the referral was not supported, as the author of the referral had been provided inaccurate information on which the letter was based,” said the summary, noting that the Justice Department decided not to pursue the issue.
  • Christopher Anders, senior legislative counsel with the American Civil Liberties Union, criticized the CIA announcement, saying that “an apology isn’t enough.” “The Justice Department must refer the (CIA) inspector general’s report to a federal prosecutor for a full investigation into any crimes by CIA personnel or contractors,” said Anders.
  •  
    And no one but the lowest ranking staffer knew anything about it, not even the CIA lawyer who made the criminal referral to the Justice Dept., alleging that the Senate Intelligence Committee had accessed classified documents it wasn't authorized to access. So the Justice Dept. announces that there's insufficient evidence to warrant a criminal investigation. As though the CIA lawyer's allegations were not based on the unlawful surveillance of the Senate Intelligence Committee's network.  Can't we just get an official announcement that Attorney General Holder has decided that there shall be a cover-up? 
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept - 0 views

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • ...2 more annotations...
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.)
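The excerpt above describes Let's Encrypt automating proof of domain control. As a rough, hedged illustration only, the sketch below mimics the idea behind an HTTP-01-style challenge: the certificate authority issues a random token, the site publishes a value derived from it at a well-known URL, and the CA checks it before issuing a certificate. This is not the real ACME protocol or the Let's Encrypt client; the function names, the hash construction, and the single-step flow are assumptions made for illustration.

```python
# Simplified sketch (assumed, not the ACME wire protocol) of the kind of
# domain-validation challenge Let's Encrypt automates.
import hashlib
import secrets


def issue_challenge(domain: str) -> tuple[str, str]:
    """CA side: mint a random token the site operator must publish to prove control."""
    token = secrets.token_urlsafe(32)
    expected = hashlib.sha256((token + domain).encode()).hexdigest()
    return token, expected


def respond_to_challenge(token: str, domain: str) -> str:
    """Site side: an agent would publish this value at
    http://<domain>/.well-known/acme-challenge/<token> for the CA to fetch."""
    return hashlib.sha256((token + domain).encode()).hexdigest()


def validate(expected: str, observed: str) -> bool:
    """CA side: compare what was fetched from the site with what was expected."""
    return secrets.compare_digest(expected, observed)


if __name__ == "__main__":
    token, expected = issue_challenge("example.com")
    observed = respond_to_challenge(token, "example.com")
    print("domain control proven:", validate(expected, observed))
```

In the real system the client also generates a key pair and a certificate signing request, and the CA issues a certificate only after a challenge like this succeeds; the point of Let's Encrypt is that all of this happens automatically.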
Paul Merrell

The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle - 0 views

  • AMERICAN AND BRITISH spies hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe, according to top-secret documents provided to The Intercept by National Security Agency whistleblower Edward Snowden. The hack was perpetrated by a joint unit consisting of operatives from the NSA and its British counterpart Government Communications Headquarters, or GCHQ. The breach, detailed in a secret 2010 GCHQ document, gave the surveillance agencies the potential to secretly monitor a large portion of the world’s cellular communications, including both voice and data. The company targeted by the intelligence agencies, Gemalto, is a multinational firm incorporated in the Netherlands that makes the chips used in mobile phones and next-generation credit cards. Among its clients are AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world. The company operates in 85 countries and has more than 40 manufacturing facilities. One of its three global headquarters is in Austin, Texas and it has a large factory in Pennsylvania. In all, Gemalto produces some 2 billion SIM cards a year. Its motto is “Security to be Free.”
  • With these stolen encryption keys, intelligence agencies can monitor mobile communications without seeking or receiving approval from telecom companies and foreign governments. Possessing the keys also sidesteps the need to get a warrant or a wiretap, while leaving no trace on the wireless provider’s network that the communications were intercepted. Bulk key theft additionally enables the intelligence agencies to unlock any previously encrypted communications they had already intercepted, but did not yet have the ability to decrypt.
  • Leading privacy advocates and security experts say that the theft of encryption keys from major wireless network providers is tantamount to a thief obtaining the master ring of a building superintendent who holds the keys to every apartment. “Once you have the keys, decrypting traffic is trivial,” says Christopher Soghoian, the principal technologist for the American Civil Liberties Union. “The news of this key theft will send a shock wave through the security community.”
  • ...2 more annotations...
  • According to one secret GCHQ slide, the British intelligence agency penetrated Gemalto’s internal networks, planting malware on several computers, giving GCHQ secret access. We “believe we have their entire network,” the slide’s author boasted about the operation against Gemalto. Additionally, the spy agency targeted unnamed cellular companies’ core networks, giving it access to “sales staff machines for customer information and network engineers machines for network maps.” GCHQ also claimed the ability to manipulate the billing servers of cell companies to “suppress” charges in an effort to conceal the spy agency’s secret actions against an individual’s phone. Most significantly, GCHQ also penetrated “authentication servers,” allowing it to decrypt data and voice communications between a targeted individual’s phone and his or her telecom provider’s network. A note accompanying the slide asserted that the spy agency was “very happy with the data so far and [was] working through the vast quantity of product.”
  • The U.S. and British intelligence agencies pulled off the encryption key heist in great stealth, giving them the ability to intercept and decrypt communications without alerting the wireless network provider, the foreign government or the individual user that they have been targeted. “Gaining access to a database of keys is pretty much game over for cellular encryption,” says Matthew Green, a cryptography specialist at the Johns Hopkins Information Security Institute. The massive key theft is “bad news for phone security. Really bad news.”
  •  
    Remember all those NSA claims that no evidence of their misbehavior has emerged? That one should never take wing again. Monitoring call content without the involvement of any court? Without a warrant? Without probable cause?  Was there even any Congressional authorization?  Wiretapping unequivocally requires a judicially-approved search warrant. It's going to be very interesting to learn the government's argument for this misconduct's legality. 
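To see why the researchers quoted above call a stolen key database "game over," here is a deliberately simplified sketch. It is not GSM's real cryptography (which uses operator algorithms such as COMP128 or Milenage for key derivation and A5/x or KASUMI for encryption); the HMAC derivation and hash-based keystream below are stand-in assumptions. The point it illustrates is the one in the article: the phone and the network both derive the session key from the SIM's secret key Ki plus a challenge sent in the clear, so anyone who holds Ki and passively records the airwaves can reproduce the session key and decrypt the traffic without any help from the carrier and without leaving a trace.

```python
# Illustrative stand-in for cellular encryption (assumed algorithms, not GSM's):
# shows why possession of the SIM's secret key Ki defeats the encryption.
import hashlib
import hmac


def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    """Phone and network both derive the session key from Ki and the RAND
    challenge that the network broadcasts in the clear during authentication."""
    return hmac.new(ki, rand, hashlib.sha256).digest()


def keystream(session_key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(session_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def xor_cipher(session_key: bytes, data: bytes) -> bytes:
    """Symmetric stream cipher stand-in: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(session_key, len(data))))


if __name__ == "__main__":
    ki = bytes.fromhex("00112233445566778899aabbccddeeff")  # secret stored on the SIM (hypothetical value)
    rand = bytes(16)                                         # challenge, visible to any listener
    over_the_air = xor_cipher(derive_session_key(ki, rand), b"hello, world")
    # An eavesdropper who stole Ki sees RAND on the air, repeats the derivation,
    # and recovers the plaintext:
    print(xor_cipher(derive_session_key(ki, rand), over_the_air))
```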
Paul Merrell

Facebook to pay $5bn fine as regulator settles Cambridge Analytica complaint | Technology | The Guardian - 0 views

  • Facebook will pay a record $5bn (£4bn) penalty in the US for “deceiving” users about their ability to keep personal information private, after a year-long investigation into the Cambridge Analytica data breach. The Federal Trade Commission (FTC), the US consumer regulator, also announced a lawsuit against Cambridge Analytica and proposed settlements with the data analysis firm’s former chief executive Alexander Nix and its app developer Aleksandr Kogan. The $5bn fine for Facebook dwarfs the previous record for the largest fine handed down by the FTC for violation of consumers’ privacy, which was a $275m penalty for consumer credit agency Equifax.
Paul Merrell

US Court Vindicates Snowden Leaks - Rules NSA Mass Surveillance "Illegal" & Officials Lied  | Zero Hedge - 3 views

  • Though we doubt the broader public needed convincing, this is a significant milestone nonetheless, coming a month after Trump shocked reporters by suggesting he could take a look at pardoning Edward Snowden: Seven years after former National Security Agency contractor Edward Snowden blew the whistle on the mass surveillance of Americans’ telephone records, an appeals court has found the program was unlawful - and that the U.S. intelligence leaders who publicly defended it were not telling the truth.
  • And the ACLU said “Today’s ruling is a victory for our privacy rights,” adding that it “makes plain that the NSA’s bulk collection of Americans’ phone records violated the Constitution.” Crucially, the three-judge panel on the 9th Circuit specifically credited Edward Snowden for exposing it, as Politico notes: Judge Marsha Berzon's opinion, which contains a half-dozen references to the role of former NSA contractor and whistleblower Edward Snowden in disclosing the NSA metadata program, concludes that the "bulk collection" of such data violated the Foreign Intelligence Surveillance Act.
Paul Merrell

Is Apple an Illegal Monopoly? | OneZero - 0 views

  • That’s not a bug. It’s a function of Apple policy. With some exceptions, the company doesn’t let users pay app makers directly for their apps or digital services. They can only pay Apple, which takes a 30% cut of all revenue and then passes 70% to the developer. (For subscription services, which account for the majority of App Store revenues, that 30% cut drops to 15% after the first year.) To tighten its grip, Apple prohibits the affected apps from even telling users how they can pay their creators directly. In 2018, unwilling to continue paying the “Apple tax,” Netflix followed Spotify and Amazon’s Kindle books app in pulling in-app purchases from its iOS app. Users must now sign up elsewhere, such as on the company’s website, in order for the app to become usable. Of course, these brands are big enough to expect that many users will seek them out anyway.
  • Smaller app developers, meanwhile, have little choice but to play by Apple’s rules. That’s true even when they’re competing with Apple’s own apps, which pay no such fees and often enjoy deeper access to users’ devices and information. Now, a handful of developers are speaking out about it — and government regulators are beginning to listen. David Heinemeier Hansson, the co-founder of the project management software company Basecamp, told members of the U.S. House antitrust subcommittee in January that navigating the App Store’s fees, rules, and review processes can feel like a “Kafka-esque nightmare.” One of the world’s most beloved companies, Apple has long enjoyed a reputation for user-friendly products, and it has cultivated an image as a high-minded protector of users’ privacy. The App Store, launched in 2008, stands as one of its most underrated inventions; it has powered the success of the iPhone—perhaps the most profitable product in human history. The concept was that Apple and developers could share in one another’s success with the iPhone user as the ultimate beneficiary.
  • But critics say that gauzy success tale belies the reality of a company that now wields its enormous market power to bully, extort, and sometimes even destroy rivals and business partners alike. The iOS App Store, in their telling, is a case study in anti-competitive corporate behavior. And they’re fighting to change that — by breaking its choke hold on the Apple ecosystem.
  • ...4 more annotations...
  • Whether Apple customers have a real choice in mobile platforms, once they’ve bought into the company’s ecosystem, is another question. In theory, they could trade in their pricey hardware for devices that run Android, which offers equivalents of many iOS features and apps. In reality, Apple has built its empire on customer lock-in: making its own gadgets and services work seamlessly with one another, but not with those of rival companies. Tasks as simple as texting your friends can become a migraine-inducing mess when you switch from iOS to Android. The more Apple products you buy, the more onerous it becomes to abandon ship.
  • The case against Apple goes beyond iOS. At a time when Apple is trying to reinvent itself as a services company to offset plateauing hardware sales — pushing subscriptions to Apple Music, Apple TV+, Apple News+, and Apple Arcade, as well as its own credit card — the antitrust concerns are growing more urgent. Once a theoretical debate, the question of whether its App Store constitutes an illegal monopoly is now being actively litigated on multiple fronts.
  • The company faces an antitrust lawsuit from consumers; a separate antitrust lawsuit from developers; a formal antitrust complaint from Spotify in the European Union; investigations by the Federal Trade Commission and the Department of Justice; and an inquiry by the antitrust subcommittee of the U.S. House of Representatives. At stake are not only Apple’s profits, but the future of mobile software. Apple insists that it isn’t a monopoly, and that it strives to make the app store a fair and level playing field even as its own apps compete on that field. But in the face of unprecedented scrutiny, there are signs that the famously stubborn company may be feeling the pressure to prove it.
  • Tile is hardly alone in its grievances. Apple’s penchant for copying key features of third-party apps and integrating them into its operating system is so well-known among developers that it has a name: “Sherlocking.” It’s a reference to the time—in the early 2000s—when Apple kneecapped a popular third-party web-search interface for Mac OS X, called Watson. Apple built virtually all of Watson’s functionality into its own feature, called Sherlock. In a 2006 blog post, Watson’s developer, Karelia Software, recalled how Apple’s then-CEO Steve Jobs responded when they complained about the company’s 2002 power play. “Here’s how I see it,” Jobs said, according to Karelia founder Dan Wood’s loose paraphrase. “You know those handcars, the little machines that people stand on and pump to move along on the train tracks? That’s Karelia. Apple is the steam train that owns the tracks.” From an antitrust standpoint, the metaphor is almost too perfect. It was the monopoly power of railroads in the late 19th century — and their ability to make or break the businesses that used their tracks — that spurred the first U.S. antitrust regulations. There’s another Jobs quote that’s relevant here. Referencing Picasso’s famous saying, “Good artists copy, great artists steal,” Jobs said of Apple in 2006: “We have always been shameless about stealing great ideas.” Company executives later tried to finesse the quote’s semantics, but there’s no denying that much of iOS today is built on ideas that were not originally Apple’s.
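For concreteness, the commission structure described in the first excerpt of this item (a 30% cut, falling to 15% for subscriptions after the first year) works out as in the short sketch below. The function name and the flat, exception-free treatment are simplifying assumptions; real App Store accounting involves additional rules.

```python
# Back-of-the-envelope sketch of the revenue split described above:
# 30% commission, reduced to 15% for subscriptions after the first year.
# Assumption: no taxes, currency effects, or other program-specific adjustments.
def developer_proceeds(gross: float, subscription: bool = False, year: int = 1) -> float:
    commission = 0.15 if (subscription and year > 1) else 0.30
    return round(gross * (1 - commission), 2)


if __name__ == "__main__":
    print(developer_proceeds(9.99))                              # one-time sale: 6.99
    print(developer_proceeds(9.99, subscription=True, year=1))   # first-year subscription: 6.99
    print(developer_proceeds(9.99, subscription=True, year=2))   # renewal year: 8.49
```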
Paul Merrell

Facebook to Pay $550 Million to Settle Facial Recognition Suit - The New York Times - 2 views

  • Facebook said on Wednesday that it had agreed to pay $550 million to settle a class-action lawsuit over its use of facial recognition technology in Illinois, giving privacy groups a major victory that again raised questions about the social network’s data-mining practices. The case stemmed from Facebook’s photo-labeling service, Tag Suggestions, which uses face-matching software to suggest the names of people in users’ photos. The suit said the Silicon Valley company violated an Illinois biometric privacy law by harvesting facial data for Tag Suggestions from the photos of millions of users in the state without their permission and without telling them how long the data would be kept. Facebook has said the allegations have no merit. Under the agreement, Facebook will pay $550 million to eligible Illinois users and for the plaintiffs’ legal fees. The sum dwarfs the $380.5 million that the Equifax credit reporting agency agreed this month to pay to settle a class-action case over a 2017 consumer data breach.
Paul Merrell

Barr Ignores Lawyers' Calls to Go Slow on Google Antitrust Case - The New York Times - 0 views

  • The Justice Department plans to bring an antitrust case against Google as soon as this month, after Attorney General William P. Barr overruled career lawyers who said they needed more time to build a strong case against one of the world’s wealthiest, most formidable technology companies, according to five people briefed on internal department conversations. Justice Department officials told lawyers involved in the antitrust inquiry into Alphabet, the parent company of Google and YouTube, to wrap up their work by the end of September, according to three of the people. Most of the 40-odd lawyers who had been working on the investigation opposed the deadline. Some said they would not sign the complaint, and several of them left the case this summer. Some argued this summer in a memo that ran hundreds of pages that they could bring a strong case but needed more time, according to people who described the document. Disagreement persisted among the team over how broad the complaint should be and what Google could do to resolve the problems the government uncovered. The lawyers viewed the deadline as arbitrary. While there were disagreements about tactics, career lawyers also expressed concerns that Mr. Barr wanted to announce the case in September to take credit for action against a powerful tech company under the Trump administration.