Future of the Web: Group items tagged "over"

Gonzalo San Gil, PhD.

Nikto - OWASP - 0 views

  •  
    "Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 3500 potentially dangerous files/CGIs, versions on over 900 servers, and version specific problems on over 250 servers. Scan items and plugins are frequently updated and can be automatically updated (if desired). "
Paul Merrell

Revealed: How DOJ Gagged Google over Surveillance of WikiLeaks Volunteer - The Intercept - 0 views

  • The Obama administration fought a legal battle against Google to secretly obtain the email records of a security researcher and journalist associated with WikiLeaks. Newly unsealed court documents obtained by The Intercept reveal the Justice Department won an order forcing Google to turn over more than one year’s worth of data from the Gmail account of Jacob Appelbaum (pictured above), a developer for the Tor online anonymity project who has worked with WikiLeaks as a volunteer. The order also gagged Google, preventing it from notifying Appelbaum that his records had been provided to the government. The surveillance of Appelbaum’s Gmail account was tied to the Justice Department’s long-running criminal investigation of WikiLeaks, which began in 2010 following the transparency group’s publication of a large cache of U.S. government diplomatic cables. According to the unsealed documents, the Justice Department first sought details from Google about a Gmail account operated by Appelbaum in January 2011, triggering a three-month dispute between the government and the tech giant. Government investigators demanded metadata records from the account showing email addresses of those with whom Appelbaum had corresponded between the period of November 2009 and early 2011; they also wanted to obtain information showing the unique IP addresses of the computers he had used to log in to the account.
  • The Justice Department argued in the case that Appelbaum had “no reasonable expectation of privacy” over his email records under the Fourth Amendment, which protects against unreasonable searches and seizures. Rather than seeking a search warrant that would require it to show probable cause that he had committed a crime, the government instead sought and received an order to obtain the data under a lesser standard, requiring only “reasonable grounds” to believe that the records were “relevant and material” to an ongoing criminal investigation. Google repeatedly attempted to challenge the demand, and wanted to immediately notify Appelbaum that his records were being sought so he could have an opportunity to launch his own legal defense. Attorneys for the tech giant argued in a series of court filings that the government’s case raised “serious First Amendment concerns.” They noted that Appelbaum’s records “may implicate journalistic and academic freedom” because they could “reveal confidential sources or information about WikiLeaks’ purported journalistic or academic activities.” However, the Justice Department asserted that “journalists have no special privilege to resist compelled disclosure of their records, absent evidence that the government is acting in bad faith,” and refused to concede Appelbaum was in fact a journalist. It claimed it had acted in “good faith throughout this criminal investigation, and there is no evidence that either the investigation or the order is intended to harass the … subscriber or anyone else.” Google’s attempts to fight the surveillance gag order angered the government, with the Justice Department stating that the company’s “resistance to providing the records” had “frustrated the government’s ability to efficiently conduct a lawful criminal investigation.”
  • Google accused the government of hyperbole and argued that the backlash over the Twitter order did not justify secrecy related to the Gmail surveillance. “Rather than demonstrating how unsealing the order will harm its well-publicized investigation, the government lists a parade of horribles that have allegedly occurred since it unsealed the Twitter order, yet fails to establish how any of these developments could be further exacerbated by unsealing this order,” wrote Google’s attorneys. “The proverbial toothpaste is out of the tube, and continuing to seal a materially identical order will not change it.” But Google’s attempt to overturn the gag order was denied by magistrate judge Ivan D. Davis in February 2011. The company launched an appeal against that decision, but this too was rebuffed, in March 2011, by District Court judge Thomas Selby Ellis, III.
  • ...4 more annotations...
  • The Justice Department wanted to keep the surveillance secret largely because of an earlier public backlash over its WikiLeaks investigation. In January 2011, Appelbaum and other WikiLeaks volunteers – including Icelandic parliamentarian Birgitta Jonsdottir – were notified by Twitter that the Justice Department had obtained data about their accounts. This disclosure generated widespread news coverage and controversy; the government says in the unsealed court records that it “failed to anticipate the degree of damage that would be caused” by the Twitter disclosure and did not want to “exacerbate this problem” when it went after Appelbaum’s Gmail data. The court documents show the Justice Department said the disclosure of its Twitter data grab “seriously jeopardized the [WikiLeaks] investigation” because it resulted in efforts to “conceal evidence” and put public pressure on other companies to resist similar surveillance orders. It also claimed that officials named in the subpoena ordering Twitter to turn over information were “harassed” after a copy was published by Intercept co-founder Glenn Greenwald at Salon in 2011. (The only specific evidence of the alleged harassment cited by the government is an email that was sent to an employee of the U.S. Attorney’s office that purportedly said: “You guys are fucking nazis trying to controll [sic] the whole fucking world. Well guess what. WE DO NOT FORGIVE. WE DO NOT FORGET. EXPECT US.”)
  • The government agreed to unseal some of the court records on Apr. 1 this year, and they were apparently turned over to Appelbaum on May 14 through a notification sent to his Gmail account. The files were released on condition that they would contain some redactions, which are bizarre and inconsistent, in some cases censoring the name of “WikiLeaks” from cited public news reports. Not all of the documents in the case – such as the original surveillance orders contested by Google – were released as part of the latest disclosure. Some contain “specific and sensitive details of the investigation” and “remain properly sealed while the grand jury investigation continues,” according to the court records from April this year. Appelbaum, an American citizen who is based in Berlin, called the case “a travesty that continues at a slow pace” and said he felt it was important to highlight “the absolute madness in these documents.”
  • He told The Intercept: “After five years, receiving such legal documents is neither a shock nor a needed confirmation. … Will we ever see the full documents about our respective cases? Will we even learn the names of those signing so-called legal orders against us in secret sealed documents? Certainly not in a timely manner and certainly not in a transparent, just manner.” The 32-year-old, who has recently collaborated with Intercept co-founder Laura Poitras to report revelations about National Security Agency surveillance for German news magazine Der Spiegel, said he plans to remain in Germany “in exile, rather than returning to the U.S. to experience more harassment of a less than legal kind.”
  • “My presence in Berlin ensures that the cost of physically harassing me or politically harassing me is much higher than when I last lived on U.S. soil,” Appelbaum said. “This allows me to work as a journalist freely from daily U.S. government interference. It also ensures that any further attempts to continue this will be forced into the open through [a Mutual Legal Assistance Treaty] and other international processes. The German government is less likely to allow the FBI to behave in Germany as they do on U.S. soil.” The Justice Department’s WikiLeaks investigation is headed by prosecutors in the Eastern District of Virginia. Since 2010, the secretive probe has seen activists affiliated with WikiLeaks compelled to appear before a grand jury and the FBI attempting to infiltrate the group with an informant. Earlier this year, it was revealed that the government had obtained the contents of three core WikiLeaks staffers’ Gmail accounts as part of the investigation.
Gonzalo San Gil, PhD.

The War Over Control Of The Net Is A War Over Information Advantage | TorrentFreak - 0 views

  •  
    " Rick Falkvinge on February 15, 2015 C: 0 Opinion Throughout history, you can observe that many groups have fought over the information advantage - to know more about other people than those others know in return. Whoever has held the information advantage has usually risen to power."
Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read. Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too. DARPA, Northrup Grumman and Goldman Sachs In case you're wondering where Ladizinsky came from, he's a former employee of Northrup Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers. .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.  ..... Google acquires "Skynet" quantum computers from D-Wave According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over-the-top stuff, but: [i] this is subject matter I've maintained an interest in over the years and I was aware that working quantum computers were imminent; and [ii] the pedigree on this particular information does not trace to Scientific American, as stated in the article. I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited and was totally lacking in scientific validity and contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But on checking the linked Scientific American article, I noted that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more. That said, the InfoWars version is a rewrite that adds a good deal of sensationalist material not in the Nature/Scientific American version, so heightened caution is still in order. Check the reprinted Nature version before getting too excited: "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems. "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
Gonzalo San Gil, PhD.

EFF in 2015 - Annual Report - 0 views

  •  
    [The Electronic Frontier Foundation was founded in 1990 to protect the rights of technology users, a mission that expands dramatically as digital devices and networks transform modern life and culture. With over 25,000 dues-paying members around the world and a social media reach of well over 1 million followers across different social networks, EFF engages directly with digital users worldwide and provides leadership on cutting-edge issues of free expression, privacy, and human rights. Our annual report features reflections from several EFF staff members about some of our most significant efforts, as well as financial information for the fiscal year ending June 2015. To learn more, read our Year in Review series. ...]
Gary Edwards

Microsoft Office whips Google Docs: It's finally game over | Computerworld Blogs - 0 views

  •  
    "If there was ever any doubt about whether Microsoft or Google would win the war of office suites, there should be no longer. Within the last several weeks, Microsoft has pulled so far ahead that it's game over. Here's why. When it comes to which suite is more fully featured, there's never been any real debate: Microsoft Office wins hands down. Whether you're creating entire presentations, creating complicated word-processing documents, or even doing something as simple as handling text attributes, Office is a far better tool. Until the last few weeks, Google Docs had one significant advantage over Microsoft Office: It's available for Android and the iPad as well as PCs because it's Web-based. The same wasn't the case for Office. So if you wanted to use an office suite on all your mobile devices, Google Docs was the way to go. Google Docs lost that advantage when Microsoft released Office for the iPad. There's not yet a native version for Android tablets, but Microsoft is working on that, telling GeekWire, "Let me tell you conclusively: Yes, we are also building Android native applications for tablets for Word, Excel and PowerPoint." Google Docs is still superior to Office's Web-based version, but that's far less important than it used to be. There's no need to go with a Web-based office suite if a superior suite is available as a native apps on all platforms, mobile or otherwise. And Office's collaboration capabilities are quite considerable now. Of course, there's always the question of price. Google Docs is free. Microsoft Office isn't. But at $100 a year for up to five devices, or $70 a year for two, no one will be going broke paying for Microsoft Office. It's worth paying that relatively small price for a much better office suite. Google Docs won't die. It'll be around as second fiddle for a long time. But that's what it will always remain: a second fiddle to the better Microsoft Office."
  •  
    Google acquired "Writely", a small company in Portola Valley that pioneered document editing in a browser. Writely was perhaps the first cloud computing editor to go beyond simple HTML; eventually crafting some really cool CSS-JavaScript-JSON document layout and editing methods. But it can't edit native MSOffice documents. It converts them. There are more than a few problems with the Google Docs approach to editing advanced "compound" documents, but two stick out and are certain to give pause to anyone making the great transition from local workgroup computing to highly mobile, always-connected cloud computing. The first problem, certain to become a show stopper, is that Google converts documents to their native on-line format for editing and collaboration. And then they convert back. To many this isn't a problem. But if the document is part of a workflow or business process, conversion is a killer. There is an old saw affectionately known as "Reuters Law", dating back to the ODF-OXML document wars, that emphatically states: "Conversion breaks documents." The breakage includes both the visual layout of the document and the "compound" aspects and data connections that are internal to the document. Think of it this way. A business document that is part of a legacy Windows Workgroup workflow is opened up in gDocs. Google converts the document for editing purposes. The data and the workflow internals that bind the document to the local business system are broken on conversion. The look of the document is also visually shredded as the gDocs layout engine is applied. For all practical purposes, no matter what magic editing and collaboration value is added, a broken document means a broken business process. Let me say that again, with the emphasis of having witnessed this first hand during the year-long ODF transition trials the Commonwealth of Massachusetts conducted in 2005 and 2006. The business process broke every time a conversion was conducted "on a busines
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
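To ground the claim, here is what a minimal, valid XHTML document looks like; this is a generic sketch, not an excerpt from the article:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Chapter 1</title>
  </head>
  <body>
    <h1>Chapter 1</h1>
    <p>Because this is well-formed XML, any XML tool can process it.</p>
  </body>
</html>
```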
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
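A sketch of what that kind of class-based extension might look like in practice, picking up the earlier epigraph/summary example (the class names are invented for illustration):

```xml
<!-- Plain XHTML, with publisher-defined semantics carried in class attributes -->
<div class="chapter">
  <p class="epigraph">An opening quotation, distinguished from body text.</p>
  <p class="summary">A one-paragraph abstract of the chapter.</p>
  <p>Ordinary body text remains an ordinary paragraph.</p>
</div>
```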
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
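The anatomy is easy to verify: rename an .epub file to .zip and unpack it. A representative sketch of the one fixed file, META-INF/container.xml, which points the reading system at the package metadata (file names under OEBPS/ vary by producer):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- META-INF/container.xml. Alongside it in the zip archive sit:
     mimetype, OEBPS/content.opf (book metadata), OEBPS/toc.ncx
     (navigation), and OEBPS/*.xhtml (the book content itself). -->
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```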
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
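A hedged sketch of that unpacked structure (directory names as found in CS4-era IDML; exact contents vary by document):

```
book.idml, unzipped:
  designmap.xml      -- ties all the parts together
  MasterSpreads/     -- master page definitions
  Spreads/           -- layout geometry, one file per spread
  Stories/           -- the text content, one XML file per story
  Resources/         -- Styles.xml, Graphic.xml, Fonts.xml, Preferences.xml
  META-INF/          -- package manifest
```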
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
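For instance, the cleanup described here turns presentation-oriented export tagging into structural markup; a schematic before/after (InDesign's actual export class names vary with settings):

```xml
<!-- Before: the export marks list items as styled paragraphs -->
<p class="bullet-list-item">First point</p>
<p class="bullet-list-item">Second point</p>

<!-- After: proper XHTML structure, independent of any one tool -->
<ul>
  <li>First point</li>
  <li>Second point</li>
</ul>
```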
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
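A heavily simplified sketch of the shape such a transformation takes. The ICML fragment below follows InDesign's general ParagraphStyleRange/Content pattern, but the style names are invented and this is not the SFU script itself; a real stylesheet would also emit the Story wrapper and handle headings, lists, and inline markup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Run with, for example:
       xsltproc xhtml-to-icml.xsl chapter1.xhtml > chapter1.icml  -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xh="http://www.w3.org/1999/xhtml">

  <!-- Map each XHTML paragraph to an ICML paragraph range,
       applying a paragraph style defined in the InDesign template. -->
  <xsl:template match="xh:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange
          AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
        <Content><xsl:value-of select="."/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>

  <!-- Drop any text not reached through an explicit rule. -->
  <xsl:template match="text()"/>
</xsl:stylesheet>
```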
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
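The stylesheet work mentioned here really is small. An invented example of the sort of book-typography rules involved (not the authors' actual file):

```css
/* Book-style paragraphs: indented, with no vertical gap between them */
p { margin: 0; text-indent: 1.5em; }
/* By convention, the first paragraph after a heading is not indented */
h1 + p, h2 + p { text-indent: 0; }
h1 { font-size: 1.5em; margin: 1em 0 0.5em; }
h2 { font-size: 1.2em; margin: 1em 0 0.5em; }
```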
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an SGML application.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Audio-Video Production On Linux: Best Software - Datamation - 0 views

  •  
    "Over the years, I've heard both Windows users and Linux enthusiasts make the claim that professional media production on Linux is impossible. While there may be some workflows so over-engineered that legacy software is a must, I firmly believe that, with effort, using Linux for media production is doable."
Paul Merrell

The Fundamentals of US Surveillance: What Edward Snowden Never Told Us? | Global Resear... - 0 views

  • Former US intelligence contractor Edward Snowden’s revelations rocked the world. According to his detailed reports, the US had launched massive spying programs and was scrutinizing the communications of American citizens in a manner which could only be described as extreme and intense. The US’s reaction was swift and to the point. “Nobody is listening to your telephone calls,” President Obama said when asked about the NSA. As quoted in The Guardian, Obama went on to say that surveillance programs were “fully overseen not just by Congress but by the Fisa court, a court specially put together to evaluate classified programs to make sure that the executive branch, or government generally, is not abusing them”. However, it appears that Snowden may have missed a pivotal part of the US surveillance program. And in stating that “nobody” is listening to our calls, President Obama may have been fudging quite a bit.
  • In fact, Great Britain maintains a “listening post” at NSA HQ. The laws restricting live wiretaps do not apply to foreign countries, and thus this listening post is not subject to US law. In other words, the restrictions upon wiretaps, etc. do not apply to the British listening post. So when Great Britain hands over the recordings to the NSA, technically speaking, a law is not being broken and, technically speaking, the US is not eavesdropping on our each and every call. It is Great Britain which is doing the eavesdropping and turning over these records to US intelligence. According to John Loftus, formerly an attorney with the Department of Justice and author of a number of books concerning US intelligence activities, back in the late seventies the USDOJ issued a memorandum proposing an amendment to FISA. Loftus, who recalls seeing the memo, stated in conversation this week that the DOJ proposed inserting the words “by the NSA” into the FISA law so the scope of the law would only restrict surveillance by the NSA, not by the British. Any subsequent sharing of the data culled through the listening posts was strictly outside the arena of FISA. Obama was less than forthcoming when he insisted that “What I can say unequivocally is that if you are a US person, the NSA cannot listen to your telephone calls, and the NSA cannot target your emails … and have not.”
  • According to Loftus, the NSA is indeed listening as Great Britain is turning over the surveillance records en masse to that agency. Loftus states that the arrangement is reciprocal, with the US maintaining a parallel listening post in Great Britain. In an interview this past week, Loftus told this reporter that he believes that Snowden simply did not know about the arrangement between Britain and the US. As a contractor, said Loftus, Snowden would not have had access to this information and thus his detailed reports on the extent of US spying, including such programs as XKeyscore, which analyzes internet data based on global demographics, and PRISM, under which the telecommunications companies, such as Google, Facebook, et al., are mandated to collect our communications, missed the critical issue of the FISA loophole.
  • ...2 more annotations...
  • U.S. government officials have defended the program by asserting it cannot be used on domestic targets without a warrant. But once again, the FISA courts and their super-secret warrants do not apply to foreign government surveillance of US citizens. So all this Sturm und Drang about whether or not the US is eavesdropping on our communications is, in fact, irrelevant and diversionary.
  • In fact, the USA Freedom Act reinstituted a number of the surveillance protocols of Section 215, including authorization for roving wiretaps and tracking “lone wolf terrorists.” While mainstream media heralded the passage of the bill as restoring privacy rights which were shredded under 215, privacy advocates have maintained that the bill will do little, if anything, to reverse the surveillance situation in the US. The NSA went on the record as supporting the Freedom Act, stating it would end bulk collection of telephone metadata. However, in light of the reciprocal agreement between the US and Great Britain, the entire hoopla over NSA surveillance, Section 215, FISA courts and the USA Freedom Act could be seen as a giant smokescreen. If Great Britain is collecting our real-time phone conversations and turning them over to the NSA, outside the realm or reach of the above stated laws, then all this posturing over the privacy rights of US citizens and surveillance laws expiring and being resurrected doesn’t amount to a hill of CDs.
Paul Merrell

Reset The Net - Privacy Pack - 1 views

  • This June 5th, I pledge to take strong steps to protect my freedom from government mass surveillance. I expect the services I use to do the same.
  • Fight for the Future and Center for Rights will contact you about future campaigns. Privacy Policy
  •  
    I wound up joining this campaign at the urging of the ACLU after checking the Privacy Policy. The Reset the Net campaign seems to be endorsed by a lot of change-oriented groups, from the ACLU to Greenpeace to the Pirate Party. A fair number of groups with a Progressive agenda, but certainly not limited to them. The right answer to that situation is to urge other groups to endorse, not to avoid the campaign. Single-issue coalition-building is all about focusing on an area of agreement rather than worrying about who you are rubbing elbows with. I have been looking for a bipartisan group that's tackling government surveillance issues via mass actions but has no corporate sponsors. This might be the one. The reason: Corporate types like Google have no incentive to really butt heads with the government voyeurs. They are themselves engaged in massive surveillance of their users and certainly will not carry the battle for digital privacy over to the private sector. But this *is* a battle over digital privacy, and legally defining user privacy rights in the private sector is just as important as cutting back on government surveillance. As we have learned through the Snowden disclosures, what the private internet companies have, the NSA can and does get. The big internet services successfully pushed in the U.S. for authorization to publish more numbers about how many times they pass private data to the government, but went no farther. They wanted to be able to say they did something, but there's a revolving door of staffers between the NSA and the big internet companies, and the internet service companies' data is an open book to the NSA. The big internet services are not champions of their users' privacy. If they were, they would be featuring end-to-end encryption with encryption keys unique to each user and unknown to the companies. Like some startups in Europe are doing. E.g., the Wuala.com filesync service in Switzerland (first 5 GB of storage free). Compare tha
Paul Merrell

Bankrolled by broadband donors, lawmakers lobby FCC on net neutrality | Ars Technica - 1 views

  • The 28 House members who lobbied the Federal Communications Commission to drop net neutrality this week have received, on average, more than twice as much in campaign contributions from the broadband sector as the average House member. These lawmakers, including the top House leadership, warned the FCC that regulating broadband like a public utility "harms" providers, would be "fatal to the Internet," and could "limit economic freedom." According to research provided Friday by Maplight, the 28 House members received, on average, $26,832 from the "cable & satellite TV production & distribution" sector over a two-year period ending in December. According to the data, that's 2.3 times more than the House average of $11,651. What's more, one of the lawmakers who told the FCC that he had "grave concern" (PDF) about the proposed regulation took more money from that sector than any other member of the House. Rep. Greg Walden (R-OR) was the top sector recipient, netting more than $109,000 over the two-year period, the Maplight data shows.
  • Dan Newman, cofounder and president of Maplight, the California research group that reveals money in politics, said the figures show that "it's hard to take seriously politicians' claims that they are acting in the public interest when their campaigns are funded by companies seeking huge financial benefits for themselves." Signing a letter to the FCC along with Walden, who chairs the House Committee on Energy and Commerce, were three other key members of the same committee: Reps. Fred Upton (R-MI), Robert Latta (R-OH), and Marsha Blackburn (R-TN). Over the two-year period, Upton took in $65,000, Latta took $51,000, and Blackburn took $32,500. In a letter (PDF) those representatives sent to the FCC two days before Thursday's raucous FCC net neutrality hearing, the four wrote that they had "grave concern" over the FCC's consideration of "reclassifying Internet broadband service as an old-fashioned 'Title II common carrier service.'" The letter added that a switchover "harms broadband providers, the American economy, and ultimately broadband consumers, actually doing so would be fatal to the Internet as we know it."
  • Not every one of the 28 members who publicly lobbied the FCC against net neutrality in advance of Thursday's FCC public hearing received campaign financing from the industry. One representative took no money: Rep. Nick Rahall (D-WV). In all, the FCC received at least three letters from House lawmakers with 28 signatures urging caution on classifying broadband as a telecommunications service, which would open up the sector to stricter "common carrier" rules, according to letters the members made publicly available. The US has long applied common carrier status to the telephone network, providing justification for universal service obligations that guarantee affordable phone service to all Americans and other rules that promote competition and consumer choice. Some consumer advocates say that common carrier status is needed for the FCC to impose strong network neutrality rules that would force ISPs to treat all traffic equally, not degrading competing services or speeding up Web services in exchange for payment. ISPs have argued that common carrier rules would saddle them with too much regulation and would force them to spend less on network upgrades and be less innovative.
  • Of the 28 House members signing on to the three letters, Republicans received, on average, $59,812 from the industry over the two-year period compared to $13,640 for Democrats, according to the Maplight data. Another letter (PDF) sent to the FCC this week from four top members of the House, including Speaker John Boehner (R-OH), Majority Leader Eric Cantor (R-VA), Majority Whip Kevin McCarthy (R-CA), and Republican Conference Chair Cathy McMorris Rodgers (R-WA), argued in favor of cable companies: "We are writing to respectfully urge you to halt your consideration of any plan to impose antiquated regulation on the Internet, and to warn that implementation of such a plan will needlessly inhibit the creation of American private sector jobs, limit economic freedom and innovation, and threaten to derail one of our economy's most vibrant sectors," they wrote. Over the two-year period, Boehner received $75,450; Cantor got $80,800; McCarthy got $33,000; and McMorris Rodgers got $31,500.
  • The third letter (PDF) forwarded to the FCC this week was signed by 20 House members. "We respectfully urge you to consider the effect that regressing to a Title II approach might have on private companies' ability to attract capital and their continued incentives to invest and innovate, as well as the potentially negative impact on job creation that might result from any reduction in funding or investment," the letter said. Here are the 28 lawmakers who lobbied the FCC this week and their reported campaign contributions:
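As a quick check of the Maplight averages quoted in the first excerpt above, a rough sketch using the article's own figures:

    avg_signers = 26_832  # sector money to the 28 signers, two-year average
    avg_house = 11_651    # two-year average across all House members

    print(f"{avg_signers / avg_house:.1f}x")  # prints 2.3x, matching the article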
Gonzalo San Gil, PhD.

The File-Sharing Wars Are Anything But Over | TorrentFreak - 0 views

  •  
    " Rick Falkvinge on June 29, 2014 (Opinion): The past year, the copyright industry appears to have calmed down a bit, thinking it won the file-sharing wars. At the same time, people sharing culture and knowledge have done the same thing. This conflict is far from over."
Gonzalo San Gil, PhD.

Mega Ordered to Hand Over Users' Details to U.S. Court - TorrentFreak - 0 views

  •  
    " Andy on May 12, 2016 (News): Mega, the cloud storage site founded by Kim Dotcom, has been ordered to hand the IP addresses and personal details of some of its users to a U.S. court. The ruling follows the uploading of sensitive documents to Mega following a hack on a foreign government computer system. Speaking with TorrentFreak, Mega chairman Stephen Hall expressed concerns over the process."
Paul Merrell

WhatsApp Encryption Said to Stymie Wiretap Order - The New York Times - 0 views

  • While the Justice Department wages a public fight with Apple over access to a locked iPhone, government officials are privately debating how to resolve a prolonged standoff with another technology company, WhatsApp, over access to its popular instant messaging application, officials and others involved in the case said. No decision has been made, but a court fight with WhatsApp, the world’s largest mobile messaging service, would open a new front in the Obama administration’s dispute with Silicon Valley over encryption, security and privacy.WhatsApp, which is owned by Facebook, allows customers to send messages and make phone calls over the Internet. In the last year, the company has been adding encryption to those conversations, making it impossible for the Justice Department to read or eavesdrop, even with a judge’s wiretap order.
  • As recently as this past week, officials said, the Justice Department was discussing how to proceed in a continuing criminal investigation in which a federal judge had approved a wiretap, but investigators were stymied by WhatsApp’s encryption.The Justice Department and WhatsApp declined to comment. The government officials and others who discussed the dispute did so on condition of anonymity because the wiretap order and all the information associated with it were under seal. The nature of the case was not clear, except that officials said it was not a terrorism investigation. The location of the investigation was also unclear.
  • To understand the battle lines, consider this imperfect analogy from the predigital world: If the Apple dispute is akin to whether the F.B.I. can unlock your front door and search your house, the issue with WhatsApp is whether it can listen to your phone calls. In the era of encryption, neither question has a clear answer.Some investigators view the WhatsApp issue as even more significant than the one over locked phones because it goes to the heart of the future of wiretapping. They say the Justice Department should ask a judge to force WhatsApp to help the government get information that has been encrypted. Others are reluctant to escalate the dispute, particularly with senators saying they will soon introduce legislation to help the government get data in a format it can read.
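What stymies the wiretap is that in an end-to-end design the service only relays ciphertext: the message key is agreed directly between the two handsets. A minimal sketch of that idea using X25519 key agreement from the PyNaCl library (WhatsApp's actual Signal protocol is far more elaborate, with ratcheting and prekeys):

    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own keypair; only public keys cross the server.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts to Bob; this ciphertext is all the relay ever sees.
    alice_box = Box(alice_private, bob_private.public_key)
    wire_message = alice_box.encrypt(b"meet at noon")

    # Bob derives the same shared key from his private key and Alice's public key.
    bob_box = Box(bob_private, alice_private.public_key)
    assert bob_box.decrypt(wire_message) == b"meet at noon"

A wiretap order served on the relay produces only wire_message; the provider holds no key it could be compelled to disclose.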
Paul Merrell

FCC's Wheeler Promises Net Neutrality Action 'Shortly' | Adweek - 0 views

  • The pressure is mounting on the Federal Communications Commission to revisit how it will regulate net neutrality in the wake of the DC Circuit Court of Appeals decision that tossed the rules back in the regulator's lap.
  • More than 1 million people signed the petition urging the FCC to "reassert its clear authority over our nation's communications infrastructure" and classify the transmission component of broadband Internet as a telecommunications service. While the court struck down the non-discrimination and no-blocking rules, it also ruled the FCC had the authority to regulate the Internet. That decision leaves the FCC with a thorny legal choice about whether it regulates by classifying the Internet as a telecommunications service or as an information service. In seeking to reassure the petitioners, Wheeler affirmed the commission's commitment to preserve and protect the open Internet. "We interpret the court decision as an invitation and we will accept that invitation," Wheeler said in a press conference following Thursday's meeting. "One of the great things about what the Internet does and why it needs to stay open, it enables people to organize and express themselves. A million people? That's boffo."
  •  
    Over a million signed the petition. Wow! But note that the battle is not over. The FCC could reimplement net neutrality now if it reclassified broadband internet as a telecommunications service. That the FCC has not already set this in motion raises danger flags. All it takes is for a few contracts to be signed to give the ISPs 5th Amendment takings clause claims for damages against the government for reimplementing net neutrality the right way. A "reasonable investment-backed expectation" is the relevant 5th Amendment trigger.
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com - 0 views

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly, email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited.But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-; - )”
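The "Perfect Forward Secrecy" mentioned above comes down to negotiating fresh ephemeral keys for every session, so recorded traffic stays unreadable even if a server's long-term private key leaks years later. A minimal sketch of turning it on for a server, using Python's standard ssl module and hypothetical certificate paths:

    import ssl

    # Hypothetical paths; substitute your own certificate and key.
    CERT_FILE = "server.crt"
    KEY_FILE = "server.key"

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(CERT_FILE, KEY_FILE)

    # Restrict the handshake to ephemeral (EC)DHE key exchange: each session
    # derives throwaway keys, so a later key compromise cannot unlock old captures.
    context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")

This is exactly what makes bulk "record now, decrypt later" collection uneconomical: there is no single key whose theft opens the whole archive.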
Gonzalo San Gil, PhD.

#irespectmusic says 100 Years is Long Enough: The Danger of Pie-ism for All Creators - 0 views

  •  
    [# ! Isn't copyright supposed to promote creation? If so, it should be enough for the 'creators' to hold the rights DURING the author's life... Everything beyond that is a swindle...] Baiting the Pie Last year's hearings on music licensing at the House Judiciary Committee's IP Subcommittee revealed an old argument from broadcasters and a new twist on that argument adopted by webcasters: We already pay for music; you people go fight over that pie.
Paul Merrell

Microsoft to host data in Germany to evade US spying | Naked Security - 0 views

  • Microsoft's new plan to keep the US government's hands off its customers' data: Germany will be a safe harbor in the digital privacy storm. Microsoft on Wednesday announced that beginning in the second half of 2016, it will give foreign customers the option of keeping data in new European facilities that, at least in theory, should shield customers from US government surveillance. It will cost more, according to the Financial Times, though pricing details weren't forthcoming. Microsoft Cloud - including Azure, Office 365 and Dynamics CRM Online - will be hosted from new datacenters in the German regions of Magdeburg and Frankfurt am Main. Access to data will be controlled by what the company called a German data trustee: T-Systems, a subsidiary of the independent German company Deutsche Telekom. Without the permission of Deutsche Telekom or customers, Microsoft won't be able to get its hands on the data. If it does get permission, the trustee will still control and oversee Microsoft's access.
  • Microsoft CEO Satya Nadella dropped the word "trust" into the company's statement: Microsoft’s mission is to empower every person and every organization on the planet to achieve more. Our new datacenter regions in Germany, operated in partnership with Deutsche Telekom, will not only spur local innovation and growth, but offer customers choice and trust in how their data is handled and where it is stored.
  • On Tuesday, at the Future Decoded conference in London, Nadella also announced that Microsoft would, for the first time, be opening two UK datacenters next year. The company's also expanding its existing operations in Ireland and the Netherlands. Officially, none of this has anything to do with the long-drawn-out squabbling over the transatlantic Safe Harbor agreement, which the EU's highest court struck down last month, calling the agreement "invalid" because it didn't protect data from US surveillance. No, Nadella said, the new datacenters and expansions are all about giving local businesses and organizations "transformative technology they need to seize new global growth." But as Diginomica reports, Microsoft EVP of Cloud and Enterprise Scott Guthrie followed up his boss’s comments by saying that yes, the driver behind the new datacenters is to let customers keep data close: We can guarantee customers that their data will always stay in the UK. Being able to very concretely tell that story is something that I think will accelerate cloud adoption further in the UK.
  • Microsoft and T-Systems' lawyers may well think that storing customer data in a German trustee data center will protect it from the reach of US law, but for all we know, that could be wishful thinking. Forrester cloud computing analyst Paul Miller: To be sure, we must wait for the first legal challenge. And the appeal. And the counter-appeal. As with all new legal approaches, we don’t know it is watertight until it is challenged in court. Microsoft and T-Systems’ lawyers are very good and say it's watertight. But we can be sure opposition lawyers will look for all the holes. By keeping data offshore - particularly in Germany, which has strong data privacy laws - Microsoft could avoid the situation it's now facing with the US demanding access to customer emails stored on a Microsoft server in Dublin. The US has argued that Microsoft, as a US company, comes under US jurisdiction, regardless of where it keeps its data.
  • Running away to Germany isn't a groundbreaking move; other US cloud services providers have already pledged expansion of their EU presences, including Amazon's plan to open a UK datacenter in late 2016 that will offer what CTO Werner Vogels calls "strong data sovereignty to local users." Other big data operators that have followed suit: Salesforce, which has already opened datacenters in the UK and Germany and plans to open one in France next year, as well as new EU operations pledged for the new year by NetSuite and Box. Can Germany keep the US out of its datacenters? Can Ireland? Time, and court cases, will tell.
  •  
    The European Union's Court of Justice decision in the Safe Harbor case, and Edward Snowden, are now officially downgrading the U.S. as a cloud data center location. NSA is good business for Europeans looking to displace American cloud service providers, as evidenced by Microsoft's decision. The legal test is whether Microsoft has "possession, custody, or control" of the data. From the info given in the article, it seems that Microsoft has done its best to dodge that bullet by moving data centers to Germany and placing their data under the control of a European company. Do ownership of the hardware and profits from their rent mean that Microsoft still has "possession, custody, or control" of the data? The fine print of the agreement with Deutsche Telekom and the customer EULAs will get a thorough going over by the Dept. of Justice for evidence of Microsoft "control" of the data. That will be the crucial legal issue. The data centers in Germany may pass the test. But the notion that data centers in the UK can offer privacy is laughable; the UK's legal authority for GCHQ makes it even easier to get the data than the NSA can in the U.S. It doesn't even require a court order.
Gonzalo San Gil, PhD.

Google Is One Big Fat Pirate-Linking Search Engine - Digital Music News - 0 views

  •  
    "Is Google bullying the entire media industry? On Wednesday, Getty Images filed a complaint with the European Union's antitrust commission over Google's alleged piracy of its content. Getty Images claims that Google 'siphons traffic' away from the company's premium website."
Gary Edwards

Windows XP: How end of support sparked one organisation's shift from Microsoft | ZDNet - 1 views

  •  
    Good story of how a UK company responded to Microsoft's announcement of XP's end of life. After examining many alternatives, they settled on a ChromeBook-ChromeBox-Citrix solution. Most of the existing desktop hardware was repurposed as ChromeTops running Chrome Browser apps and Citrix XenDesktop for legacy data apps. excerpt/intro: "There are the XP diehards, and the Windows 7 and 8 migrators. But in a world facing up to the end of Windows XP support, one UK organisation belongs to another significant group - those breaking with Microsoft as their principal OS provider. Microsoft's end of routine security patching and software updates on 8 April helped push the London borough of Barking and Dagenham to a decision it might otherwise not have taken over the fate of its 3,500 Windows XP desktops and 800 laptops. "They were beginning to creak but they would have gone on for a while. It's fair to say if XP wasn't going out of life, we probably wouldn't be doing this now," Barking and Dagenham general manager IT Sheyne Lucock said. Around one-eighth of corporate Windows XP users are moving away from Microsoft, according to recent Tech Pro Research. Lucock said it had become clear that the local authority was locked into a regular Windows operating system refresh cycle that it could no longer afford. "If we just replaced all the Windows desktops with newer versions running a newer version of Windows, four years later we would have to do the same again and so on," he said. "So there was an inclination to try and do something different - especially as we know that with all the budget challenges that local government is going to be faced with, we're going to have to halve the cost of our ICT service over the next five years." Barking and Dagenham outsourced its IT in December 2010 to Elevate East London, which is a joint-venture between the council and services firm Agilisys. Lucock and systems architect Rupert Hay-Campbell are responsible for strategy, policy
  •  
    Meanwhile, some organizations missed the end of life deadline and are now paying Microsoft for extended support. E.g., the U.S. Internal Revenue Service, which is still running 58,000 desktops on WinXP. http://arstechnica.com/information-technology/2014/04/irs-another-windows-xp-laggard-will-pay-microsoft-for-patches/