
Future of the Web / Group items tagged "critic"


Paul Merrell

FCC Putting Comcast/Time Warner Cable Investigation On Hold - 0 views

  • On Friday, the U.S. Federal Communications Commission said it had extended the deadline for filing responses and oppositions in the Comcast/Time Warner merger review from October 8 to October 29. This is due to a motion filed by DISH Network, which said that Comcast didn't fully respond to the Commission's information requests. The FCC is taking 180 days to determine whether the Comcast and Time Warner merger will be in the best interest of the public. As of Friday, the investigation was at day 85, and it will resume once October 29 arrives. Originally, the investigation was expected to be complete on January 6, 2015. According to Reuters, a number of competitors and consumer advocates have rejected the merger, stating that the combined entity would have too much power over American consumers' viewing habits. Comcast, of course, disagrees, indicating that Time Warner is not a competitor and that their combined forces would bring better subscription services to a larger consumer audience.
  • Back in August, the FCC sent questions to both Comcast and Time Warner Cable asking for additional information about their broadband and video services, such as their Web traffic management practices. However, the FCC said on Friday that both companies failed to provide enough answers to please the merger reviewers. Comcast disagrees but said it will work with the reviewers to provide the missing information. "We will work with the staff to determine the additional information the FCC is seeking (including the document production that the FCC had asked us to delay filing) and will submit supplemental answers and documents quickly thereafter so that the FCC can complete its review early in 2015," Comcast spokeswoman Sena Fitzmaurice told Reuters.
  • Currently, the FCC is trying to retrieve Comcast's programming and retransmission consent agreements, but media companies have objected to the collection, saying that these documents are highly confidential. However, the documents have made their way to the Justice Department, which is conducting its own review for antitrust issues. The delay in the FCC's deadline also stems from a large 850-page document supplied by Comcast. The FCC indicated that this volume of information is critical to the investigation.
Paul Merrell

USA Freedom Act Passes House, Codifying Bulk Collection For First Time, Critics Say - T... - 0 views

  • After only one hour of floor debate, and no allowed amendments, the House of Representatives today passed legislation that opponents believe may give brand new authorization to the U.S. government to conduct domestic dragnets. The USA Freedom Act was approved in a 338-88 vote, with approximately equal numbers of Democrats and Republicans voting against. The bill’s supporters say it will disallow bulk collection of domestic telephone metadata, a practice under which the Foreign Intelligence Surveillance Court has regularly ordered phone companies to turn over such data. The Obama administration claims such collection is authorized by Section 215 of the USA Patriot Act, which is set to expire June 1. However, the U.S. Court of Appeals for the Second Circuit recently held that Section 215 does not provide such authorization. Today’s legislation would prevent the government from issuing such orders for bulk collection and instead rely on telephone companies to store all their metadata — some of which the government could then demand using a “specific selection term” related to foreign terrorism. Bill supporters maintain this would prevent indiscriminate collection.
  • However, the legislation may not end bulk surveillance and in fact could codify the ability of the government to conduct dragnet data collection. “We’re taking something that was not permitted under regular section 215 … and now we’re creating a whole apparatus to provide for it,” Rep. Justin Amash, R-Mich., said on Tuesday night during a House Rules Committee proceeding. “The language does limit the amount of bulk collection, it doesn’t end bulk collection,” Rep. Amash said, arguing that the problematic “specific selection term” allows for “very large data collection, potentially in the hundreds of thousands of people, maybe even millions.” In a statement posted to Facebook ahead of the vote, Rep. Amash said the legislation “falls woefully short of reining in the mass collection of Americans’ data, and it takes us a step in the wrong direction by specifically authorizing such collection in violation of the Fourth Amendment to the Constitution.”
  • “While I appreciate a number of the reforms in the bill and understand the need for secure counter-espionage and terrorism investigations, I believe our nation is better served by allowing Section 215 to expire completely and replacing it with a measure that finds a better balance between national security interests and protecting the civil liberties of Americans,” Congressman Ted Lieu, D-Calif., said in a statement explaining his vote against the bill.
  • ...2 more annotations...
  • Not addressed in the bill, however, are a slew of other spying authorities in use by the NSA that either directly or inadvertently target the communications of American citizens. Lawmakers offered several amendments in the days leading up to the vote that would have tackled surveillance activities laid out in Section 702 of the Foreign Intelligence Surveillance Act and Executive Order 12333 — two authorities intended for foreign surveillance that have been used to collect Americans’ internet data, including online address books and buddy lists. The House Rules Committee, however, prohibited consideration of any amendment to the USA Freedom Act, claiming that any changes to the legislation would have weakened its chances of passage.
  • The measure now goes to the Senate where its future is uncertain. Majority Leader Mitch McConnell has declined to schedule the bill for consideration, and is instead pushing for a clean reauthorization of expiring Patriot Act provisions that includes no surveillance reforms. Senators Ron Wyden, D-Ore., and Rand Paul, R-Ky., have threatened to filibuster any bill that extends the Patriot Act without also reforming the NSA.
  •  
    Surprise, surprise. U.S. "progressive" groups are waging an all-out email lobbying effort to sunset the Patriot Act. https://www.sunsetthepatriotact.com/ Same with civil liberties groups. e.g., https://action.aclu.org/secure/Section215 And a coalition of libertarian organizations. http://docs.techfreedom.org/Coalition_Letter_McConnell_215Reauth_4.27.15.pdf
Paul Merrell

PATRIOT Act spying programs on death watch - Seung Min Kim and Kate Tummarello - POLITICO - 0 views

  • With only days left to act and Rand Paul threatening a filibuster, Senate Republicans remain deeply divided over the future of the PATRIOT Act and have no clear path to keep key government spying authorities from expiring at the end of the month. Crucial parts of the PATRIOT Act, including a provision authorizing the government’s controversial bulk collection of American phone records, first revealed by Edward Snowden, are due to lapse May 31. That means Congress has barely a week to figure out a fix before lawmakers leave town for Memorial Day recess at the end of next week. The prospects of a deal look grim: Senate Majority Leader Mitch McConnell on Thursday night proposed just a two-month extension of expiring PATRIOT Act provisions to give the two sides more time to negotiate, but even that was immediately dismissed by critics of the program.
  •  
    A must-read. The major danger is that the Senate could pass the USA Freedom Act, which has already been passed by the House. Passage of that Act, despite its name, would be bad news for civil liberties. Now is the time to let your Congress critters know that you want them to fight to let the Patriot Act provisions expire on May 31, without any replacement legislation. Keep in mind that Section 502 does not apply just to telephone metadata. It authorizes the FBI to gather, without notice to their victims, "any tangible thing", specifically including as examples "library circulation records, library patron lists, book sales records, book customer lists, firearms sales records, tax return records, educational records, or medical records containing information that would identify a person." The breadth of the section is illustrated by telephone metadata not even being mentioned in the section. NSA going after your medical records sound far-fetched? Former NSA technical director William Binney says they're already doing it: "Binney alludes to even more extreme intelligence practices that are not yet public knowledge, including the collection of Americans' medical data, the collection and use of client-attorney conversations, and law enforcement agencies' "direct access," without oversight, to NSA databases." https://consortiumnews.com/2015/03/05/seeing-the-stasi-through-nsa-eyes/ So please, contact your Congress critters right now and tell them to sunset the Patriot Act NOW. This will be decided in the next few days, so the sooner you contact them the better.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that yields a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
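A quick illustration of the "zip archive in disguise" point made in the annotations above, which holds for both ePub and IDML: listing an archive's contents and pulling out its XHTML payload takes only a few lines of Python. The file names here (book.epub, the extracted_xhtml directory) are hypothetical placeholders, not names any particular tool produces.

    import zipfile

    # Both ePub and IDML files are ordinary zip archives wrapping XML content.
    # "book.epub" is a hypothetical export; substitute any ePub or .idml file.
    with zipfile.ZipFile("book.epub") as archive:
        for name in archive.namelist():
            print(name)   # e.g. mimetype, META-INF/container.xml, chapter XHTML files

        # Pull out the XHTML content files for further cleanup and conversion.
        for name in archive.namelist():
            if name.endswith((".xhtml", ".html")):
                archive.extract(name, path="extracted_xhtml")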
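The annotation on cleaning up InDesign's "Digital Editions" export describes turning class-tagged paragraphs into proper list markup with search-and-replace. The same cleanup can be scripted; below is a minimal sketch using Python and lxml instead of search-and-replace. The input path and the class name "bullet-item" are assumptions, since real exports use class names derived from the document's paragraph styles.

    from lxml import html

    # Regroup consecutive <p class="bullet-item"> paragraphs into <ul>/<li> markup.
    doc = html.parse("extracted_xhtml/chapter1.xhtml")
    root = doc.getroot()

    for para in root.xpath('//p[@class="bullet-item"]'):
        prev = para.getprevious()
        if prev is not None and prev.tag == "ul":
            target = prev                      # continue the list started by the previous item
        else:
            target = para.makeelement("ul", {})
            para.addprevious(target)           # open a new list just before this item
        item = target.makeelement("li", {})
        item.text = para.text
        item.extend(list(para))                # keep inline markup (em, a, span, ...)
        target.append(item)
        para.getparent().remove(para)          # drop the original presentation-oriented <p>

    doc.write("chapter1.clean.xhtml", method="xml", encoding="utf-8")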
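The conversion step itself, the XHTML-to-ICML transformation the annotations describe, then comes down to running one XSLT stylesheet over the cleaned content. The roughly 500-line SFU script is not reproduced here; "xhtml2icml.xsl" below is a hypothetical stand-in for it, and any XSLT 1.0 processor (xsltproc, Saxon, lxml) would do.

    from lxml import etree

    # Apply a (hypothetical) XHTML-to-ICML stylesheet with lxml's XSLT engine.
    transform = etree.XSLT(etree.parse("xhtml2icml.xsl"))
    source = etree.parse("chapter1.clean.xhtml")
    result = transform(source)                 # runs in seconds, even for a whole book

    with open("chapter1.icml", "wb") as out:
        out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))

    # Command-line equivalent with the xsltproc tool mentioned above:
    #   xsltproc -o chapter1.icml xhtml2icml.xsl chapter1.clean.xhtml
    # The resulting .icml file can then be "placed" into an InDesign template.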
Paul Merrell

After Two Years, White House Finally Responds to Snowden Pardon Petition - With a "No" - 1 views

  • The White House on Tuesday ended two years of ignoring a hugely popular whitehouse.gov petition calling for NSA whistleblower Edward Snowden to be “immediately issued a full, free, and absolute pardon,” saying thanks for signing, but no. “We live in a dangerous world,” Lisa Monaco, President Obama’s adviser on homeland security and terrorism, said in a statement. More than 167,000 people signed the petition, which surpassed the 100,000 signatures that the White House’s “We the People” website said would garner a guaranteed response on June 24, 2013. In Tuesday’s response, the White House acknowledged that “This is an issue that many Americans feel strongly about.”
  • Monaco then explained her position: “Instead of constructively addressing these issues, Mr. Snowden’s dangerous decision to steal and disclose classified information had severe consequences for the security of our country and the people who work day in and day out to protect it.” Snowden didn’t actually disclose any classified information — news organizations including the Guardian, Washington Post, New York Times and The Intercept did the disclosing. And the Obama administration has yet to specify any “severe consequences” that can be independently confirmed.
  • ...1 more annotation...
  • The Snowden response was one of 20 responses to what the White House called “our We the People backlog.” The White House had been criticized for avoiding uncomfortable topics despite their popular support. On Twitter, the responses to the Snowden response, some from signers of the petition, were highly critical.
Paul Merrell

The Fundamentals of US Surveillance: What Edward Snowden Never Told Us? | Global Resear... - 0 views

  • Former US intelligence contractor Edward Snowden’s revelations rocked the world. According to his detailed reports, the US had launched massive spying programs and was scrutinizing the communications of American citizens in a manner which could only be described as extreme and intense. The US’s reaction was swift and to the point. “Nobody is listening to your telephone calls,” President Obama said when asked about the NSA. As quoted in The Guardian, Obama went on to say that surveillance programs were “fully overseen not just by Congress but by the Fisa court, a court specially put together to evaluate classified programs to make sure that the executive branch, or government generally, is not abusing them”. However, it appears that Snowden may have missed a pivotal part of the US surveillance program. And in stating that “nobody” is listening to our calls, President Obama may have been fudging quite a bit.
  • In fact, Great Britain maintains a “listening post” at NSA HQ. The laws restricting live wiretaps do not apply to foreign countries  and thus this listening post  is not subject to  US law.  In other words, the restrictions upon wiretaps, etc. do not apply to the British listening post.  So when Great Britain hands over the recordings to the NSA, technically speaking, a law is not being broken and technically speaking, the US is not eavesdropping on our each and every call. It is Great Britain which is doing the eavesdropping and turning over these records to US intelligence. According to John Loftus, formerly an attorney with  the Department of Justice and author of a number of books concerning US intelligence activities, back in the late seventies  the USDOJ issued a memorandum proposing an amendment to FISA. Loftus, who recalls seeing  the memo, stated in conversation this week that the DOJ proposed inserting the words “by the NSA” into the FISA law  so the scope of the law would only restrict surveillance by the NSA, not by the British.  Any subsequent sharing of the data culled through the listening posts was strictly outside the arena of FISA. Obama was less than forthcoming when he insisted that “What I can say unequivocally is that if you are a US person, the NSA cannot listen to your telephone calls, and the NSA cannot target your emails … and have not.”
  • According to Loftus, the NSA is indeed listening as Great Britain is turning over the surveillance records en masse to that agency. Loftus states that the arrangement is reciprocal, with the US maintaining a parallel listening post in Great Britain. In an interview this past week, Loftus told this reporter that  he believes that Snowden simply did not know about the arrangement between Britain and the US. As a contractor, said Loftus, Snowden would not have had access to this information and thus his detailed reports on the extent of US spying, including such programs as XKeyscore, which analyzes internet data based on global demographics, and PRISM, under which the telecommunications companies, such as Google, Facebook, et al, are mandated to collect our communications, missed the critical issue of the FISA loophole.
  • ...2 more annotations...
  • U.S. government officials have defended the program by asserting it cannot be used on domestic targets without a warrant. But once again, the FISA courts and their super-secret warrants  do not apply to foreign government surveillance of US citizens. So all this sturm and drang about whether or not the US is eavesdropping on our communications is, in fact, irrelevant and diversionary.
  • In fact, the USA Freedom Act reinstituted a number of the surveillance protocols of Section 215, including  authorization for  roving wiretaps  and tracking “lone wolf terrorists.”  While mainstream media heralded the passage of the bill as restoring privacy rights which were shredded under 215, privacy advocates have maintained that the bill will do little, if anything, to reverse the  surveillance situation in the US. The NSA went on the record as supporting the Freedom Act, stating it would end bulk collection of telephone metadata. However, in light of the reciprocal agreement between the US and Great Britain, the entire hoopla over NSA surveillance, Section 215, FISA courts and the USA Freedom Act could be seen as a giant smokescreen. If Great Britain is collecting our real time phone conversations and turning them over to the NSA, outside the realm or reach of the above stated laws, then all this posturing over the privacy rights of US citizens and surveillance laws expiring and being resurrected doesn’t amount to a hill of CDs.
Paul Merrell

New Leak Of Final TPP Text Confirms Attack On Freedom Of Expression, Public Health - 0 views

  • Offering a first glimpse of the secret 12-nation “trade” deal in its final form—and fodder for its growing ranks of opponents—WikiLeaks on Friday published the final negotiated text for the Trans-Pacific Partnership (TPP)’s Intellectual Property Rights chapter, confirming that the pro-corporate pact would harm freedom of expression by bolstering monopolies and injure public health by blocking patient access to lifesaving medicines. The document is dated October 5, the same day it was announced in Atlanta, Georgia that the member states to the treaty had reached an accord after more than five years of negotiations. Aside from the WikiLeaks publication, the vast majority of the mammoth deal’s contents are still being withheld from the public—which a WikiLeaks press statement suggests is a strategic move by world leaders to forestall public criticism until after the Canadian election on October 19. Initial analyses suggest that many of the chapter’s more troubling provisions, such as broader patent and data protections that pharmaceutical companies use to delay generic competition, have stayed in place since draft versions were leaked in 2014 and 2015. Moreover, it codifies a crackdown on freedom of speech with rules allowing widespread internet censorship.
Paul Merrell

Germany's top prosecutor fired over treason probe - Yahoo News - 0 views

  • A treason investigation against two German journalists claimed its first casualty Tuesday — the country's top prosecutor who ordered the probe.
  • Justice Minister Heiko Maas announced he was seeking the dismissal of Harald Range hours after the chief federal prosecutor accused the government of interfering in his investigation. Maas said he made the decision in consultation with Chancellor Angela Merkel's office, indicating that the sacking was approved at the highest level. The Justice Ministry had questioned Range's decision to open the investigation against two journalists from the website Netzpolitik.org who had reported that Germany's domestic spy agency plans to expand surveillance of online communication. The treason probe was widely criticized and regarded as an embarrassment to the government. Senior officials stressed in recent days that Germany is committed to protecting press freedom.
Paul Merrell

t r u t h o u t | China Suicides: Is Apple Headed for a Consumer Backlash? - 0 views

  • Beijing, China - As Apple released the iPad today across Europe and Japan, a key supplier in China continued fortifying factory buildings with anti-suicide nets and bracing against a growing tide of public criticism about working conditions after 10 apparent employee suicides this year — including one this week hours after the company chief visited.   While tentative calls have emerged in China for boycotts of Apple products and other items made by electronics giant Foxconn, what remains entirely unclear is the impact this will have on the electronics manufacturing industry at large. The massive Foxconn plant, possibly the largest factory in the world, has been under the microscope for years over poor working conditions. In the past six months, renewed concerns have hit other electronics suppliers as well.
  • Now, with an apparent suicide cluster well underway at a key Apple supplier, labor activists have begun to wonder if that tide might be about to turn in the same way it did for international apparel and shoe companies in the 1980s and 1990s. “I think there is a tendency for consumers of iconic products like iPhones to stick their head in the sand when it comes to abusive labor practices,” said Geoffrey Crothall of the Hong Kong-based China Labour Bulletin. “Their iPhone reflects who they are, or rather the image of themselves they wish to present to the world, and they don't want that image tarnished.”
Gary Edwards

Nokia and Google: Too much emphasis on the mobile OS? | ge TalkBack on ZDNet - 0 views

  • It's the document model! There is nothing wrong with RiA. Adobe is doing great stuff, and they fully support the WebKit flow document model in their RiA. Silverlight on the other hand is a true threat to the Open Web because it implements uniquely proprietary format, protocol and interface alternatives. The problem with AJAX is that it's difficult to scale. Advancing JavaScript libraries, structured WebKit, BrowserPlus and Google's GWT-Google Gears promise to tame that problem. At the end of the day though, I see AJAX as an important aspect of the browser as an RiA runtime engine. Here's what concerns me: 500 million MSOffice desktops that anchor most of the world's client/server business processes speak XAML "fixed/flow". These desktops are the information pumps for billions of business critical documents. And they do not speak the language of the Open Web. They speak the language of the Microsoft Web-Stack and Mesh services.
  •  
    Response to questions about RiA vs AJAX.
Gary Edwards

The lock-in battle shifts to Sharepoint | Blankenhorn - ZDNet.com - 0 views

  • On the surface SharePoint is merely a document management system which lets everyone in your company share and find Office documents easily. But critics like our own Matt Asay call it a Trojan Horse, which will bind companies which deploy it to Microsoft forever. You can put together everything SharePoint does using open source projects, but it takes work. You can combine Alfresco (from the Electronic Content Management (ECM) software company Matt works for), the Liferay portal, JasperSoft for reporting, and Zimbra’s e-mail server. Throw in some Jive forums and you’re more than done. But what does that cost, really, compared to just using something from Microsoft which already works with your current Office applications? Exactly. Once companies start using SharePoint, Asay worries, there is no way for them to ever ditch Microsoft applications and file formats. SharePoint is tied to those formats, and as the share fills, the cost of switching away rises exponentially.
  •  
    August 2007 discussion about SharePoint. Everything projected in this article is happening - including the proprietary file format and protocol lock-in.
Paul Merrell

Office Business Applications for Store Operations - 0 views

  • Service orientation addresses these challenges by centering on rapidly evolving XML and Web services standards that are revolutionizing how developers compose systems and integrate them over distributed networks. No longer are developers forced to make do with rigid and proprietary languages and object models that used to be the norm before service orientation came into play. The emergence of this new methodology is helping to develop new approaches specifically for Web-based distributed computing. This revolution is transforming the business by integrating disparate systems to establish a real-time enterprise. Making information available where it is needed to simplify merchandising processes requires a methodology that is based on loosely coupled integration between various in-store and back-end applications. This demand makes an architecture based on service orientation critical for integration between disparate applications. In addition, surfacing information at the right place requires the ability to compose dynamic applications using an array of underlying services. The Office Business Applications platform provides this ability to create composite applications, such as dashboards for the store, regional, and corporate managers.
  •  
    Summary: Changing market conditions require agility in business applications. Service orientation answers the challenge by centering on XML and Web services standards that revolutionize how developers compose systems and integrate them over distributed networks. Once integrated, how is the information presented to the decision makers? (36 printed pages)
Paul Merrell

Pronunciation Lexicon Specification (PLS) Version 1.0 - 0 views

  • Pronunciation Lexicon Specification (PLS) Version 1.0 W3C Recommendation 14 October 2008
  • The accurate specification of pronunciation is critical to the success of speech applications. Most Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) engines internally provide extensive high quality lexicons with pronunciation information for many words or phrases. To ensure a maximum coverage of the words or phrases used by an application, application-specific pronunciations may be required. For example, these may be needed for proper nouns such as surnames or business names. The Pronunciation Lexicon Specification (PLS) is designed to enable interoperable specification of pronunciation information for both ASR and TTS engines. The language is intended to be easy to use by developers while supporting the accurate specification of pronunciation information for international use.
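To make the excerpt above concrete, here is a sketch in Python (lxml) that emits a minimal PLS 1.0 lexicon with one application-specific entry. The "tomato" lexeme and its two IPA pronunciations are purely illustrative; the element and attribute names follow the PLS 1.0 Recommendation as summarized above, but check the spec before relying on them.

    from lxml import etree

    # Build a minimal PLS 1.0 pronunciation lexicon with a single lexeme.
    PLS_NS = "http://www.w3.org/2005/01/pronunciation-lexicon"
    XML_NS = "http://www.w3.org/XML/1998/namespace"

    lexicon = etree.Element("{%s}lexicon" % PLS_NS, nsmap={None: PLS_NS},
                            version="1.0", alphabet="ipa")
    lexicon.set("{%s}lang" % XML_NS, "en-US")

    lexeme = etree.SubElement(lexicon, "{%s}lexeme" % PLS_NS)
    etree.SubElement(lexeme, "{%s}grapheme" % PLS_NS).text = "tomato"
    etree.SubElement(lexeme, "{%s}phoneme" % PLS_NS).text = "təˈmeɪtoʊ"   # one pronunciation
    etree.SubElement(lexeme, "{%s}phoneme" % PLS_NS).text = "təˈmɑːtoʊ"   # an alternative

    print(etree.tostring(lexicon, pretty_print=True, xml_declaration=True,
                         encoding="UTF-8").decode("utf-8"))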
Paul Merrell

XHTML Modularization 1.1 Released as W3C Recommendation - 0 views

  • XHTML Modularization is a decomposition of XHTML 1.0, and by reference HTML 4, into a collection of abstract modules that provide specific types of functionality.
  • The modularization of XHTML refers to the task of specifying well-defined sets of XHTML elements that can be combined and extended by document authors, document type architects, other XML standards specifications, and application and product designers to make it economically feasible for content developers to deliver content on a greater number and diversity of platforms. Over the last couple of years, many specialized markets have begun looking to HTML as a content language. There is a great movement toward using HTML across increasingly diverse computing platforms. Currently there is activity to move HTML onto mobile devices (hand held computers, portable phones, etc.), television devices (digital televisions, TV-based Web browsers, etc.), and appliances (fixed function devices). Each of these devices has different requirements and constraints.
  • XHTML Modularization is a decomposition of XHTML 1.0, and by reference HTML 4, into a collection of abstract modules that provide specific types of functionality. These abstract modules are implemented in this specification using the XML Schema and XML Document Type Definition languages. The rules for defining the abstract modules, and for implementing them using XML Schemas and XML DTDs, are also defined in this document. These modules may be combined with each other and with other modules to create XHTML subset and extension document types that qualify as members of the XHTML-family of document types.
  • ...1 more annotation...
  • Modularizing XHTML provides a means for product designers to specify which elements are supported by a device using standard building blocks and standard methods for specifying which building blocks are used. These modules serve as "points of conformance" for the content community. The content community can now target the installed base that supports a certain collection of modules, rather than worry about the installed base that supports this or that permutation of XHTML elements. The use of standards is critical for modularized XHTML to be successful on a large scale. It is not economically feasible for content developers to tailor content to each and every permutation of XHTML elements. By specifying a standard, either software processes can autonomously tailor content to a device, or the device can automatically load the software required to process a module. Modularization also allows for the extension of XHTML's layout and presentation capabilities, using the extensibility of XML, without breaking the XHTML standard. This development path provides a stable, useful, and implementable framework for content developers and publishers to manage the rapid pace of technological change on the Web.
Matteo Spreafico

Google Redefines Disruption: The "Less Than Free" Business Model - 0 views

  • In the summer of 2007, excitement regarding the criticality of map data (specifically turn-by-turn navigation data) reached a fever pitch.  On July 23, 2007, TomTom, the leading portable GPS device maker, agreed to buy Tele Atlas for US$2.7 billion. Shortly thereafter, on October 1, Nokia agreed to buy NavTeq for a cool US$8.1 billion. Meanwhile Google was still evolving its strategy and no longer wanted to be limited by the terms of its two contracts. As such, they informed Tele Atlas and NavTeq that they wanted to modify their license terms to allow more liberty with respect to syndication and proliferation. NavTeq balked, and in September of 2008 Google quietly dropped NavTeq, moving to just one partner for its core mapping data. Tele Atlas eventually agreed to the term modifications, but perhaps they should have sensed something bigger at play.
  • Rumors abound about just how many cars Google has on the roads building its own turn-by-turn mapping data as well as its unique “Google Streetview” database. Whatever it is, it must be huge. This October 13th, just over one year after dropping NavTeq, the other shoe dropped as well. Google disconnected from Tele Atlas and began to offer maps that were free and clear of either license. These maps are based on a combination of their own data as well as freely available data. Two weeks after this, Google announces free turn-by-turn directions for all Android phones. This couldn’t have been a great day for the deal teams that worked on the respective Tele Atlas and NavTeq acquisitions.
  • Google’s free navigation feature announcement dealt a crushing blow to the GPS stocks. Garmin fell 16%. TomTom fell 21%. Imagine trying to maintain high royalty rates against this strategic move by Google. Android is not only a phone OS, it’s a CE OS. If Ford or BMW want to build an in-dash Android GPS, guess what? Google will give it to them for free.
  • I then asked my friend, “so why would they ever use the Google (non-open-source) license version?”  (EDIT: One of the commenters below pointed out that all Android is open source, and the Google apps pack, including the GPS, is licensed on top.  Doesn’t change the argument, but wanted the correct data included here.)  Here was the big punch line – because Google will give you ad splits on search if you use that version!  That’s right; Google will pay you to use their mobile OS. I like to call this the “less than free” business model.
  • “Less than free” may not stop with the mobile phone. Google’s CEO Eric Schmidt has been quite outspoken about his support for the Google Chrome OS. And there is no reason to believe that the “less than free” business model will not be used here as well. If Sony or HP or Dell builds a netbook based on Chrome OS, they will make money on every search each user initiates. Google, eager to protect its search share and market volume, will gladly pay the ad splits. Microsoft, which was already forced to lower Windows netbook pricing to fend off Linux, will be dancing with a business model inversion of epic proportions – from “you pay me” to “I pay you.”  It’s really hard to build a compensation package for your sales team on those economics.
Paul Merrell

Japan, U.S. trade chiefs seek to clinch bilateral TPP deal - 毎日新聞 - 0 views

  • Talks on the TPP, which would create a massive free trade zone encompassing some 40 percent of global output, have long been stalled due partly to bickering between Japan and the United States -- the biggest economies in the TPP framework -- over removal of barriers for agricultural and automotive trade. The biggest sticking point has been Tokyo's proposed exceptions to tariff cuts on its five sensitive farm product categories -- rice, wheat, beef and pork, dairy products and sugar -- and safeguard measures it wants to introduce should imports of the products surge under the TPP, which aims for zero tariffs in principle. It is uncertain how much closer the two sides can move given that their recent working-level talks saw little progress, negotiation sources said.
  • A summit meeting of the Asia-Pacific Economic Cooperation forum scheduled for November in Beijing that Obama and leaders from other TPP countries are slated to join is seen as an occasion for concluding the TPP talks, which have entered their fifth year. But the odds on an agreement depend on whether Japan and the United States can bridge their gaps before that.
  • Hiroshi Oe, Japan's deputy chief TPP negotiator, has admitted that talks with his counterpart Wendy Cutler, Froman's top deputy, earlier this month in Tokyo made very little progress. One negotiation source said the hurdle for solving the outstanding bilateral problems is "extremely high," suggesting it is still premature to bring the talks to the ministerial level. Amari himself had been reluctant to hold a one-on-one meeting with Froman with the working-level negotiations failing to see enough progress. But he apparently decided to ramp up efforts in response to strong calls from Washington for arranging a meeting with Froman, who has said the two sides are "now at a critical juncture in this negotiation."
  • The TPP comprises Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, the United States and Vietnam.
  •  
    Get ready to fight TPP fast-tracking in member states. See also “WikiLeaks’ free trade documents reveal ‘drastic’ Australian concessions,” The Guardian: http://goo.gl/hicb5h Remember that in the U.S., only Senate ratification is required. The measure will not go before the House before implementation. 
Paul Merrell

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 0 views

  • When the incoming emails stopped arriving, it seemed innocuous at first. But it would eventually become clear that this was no routine technical problem. Inside a row of gray office buildings in Brussels, a major hacking attack was in progress. And the perpetrators were British government spies. It was in the summer of 2012 that the anomalies were initially detected by employees at Belgium’s largest telecommunications provider, Belgacom. But it wasn’t until a year later, in June 2013, that the company’s security experts were able to figure out what was going on. The computer systems of Belgacom had been infected with highly sophisticated malware that was disguising itself as legitimate Microsoft software while quietly stealing data. Last year, documents from National Security Agency whistleblower Edward Snowden confirmed that the British surveillance agency Government Communications Headquarters (GCHQ) was behind the attack, codenamed Operation Socialist. And in November, The Intercept revealed that the malware found on Belgacom’s systems was one of the most advanced spy tools ever identified by security researchers, who named it “Regin.”
  • The full story about GCHQ’s infiltration of Belgacom, however, has never been told. Key details about the attack have remained shrouded in mystery—and the scope of the attack unclear. Now, in partnership with Dutch and Belgian newspapers NRC Handelsblad and De Standaard, The Intercept has pieced together the first full reconstruction of events that took place before, during, and after the secret GCHQ hacking operation. Based on new documents from the Snowden archive and interviews with sources familiar with the malware investigation at Belgacom, The Intercept and its partners have established that the attack on Belgacom was more aggressive and far-reaching than previously thought. It occurred in stages between 2010 and 2011, each time penetrating deeper into Belgacom’s systems, eventually compromising the very core of the company’s networks.
  • Snowden told The Intercept that the latest revelations amounted to unprecedented “smoking-gun attribution for a governmental cyber attack against critical infrastructure.” The Belgacom hack, he said, is the “first documented example to show one EU member state mounting a cyber attack on another…a breathtaking example of the scale of the state-sponsored hacking problem.”
  • Publicly, Belgacom has played down the extent of the compromise, insisting that only its internal systems were breached and that customers’ data was never found to have been at risk. But secret GCHQ documents show the agency gained access far beyond Belgacom’s internal employee computers and was able to grab encrypted and unencrypted streams of private communications handled by the company. Belgacom invested several million dollars in its efforts to clean up its systems and beef up its security after the attack. However, The Intercept has learned that sources familiar with the malware investigation at the company are uncomfortable with how the clean-up operation was handled—and they believe parts of the GCHQ malware were never fully removed.
  • The revelations about the scope of the hacking operation will likely alarm Belgacom’s customers across the world. The company operates a large number of data links internationally (see interactive map below), and it serves millions of people across Europe as well as officials from top institutions including the European Commission, the European Parliament, and the European Council. The new details will also be closely scrutinized by a federal prosecutor in Belgium, who is currently carrying out a criminal investigation into the attack on the company. Sophie in ’t Veld, a Dutch politician who chaired the European Parliament’s recent inquiry into mass surveillance exposed by Snowden, told The Intercept that she believes the British government should face sanctions if the latest disclosures are proven.
  • What sets the secret British infiltration of Belgacom apart is that it was perpetrated against a close ally—and is backed up by a series of top-secret documents, which The Intercept is now publishing.
  • Between 2009 and 2011, GCHQ worked with its allies to develop sophisticated new tools and technologies it could use to scan global networks for weaknesses and then penetrate them. According to top-secret GCHQ documents, the agency wanted to adopt the aggressive new methods in part to counter the use of privacy-protecting encryption—what it described as the “encryption problem.” When communications are sent across networks in encrypted format, it makes it much harder for the spies to intercept and make sense of emails, phone calls, text messages, internet chats, and browsing sessions. For GCHQ, there was a simple solution. The agency decided that, where possible, it would find ways to hack into communication networks to grab traffic before it’s encrypted. (A minimal code illustration of this point appears after this entry.)
  • The Snowden documents show that GCHQ wanted to gain access to Belgacom so that it could spy on phones used by surveillance targets travelling in Europe. But the agency also had an ulterior motive. Once it had hacked into Belgacom’s systems, GCHQ planned to break into data links connecting Belgacom and its international partners, monitoring communications transmitted between Europe and the rest of the world. A map in the GCHQ documents, named “Belgacom_connections,” highlights the company’s reach across Europe, the Middle East, and North Africa, illustrating why British spies deemed it of such high value.
  • Documents published with this article: Automated NOC detection; Mobile Networks in My NOC World; Making network sense of the encryption problem; Stargate CNE requirements; NAC review – October to December 2011 (GCHQ); NAC review – January to March 2011 (GCHQ); NAC review – April to June 2011 (GCHQ); NAC review – July to September 2011 (GCHQ); NAC review – January to March 2012 (GCHQ); Hopscotch; Belgacom connections
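The "encryption problem" described in the excerpt above comes down to an asymmetry: a passive tap on a network link sees only ciphertext, while code running inside a compromised operator or endpoint sees traffic before it is encrypted. A minimal sketch of that asymmetry, assuming the third-party cryptography package and using symmetric encryption purely as a stand-in for transport encryption:

```python
# Minimal illustration of the "encryption problem": a passive wire tap sees
# only an opaque token, while access inside the endpoint (before encryption)
# yields the plaintext. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # known only to the two communicating endpoints
channel = Fernet(key)

message = b"hypothetical subscriber traffic"   # what the endpoint handles, pre-encryption
on_the_wire = channel.encrypt(message)         # what a passive network tap captures

print("endpoint sees :", message)
print("wire tap sees :", on_the_wire[:40] + b"...")   # no key, no content
print("receiver sees :", channel.decrypt(on_the_wire))
```

This is the logic the documents describe: rather than trying to break the ciphertext, gain a position inside the operator's own systems where the traffic is still readable.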
Paul Merrell

Why the Sony hack is unlikely to be the work of North Korea. | Marc's Security Ramblings - 0 views

  • Everyone seems to be eager to pin the blame for the Sony hack on North Korea. However, I think it’s unlikely. Here’s why: 1. The broken English looks deliberately bad and doesn’t exhibit any of the classic comprehension mistakes you actually expect to see in “Konglish”; i.e. it reads to me like an English speaker pretending to be bad at writing English. 2. The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don’t speak traditional “Korean” in North Korea, they speak their own dialect and traditional Korean is forbidden. This is one of the key things that has made communication with North Korean refugees difficult. I would find the presence of Chinese far more plausible.
  • 3. It’s clear from the hard-coded paths and passwords in the malware that whoever wrote it had extensive knowledge of Sony’s internal architecture and access to key passwords. While it’s plausible that an attacker could have built up this knowledge over time and then used it to make the malware, Occam’s razor suggests the simpler explanation of an insider. It also fits with the pure revenge tack that this started out as. 4. Whoever did this is in it for revenge. The info and access they had could have easily been used to cash out, yet, instead, they are making every effort to burn Sony down. Just think what they could have done with passwords to all of Sony’s financial accounts? With the competitive intelligence in their business documents? From simple theft, to the sale of intellectual property, or even extortion – the attackers had many ways to become rich. Yet, instead, they chose to dump the data, rendering it useless. Likewise, I find it hard to believe that a “Nation State” which lives by propaganda would be so willing to just throw away such an unprecedented level of access to the beating heart of Hollywood itself.
  • 5. The attackers only latched onto “The Interview” after the media did – the film was never mentioned by GOP right at the start of their campaign. It was only after a few people started speculating in the media that this and the communication from DPRK “might be linked” that suddenly it became linked. I think the attackers both saw this as an opportunity for “lulz” and as a way to misdirect everyone into thinking it was a nation state. After all, if everyone believes it’s a nation state, then the criminal investigation will likely die.
  • 6. Whoever is doing this is VERY net and social media savvy. That, and the sophistication of the operation, do not match with the profile of DPRK up until now. Grugq did an excellent analysis of this aspect; his findings are here – http://0paste.com/6875#md 7. Finally, blaming North Korea is the easy way out for a number of folks, including the security vendors and Sony management who are under the microscope for this. Let’s face it – most of today’s so-called “cutting edge” security defenses are either so specific, or so brittle, that they really don’t offer much meaningful protection against a sophisticated attacker or group of attackers.
  • 8. It probably also suits a number of political agendas to have something that justifies sabre-rattling at North Korea, which is why I’m not that surprised to see politicians starting to point their fingers at the DPRK also. 9. It’s clear from the leaked data that Sony has a culture which doesn’t take security very seriously. From plaintext password files, to using “password” as the password in business critical certificates, through to just the sheer volume of aging unclassified yet highly sensitive data left out in the open. This isn’t a simple slip-up or a “weak link in the chain” – this is a serious organization-wide failure to implement anything like a reasonable security architecture. (A minimal sketch of scanning for this sort of hygiene failure follows this entry.)
  • The reality is, as things stand, Sony has little choice but to burn everything down and start again. Every password, every key, every certificate is tainted now and that’s a terrifying place for an organization to find itself. This hack should be used as the definitive lesson in why security matters and just how bad things can get if you don’t take it seriously. 10. Who do I think is behind this? My money is on a disgruntled (possibly ex) employee of Sony.
  • EDIT: This appears (at least in part) to be substantiated by a conversation the Verge had with one of the alleged hackers – http://www.theverge.com/2014/11/25/7281097/sony-pictures-hackers-say-they-want-equality-worked-with-staff-to-break-in Finally, for an EXCELLENT blow-by-blow analysis of the breach and the events that followed, read the following post by my friends from Risk Based Security – https://www.riskbasedsecurity.com/2014/12/a-breakdown-and-analysis-of-the-december-2014-sony-hack EDIT: Also make sure you read my good friend Krypt3ia’s post on the hack – http://krypt3ia.wordpress.com/2014/12/18/sony-hack-winners-and-losers/
  •  
    Seems that the FBI overlooked a few clues before it told Obama to go ahead and declare war against North Korea. 
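Point 9 in the excerpt above—plaintext password files and "password" protecting business-critical certificates—describes a failure mode that is straightforward to scan for. The sketch below is a rough illustration of that kind of hygiene audit, not a reconstruction of anything used at Sony; the filename patterns and credential regex are assumptions chosen for the example.

```python
# Rough credential-hygiene scan: flag files whose names suggest plaintext
# password stores, and lines that embed credentials directly.
# Patterns are illustrative assumptions, not a complete audit.
import re
from pathlib import Path

SUSPECT_NAME = re.compile(r"passw(or)?d|credential|secret", re.IGNORECASE)
SUSPECT_LINE = re.compile(r"\b(password|passwd|pwd)\s*[:=]\s*\S+", re.IGNORECASE)

def scan(root):
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if SUSPECT_NAME.search(path.name):
            findings.append((path, "file name suggests a plaintext password store"))
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPECT_LINE.search(line):
                findings.append((path, f"line {lineno} appears to embed a credential"))
    return findings

if __name__ == "__main__":
    for path, reason in scan("."):      # point this at a file share to audit
        print(f"{path}: {reason}")
```

A scan like this obviously cannot prove who carried out the attack, but it shows how little effort it takes to surface the kind of material point 9 says was left lying around.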
Paul Merrell

Beware the Dangers of Congress' Latest Cybersecurity Bill | American Civil Liberties Union - 0 views

  • A new cybersecurity bill poses serious threats to our privacy, gives the government extraordinary powers to silence potential whistleblowers, and exempts these dangerous new powers from transparency laws. The Cybersecurity Information Sharing Act of 2014 ("CISA") was scheduled to be marked up by the Senate Intelligence Committee yesterday but has been delayed until after next week's congressional recess. The response to the proposed legislation from the privacy, civil liberties, tech, and open government communities was quick and unequivocal – this bill must not go through. The bill would create a massive loophole in our existing privacy laws by allowing the government to ask companies for "voluntary" cooperation in sharing information, including the content of our communications, for cybersecurity purposes. But the definition they are using for the so-called "cybersecurity information" is so broad it could sweep up huge amounts of innocent Americans' personal data. The Fourth Amendment protects Americans' personal data and communications from undue government access and monitoring without suspicion of criminal activity. The point of a warrant is to guard that protection. CISA would circumvent the warrant requirement by allowing the government to approach companies directly to collect personal information, including telephonic or internet communications, based on the new broadly drawn definition of "cybersecurity information."
  • While we hope many companies would jealously guard their customers' information, there is a provision in the bill that would excuse sharers from any liability if they act in "good faith" that the sharing was lawful. Collected information could then be used in criminal proceedings, creating a dangerous end-run around laws like the Electronic Communications Privacy Act, which contain warrant requirements. In addition to the threats to every American's privacy, the bill clearly targets potential government whistleblowers. Instead of limiting the use of data collection to protect against actual cybersecurity threats, the bill allows the government to use the data in the investigation and prosecution of people for economic espionage and trade secret violations, and under various provisions of the Espionage Act. It's clear that the law is an attempt to give the government more power to crack down on whistleblowers, or "insider threats," in popular bureaucratic parlance. The Obama Administration has brought more "leaks" prosecutions against government whistleblowers and members of the press than all previous administrations combined. If misused by this or future administrations, CISA could eliminate due process protections for such investigations, which already favor the prosecution.
  • While actively stripping Americans' privacy protections, the bill also cloaks "cybersecurity"-sharing in secrecy by exempting it from critical government transparency protections. It unnecessarily and dangerously provides exemptions from state and local sunshine laws as well as the federal Freedom of Information Act. These are both powerful tools that allow citizens to check government activities and guard against abuse. Edward Snowden's revelations from the past year, of invasive spying programs like PRISM and Stellar Wind, have left Americans shocked and demanding more transparency by government agencies. CISA, however, flies in the face of what the public clearly wants. (Two coalition letters, here and here, sent to key members of the Senate yesterday detail the concerns of a broad coalition of organizations, including the ACLU.)
  •  
    Text of the bill is on Sen. Dianne Feinstein's site, http://goo.gl/2cdsSA It is truly a bummer.
Paul Merrell

Visit the Wrong Website, and the FBI Could End Up in Your Computer | Threat Level | WIRED - 0 views

  • Security experts call it a “drive-by download”: a hacker infiltrates a high-traffic website and then subverts it to deliver malware to every single visitor. It’s one of the most powerful tools in the black hat arsenal, capable of delivering thousands of fresh victims into a hacker’s clutches within minutes. Now the technique is being adopted by a different kind of hacker—the kind with a badge. For the last two years, the FBI has been quietly experimenting with drive-by hacks as a solution to one of law enforcement’s knottiest Internet problems: how to identify and prosecute users of criminal websites hiding behind the powerful Tor anonymity system. The approach has borne fruit—over a dozen alleged users of Tor-based child porn sites are now headed for trial as a result. But it’s also engendering controversy, with charges that the Justice Department has glossed over the bulk-hacking technique when describing it to judges, while concealing its use from defendants. Critics also worry about mission creep, the weakening of a technology relied on by human rights workers and activists, and the potential for innocent parties to wind up infected with government malware because they visited the wrong website. “This is such a big leap, there should have been congressional hearings about this,” says ACLU technologist Chris Soghoian, an expert on law enforcement’s use of hacking tools. “If Congress decides this is a technique that’s perfectly appropriate, maybe that’s OK. But let’s have an informed debate about it.” (A small defensive scanning sketch follows this entry.)
  • The FBI’s use of malware is not new. The bureau calls the method an NIT, for “network investigative technique,” and the FBI has been using it since at least 2002 in cases ranging from computer hacking to bomb threats, child porn to extortion. Depending on the deployment, an NIT can be a bulky full-featured backdoor program that gives the government access to your files, location, web history and webcam for a month at a time, or a slim, fleeting wisp of code that sends the FBI your computer’s name and address, and then evaporates. What’s changed is the way the FBI uses its malware capability, deploying it as a driftnet instead of a fishing line. And the shift is a direct response to Tor, the powerful anonymity system endorsed by Edward Snowden and the State Department alike.
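As a defensive counterpart to the drive-by technique described in the excerpt above, here is a hedged sketch of scanning a page for markup commonly abused to stage drive-by payloads—hidden iframes and scripts loaded from unexpected hosts. The allowlist and heuristics are assumptions for illustration, not a complete detector, and real drive-by campaigns use many other delivery paths.

```python
# Defensive sketch: flag markup patterns commonly used to stage drive-by
# downloads (hidden iframes, scripts from untrusted hosts). The allowlist
# and the sample page are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

TRUSTED_HOSTS = {"example.org", "cdn.example.org"}   # hypothetical allowlist

class DriveByScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "iframe":
            style = (attrs.get("style") or "").replace(" ", "").lower()
            tiny = attrs.get("width") in ("0", "1") or attrs.get("height") in ("0", "1")
            if tiny or "display:none" in style or "visibility:hidden" in style:
                self.findings.append(f"hidden iframe -> {attrs.get('src')}")
        elif tag == "script" and attrs.get("src"):
            host = urlparse(attrs["src"]).hostname or ""
            if host and host not in TRUSTED_HOSTS:
                self.findings.append(f"script loaded from untrusted host -> {host}")

page = """<html><body>
  <script src="https://cdn.example.org/app.js"></script>
  <script src="https://evil.invalid/payload.js"></script>
  <iframe src="https://evil.invalid/exploit" width="1" height="1"></iframe>
</body></html>"""

scanner = DriveByScanner()
scanner.feed(page)
print("\n".join(scanner.findings) or "no suspicious markup found")
```

None of this addresses the legal questions the article raises; it only makes concrete what "subverting a high-traffic website to deliver malware to every visitor" tends to look like in the page source.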