Open Web: Group items tagged "software"

Paul Merrell

Popular Security Software Came Under Relentless NSA and GCHQ Attacks - The Intercept - 0 views

  • The National Security Agency and its British counterpart, Government Communications Headquarters, have worked to subvert anti-virus and other security software in order to track users and infiltrate networks, according to documents from NSA whistleblower Edward Snowden. The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products. British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
  • The efforts to compromise security software were of particular importance because such software is relied upon to defend against an array of digital threats and is typically more trusted by the operating system than other applications, running with elevated privileges that allow more vectors for surveillance and attack. Spy agencies seem to be engaged in a digital game of cat and mouse with anti-virus software companies; the U.S. and U.K. have aggressively probed for weaknesses in software deployed by the companies, which have themselves exposed sophisticated state-sponsored malware.
  • The requested warrant, provided under Section 5 of the U.K.’s 1994 Intelligence Services Act, must be renewed by a government minister every six months. The document published today is a renewal request for a warrant valid from July 7, 2008 until January 7, 2009. The request seeks authorization for GCHQ activities that “involve modifying commercially available software to enable interception, decryption and other related tasks, or ‘reverse engineering’ software.”
  • The NSA, like GCHQ, has studied Kaspersky Lab’s software for weaknesses. In 2008, an NSA research team discovered that Kaspersky software was transmitting sensitive user information back to the company’s servers, which could easily be intercepted and employed to track users, according to a draft of a top-secret report. The information was embedded in “User-Agent” strings included in the headers of Hypertext Transfer Protocol, or HTTP, requests. Such headers are typically sent at the beginning of a web request to identify the type of software and computer issuing the request.
  • According to the draft report, NSA researchers found that the strings could be used to uniquely identify the computing devices belonging to Kaspersky customers. They determined that “Kaspersky User-Agent strings contain encoded versions of the Kaspersky serial numbers and that part of the User-Agent string can be used as a machine identifier.” They also noted that the “User-Agent” strings may contain “information about services contracted for or configurations.” Such data could be used to passively track a computer to determine if a target is running Kaspersky software and thus potentially susceptible to a particular attack without risking detection.
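
A concrete sense of why a cleartext User-Agent string is enough to single out a machine, sketched in Python. The header format below is hypothetical; the article does not disclose Kaspersky's actual encoding, only that serial-derived tokens rode in HTTP headers that anyone on the network path can read.

```python
# Sketch of passive User-Agent fingerprinting as described in the draft
# report. The serial format below is hypothetical; the real Kaspersky
# encoding is not public. Any stable, distinctive token in a cleartext
# header works the same way for an eavesdropper.
import re

def extract_machine_id(raw_request):
    """Return a stable identifier found in a cleartext HTTP request, or None."""
    headers = raw_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    ua = re.search(r"^User-Agent:\s*(.+)$", headers, re.M | re.I)
    if not ua:
        return None
    # Hypothetical: treat a long hex run in the User-Agent as a serial number.
    token = re.search(r"\b[0-9A-Fa-f]{8,}\b", ua.group(1))
    return token.group(0) if token else None

request = (b"GET /updates/check HTTP/1.1\r\n"
           b"Host: updates.example.com\r\n"
           b"User-Agent: ExampleAV/7.0 (serial 00AF31B2C4D6)\r\n"
           b"\r\n")
print(extract_machine_id(request))  # -> 00AF31B2C4D6
```
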
  • Another way the NSA targets foreign anti-virus companies appears to be to monitor their email traffic for reports of new vulnerabilities and malware. A 2010 presentation on “Project CAMBERDADA” shows the content of an email flagging a malware file, which was sent to various anti-virus companies by François Picard of the Montréal-based consulting and web hosting company NewRoma. The presentation of the email suggests that the NSA is reading such messages to discover new flaws in anti-virus software. Picard, contacted by The Intercept, was unaware his email had fallen into the hands of the NSA. He said that he regularly sends out notification of new viruses and malware to anti-virus companies, and that he likely sent the email in question to at least two dozen such outfits. He also said he never sends such notifications to government agencies. “It is strange the NSA would show an email like mine in a presentation,” he added.
  • As government spies have sought to evade anti-virus software, the anti-virus firms themselves have exposed malware created by government spies. Among them, Kaspersky appears to be the sharpest thorn in the side of government hackers. In the past few years, the company has proven to be a prolific hunter of state-sponsored malware, playing a role in the discovery and/or analysis of various pieces of malware reportedly linked to government hackers, including the superviruses Flame, which Kaspersky flagged in 2012; Gauss, also detected in 2012; Stuxnet, discovered by another company in 2010; and Regin, revealed by Symantec. In February, the Russian firm announced its biggest find yet: the “Equation Group,” an organization that has deployed espionage tools widely believed to have been created by the NSA and hidden on hard drives from leading brands, according to Kaspersky. In a report, the company called it “the most advanced threat actor we have seen” and “probably one of the most sophisticated cyber attack groups in the world.”
  • The Project CAMBERDADA presentation lists 23 additional AV companies from all over the world under “More Targets!” Those companies include Check Point Software, a pioneering maker of corporate firewalls based in Israel, whose government is a U.S. ally. Notably omitted are the American anti-virus brands McAfee and Symantec and the British company Sophos.
  • The NSA presentation goes on to state that its signals intelligence yields about 10 new “potentially malicious files per day for malware triage.” This is a tiny fraction of the hostile software that is processed. Kaspersky says it detects 325,000 new malicious files every day, and an internal GCHQ document indicates that its own system “collect[s] around 100,000,000 malware events per day.” After obtaining the files, the NSA analysts “[c]heck Kaspersky AV to see if they continue to let any of these virus files through their Anti-Virus product.” The NSA’s Tailored Access Operations unit “can repurpose the malware,” presumably before the anti-virus software has been updated to defend against the threat.
  • Hacks deployed by the Equation Group operated undetected for as long as 14 to 19 years, burrowing into the hard drive firmware of sensitive computer systems around the world, according to Kaspersky. Governments, militaries, technology companies, nuclear research centers, media outlets and financial institutions in 30 countries were among those reportedly infected. Kaspersky estimates that the Equation Group could have implants in tens of thousands of computers, but documents published last year by The Intercept suggest the NSA was scaling up its implant capabilities to potentially infect millions of computers with malware. Kaspersky’s adversarial relationship with Western intelligence services is sometimes framed in more sinister terms; the firm has been accused of working too closely with the Russian intelligence service FSB. That accusation is partly due to the company’s apparent success in uncovering NSA malware, and partly due to the fact that its founder, Eugene Kaspersky, was educated at a KGB-backed school in the 1980s before working for the Russian military.
  • Kaspersky has repeatedly denied the insinuations and accusations. In a recent blog post, responding to a Bloomberg article, he complained that his company was being subjected to “sensationalist … conspiracy theories,” sarcastically noting that “for some reason they forgot our reports” on an array of malware that trace back to Russian developers. He continued, “It’s very hard for a company with Russian roots to become successful in the U.S., European and other markets. Nobody trusts us — by default.”
  • Documents published with this article: Kaspersky User-Agent Strings — NSA; Project CAMBERDADA — NSA; NDIST — GCHQ’s Developing Cyber Defence Mission; GCHQ Application for Renewal of Warrant GPW/1160; Software Reverse Engineering — GCHQ; Reverse Engineering — GCHQ Wiki; Malware Analysis & Reverse Engineering — ACNO Skill Levels — GCHQ
Gary Edwards

WE'RE BLOWN AWAY: This Startup Could Literally Change The Entire Software Industry - Business Insider - 0 views

  • "Startup Numecent has come out of stealth mode today with some of the most impressive enterprise technology we've seen in a decade. Plus the company is interesting for other reasons, like its business model and its founder. Numecent offers something it calls "cloud paging" and, if successful, it could be a game-changer for enterprise software, video gaming, and smartphone apps. Red Hat thinks so. It has already partnered with the company to help it offer Windows software to Linux users.
    "Cloud paging" instantly "cloudifies" any software, even an operating system like Windows itself, says founder and CEO Osman Kent. It lets any software, with no modification, be delivered from the cloud and run as fast or faster than if the app was on your desktop. Lots of so-called "desktop virtualization" services work fast. But cloud paging can even operate the cloud software if the PC gets disconnected from the network or Internet. It can also turn a smartphone into a server. That means a bunch of devices like tablets can run the software -- like a game -- off of the smartphone. Imagine showing up to a party and letting all your friends play the latest version of Halo from your phone. That's crazy cool.
    Cloudpaging can do all this because it doesn't use "pixel-streaming" technology like other virtualization tech. Instead it temporarily downloads bits of the application itself (instructions) and runs them on the device. It can almost magically predict which parts of the app the user will need, and downloads only those parts.
    For business owners, that's not even the best part. It also helps enterprises sidestep extra licensing fees associated with the cloud. For instance, Microsoft licenses its software by the device, not by the user, and, in many cases, charges a "Virtual Desktop Access" fee for each device using a virtual version of Windows. (For a bit of light reading, check out the Microsoft virtual desktop licensing white paper: PDF) Cloudpaging has what Kent calls "f…"
Paul Merrell

The All Writs Act, Software Licenses, and Why Judges Should Ask More Questions | Just Security - 0 views

  • Pending before federal magistrate judge James Orenstein is the government’s request for an order obligating Apple, Inc. to unlock an iPhone and thereby assist prosecutors in decrypting data the government has seized and is authorized to search pursuant to a warrant. In an order questioning the government’s purported legal basis for this request, the All Writs Act of 1789 (AWA), Judge Orenstein asked Apple for a brief informing the court whether the request would be technically feasible and/or burdensome. After Apple filed, the court asked it to file a brief discussing whether the government had legal grounds under the AWA to compel Apple’s assistance. Apple filed that brief and the government filed a reply brief last week in the lead-up to a hearing this morning.
  • We’ve long been concerned about whether end users own software under the law. Software owners have rights of adaptation and first sale enshrined in copyright law. But software publishers have claimed that end users are merely licensees, and our rights under copyright law can be waived by mass-market end user license agreements, or EULAs. Over the years, Granick has argued that users should retain their rights even if mass-market licenses purport to take them away. The government’s brief takes advantage of Apple’s EULA for iOS to argue that Apple, the software publisher, is responsible for iPhones around the world. Apple’s EULA states that when you buy an iPhone, you’re not buying the iOS software it runs, you’re just licensing it from Apple. The government argues that having designed a passcode feature into a copy of software which it owns and licenses rather than sells, Apple can be compelled under the All Writs Act to bypass the passcode on a defendant’s iPhone pursuant to a search warrant and thereby access the software owned by Apple. Apple’s supplemental brief argues that in defining its users’ contractual rights vis-à-vis Apple with regard to Apple’s intellectual property, Apple in no way waived its own due process rights vis-à-vis the government with regard to users’ devices. Apple’s brief compares this argument to forcing a car manufacturer to “provide law enforcement with access to the vehicle or to alter its functionality at the government’s request” merely because the car contains licensed software. 
  • This is an interesting twist on the decades-long EULA versus users’ rights fight. As far as we know, this is the first time that the government has piggybacked on EULAs to try to compel software companies to provide assistance to law enforcement. Under the government’s interpretation of the All Writs Act, anyone who makes software could be dragooned into assisting the government in investigating users of the software. If the court adopts this view, it would give investigators immense power. The quotidian aspects of our lives increasingly involve software (from our cars to our TVs to our health to our home appliances), and most of that software is arguably licensed, not bought. Conscripting software makers to collect information on us would afford the government access to the most intimate information about us, on the strength of some words in some license agreements that people never read. (And no wonder: The iPhone’s EULA came to over 300 pages when the government filed it as an exhibit to its brief.)
  • The government’s brief does not acknowledge the sweeping implications of its arguments. It tries to portray its requested unlocking order as narrow and modest, because it “would not require Apple to make any changes to its software or hardware, … [or] to introduce any new ability to access data on its phones. It would simply require Apple to use its existing capability to bypass the passcode on a passcode-locked iOS 7 phone[.]” But that undersells the implications of the legal argument the government is making: that anything a company already can do, it could be compelled to do under the All Writs Act in order to assist law enforcement. Were that the law, the blow to users’ trust in their encrypted devices, services, and products would be little different than if Apple and other companies were legally required to design backdoors into their encryption mechanisms (an idea the government just can’t seem to drop, its assurances in this brief notwithstanding). Entities around the world won’t buy security software if its makers cannot be trusted not to hand over their users’ secrets to the US government. That’s what makes the encryption in iOS 8 and later versions, which Apple has told the court it “would not have the technical ability” to bypass, so powerful — and so despised by the government: Because no matter how broadly the All Writs Act extends, no court can compel Apple to do the impossible.
Gary Edwards

The Man Who Makes the Future: Wired Icon Marc Andreessen | Epicenter | Wired.com - 1 views

  • Must read interview. Marc Andreessen explains his five big ideas, taking us from the beginning of the Web, into the Cloud and beyond. Great stuff!
    (1) 1992 - Everyone Will Have the Web
    (2) 1995 - The Browser will be the Operating System
    (3) 1999 - Web business will live in the Cloud
    (4) 2004 - Everything will be Social
    (5) 2009 - Software will Eat the World
    excerpt: Technology is like water; it wants to find its level. So if you hook up your computer to a billion other computers, it just makes sense that a tremendous share of the resources you want to use-not only text or media but processing power too-will be located remotely. People tend to think of the web as a way to get information or perhaps as a place to carry out ecommerce. But really, the web is about accessing applications. Think of each website as an application, and every single click, every single interaction with that site, is an opportunity to be on the very latest version of that application. Once you start thinking in terms of networks, it just doesn't make much sense to prefer local apps, with downloadable, installable code that needs to be constantly updated.

    "We could have built a social element into Mosaic. But back then the Internet was all about anonymity."
    Anderson: Assuming you have enough bandwidth.

    Andreessen: That's the very big if in this equation. If you have infinite network bandwidth, if you have an infinitely fast network, then this is what the technology wants. But we're not yet in a world of infinite speed, so that's why we have mobile apps and PC and Mac software on laptops and phones. That's why there are still Xbox games on discs. That's why everything isn't in the cloud. But eventually the technology wants it all to be up there.

    Anderson: Back in 1995, Netscape began pursuing this vision by enabling the browser to do more.

    Andreessen: We knew that you would need some pro…
Gary Edwards

Google News - 0 views

  • Prepare to be blown away. I viewed a demo of Numecent today and then did some research. There is no doubt in my mind that this is the end of the shrink-wrapped Microsoft business model. It's also perhaps the end of software application design and construction as we know it. Mobile apps in particular will get blasted by the Numecent "Cloud Paging" concept. Extraordinary stuff. I'll leave a few useful links on Diigo "Open Web".
    "Numecent, a company that has a new kind of cloud computing technology that could potentially completely reorganize the way software is delivered and handled - upending the business as we know it - has another big feather in its cap. The company is showing how enterprises can use this technology to instantly put all of their enterprise software in the cloud, without renegotiating contracts and licenses with their software vendors. It signed $3 billion engineering construction company Parsons as a customer. Parsons is using Numecent's tech to deliver 4 million huge computer-aided design (CAD) files to its nearly 12,000 employees around the world. CAD drawings are bigger than video files and they can only be opened and edited by specific CAD apps like AutoCAD.
    Numecent offers a tech called "cloud paging" which instantly "cloudifies" any Windows app. Instead of being installed on a PC, the enterprise setup can deliver the app over the cloud. Unlike similar cloud technologies (called virtualization), this makes the app run faster and continue working even when the Internet connection goes down. "It offers a 95% reduction in download times and 95% in download network usage," CEO Osman Kent told Business Insider. "It makes 8G of memory work like 800G." It also lets enterprises check in and check out software, like a library book, so more PCs can legally share software without violating licensing terms, saving money on software license fees, Kent says. Parsons is using it to let employees share over 700 huge applications such as Au…"
  • Sounds like Microsoft must-buy-or-kill technology.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
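
To make that concrete, here is a minimal sketch of the class-as-semantics idea using Python's third-party lxml library; the "epigraph" class name is our own invention for illustration, not any standard microformat.

```python
# Minimal sketch: use XHTML's class attribute as a semantic hook, then query
# it like structured data. The "epigraph" class name is invented for this
# example; lxml is a third-party package (pip install lxml).
from lxml import etree

XHTML = b"""<html xmlns="http://www.w3.org/1999/xhtml"><body>
  <p class="epigraph">A paragraph carrying extra semantics.</p>
  <p>An ordinary paragraph.</p>
</body></html>"""

NS = {"x": "http://www.w3.org/1999/xhtml"}
doc = etree.fromstring(XHTML)
for el in doc.xpath('//x:p[@class="epigraph"]', namespaces=NS):
    print(el.text)
```
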
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
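
Because the wrapper is ordinary zip, that anatomy is easy to verify with nothing but the standard library; in this sketch, "book.epub" is a placeholder for any ePub file on hand.

```python
# Look inside an ePub with the standard library alone. "book.epub" is a
# placeholder path.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)               # e.g. mimetype, META-INF/container.xml, OEBPS/chapter1.xhtml
    # Per the spec, the first entry is an uncompressed "mimetype" file:
    print(epub.read("mimetype"))  # b'application/epub+zip'
```
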
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
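
A sketch of what such a cleanup pass can look like when scripted rather than done by hand in an editor, again assuming Python with lxml. The class name matches the example above; a production pass would handle many more patterns, including inline markup inside items.

```python
# Sketch of the list-cleanup step: fold consecutive <p class="list-item">
# siblings (a presentation-oriented export pattern) into a proper XHTML <ul>.
# Only item text is copied here; a real pass would copy children too.
from lxml import etree

NS = "http://www.w3.org/1999/xhtml"

def listify(body):
    ul = None
    for child in list(body):
        if child.tag == "{%s}p" % NS and child.get("class") == "list-item":
            if ul is None:
                ul = etree.Element("{%s}ul" % NS)
                child.addprevious(ul)          # place the list before the run
            li = etree.SubElement(ul, "{%s}li" % NS)
            li.text = child.text
            body.remove(child)
        else:
            ul = None                          # a non-item ends the current run

doc = etree.fromstring(
    b'<body xmlns="http://www.w3.org/1999/xhtml">'
    b'<p class="list-item">one</p><p class="list-item">two</p><p>prose</p></body>')
listify(doc)
print(etree.tostring(doc, pretty_print=True).decode())
```
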
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
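
The script itself is linked from the article; as a stand-in, this toy version runs a two-template stylesheet through lxml's XSLT engine. The output imitates ICML's ParagraphStyleRange/Content shape but omits the namespace and packaging boilerplate a real ICML file needs.

```python
# Toy XHTML-to-ICML-shaped transform via lxml's XSLT engine. The real SFU
# script is ~500 lines; this only shows the mechanics of the step.
from lxml import etree

XSLT = etree.XSLT(etree.fromstring(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">
  <xsl:template match="/">
    <Story><xsl:apply-templates select="//x:p"/></Story>
  </xsl:template>
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <Content><xsl:value-of select="."/></Content>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>"""))

xhtml = etree.fromstring(b'<html xmlns="http://www.w3.org/1999/xhtml">'
                         b'<body><p>Hello, print.</p></body></html>')
print(etree.tostring(XSLT(xhtml), pretty_print=True).decode())
```

The same stylesheet could equally be run with xsltproc from the command line, as the prototype described above did.
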
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
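
Done by hand, the wrapping step a tool like eCub automates looks roughly like this sketch. It writes only the skeleton; a conforming ePub additionally needs the content.opf manifest and a table-of-contents file.

```python
# Build a minimal ePub skeleton with the standard library. Skeleton only:
# a conforming file also needs OEBPS/content.opf and a TOC, which eCub
# generates automatically.
import zipfile

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    epub.writestr("OEBPS/chapter1.xhtml",
                  '<html xmlns="http://www.w3.org/1999/xhtml">'
                  '<body><p>Chapter one.</p></body></html>')
```
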
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS.
    Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language.
    As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

Google's uProxy could help fight Internet censorship - 0 views

  • "At its Ideas Summit in New York, Google has announced that it is working on developing a browser extension that will act as an easy-to-use way to bypass country-specific Internet censorship and make connections safer and more private.
    Safer connections: The tool, which was developed by the University of Washington and seeded by Google, is at its core a peer-to-peer personalized virtual private network (VPN) that redirects Internet traffic coming from an initial, less secure connection through a second, trusted connection, and then encrypts the pathway between the two terminals. Whenever you access the Internet, the connection is routed through a number of terminals. At each step of the way the connection may be blocked, surveilled, or even tampered with (especially if the data is not encrypted). On the whole, the safety and privacy of your data is only as good as the weakest link in the chain. Google's solution with uProxy was to develop a tool that makes it much easier to make an unsafe connection more secure, with the help of a trusted friend. The software, which will be available as a Chrome and Firefox extension to begin with, can use existing social networks like Facebook or Google Hangouts to help find users who already have uProxy installed on their system. If two users agree to use the service in tandem, the software can begin to make data connections safer.
    How it works: Let's assume that Alice, who lives in a country with an Internet censorship problem such as China or Iran, contacts Bob, who has much safer, or uncensored, or unmonitored access to the Internet. Bob agrees to act as a proxy for Alice, and as long as his browser is open, Alice's outgoing web traffic will now be routed through Bob's connection, and so she'll now be able to access websites that she wouldn't otherwise be able to reach on her own. The connection between Alice and Bob is also encrypted. To an external observer looking at Bob's connection, it would appear that he is simply s…"
Gary Edwards

The Advantage of Cloud Infrastructure: Servers are Software - ReadWriteCloud - 0 views

  • Excellent discussion and capture of the importance of Cloud computing! Guest author Joe Masters Emison, VP of research and development at BuildFax, writes for ReadWriteWeb:
    excerpt: More and more companies are moving from traditional servers to virtual servers in the cloud, and many new service-based deployments are starting in the cloud. However, despite the overwhelming popularity of the cloud here, deployments in the cloud look a lot like deployments on traditional servers. Companies are not changing their systems architecture to take advantage of some of the unique aspects of being in the cloud. The key difference between remotely-hosted, virtualized, on-demand-by-API servers (the definition of the "cloud" for this post) and any other hardware-based deployment (e.g., dedicated, co-located, or not-on-demand-by-API virtualized servers) is that servers are software on the cloud. Software applications traditionally differ from server environments in several key ways:
    - Traditional servers require humans and hours-if not days-to launch; software launches automatically and on demand in seconds or minutes.
    - Traditional servers are physically limited-companies have a finite number available to them; software, as a virtual/information resource, has no such physical limitation.
    - Traditional servers are designed to serve many functions (often because of the above-mentioned physical limitations); software is generally designed to serve a single function.
    - Traditional servers are not designed to be discarded; software is built around the idea that it runs ephemerally and can be terminated at any moment.
    On the cloud, these differences can disappear.
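
In practice, the "servers are software" point reduces to this: launching or discarding a machine becomes a single API call. The endpoint and payload below are hypothetical rather than any particular provider's API; they only illustrate the on-demand-by-API property the author defines.

```python
# "Servers are software": with an on-demand-by-API cloud, a machine's whole
# lifecycle is a pair of function calls. The endpoint, token, and payload
# are hypothetical, not any real provider's API.
import requests

API = "https://cloud.example.com/v1"
AUTH = {"Authorization": "Bearer YOUR-TOKEN"}

def launch_server(image, size):
    r = requests.post(f"{API}/servers", headers=AUTH,
                      json={"image": image, "size": size})
    r.raise_for_status()
    return r.json()["id"]        # ready in seconds or minutes, not days

def terminate_server(server_id):
    requests.delete(f"{API}/servers/{server_id}", headers=AUTH).raise_for_status()

# Single-purpose and disposable: spin up, do one job, throw away.
sid = launch_server(image="worker-v42", size="small")
terminate_server(sid)
```
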
Gary Edwards

How To Win The Cloud Wars - Forbes - 0 views

  • Byron Deeter is right, but perhaps he's holding back on his reasoning. Silicon Valley is all about platform, and platform plays only come about once every ten to twenty years. They come like great waves of change, not replacing the previous waves as much as taking away and running with the future.
    Cloud Computing is the fourth great wave. It will replace the PC and Network Computing waves as the future. It is the target of all developers and entrepreneurs. The four great waves are mainframe, workstation, pc and networked pc, and the Internet. Cloud Computing takes the Internet to such a high level of functionality that it will now replace the pc-networking wave. It's going to be enormous. Especially as enterprises move their business productivity and data / content apps from the desktop/workgroup to the Cloud. Enormous.
    The key was the perfect storm of 2008, where mobility (iPhone) converged with the standardization of tagged PDF, which converged with the Cloud Computing application and data model, which all happened at the time of the great financial collapse.
    The financial collapse of 2008 caused a tectonic shift in productivity. Survival meant doing more with less. Particularly less labor, since cost of labor was and continues to be a great uncertainty. But that's also the definition of productivity and automation. To survive, companies were compelled to reduce labor and invest in software/hardware systems based productivity. The great leap to a new platform had its fuel; survival.
    Social applications and services are just the simplest manifestation of productivity through managed connectivity in the Cloud. Wait until this new breed of productivity reaches business apps! The platform wars have begun, and it's for all the marbles.
    One last thought. The Internet was always going to win as the next computing platform wave. It's the first time communications have been combined and integrated into content, and vast dat…
Paul Merrell

Leaked docs show spyware used to snoop on US computers | Ars Technica - 0 views

  • Software created by the controversial UK-based Gamma Group International was used to spy on computers that appear to be located in the United States, the UK, Germany, Russia, Iran, and Bahrain, according to a leaked trove of documents analyzed by ProPublica. It's not clear whether the surveillance was conducted by governments or private entities. Customer e-mail addresses in the collection appeared to belong to a German surveillance company, an independent consultant in Dubai, the Bosnian and Hungarian Intelligence services, a Dutch law enforcement officer, and the Qatari government.
  • The leaked files—which were posted online by hackers—are the latest in a series of revelations about how state actors including repressive regimes have used Gamma's software to spy on dissidents, journalists, and activist groups. The documents, leaked last Saturday, could not be readily verified, but experts told ProPublica they believed them to be genuine. "I think it's highly unlikely that it's a fake," said Morgan Marquis-Boire, a security researcher who, while at The Citizen Lab at the University of Toronto, had analyzed Gamma Group's software and who authored an article about the leak on Thursday. The documents confirm many details that have already been reported about Gamma, such as that its tools were used to spy on Bahraini activists. Some documents in the trove contain metadata tied to e-mail addresses of several Gamma employees. Bill Marczak, another Gamma Group expert at the Citizen Lab, said that several dates in the documents correspond to publicly known events—such as the day that a particular Bahraini activist was hacked.
  • The leaked files contain more than 40 gigabytes of confidential technical material, including software code, internal memos, strategy reports, and user guides on how to use the Gamma Group software suite called FinFisher. FinFisher enables customers to monitor secure Web traffic, Skype calls, webcams, and personal files. It is installed as malware on targets' computers and cell phones. A price list included in the trove lists a license of the software at almost $4 million. The documents reveal that Gamma uses technology from a French company called Vupen Security that sells so-called computer "exploits." Exploits include techniques called "zero days" for "popular software like Microsoft Office, Internet Explorer, Adobe Acrobat Reader, and many more." Zero days are exploits that have not yet been detected by the software maker and therefore are not blocked.
  • ...2 more annotations...
  • Many of Gamma's product brochures have previously been published by the Wall Street Journal and Wikileaks, but the latest trove shows how the products are getting more sophisticated. In one document, engineers at Gamma tested a product called FinSpy, which inserts malware onto a user's machine, and found that it could not be blocked by most antivirus software. Documents also reveal that Gamma had been working to bypass encryption tools including a mobile phone encryption app, Silent Circle, and were able to bypass the protection given by hard-drive encryption products TrueCrypt and Microsoft's Bitlocker.
  • The documents also describe a "country-wide" surveillance product called FinFly ISP which promises customers the ability to intercept Internet traffic and masquerade as ordinary websites in order to install malware on a target's computer. The most recent date-stamp found in the documents is August 2, coinciding with the first tweet by a parody Twitter account, @GammaGroupPR, which first announced the hack and may be run by the hacker or hackers responsible for the leak. On Reddit, a user called PhineasFisher claimed responsibility for the leak. "Two years ago their software was found being widely used by governments in the middle east, especially Bahrain, to hack and spy on the computers and phones of journalists and dissidents," the user wrote. The name on the @GammaGroupPR Twitter account is also "Phineas Fisher." GammaGroup, the surveillance company whose documents were released, is no stranger to the spotlight. The security firm F-Secure first reported the purchase of FinFisher software by the Egyptian State Security agency in 2011. In 2012, Bloomberg News and The Citizen Lab showed how the company's malware was used to target activists in Bahrain. In 2013, the software company Mozilla sent a cease-and-desist letter to the company after a report by The Citizen Lab showed that a spyware-infected version of the Firefox browser manufactured by Gamma was being used to spy on Malaysian activists.
Paul Merrell

German Parliament Says No More Software Patents | Electronic Frontier Foundation - 0 views

  • Note that an unofficial translation of the parliamentary motion is linked from the article. This adds substantially to the pressure internationally to end software patents, because Germany has been the strongest defender of software patents in Europe. The same legal grounds would not apply in the U.S. The strongest argument for non-patentability in the U.S., in my opinion, is that software patents embody both prior art and obviousness. A general purpose computer can accomplish nothing unforeseen by the prior art of the computing device. And it is impossible for software to do more than cause different sequences of bit register states to be executed. This is the province of "skilled artisans" using known methods to produce predictable results. There is a long line of Supreme Court decisions holding that an "invention" with such traits is non-patentable. I have summarized that argument with citations at .
Gary Edwards

OpenStack Open Source Cloud Computing Software - 0 views

  • OpenStack: The 5-minute Overview
    What the software does: The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.
    Why open matters: All of the code for OpenStack is freely available under the Apache 2.0 license. Anyone can run it, build on it, or submit changes back to the project. We strongly believe that an open development model is the only way to foster badly-needed cloud standards, remove the fear of proprietary lock-in for cloud customers, and create a large ecosystem that spans cloud providers.
    Who it's for: Institutions and service providers with physical hardware that they'd like to use for large-scale cloud deployments. (Additionally, companies who have specific requirements that prevent them from running in a public cloud.)
    How it's being used today: Organizations like Rackspace Hosting and NASA are using OpenStack technologies to manage tens of thousands of compute instances and petabytes of storage.
    Timeline: OpenStack was announced July 19th, 2010. While many components of OpenStack have been used in production for years, we are in the very early stages of our efforts to offer these technologies broadly as open source software. Early code is now available on LaunchPad, with an initial release for OpenStack Storage expected in mid-September and an initial release for OpenStack Compute expected in mid-October.
Gary Edwards

WhiteHat Aviator - The most secure browser online - 1 views

  • "FREQUENTLY ASKED QUESTIONS
    What is WhiteHat Aviator? WhiteHat Aviator is the most secure, most private Web browser available anywhere. By default, it provides an easy way to bank, shop, and use social networks while stopping viruses from infecting computers, preventing accounts from being hacked, and blocking advertisers from invisibly spying on every click.
    Why do I need a secure Web browser? According to CA Technologies, 84 percent of hacker attacks in 2009 took advantage of vulnerabilities in Web browsers. Similarly, Symantec found that four of the top five vulnerabilities being exploited were client-side vulnerabilities that were frequently targeted by Web-based attacks. The fact is that when you visit any website you run the risk of having your surfing history, passwords, real name, workplace, home address, phone number, email, gender, political affiliation, sexual preferences, income bracket, education level, and medical history stolen - and your computer infected with viruses. Sadly, this happens on millions of websites every day. Before you have any chance at protecting yourself, other browsers force you to follow complicated how-to guides, modify settings that only serve advertising empires and install obscure third-party software.
    What makes WhiteHat Aviator so secure? WhiteHat Aviator is built on Chromium, the same open-source foundation used by Google Chrome. Chromium has several unique, powerful security features. One is a "sandbox" that prevents websites from stealing files off your computer or infecting it with viruses. As good as Chromium is, we went much further to create the safest online experience possible. WhiteHat Aviator comes ready-to-go with hardened security and privacy settings, giving hackers less to work with. And our browser downloads to you - without any hidden user-tracking functionality. Our default search engine is DuckDuckGo - not Google, which logs your activity. For good measure, Aviator integrates Disconnect…"
Gary Edwards

Microsoft, Apple, Oracle, EMC Consortium Plan Withdrawn - PCWorld - 0 views

  •  
    Early in December, Microsoft, Apple, EMC and Oracle notified the German regulator that they planned to form CPTN Holdings with a view to purchasing 882 of Novell's patents. But the filing was withdrawn (Rücknahme) on Dec. 30. German authorities gave no reason for the withdrawal, but it was likely voluntary, as they would not yet have had time to investigate the proposal. However, in recent weeks the German Federal Cartel Office has received letters and recommendations from various open-source organizations, including the U.S.-based Open Source Initiative (OSI) and the Free Software Foundation Europe (FSFE). These open-source advocates are extremely alarmed that patents with claims on some elements of open-source software could fall into the hands of companies that compete with that software. Given Novell's past involvement in free software development, it seems very likely that at least some of the company's patents would cover free software technologies.
Gary Edwards

Box.net looks to keep it simple with new version of cloud storage software | VentureBeat - 0 views

  •  
    Enterprise cloud storage provider Box.net is launching a new version of its software that includes a front-facing interface built from scratch and additional mobile features, the company announced today. The new Box.net interface is a mash-up of micro-blogging activity streams like FriendFeed and online storage like Dropbox. Box users can drag and drop files from their computer directly onto the site to send files into cloud storage. There are also folders synced directly with the cloud, as in Dropbox, that automatically update files as they are changed. Users can preview those files directly within Box.net - and the software supports a lot of file formats. Box developer Kim Lockhart showed off the capabilities by opening up Adobe Illustrator files within the web interface and previewing other files from Photoshop and the like. Whenever any file is viewed, commented on or changed, Box.net users get an update on their activity feed. "This basically kills the software problem," Lockhart said. "You can view files like Illustrator files and pretty much anything else as we move forward without ever having to have the software." The idea was to remake the front-facing application from scratch because it had become too complicated, with too many features. Box.net released a new update just about every week last year, and the steady accretion of features was cluttering the service and making it too complicated for some end users, said Box.net CEO Aaron Levie. While Box is mainly focused on the enterprise, Levie said it has plenty of potential in the consumer space, competing with cloud storage providers like Dropbox.
Gary Edwards

13 Free Software Alternatives to Save You Money: Coupon Shoebox - 2 views

  •  
    Good list of essential software apps. One of the ways you can save a little more money is to look for free alternatives to paid software products. Outfitting your computer with the applications you need can become expensive. The good news is that there are free options that can help you accomplish a number of tasks. Here are some thoughts on free software alternatives.
Gary Edwards

Nebula Builds a Cloud Computer for the Masses - Businessweek - 0 views

  •  
    Fascinating story about Chris Kemp of OpenStack fame and his recent effort to commoditize cloud computing hardware/software systems with Nebula. Excerpt: "Though it doesn't look like much (about the size of a four-inch-tall pizza box), Nebula One is the product of dozens of engineers working for two years in secrecy in Mountain View, Calif. It has attracted the attention of some of Silicon Valley's top investors. The three billionaires who made the first investment in Google (Andy Bechtolsheim, David Cheriton, and Ram Shriram) joined forces again to back Nebula One, betting that its technology will invite a dramatic shift in corporate computing that outflanks the titans of the industry. "This is an example of where traditional technology companies have failed the market," says Bechtolsheim, a co-founder of Sun Microsystems (ORCL) and famed hardware engineer. Kleiner Perkins Caufield & Byers, Comcast Ventures, and Highland Capital Partners have also backed Kemp's startup, itself called Nebula, which has raised more than $30 million. The origins of Nebula One go back to Kemp's days at NASA, which he joined in 2006 as director of strategic business development. In 2007, he became a chief information officer, making him, at 29, the youngest senior executive in the U.S. government. In 2010, he became NASA's chief technology officer. Kemp spent much of his time at NASA developing more efficient data centers for the agency's various computing efforts. He and a team of engineers built the early parts of what is now known as OpenStack, software that makes it possible to control an entire data center as one computer. To see if other companies could take the idea further, Kemp made the software open source. Big players such as AT&T (T), Hewlett-Packard, IBM, and Rackspace Hosting (RAX) have since incorporated OpenStack into the cloud computing services they sell customers. Kemp had an additional idea: He wanted to use OpenStack as a way to give every company its
Gary Edwards

Microsoft Office to get a dose of OpenDocument - CNET News - 0 views

  •  
    While trying to help a friend understand the issues involved in exchanging MS Office documents between the many different versions of MS Office, I stumbled on this oldie but goodie: "A group of software developers have created a program to make Microsoft Office work with files in the OpenDocument format, a move that would bridge currently incompatible desktop applications. Gary Edwards, an engineer involved in the open-source OpenOffice.org project and founder of the OpenDocument Foundation, on Thursday discussed the software plug-in on the Web site Groklaw. The new program, which has been under development for about a year and finished initial testing last week, is designed to let Microsoft Office manipulate OpenDocument format (ODF) files, Edwards said. "The ODF Plugin installs on the file menu as a natural and transparent part of the 'open,' 'save,' and 'save as' sequences. As far as end users and other application add-ons are concerned, ODF Plugin renders ODF documents as if (they) were native to MS Office," according to Edwards. If the software, which is not yet available, works as described, it will be a significant twist to an ongoing contest between Microsoft and the backers of OpenDocument, a document format gaining more interest lately, particularly among governments. Microsoft will not natively support OpenDocument in Office 2007, which will come out later this year. Company executives have said that there is not sufficient demand and that OpenDocument is less functional than Microsoft's own Office formats. Having a third-party product to save OpenDocument files from Office could give OpenDocument-based products a bump in the marketplace, said Stephen O'Grady, a RedMonk analyst. OpenDocument is the native format for the OpenOffice open-source desktop productivity suite and is supported in others, including KOffice, Sun Microsystems' StarOffice and IBM's Workplace. "To the extent that you get people authoring documents in a format that is natively compatible with
Paul Merrell

Long-Secret Stingray Manuals Detail How Police Can Spy on Phones - 0 views

  • Harris Corp.’s Stingray surveillance device has been one of the most closely guarded secrets in law enforcement for more than 15 years. The company and its police clients across the United States have fought to keep information about the mobile phone-monitoring boxes from the very public against whom they are used. The Intercept has obtained several Harris instruction manuals spanning roughly 200 pages and meticulously detailing how to create a cellular surveillance dragnet. Harris has fought to keep its surveillance equipment, which carries price tags in the low six figures, hidden from both privacy activists and the general public, arguing that information about the gear could help criminals. Accordingly, an older Stingray manual released under the Freedom of Information Act to news website TheBlot.com last year was almost completely redacted. So too have law enforcement agencies at every level, across the country, evaded almost all attempts to learn how and why these extremely powerful tools are being used - though court battles have made it clear Stingrays are often deployed without any warrant. The San Bernardino Sheriff’s Department alone has snooped via Stingray, sans warrant, over 300 times.
  • The documents described and linked below, instruction manuals for the software used by Stingray operators, were provided to The Intercept as part of a larger cache believed to have originated with the Florida Department of Law Enforcement. Two of them contain a “distribution warning” saying they contain “Proprietary Information and the release of this document and the information contained herein is prohibited to the fullest extent allowable by law.” Although “Stingray” has become a catch-all name for devices of its kind, often referred to as “IMSI catchers,” the manuals include instructions for a range of other Harris surveillance boxes, including the Hailstorm, ArrowHead, AmberJack, and KingFish. They make clear the capability of those devices and the Stingray II to spy on cellphones by, at minimum, tracking their connection to the simulated tower, information about their location, and certain “over the air” electronic messages sent to and from them. Nathan Wessler, a staff attorney with the ACLU, added that parts of the manuals make specific reference to permanently storing this data, something that American law enforcement has denied doing in the past.
  • One piece of Windows software used to control Harris’s spy boxes, software that appears to be sold under the name “Gemini,” allows police to track phones across 2G, 3G, and LTE networks. Another Harris app, “iDen Controller,” provides a litany of fine-grained options for tracking phones. A law enforcement agent using these pieces of software along with Harris hardware could not only track a large number of phones as they moved throughout a city but could also apply nicknames to certain phones to keep track of them in the future. The manual describing how to operate iDEN, the lengthiest document of the four at 156 pages, uses an example of a target (called a “subscriber”) tagged alternately as Green Boy and Green Ben.
  • In order to maintain an uninterrupted connection to a target’s phone, the Harris software also offers the option of intentionally degrading (or “redirecting”) someone’s phone onto an inferior network, for example, knocking a connection from LTE to 2G.
  • A video of the Gemini software installed on a personal computer, obtained by The Intercept and embedded below, provides not only an extensive demonstration of the app but also underlines how accessible the mass surveillance code can be: Installing a complete warrantless surveillance suite is no more complicated than installing Skype. Indeed, software such as Photoshop or Microsoft Office, which requires a registration key or some other proof of ownership, is more strictly controlled by its makers than software designed for cellular interception.
Gary Edwards

The GPL Does Not Depend on the Copyrightability of APIs | Public Knowledge - 0 views

  •  
    Excellent legal piece explaining the options and methods by which software programs use licensed, copyrighted third-party libraries through an API. Finally, some clear thinking about Google Android and the Oracle Java lawsuit. (A minimal code sketch follows the excerpt below.)
    excerpt: Another option for a developer is to do what Google did when it created Android, and create replacement code libraries that are compatible with the existing code libraries, but which are new copyrighted works. Being "compatible" in this context means that the new libraries are called in the same way that the old libraries are--that is, using the same APIs. But the actual copyrighted code that is being called is a new work. As long as the new developer didn't actually copy code from the original libraries, the new libraries are not infringing. It does not infringe on the copyright of a piece of software to create a new piece of software that works the same way; copyright protects the actual expression (lines of code) but not the functionality of a program. The functionality of a program is protected by patent, or not at all.
    In the Oracle/Google case, no one is arguing that code libraries themselves are not copyrightable. Of course they are, and this is why the Google/Oracle dispute has no bearing on the enforceability of the GPL. Instead, the argument is about whether the method of using a code library, the APIs, is subject to a copyright that is independent of the copyright of the code itself. Even if the argument that APIs are not copyrightable prevails, programs that are created by statically linking GPL'd code libraries will still be considered derivative works of the code libraries and will still have to be released under the GPL.
    Though irrelevant to the enforceability of the GPL, the Oracle/Google dispute is still interesting. Oracle is claiming that Google, by creating compatible, replacement code libraries that are "called" in the same way as Oracle's code libraries (that is, using the same APIs), infringed
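    To make the "compatible replacement library" idea in the excerpt concrete, here is a minimal sketch in Java, the language at issue in the Oracle/Google dispute. The class and method names are invented for illustration and come from no real library; the point is that the declaration (the API) matches what existing callers expect, while the method body is independently written expression.

        // Hypothetical API that existing callers depend on (illustrative only):
        //     Text.repeat("ab", 3)  ->  "ababab"
        // A clean-room replacement keeps the same declaration so those call
        // sites compile and run unchanged, but the method body is newly
        // authored expression carrying its own copyright.
        public final class Text {
            private Text() {} // utility class, no instances

            // Same signature as the "original" library's method (the API);
            // independently written implementation (the expression).
            public static String repeat(String s, int n) {
                if (n < 0) throw new IllegalArgumentException("n must be non-negative");
                StringBuilder sb = new StringBuilder(s.length() * n);
                for (int i = 0; i < n; i++) {
                    sb.append(s);
                }
                return sb.toString();
            }
        }

    Whether the declaration line alone carries a copyright separate from the body is precisely the question the post says the Oracle/Google dispute turns on; as the excerpt argues, the GPL's reach over statically linked code does not depend on the answer.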