
Home/ Open Web/ Group items tagged computer


Gary Edwards

Office Productivity Software Is No Closer To Becoming A Commodity | Forrester Blogs

  • We just published a report on the state of adoption of Office 2013 And Productivity Suite Alternatives based on a survey of 155 Forrester clients with responsibility for those investments. The sample does not fully represent the market, but lets us draw comparisons to the results of our previous survey in 2011. Some key takeaways from the data:
    • One in five firms uses email in the cloud. Another quarter plans to move at some point. More are using Office 365 (14%) than Google Apps (9%).
    • Just 22% of respondents are on Office 2013. Another 36% have plans to be on it. Office 2013's uptake will be slower than Office 2010 because fewer firms plan to combine the rollout of Office 2013 with Windows 8 as they combined Office 2010 with Windows 7.
    • Alternatives to Microsoft Office show little traction. In 2011, 13% of respondents supported open source alternatives to Office. This year the number is just 5%. Google Docs has slightly higher adoption and is in use at 13% of companies.
  • Microsoft continues to have a stranglehold on office productivity in the enterprise: Just 6% of companies in our survey give all or some employees an alternative instead of the installed version of Microsoft Office. Most surprising of all, multi-platform support is NOT a priority. Apps on iOS and Android devices were important to 16% of respondents, and support for non-Windows PCs was important to only 11%. For now, most technology decision-makers seem satisfied with leaving employees to self-provision office productivity apps on their smartphones and tablets if they really want them. 
  • Do you think we're getting closer to replacing Microsoft Office in the workplace?
Gary Edwards

Microsoft Office Web Apps vs. Google Docs | CIO.com

  •  
    excellent comparison!
Paul Merrell

Wikimedia and Twitter Bots Are Breaking the News | Motherboard

  • We already knew that bots were writing news content, automating narrative stories from data-rich topics like sports scores and financial markets. Now, robo-reporters are starting to get scoops. They're not just writing stories; they're breaking them. Thomas Steiner, a Google engineer in Germany, designed an algorithm that covers the news as it's breaking by monitoring activity on Wikipedia (old school journalists everywhere are wincing) and watching for spikes in editing activity. The idea is that if something big is happening—especially if it’s a global event—multiple editors around the world will be updating Wikipedia and Wikidata pages at once, in different languages. That spike in activity tips off the bot to the story. According to Steiner, his news bot spotted major stories like the Boston Marathon bombing and the disappearance of Malaysia Airlines MH370.
  • The bare-bones site tracking real-time editing is called Wikipedia Live Monitor. It was first created last year, and now Steiner has extended his robo-news operation to Twitter. The bot mines the social media site for a particular search term triggered by the Wikipedia activity and pulls out all relevant photos to illustrate the story.
  • You can check out the visual news events on the Twitter bot account @mediagalleries. The earliest are from a case study Steiner did to test out the program during the Olympics in Sochi. More recently, there are galleries illustrating major sports events, and the latest updates to flight MH370 and the conflict in Crimea.
  • ...2 more annotations...
  • You can see it's still a rudimentary process, hardly about to put the staff of the New York Times out of business. But it says a lot about the direction automating the news is heading in.
  • Still, the Fourth Estate is one of the more disconcerting industries being taken over by robots, and not just because it’s my own livelihood. And it’s more common than you think; Kristian Hammond, cofounder of Narrative Science, a company that's been automating content for several years now, predicted that 90 percent of the news could be written by computers by 2030.
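The detection heuristic described above (concurrent edits across several language editions signal a breaking story) can be sketched in a few lines. This is a hypothetical illustration, not Steiner's actual implementation; the class name, thresholds, and window size are all invented for the example.

```python
from collections import defaultdict, deque

# Illustrative thresholds -- not taken from Wikipedia Live Monitor.
WINDOW_SECONDS = 300   # only consider edits from the last 5 minutes
MIN_EDITS = 5          # minimum edits in the window to count as a spike
MIN_LANGUAGES = 3      # ...spread across at least this many language editions

class SpikeDetector:
    def __init__(self):
        # topic -> deque of (timestamp, language) edit events
        self.edits = defaultdict(deque)

    def record_edit(self, topic, language, timestamp):
        """Record one edit; return True if the topic now looks like breaking news."""
        window = self.edits[topic]
        window.append((timestamp, language))
        # Discard events that have aged out of the sliding window.
        while window and timestamp - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        languages = {lang for _, lang in window}
        return len(window) >= MIN_EDITS and len(languages) >= MIN_LANGUAGES
```

A burst of edits in one language alone never trips the detector; it is the cross-language agreement that marks a global event.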
Paul Merrell

Exclusive: How FBI Informant Sabu Helped Anonymous Hack Brazil | Motherboard

  • In early 2012, members of the hacking collective Anonymous carried out a series of cyber attacks on government and corporate websites in Brazil. They did so under the direction of a hacker who, unbeknownst to them, was wearing another hat: helping the Federal Bureau of Investigation carry out one of its biggest cybercrime investigations to date. A year after leaked files exposed the National Security Agency's efforts to spy on citizens and companies in Brazil, previously unpublished chat logs obtained by Motherboard reveal that while under the FBI's supervision, Hector Xavier Monsegur, widely known by his online persona, "Sabu," facilitated attacks that affected Brazilian websites. The operation raises questions about how the FBI uses global internet vulnerabilities during cybercrime investigations, how it works with informants, and how it shares information with other police and intelligence agencies. 
  • After his arrest in mid-2011, Monsegur continued to organize cyber attacks while working for the FBI. According to documents and interviews, Monsegur passed targets and exploits to hackers to disrupt government and corporate servers in Brazil and several other countries. Details about his work as a federal informant have been kept mostly secret, aired only in closed-door hearings and in redacted documents that include chat logs between Monsegur and other hackers. The chat logs remain under seal due to a protective order upheld in court, but in April, they and other court documents were obtained by journalists at Motherboard and the Daily Dot. 
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • ...8 more annotations...
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is moving to fully encrypt all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The data is protected by encryption both when it sits in data centers and when it is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-;-)”
Paul Merrell

Hacking Online Polls and Other Ways British Spies Seek to Control the Internet - The In...

  • The secretive British spy agency GCHQ has developed covert tools to seed the internet with false information, including the ability to manipulate the results of online polls, artificially inflate pageview counts on web sites, “amplif[y]” sanctioned messages on YouTube, and censor video content judged to be “extremist.” The capabilities, detailed in documents provided by NSA whistleblower Edward Snowden, even include an old standby for pre-adolescent prank callers everywhere: A way to connect two unsuspecting phone users together in a call.
  • The “tools” have been assigned boastful code names. They include invasive methods for online surveillance, as well as some of the very techniques that the U.S. and U.K. have harshly prosecuted young online activists for employing, including “distributed denial of service” attacks and “call bombing.” But they also describe previously unknown tactics for manipulating and distorting online political discourse and disseminating state propaganda, as well as the apparent ability to actively monitor Skype users in real-time—raising further questions about the extent of Microsoft’s cooperation with spy agencies or potential vulnerabilities in Skype’s encryption. Here’s a list of how JTRIG describes its capabilities:
    • “Change outcome of online polls” (UNDERPASS)
    • “Mass delivery of email messaging to support an Information Operations campaign” (BADGER) and “mass delivery of SMS messages to support an Information Operations campaign” (WARPATH)
    • “Disruption of video-based websites hosting extremist content through concerted target discovery and content removal.” (SILVERLORD)
    • “Active skype capability. Provision of real time call records (SkypeOut and SkypetoSkype) and bidirectional instant messaging. Also contact lists.” (MINIATURE HERO)
    • “Find private photographs of targets on Facebook” (SPRING BISHOP)
    • “A tool that will permanently disable a target’s account on their computer” (ANGRY PIRATE)
    • “Ability to artificially increase traffic to a website” (GATEWAY) and “ability to inflate page views on websites” (SLIPSTREAM)
    • “Amplification of a given message, normally video, on popular multimedia websites (Youtube)” (GESTATOR)
    • “Targeted Denial Of Service against Web Servers” (PREDATORS FACE) and “Distributed denial of service using P2P. Built by ICTR, deployed by JTRIG” (ROLLING THUNDER)
  • ...1 more annotation...
    • “A suite of tools for monitoring target use of the UK auction site eBay (www.ebay.co.uk)” (ELATE)
    • “Ability to spoof any email address and send email under that identity” (CHANGELING)
    • “For connecting two target phone together in a call” (IMPERIAL BARGE)
  • While some of the tactics are described as “in development,” JTRIG touts “most” of them as “fully operational, tested and reliable.” It adds: “We only advertise tools here that are either ready to fire or very close to being ready.”
Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
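As a hypothetical illustration of that class-attribute extensibility (the class names below are invented for the example, not a published microformat), a publisher could layer its own semantics onto plain XHTML like so:

```
<!-- Illustrative only: "epigraph" and "attribution" are
     publisher-defined classes, interpreted by the publisher's
     own stylesheets and transformation scripts. -->
<div class="chapter">
  <p class="epigraph">Everything should be made as simple as possible.</p>
  <p class="attribution">attributed to Albert Einstein</p>
  <p>Ordinary body text needs no special class at all.</p>
</div>
```

To any browser this is just valid XHTML; to the publisher's toolchain, the classes carry the extra semantic distinctions that DocBook or DITA would express with dedicated elements.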
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
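The "zip archive with standard component parts" anatomy can be demonstrated with nothing more than Python's standard library. This is a minimal sketch: the file names follow the usual ePub convention, but the metadata and content are stubs, not a valid book.

```python
import io
import zipfile

# Build a toy "ePub" in memory: a zip archive holding metadata files
# plus the book content as XHTML. Contents are stubs for illustration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as epub:
    epub.writestr("mimetype", "application/epub+zip")
    epub.writestr("META-INF/container.xml",
                  "<?xml version='1.0'?><container/>")   # points to the .opf
    epub.writestr("OEBPS/content.opf", "<package/>")     # book metadata
    epub.writestr("OEBPS/chapter1.xhtml",
                  "<html><body><p>A Web page in a wrapper.</p></body></html>")

# Unpacking it reveals the component parts described above.
with zipfile.ZipFile(buf) as epub:
    names = epub.namelist()
    print(names)
```

Renaming any real .epub file to .zip and listing it shows the same shape: a mimetype declaration, container and package metadata, and XHTML content files.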
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
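That cleanup step can be done mechanically rather than by hand. The sketch below regroups consecutive paragraphs carrying a list-item class into a proper XHTML list; the class name and sample input are illustrative, not InDesign's actual export verbatim.

```python
import xml.etree.ElementTree as ET

def promote_list_items(parent):
    """Replace runs of <p class="list-item"> with a <ul> of <li> elements."""
    new_children = []
    current_list = None
    for child in list(parent):
        if child.tag == "p" and child.get("class") == "list-item":
            if current_list is None:
                current_list = ET.Element("ul")
                new_children.append(current_list)
            li = ET.SubElement(current_list, "li")
            li.text = child.text
        else:
            current_list = None   # a non-list paragraph ends the run
            new_children.append(child)
    parent[:] = new_children
    return parent

body = ET.fromstring(
    "<body>"
    "<p>Intro paragraph.</p>"
    "<p class='list-item'>first</p>"
    "<p class='list-item'>second</p>"
    "<p>Closing paragraph.</p>"
    "</body>"
)
promote_list_items(body)
print(ET.tostring(body, encoding="unicode"))
```

The guiding principle from the text applies here too: the output is plain structural XHTML, with no dependence on the exporting software's class conventions.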
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
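The project's actual transformation was written in XSLT, but the mapping it performs can be sketched in a few lines of Python: each XHTML block element becomes an ICML ParagraphStyleRange pointing at a named InDesign paragraph style. The style names below are invented for illustration, and real ICML carries considerably more structure (snippet preamble, character styles, and so on) than this sketch shows.

```python
import xml.etree.ElementTree as ET

# Map XHTML block elements to InDesign paragraph style names.
# These names are ours; they must match styles defined in the
# InDesign template into which the ICML is placed.
STYLE_MAP = {"h1": "ChapterTitle", "h2": "Subhead", "p": "BodyText"}

def xhtml_to_icml_ranges(xhtml_body: str) -> str:
    """Sketch of the XHTML -> ICML mapping: each block element becomes
    a ParagraphStyleRange whose style is looked up in STYLE_MAP."""
    body = ET.fromstring(xhtml_body)
    story = ET.Element("Story")
    for el in body:
        style = STYLE_MAP.get(el.tag, "BodyText")
        pr = ET.SubElement(story, "ParagraphStyleRange",
                           AppliedParagraphStyle="ParagraphStyle/%s" % style)
        cr = ET.SubElement(pr, "CharacterStyleRange")
        content = ET.SubElement(cr, "Content")
        content.text = "".join(el.itertext())  # flatten inline markup
        ET.SubElement(pr, "Br")  # paragraph break
    return ET.tostring(story, encoding="unicode")
```

The point is simply that both vocabularies are plain, well-documented XML, so the mapping is mechanical; the XSLT version the project used does the same walk declaratively.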
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
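The "special wrapper" is essentially a ZIP archive with a prescribed layout: an uncompressed mimetype entry first, a META-INF/container.xml pointing at an OPF manifest, and the XHTML chapters themselves. The sketch below builds that minimal wrapper by hand; it omits the NCX table of contents that a tool like eCub also generates, and the title and identifier metadata are placeholders.

```python
import zipfile

def wrap_epub(out_path, chapters):
    """Minimal ePub 2 wrapper: zip XHTML chapters together with the
    mimetype entry, container.xml, and an OPF manifest/spine.
    `chapters` is a list of (filename, xhtml_string) pairs."""
    container = (
        '<?xml version="1.0"?>\n'
        '<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
        '  <rootfiles><rootfile full-path="content.opf" '
        'media-type="application/oebps-package+xml"/></rootfiles>\n'
        '</container>')
    manifest = "\n".join(
        '    <item id="c%d" href="%s" media-type="application/xhtml+xml"/>' % (i, name)
        for i, (name, _) in enumerate(chapters))
    spine = "\n".join('    <itemref idref="c%d"/>' % i
                      for i in range(len(chapters)))
    opf = (
        '<?xml version="1.0"?>\n'
        '<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="id">\n'
        '  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">\n'
        '    <dc:title>Book Publishing 1</dc:title>\n'
        '    <dc:identifier id="id">example-id</dc:identifier>\n'
        '    <dc:language>en</dc:language>\n'
        '  </metadata>\n'
        '  <manifest>\n%s\n  </manifest>\n'
        '  <spine>\n%s\n  </spine>\n'
        '</package>' % (manifest, spine))
    with zipfile.ZipFile(out_path, "w") as z:
        # The mimetype entry must come first and must not be compressed.
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container, zipfile.ZIP_DEFLATED)
        z.writestr("content.opf", opf, zipfile.ZIP_DEFLATED)
        for name, xhtml in chapters:
            z.writestr(name, xhtml, zipfile.ZIP_DEFLATED)
```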
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

These 28 Words Explain Why PayPal's Creators Are Funding A Startup To Kill It - Busines... - 0 views

  •  
    "One of the strangest things about Stripe - or perhaps, one of the strangest things about PayPal - is the list of people who are funding Stripe. Three of its biggest individual backers are people who played a key role in making PayPal a success: cofounders Peter Thiel and Max Levchin, along with Elon Musk, who joined PayPal through an acquisition. Why would Thiel, Levchin, and Musk fund a machine built to destroy their baby? Probably because, in Silicon Valley, PayPal is viewed as a lost cause. We've heard a lot of complaints about how awful and hard it is to implement." Stripe isn't the only well-funded startup going after what it views as a decrepit, disruptable incumbent. Jack Dorsey's Square is too, and it's now worth billions of dollars. Another heavily funded startup, Braintree, owns the technology millions of people use to pay for things inside apps like Uber. Finally, some of eBay's bigger rivals such as Google, Amazon, and Microsoft are gunning for PayPal too.
Paul Merrell

Sloppy Cyber Threat Sharing Is Surveillance by Another Name | Just Security - 0 views

  • Imagine you are the target of a phishing attack: Someone sends you an email attachment containing malware. Your email service provider shares the attachment with the government, so that others can configure their computer systems to spot similar attacks. The next day, your provider gets a call. It’s the Department of Homeland Security (DHS), and they’re curious. The malware appears to be from Turkey. Why, DHS wants to know, might someone in Turkey be interested in attacking you? So, would your email company please share all your emails with the government? Knowing more about you, investigators might better understand the attack. Normally, your email provider wouldn’t be allowed to give this information over without your consent or a search warrant. But that could soon change. The Senate may soon make another attempt at passing the Cybersecurity Information Sharing Act, a bill that would waive privacy laws in the name of cybersecurity. In April, the US House of Representatives passed by strong majorities two similar “cyber threat” information sharing bills. These bills grant companies immunity for giving DHS information about network attacks, attackers, and online crimes.
  • Sharing information about security vulnerabilities is a good idea. Shared vulnerability data empowers other system operators to check and see if they, too, have been attacked, and also to guard against being similarly attacked in the future. I’ve spent most of my career fighting for researchers’ rights to share this kind of information against threats from companies that didn’t want their customers to know their products were flawed. But, these bills gut legal protections against government fishing expeditions exactly at a time when individuals and Internet companies need privacy laws to get stronger, not weaker. 
  • Worse, the bills aren’t needed. Private companies share threat data with each other, and even with the government, all the time. The threat data that security professionals use to protect networks from future attacks is a far more narrow category of information than those included in the bills being considered by Congress, and will only rarely contain private information. And none of the recent cyberattacks — not Sony, not Target, and not the devastating grab of sensitive background check interviews on government employees at the Office of Personnel Management — would have been mitigated by these bills.
Paul Merrell

Here Are All the Sketchy Government Agencies Buying Hacking Team's Spy Tech | Motherboard - 0 views

  • They say what goes around comes around, and there's perhaps nowhere that rings more true than in the world of government surveillance. Such was the case on Monday morning when Hacking Team, the Italian company known for selling electronic intrusion tools to police and federal agencies around the world, awoke to find that it had been hacked itself—big time—apparently exposing its complete client list, email spools, invoices, contracts, source code, and more. Those documents show that not only has the company been selling hacking tools to a long list of foreign governments with dubious human rights records, but it’s also establishing a nice customer base right here in the good old US of A. The cache, which sources told Motherboard is legitimate, contains more than 400 gigabytes of files, many of which confirm previous reports that the company has been selling industrial-grade surveillance software to authoritarian governments. Hacking Team is known in the surveillance world for its flagship hacking suite, Remote Control System (RCS) or Galileo, which allows its government and law enforcement clients to secretly install “implants” on remote machines that can steal private emails, record Skype calls, and even monitor targets through their computer's webcam.
  • According to leaked contracts, invoices and an up-to-date list of customer subscriptions, Hacking Team’s clients—which the company has consistently refused to name—also include Kazakhstan, Azerbaijan, Oman, Saudi Arabia, Uzbekistan, Bahrain, Ethiopia, Nigeria, Sudan and many others. The list of names matches the findings of Citizen Lab, a research lab at the University of Toronto's Munk School of Global Affairs that previously found traces of Hacking Team on the computers of journalists and activists around the world. Last year, the Lab's researchers mapped out the worldwide collection infrastructure used by Hacking Team's customers to covertly transport stolen data, unveiling a massive network comprised of servers based in 21 countries. Reporters Without Borders later named the company one of the “Enemies of the Internet” in its annual report on government surveillance and censorship.
  • We’ve only scratched the surface of this massive leak, and it’s unclear how Hacking Team will recover from having its secrets spilled across the internet for all to see. In the meantime, the company is asking all customers to stop using its spyware—and likely preparing for the worst.
Paul Merrell

Hacking Team Asks Customers to Stop Using Its Software After Hack | Motherboard - 0 views

  • But the hack hasn’t just ruined the day for Hacking Team’s employees. The company, which sells surveillance software to government customers all over the world, from Morocco and Ethiopia to the US Drug Enforcement Administration and the FBI, has told all its customers to shut down all operations and suspend all use of the company’s spyware, Motherboard has learned. “They’re in full on emergency mode,” a source who has inside knowledge of Hacking Team’s operations told Motherboard.
  • Hacking Team notified all its customers on Monday morning with a “blast email,” requesting them to shut down all deployments of its Remote Control System software, also known as Galileo, according to multiple sources. The company also doesn’t have access to its email system as of Monday afternoon, a source said. On Sunday night, an unnamed hacker, who claimed to be the same person who breached Hacking Team’s competitor FinFisher last year, hijacked its Twitter account and posted links to 400GB of internal data. Hacking Team woke up to a massive breach of its systems.
  • A source told Motherboard that the hackers appear to have gotten “everything,” likely more than what the hacker has posted online, perhaps more than one terabyte of data. “The hacker seems to have downloaded everything that there was in the company’s servers,” the source, who could only speak on condition of anonymity, told Motherboard. “There’s pretty much everything here.” It’s unclear how the hackers got their hands on the stash, but judging from the leaked files, they broke into the computers of Hacking Team’s two systems administrators, Christian Pozzi and Mauro Romeo, who had access to all the company’s files, according to the source. “I did not expect a breach to be this big, but I’m not surprised they got hacked because they don’t take security seriously,” the source told me. “You can see in the files how much they royally fucked up.”
  • ...2 more annotations...
  • For example, the source noted, none of the sensitive files in the data dump, from employees’ passports to lists of customers, appear to be encrypted. “How can you give all the keys to your infrastructure to a 20-something who just joined the company?” he added, referring to Pozzi, whose LinkedIn shows he’s been at Hacking Team for just over a year. “Nobody noticed that someone stole a terabyte of data? You gotta be a fuckwad,” the source said. “It means nobody was taking care of security.”
  • The future of the company, at this point, is uncertain. Employees fear this might be the beginning of the end, according to sources. One current employee, for example, started working on his resume, a source told Motherboard. It’s also unclear how customers will react to this, but a source said that it’s likely that customers from countries such as the US will pull the plug on their contracts. Hacking Team asked its customers to shut down operations, but according to one of the leaked files, as part of Hacking Team’s “crisis procedure,” it could have killed their operations remotely. The company, in fact, has “a backdoor” into every customer’s software, giving it the ability to suspend it or shut it down—something that even customers aren’t told about. To make matters worse, every copy of Hacking Team’s Galileo software is watermarked, according to the source, which means Hacking Team, and now everyone with access to this data dump, can find out who operates it and who they’re targeting with it.
Paul Merrell

Security Experts Oppose Government Access to Encrypted Communication - The New York Times - 0 views

  • An elite group of security technologists has concluded that the American and British governments cannot demand special access to encrypted communications without putting the world’s most confidential data and critical infrastructure in danger. A new paper from the group, made up of 14 of the world’s pre-eminent cryptographers and computer scientists, is a formidable salvo in a skirmish between intelligence and law enforcement leaders, and technologists and privacy advocates. After Edward J. Snowden’s revelations — with security breaches and awareness of nation-state surveillance at a record high and data moving online at breakneck speeds — encryption has emerged as a major issue in the debate over privacy rights.
  • That has put Silicon Valley at the center of a tug of war. Technology companies including Apple, Microsoft and Google have been moving to encrypt more of their corporate and customer data after learning that the National Security Agency and its counterparts were siphoning off digital communications and hacking into corporate data centers.
  • Yet law enforcement and intelligence agency leaders argue that such efforts thwart their ability to monitor kidnappers, terrorists and other adversaries. In Britain, Prime Minister David Cameron threatened to ban encrypted messages altogether. In the United States, Michael S. Rogers, the director of the N.S.A., proposed that technology companies be required to create a digital key to unlock encrypted data, but to divide the key into pieces and secure it so that no one person or government agency could use it alone. The encryption debate has left both sides bitterly divided and in fighting mode. The group of cryptographers deliberately issued its report a day before James B. Comey Jr., the director of the Federal Bureau of Investigation, and Sally Quillian Yates, the deputy attorney general at the Justice Department, are scheduled to testify before the Senate Judiciary Committee on the concerns that they and other government agencies have that encryption technologies will prevent them from effectively doing their jobs.
  • ...2 more annotations...
  • The new paper is the first in-depth technical analysis of government proposals by leading cryptographers and security thinkers, including Whitfield Diffie, a pioneer of public key cryptography, and Ronald L. Rivest, the “R” in the widely used RSA public cryptography algorithm. In the report, the group said any effort to give the government “exceptional access” to encrypted communications was technically unfeasible and would leave confidential data and critical infrastructure like banks and the power grid at risk. Handing governments a key to encrypted communications would also require an extraordinary degree of trust. With government agency breaches now the norm — most recently at the United States Office of Personnel Management, the State Department and the White House — the security specialists said authorities could not be trusted to keep such keys safe from hackers and criminals. They added that if the United States and Britain mandated backdoor keys to communications, China and other governments in foreign markets would be spurred to do the same.
  • “Such access will open doors through which criminals and malicious nation-states can attack the very individuals law enforcement seeks to defend,” the report said. “The costs would be substantial, the damage to innovation severe and the consequences to economic growth hard to predict. The costs to the developed countries’ soft power and to our moral authority would also be considerable.”
  •  
    Our system of government does not expect that every criminal will be apprehended and convicted. There are numerous values our society believes are more important. Some examples: [i] a presumption of innocence unless guilt is established beyond any reasonable doubt; [ii] the requirement that government officials convince a neutral magistrate that they have probable cause to believe that a search or seizure will produce evidence of a crime; [iii] many communications cannot be compelled to be disclosed and used in evidence, such as attorney-client communications, spousal communications, and priest-penitent communications; and [iv] etc. Moral of my story: the government needs a much stronger reason to justify interception of communications than saying, "some crooks will escape prosecution if we can't do that." We have a right to whisper to each other, concealing our communications from all others. Why does the right to whisper privately disappear if our whisperings are done electronically? The Supreme Court took its first step on a very slippery slope when it permitted wiretapping in Olmstead v. United States, 277 U.S. 438, 48 S. Ct. 564, 72 L. Ed. 944 (1928). https://goo.gl/LaZGHt It's been a long slide ever since. It's past time to revisit Olmstead and recognize that American citizens have the absolute right to communicate privately. "The President … recognizes that U.S. citizens and institutions should have a reasonable expectation of privacy from foreign or domestic intercept when using the public telephone system." - Brent Scowcroft, U.S. National Security Advisor, National Security Decision Memorandum 338 (1 September 1976) (Ford administration), http://www.fas.org/irp/offdocs/nsdm-ford/nsdm-338.pdf
Paul Merrell

WikiLeaks' Julian Assange warns: Google is not what it seems - 0 views

  • Back in 2011, Julian Assange met up with Eric Schmidt for an interview that he considers the best he’s ever given. That doesn’t change, however, the opinion he now has about Schmidt and the company he represents, Google. In fact, the WikiLeaks leader doesn’t believe in the famous “Don’t Be Evil” mantra that Google has been preaching for years. Assange thinks both Schmidt and Google are at the exact opposite end of the spectrum. “Nobody wants to acknowledge that Google has grown big and bad. But it has. Schmidt’s tenure as CEO saw Google integrate with the shadiest of US power structures as it expanded into a geographically invasive megacorporation. But Google has always been comfortable with this proximity,” Assange writes in an opinion piece for Newsweek.
  • “Long before company founders Larry Page and Sergey Brin hired Schmidt in 2001, their initial research upon which Google was based had been partly funded by the Defense Advanced Research Projects Agency (DARPA). And even as Schmidt’s Google developed an image as the overly friendly giant of global tech, it was building a close relationship with the intelligence community,” Assange continues. Throughout the lengthy article, Assange goes on to explain how the 2011 meeting came to be and talks about the people the Google executive chairman brought along - Lisa Shields, then vice president of the Council on Foreign Relations, Jared Cohen, who would later become the director of Google Ideas, and Scott Malcomson, the book’s editor, who would later become the speechwriter and principal advisor to Susan Rice. “At this point, the delegation was one part Google, three parts US foreign-policy establishment, but I was still none the wiser.” Assange goes on to explain the work Cohen was doing for the government prior to his appointment at Google and just how Schmidt himself plays a bigger role than previously thought. In fact, he says that his original image of Schmidt, as a politically unambitious Silicon Valley engineer, “a relic of the good old days of computer science graduate culture on the West Coast,” was wrong.
  • However, Assange concedes that that is not the sort of person who attends Bilderberg conferences, who regularly visits the White House, and who delivers speeches at the Davos Economic Forum. He claims that Schmidt’s emergence as Google’s “foreign minister” did not come out of nowhere, but it was “presaged by years of assimilation within US establishment networks of reputation and influence.” Assange makes further accusations that, well before Prism had even been dreamed of, the NSA was already systematically violating the Foreign Intelligence Surveillance Act under its director at the time, Michael Hayden. He states, however, that during the same period, namely around 2003, Google was accepting NSA money to provide the agency with search tools for its rapidly-growing database of information. Assange continues by saying that in 2008, Google helped launch the NGA spy satellite, the GeoEye-1, into space and that the search giant shares the photographs from the satellite with the US military and intelligence communities. Later on, in 2010, after the Chinese government was accused of hacking Google, the company entered into a “formal information-sharing” relationship with the NSA, which would allow the NSA’s experts to evaluate the vulnerabilities in Google’s hardware and software.
  • ...1 more annotation...
  • “Around the same time, Google was becoming involved in a program known as the “Enduring Security Framework” (ESF), which entailed the sharing of information between Silicon Valley tech companies and Pentagon-affiliated agencies at network speed.” “Emails obtained in 2014 under Freedom of Information requests show Schmidt and his fellow Googler Sergey Brin corresponding on first-name terms with NSA chief General Keith Alexander about ESF,” Assange writes. Assange seems to have a lot of backing to his statements, providing links left and right, which people can go check on their own.
  •  
    The "opinion piece for Newsweek" is an excerpt from Assange's new book, When Google met Wikileaks.  The chapter is well worth the read. http://www.newsweek.com/assange-google-not-what-it-seems-279447
Paul Merrell

Revealed: How DOJ Gagged Google over Surveillance of WikiLeaks Volunteer - The Intercept - 0 views

  • The Obama administration fought a legal battle against Google to secretly obtain the email records of a security researcher and journalist associated with WikiLeaks. Newly unsealed court documents obtained by The Intercept reveal the Justice Department won an order forcing Google to turn over more than one year’s worth of data from the Gmail account of Jacob Appelbaum, a developer for the Tor online anonymity project who has worked with WikiLeaks as a volunteer. The order also gagged Google, preventing it from notifying Appelbaum that his records had been provided to the government. The surveillance of Appelbaum’s Gmail account was tied to the Justice Department’s long-running criminal investigation of WikiLeaks, which began in 2010 following the transparency group’s publication of a large cache of U.S. government diplomatic cables. According to the unsealed documents, the Justice Department first sought details from Google about a Gmail account operated by Appelbaum in January 2011, triggering a three-month dispute between the government and the tech giant. Government investigators demanded metadata records from the account showing email addresses of those with whom Appelbaum had corresponded between the period of November 2009 and early 2011; they also wanted to obtain information showing the unique IP addresses of the computers he had used to log in to the account.
  • The Justice Department argued in the case that Appelbaum had “no reasonable expectation of privacy” over his email records under the Fourth Amendment, which protects against unreasonable searches and seizures. Rather than seeking a search warrant that would require it to show probable cause that he had committed a crime, the government instead sought and received an order to obtain the data under a lesser standard, requiring only “reasonable grounds” to believe that the records were “relevant and material” to an ongoing criminal investigation. Google repeatedly attempted to challenge the demand, and wanted to immediately notify Appelbaum that his records were being sought so he could have an opportunity to launch his own legal defense. Attorneys for the tech giant argued in a series of court filings that the government’s case raised “serious First Amendment concerns.” They noted that Appelbaum’s records “may implicate journalistic and academic freedom” because they could “reveal confidential sources or information about WikiLeaks’ purported journalistic or academic activities.” However, the Justice Department asserted that “journalists have no special privilege to resist compelled disclosure of their records, absent evidence that the government is acting in bad faith,” and refused to concede Appelbaum was in fact a journalist. It claimed it had acted in “good faith throughout this criminal investigation, and there is no evidence that either the investigation or the order is intended to harass the … subscriber or anyone else.” Google’s attempts to fight the surveillance gag order angered the government, with the Justice Department stating that the company’s “resistance to providing the records” had “frustrated the government’s ability to efficiently conduct a lawful criminal investigation.”
  • The Justice Department wanted to keep the surveillance secret largely because of an earlier public backlash over its WikiLeaks investigation. In January 2011, Appelbaum and other WikiLeaks volunteers – including Icelandic parliamentarian Birgitta Jonsdottir – were notified by Twitter that the Justice Department had obtained data about their accounts. This disclosure generated widespread news coverage and controversy; the government says in the unsealed court records that it “failed to anticipate the degree of damage that would be caused” by the Twitter disclosure and did not want to “exacerbate this problem” when it went after Appelbaum’s Gmail data. The court documents show the Justice Department said the disclosure of its Twitter data grab “seriously jeopardized the [WikiLeaks] investigation” because it resulted in efforts to “conceal evidence” and put public pressure on other companies to resist similar surveillance orders. It also claimed that officials named in the subpoena ordering Twitter to turn over information were “harassed” after a copy was published by Intercept co-founder Glenn Greenwald at Salon in 2011. (The only specific evidence of the alleged harassment cited by the government is an email that was sent to an employee of the U.S. Attorney’s office that purportedly said: “You guys are fucking nazis trying to controll [sic] the whole fucking world. Well guess what. WE DO NOT FORGIVE. WE DO NOT FORGET. EXPECT US.”)
  • ...4 more annotations...
  • Google accused the government of hyperbole and argued that the backlash over the Twitter order did not justify secrecy related to the Gmail surveillance. “Rather than demonstrating how unsealing the order will harm its well-publicized investigation, the government lists a parade of horribles that have allegedly occurred since it unsealed the Twitter order, yet fails to establish how any of these developments could be further exacerbated by unsealing this order,” wrote Google’s attorneys. “The proverbial toothpaste is out of the tube, and continuing to seal a materially identical order will not change it.” But Google’s attempt to overturn the gag order was denied by magistrate judge Ivan D. Davis in February 2011. The company launched an appeal against that decision, but this too was rebuffed, in March 2011, by District Court judge Thomas Selby Ellis, III.
  • The government agreed to unseal some of the court records on Apr. 1 this year, and they were apparently turned over to Appelbaum on May 14 through a notification sent to his Gmail account. The files were released on condition that they would contain some redactions, which are bizarre and inconsistent, in some cases censoring the name of “WikiLeaks” from cited public news reports. Not all of the documents in the case – such as the original surveillance orders contested by Google – were released as part of the latest disclosure. Some contain “specific and sensitive details of the investigation” and “remain properly sealed while the grand jury investigation continues,” according to the court records from April this year. Appelbaum, an American citizen who is based in Berlin, called the case “a travesty that continues at a slow pace” and said he felt it was important to highlight “the absolute madness in these documents.”
  • He told The Intercept: “After five years, receiving such legal documents is neither a shock nor a needed confirmation. … Will we ever see the full documents about our respective cases? Will we even learn the names of those signing so-called legal orders against us in secret sealed documents? Certainly not in a timely manner and certainly not in a transparent, just manner.” The 32-year-old, who has recently collaborated with Intercept co-founder Laura Poitras to report revelations about National Security Agency surveillance for German news magazine Der Spiegel, said he plans to remain in Germany “in exile, rather than returning to the U.S. to experience more harassment of a less than legal kind.”
  • “My presence in Berlin ensures that the cost of physically harassing me or politically harassing me is much higher than when I last lived on U.S. soil,” Appelbaum said. “This allows me to work as a journalist freely from daily U.S. government interference. It also ensures that any further attempts to continue this will be forced into the open through [a Mutual Legal Assistance Treaty] and other international processes. The German government is less likely to allow the FBI to behave in Germany as they do on U.S. soil.” The Justice Department’s WikiLeaks investigation is headed by prosecutors in the Eastern District of Virginia. Since 2010, the secretive probe has seen activists affiliated with WikiLeaks compelled to appear before a grand jury and the FBI attempting to infiltrate the group with an informant. Earlier this year, it was revealed that the government had obtained the contents of three core WikiLeaks staffers’ Gmail accounts as part of the investigation.
Paul Merrell

Activists send the Senate 6 million faxes to oppose cyber bill - CBS News - 0 views

  • Activists worried about online privacy are sending Congress a message with some old-school technology: They're sending faxes -- more than 6.2 million, they claim -- to express opposition to the Cybersecurity Information Sharing Act (CISA). Why faxes? "Congress is stuck in 1984 and doesn't understand modern technology," according to the campaign Fax Big Brother. The week-long campaign was organized by the nonpartisan Electronic Frontier Foundation, the group Access, and Fight for the Future, the activist group behind the major Internet protests that helped derail a pair of anti-piracy bills in 2012. It also has the backing of a dozen groups like the ACLU, the American Library Association, National Association of Criminal Defense Lawyers and others.
  • CISA aims to facilitate information sharing regarding cyberthreats between the government and the private sector. The bill gained more attention following the massive hack in which the records of nearly 22 million people were stolen from government computers. "The ability to easily and quickly share cyber attack information, along with ways to counter attacks, is a key method to stop them from happening in the first place," Sen. Dianne Feinstein, D-California, who helped introduce CISA, said in a statement after the hack. Senate leadership had planned to vote on CISA this week before leaving for its August recess. However, the bill may be sidelined for the time being as the Republican-led Senate gives precedence to a legislative effort to defund Planned Parenthood. Even as the bill was put on the backburner, the grassroots campaign to stop it gained steam. Fight for the Future started sending faxes to all 100 Senate offices on Monday, but the campaign really took off after it garnered attention on the website Reddit and on social media. The faxed messages are generated by Internet users who visit faxbigbrother.com or stopcyberspying.com -- or who simply send a message via Twitter with the hashtag #faxbigbrother. To send all those faxes, Fight for the Future set up a dedicated server and a dozen phone lines and modems they say are capable of sending tens of thousands of faxes a day.
  • Fight for the Future told CBS News that it has so many faxes queued up at this point, that it may take months for Senate offices to receive them all, though the group is working on scaling up its capability to send them faster. They're also limited by the speed at which Senate offices can receive them.
  •  
    From a Fight for the Future mailing: "Here's the deal: yesterday the Senate delayed its expected vote on CISA, the Cybersecurity Information Sharing Act that would let companies share your private information--like emails and medical records--with the government. "The delay is good news; but it's a delay, not a victory. "We just bought some precious extra time to fight CISA, but we need to use it to go big like we did with SOPA or this bill will still pass. Even if we stop it in September, they'll try again after that. "The truth is that right now, things are looking pretty grim. Democrats and Republicans have been holding closed-door meetings to work out a deal to pass CISA quickly when they return from recess. "Right before the expected Senate vote on CISA, the Obama Administration endorsed the bill, which means if Congress passes it, the White House will definitely sign it.  "We've stalled and delayed CISA and bills like it nearly half a dozen times, but this month could be our last chance to stop it for good." See also http://tumblr.fightforthefuture.org/post/125953876003/senate-fails-to-advance-cisa-before-recess-amid; http://www.cbsnews.com/news/activists-send-the-senate-6-million-faxes-to-oppose-cyber-bill/; http://www.npr.org/2015/08/04/429386027/privacy-advocates-to-senate-cyber-security-bill.
Paul Merrell

Microsoft Unveils 'Turnkey' Cloud Appliance -- InformationWeek - 0 views

  • Microsoft on Monday unveiled a preconfigured system designed to help businesses move to cloud computing quickly and efficiently without disrupting existing IT operations. The Windows Azure platform appliance consists of the Windows Azure cloud operating system, Microsoft SQL Azure, and, according to the company, "a Microsoft-specified configuration" of network, storage, and server hardware.
  • Hewlett-Packard, Dell, and Fujitsu have signed agreements to offer versions of the Windows Azure platform appliance based on their own gear, Microsoft said. eBay, meanwhile, has successfully tested the offering and is moving some of its Web pages to Microsoft's cloud.
Gary Edwards

Cloudy Battle in Los Angeles: Microturf vs. Googzilla -- Redmond Developer News - 0 views

  •  
    Talk about a game changer. Excerpt: An epic battle is brewing out West with much more than a lucrative technology contract at stake: Microsoft Office or Google's cloud? As the Los Angeles Times reported yesterday, Microsoft and Google are bidding for a $7.25 million contract to replace the city of Los Angeles' outdated email system. Los Angeles put out a call for bids in 2008. "Google Apps got the nod because city administrators believed it would be cheaper and less labor-intensive," writes LA Times reporter David Sarno. We all knew this day of reckoning was coming. For Microsoft, the fight to hold on to its Office base is on. Google Apps, the Web-based office suite that includes the viral Gmail, promises less overhead and potentially big savings to fiscally strapped cities, corporations and college campuses. In addition to dispatching teams of lobbyists, both Steve Ballmer and Eric Schmidt have offered to put in appearances at city hall, if city officials think it will help, according to a city councilman quoted in the article.
Gary Edwards

Glide Extends the IPad, Converts Flash on the Fly - PCWorld Business Center - 2 views

  •  
    Wow! 30GB free, 250 file formats with a "universal translation engine," and HTML5. Excerpt: "You can't have convergence unless you have the ability to translate files across different platforms and devices," Donald Leka, TransMedia's CEO, told Macworld. "There's a war between the big tech companies like Adobe, Apple, Microsoft, and Google, and these compatibility issues are not going to go away." Glide also lets you share any documents or media in your account with other users or the public. And with new desktop clients for Mac and PC that can sync a local folder up to your cloud storage space, Glide is taking on popular competitors like Dropbox, SugarSync, and Apple's own iDisk. Glide is free to use in desktop browsers and on the iPad, and free accounts get 30GB of space to start. Premium accounts offer 250GB of space for $50 per year.
Gary Edwards

Wi-Fi Direct certification begins today, device-to-device transmission starting soon --... - 0 views

  •  
    As for functionality, the claims are fairly impressive. In order to make a direct device-to-device connection over WiFi, just one of the two need to be Wi-Fi Direct certified. In other words, a Wi-Fi Direct printer can recognize and interface with your Latitude D410 laptop from 1999, as all Wi-Fi Direct certified devices have to be able to control the one-to-one relationship. The goal here is pretty simple -- it's to create a protected connection between two devices over WiFi with as little hassle as possible. Think Bluetooth, but using WiFi. We also learned that "most" products certified will also support "one-to-many" connections, enabling a Wi-Fi Direct laptop to be in contact with a printer, connected HDTV and a tablet simultaneously, with no router in-between at any point. We should also point out that while 802.11a/g/n is supported over 2.4GHz and 5GHz bands, there's no requirement for Wi-Fi Direct products to support 802.11b, so legacy users may want to pay attention to that quirk. There's also no new hardware requirements here, so in theory, any existing WiFi chipset could be upgraded via firmware to handle Wi-Fi Direct.
Gary Edwards

9 Ways Content Management is Going Mobile : FierceContentManagement.com - 0 views

  •  
    Excellent review of 9 CMS solutions going mobile. Fascinating.