Future of the Web: Group items tagged "wait"


Paul Merrell

Venezuelan Intelligence Services Arrest Credicard Directors - nsnbc international | nsn... - 0 views

  • Venezuelan President Nicolas Maduro confirmed Saturday that the state intelligence service SEBIN arrested several directors from the Credicard financial transaction company on Friday night. 
  • The financial consortium is accused of having deliberately taken advantage of a series of cyber attacks on state internet provider CANTV Friday to paralyse its online payment platform, which is responsible for the majority of the country’s accredited financial transactions, according to its website. “We have proof that it was a deliberate act what Credicard did yesterday. Right now the main people responsible for Credicard are under arrest,” confirmed the president. The government says that millions of attempted purchases using in-store credit and debit card payment machines provided by the company were interrupted after its platform went down for most of the day. Authorities also maintain that the company waited longer than the established protocol of one hour before responding to the issues.
  • According to CANTV President Manuel Fernandez, Venezuela’s internet platform suffered at least three attacks from an external source on Friday, one of which was aimed at state oil company PDVSA. CANTV was notified of the attacks by international provider LANautilus, which belongs to Telecom Italia. Nonetheless, Fernandez denied that Credicard’s platform was affected by the interferences to CANTV’s service, underscoring that other financial transaction companies that rely on the state enterprise continued to be operative.
  • ...1 more annotation...
  • On Friday SEBIN Director Gustavo Gonzalez Lopez also openly accused members of the right-wing coalition, the Democratic Unity Roundtable (MUD), of being implicated in the incident. “Members of the MUD involved in the attack on electronic banking service,” he tweeted. “The financial war continues inside and outside the country, internally they are damaging banking operability,” he added. Venezuelan news source La Iguana has reported that the server administrator of Credicard is the company Dayco Host, which belongs to the D’Agostino family. Diana D’Agostino is married to veteran opposition politician Henry Ramos Allup, president of the National Assembly. On Saturday, the government-promoted Productive Economy Council held an extraordinary meeting of political and business representatives to reject the attack on the country’s financial system.
Paul Merrell

New open-source router firmware opens your Wi-Fi network to strangers | Ars Technica - 0 views

  • We’ve often heard security folks explain their belief that one of the best ways to protect Web privacy and security on one's home turf is to lock down one's private Wi-Fi network with a strong password. But a coalition of advocacy organizations is calling such conventional wisdom into question. Members of the “Open Wireless Movement,” including the Electronic Frontier Foundation (EFF), Free Press, Mozilla, and Fight for the Future are advocating that we open up our Wi-Fi private networks (or at least a small slice of our available bandwidth) to strangers. They claim that such a random act of kindness can actually make us safer online while simultaneously facilitating a better allocation of finite broadband resources. The OpenWireless.org website explains the group’s initiative. “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access," its mission statement reads. "And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.”
  • One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is open-sourced router firmware called Open Wireless Router. This firmware would enable individuals to share a portion of their Wi-Fi networks with anyone nearby, password-free, as Adi Kamdar, an EFF activist, told Ars on Friday. Home network sharing tools are not new, and the EFF has been touting the benefits of open-sourcing Web connections for years, but Kamdar believes this new tool marks the second phase in the open wireless initiative. Unlike previous tools, he claims, EFF’s software will be free for all, will not require any sort of registration, and will actually make surfing the Web safer and more efficient.
  • Kamdar said that the new firmware utilizes smart technologies that prioritize the network owner's traffic over others', so good samaritans won't have to wait for Netflix to load because of strangers using their home networks. What's more, he said, "every connection is walled off from all other connections," so as to decrease the risk of unwanted snooping. Additionally, EFF hopes that opening one’s Wi-Fi network will, in the long run, make it more difficult to tie an IP address to an individual. “From a legal perspective, we have been trying to tackle this idea that law enforcement and certain bad plaintiffs have been pushing, that your IP address is tied to your identity. Your identity is not your IP address. You shouldn't be targeted by a copyright troll just because they know your IP address," said Kamdar.
  • ...1 more annotation...
  • While the EFF firmware will initially be compatible with only one specific router, the organization would like to eventually make it compatible with other routers and even, perhaps, develop its own router. “We noticed that router software, in general, is pretty insecure and inefficient," Kamdar said. “There are a few major players in the router space. Even though various flaws have been exposed, there have not been many fixes.”
Paul Merrell

We finally gave Congress email addresses - Sunlight Foundation Blog - 0 views

  • On OpenCongress, you can now email your representatives and senators just as easily as you would a friend or colleague. We've added a new feature to OpenCongress. It's not flashy. It doesn't use D3 or integrate with social media. But we still think it's pretty cool. You might've already heard of it. Email. This may not sound like a big deal, but it's been a long time coming. A lot of people are surprised to learn that Congress doesn't have publicly available email addresses. It's the number one feature request that we hear from users of our APIs. Until recently, we didn't have a good response. That's because members of Congress typically put their feedback mechanisms behind captchas and zip code requirements. Sometimes these forms break; sometimes their requirements improperly lock out actual constituents. And they always make it harder to email your congressional delegation than it should be.
  • This is a real problem. According to the Congressional Management Foundation, 88% of Capitol Hill staffers agree that electronic messages from constituents influence their bosses' decisions. We think that it's inappropriate to erect technical barriers around such an essential democratic mechanism. Congress itself is addressing the problem. That effort has just entered its second decade, and people are feeling optimistic that a launch to a closed set of partners might be coming soon. But we weren't content to wait. So when the Electronic Frontier Foundation (EFF) approached us about this problem, we were excited to really make some progress. Building on groundwork first done by the Participatory Politics Foundation and more recent work within Sunlight, a network of 150 volunteers collected the data we needed from congressional websites in just two days. That information is now on Github, available to all who want to build the next generation of constituent communication tools. The EFF is already working on some exciting things to that end.
  • But we just wanted to be able to email our representatives like normal people. So now, if you visit a legislator's page on OpenCongress, you'll see an email address in the right-hand sidebar that looks like Sen.Reid@opencongress.org or Rep.Boehner@opencongress.org. You can also email myreps@opencongress.org to email both of your senators and your House representatives at once. The first time we get an email from you, we'll send one back asking for some additional details. This is necessary because our code submits your message by navigating those aforementioned congressional webforms, and we don't want to enter incorrect information. But for emails after the first one, all you'll have to do is click a link that says, "Yes, I meant to send that email."
  • ...1 more annotation...
  • One more thing: For now, our system will only let you email your own representatives. A lot of people dislike this. We do, too. In an age of increasing polarization, party discipline means that congressional leaders must be accountable to citizens outside their districts. But the unfortunate truth is that Congress typically won't bother reading messages from non-constituents — that's why those zip code requirements exist in the first place. Until that changes, we don't want our users to waste their time. So that's it. If it seems simple, it's because it is. But we think that unbreaking how Congress connects to the Internet is important. You should be able to send a call to action in a tweet, easily forward a listserv message to your representative and interact with your government using the tools you use to interact with everyone else.
Gonzalo San Gil, PhD.

Awful Spanish Copyright Law May Be Stalled Waiting For EU Court Ruling On Plans To Chan... - 0 views

  •  
    "from the stopping-good-ideas,-stopping-bad-ideas dept Techdirt has written about Spain's new copyright law a couple of times. There, we concentrated on the "Google tax" that threatens the digital commons and open access in that country. But alongside this extremely foolish idea, there was another good one: getting rid of the anachronistic levy on recording devices that was supposed to "compensate" for private copying (as if any such compensation were needed), and paying collecting societies directly out of Spain's state budget. "
Paul Merrell

Hackers Prove Fingerprints Are Not Secure, Now What? | nsnbc international - 0 views

  • The Office of Personnel Management (OPM) recently revealed that an estimated 5.6 million government employees were affected by the hack, not 1.1 million as previously assumed.
  • Samuel Schumach, spokesman for the OPM, said: “As part of the government’s ongoing work to notify individuals affected by the theft of background investigation records, the Office of Personnel Management and the Department of Defense have been analyzing impacted data to verify its quality and completeness. Of the 21.5 million individuals whose Social Security Numbers and other sensitive information were impacted by the breach, the subset of individuals whose fingerprints have been stolen has increased from a total of approximately 1.1 million to approximately 5.6 million.” This endeavor involved the Department of Defense (DoD), the Department of Homeland Security (DHS), the National Security Agency (NSA), and the Pentagon. Schumach added that “if, in the future, new means are developed to misuse the fingerprint data, the government will provide additional information to individuals whose fingerprints may have been stolen in this breach.” However, we do not need to wait for the future for fingerprint data to be misused and coveted by hackers.
  • Look no further than the security flaws in Samsung’s new Galaxy S5 smartphone, as demonstrated by researchers at Security Research Labs (SRL), showing how fingerprints, iris scans and other biometric identifiers could be fabricated and yet authenticated by the Apple Touch ID fingerprint scanner. The shocking part of this demonstration is that this hack was achieved less than 2 days after the technology was released to the public by Apple. Ben Schlabs, a researcher for SRL, explained: “We expected we’d be able to spoof the S5’s Finger Scanner, but I hoped it would at least be a challenge. The S5 Finger Scanner feature offers nothing new except—because of the way it is implemented in this Android device—slightly higher risk than that already posed by previous devices.” Schlabs and other researchers discovered that “the S5 has no mechanism requiring a password when encountering a large number of incorrect finger swipes.” By rebooting the smartphone, Schlabs could force “the handset to accept an unlimited number of incorrect swipes without requiring users to enter a password [and] the S5 fingerprint authenticator [could] be associated with sensitive banking or payment apps such as PayPal.”
  • ...1 more annotation...
  • Schlabs said: “Perhaps most concerning is that Samsung does not seem to have learned from what others have done less poorly. Not only is it possible to spoof the fingerprint authentication even after the device has been turned off, but the implementation also allows for seemingly unlimited authentication attempts without ever requiring a password. Incorporation of fingerprint authentication into highly sensitive apps such as PayPal gives a would-be attacker an even greater incentive to learn the simple skill of fingerprint spoofing.” Last year, hackers from the Chaos Computer Club (CCC) proved Apple wrong when the corporation insisted that its new iPhone 5S fingerprint sensor is “a convenient and highly secure way to access your phone.” CCC stated that it is as easy as stealing a fingerprint from a drinking glass – and anyone can do it.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
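As a rough illustration of that kind of class-based extension, the sketch below invents two class names ("epigraph" and "pull-quote") and uses the lxml library to show that the resulting "semantic" chunks can be selected with ordinary XPath; none of these names come from the article itself.

```python
# Minimal sketch: "semantic" tagging via XHTML's class attribute.
# The class names ("epigraph", "pull-quote") are illustrative only.
from lxml import etree

xhtml = b"""<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p class="epigraph">To begin, begin.</p>
    <p>An ordinary body paragraph.</p>
    <p class="pull-quote">A line worth repeating.</p>
  </body>
</html>"""

doc = etree.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}

# Downstream tools can pick out the publisher-defined chunks with nothing more
# than standard XPath over standard XHTML plus an agreed list of class names.
for p in doc.xpath('//x:p[@class="epigraph"]', namespaces=ns):
    print(p.text)
```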
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
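A minimal way to see that anatomy for yourself, assuming a file named book.epub sits in the working directory (internal file names vary from producer to producer):

```python
# Peek inside an ePub to confirm the "Web page in a wrapper" structure.
# "book.epub" is a placeholder path; internal file names vary by producer.
import zipfile

with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)  # typically: mimetype, META-INF/container.xml, an .opf file, XHTML content

    # The content documents are ordinary XHTML that any Web browser could render.
    content = [n for n in epub.namelist() if n.endswith((".xhtml", ".html"))]
    if content:
        print(epub.read(content[0]).decode("utf-8")[:400])
```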
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
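That kind of structural cleanup can also be scripted rather than done by hand. The sketch below assumes the export marks list items as paragraphs with a class of "list-item" (the class name InDesign actually emits may differ) and folds consecutive runs of them into proper list markup:

```python
# Sketch: fold consecutive <p class="list-item"> paragraphs into a real <ul>.
# The class name "list-item" and the file names are assumptions for illustration.
from lxml import etree

XHTML_NS = "http://www.w3.org/1999/xhtml"

def listify(root):
    """Replace runs of list-item paragraphs with <ul><li>...</li></ul>."""
    paras = [el for el in root.iter("{%s}p" % XHTML_NS)
             if el.get("class") == "list-item"]
    current_ul = None
    for p in paras:
        # Start a new <ul> unless the previous sibling is the list being built.
        if current_ul is None or p.getprevious() is not current_ul:
            current_ul = etree.Element("{%s}ul" % XHTML_NS)
            p.addprevious(current_ul)
        li = etree.SubElement(current_ul, "{%s}li" % XHTML_NS)
        li.text = p.text
        li.extend(list(p))           # keep inline children such as <em> or <a>
        p.getparent().remove(p)

tree = etree.parse("chapter01.xhtml")
listify(tree.getroot())
tree.write("chapter01-clean.xhtml", xml_declaration=True, encoding="utf-8")
```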
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
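For what it is worth, invoking such a processor is a one-liner; the stylesheet and file names below are placeholders rather than the actual ones from our project:

```python
# Run an XHTML-to-ICML stylesheet with xsltproc (all file names are placeholders).
import subprocess

subprocess.run(
    ["xsltproc", "-o", "chapter01.icml", "xhtml2icml.xsl", "chapter01.xhtml"],
    check=True,
)
```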
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
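The script itself is not reproduced here, but a heavily simplified sketch of the kind of mapping it performs might look like the following. The ICML element and style names are approximations for illustration only; Adobe's IDML/ICML documentation defines the real vocabulary.

```python
# Toy XHTML-to-ICML transformation, run through lxml's XSLT engine.
# The ICML element and style names below are illustrative approximations.
from lxml import etree

XSL = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">

  <xsl:template match="/">
    <Document>
      <Story>
        <xsl:apply-templates select="//x:p"/>
      </Story>
    </Document>
  </xsl:template>

  <!-- Each XHTML paragraph becomes a styled paragraph range. -->
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSL))
result = transform(etree.parse("chapter01.xhtml"))   # placeholder input file
print(str(result))
```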
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
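To give a sense of how thin that wrapper is, here is a bare-bones sketch of assembling one by hand with the Python standard library. The title, identifier, and file names are placeholders, and a production file should also carry the table-of-contents document that eCub generates for you.

```python
# Sketch: hand-rolling the ePub wrapper around existing XHTML content.
# Metadata values and file names are placeholders; a real book should also
# include a table-of-contents (NCX or nav) document.
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Publishing 1</dc:title>
    <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

with zipfile.ZipFile("book.epub", "w") as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER)
    epub.writestr("content.opf", OPF)
    epub.write("chapter01.xhtml")    # the clean XHTML produced earlier
```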
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the Web Kit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with a XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with a XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle then trying to
Paul Merrell

CISPA is back! - 0 views

  • OPERATION: Fax Big Brother. Congress is rushing toward a vote on CISA, the worst spying bill yet. CISA would grant sweeping legal immunity to giant companies like Facebook and Google, allowing them to do almost anything they want with your data. In exchange, they'll share even more of your personal information with the government, all in the name of "cybersecurity." CISA won't stop hackers — Congress is stuck in 1984 and doesn't understand modern technology. So this week we're sending them thousands of faxes — technology that is hopefully old enough for them to understand. Stop CISA. Send a fax now!
  • (Any tweet w/ #faxbigbrother will get faxed too!) Your email is only shown in your fax to Congress. We won't add you to any mailing lists.
  • CISA: the dirty deal between government and corporate giants. It's the dirty deal that lets much of the government, from the NSA to local police, get your private data from your favorite websites and lets them use it without due process. The government is proposing a massive bribe—they will give corporations immunity for breaking virtually any law if they do so while providing the NSA, DHS, DEA, and local police surveillance access to everyone's data in exchange for getting away with crimes, like fraud, money laundering, or illegal wiretapping. Specifically, it incentivizes companies to automatically and simultaneously transfer your data to the DHS, NSA, FBI, and local police with all of your personally-identifying information by giving companies legal immunity (notwithstanding any law), and on top of that, you can't use the Freedom of Information Act to find out what has been shared.
  • ...1 more annotation...
  • The NSA and members of Congress want to pass a "cybersecurity" bill so badly, they’re using the recent hack of the Office of Personnel Management as justification for bringing CISA back up and rushing it through. In reality, the OPM hack just shows that the government has not been a good steward of sensitive data and they need to institute real security measures to fix their problems. The truth is that CISA could not have prevented the OPM hack, and no Senator could explain how it could have. Congress and the NSA are using irrational hysteria to turn the Internet into a place where the government has overly broad, unchecked powers. Why Faxes? Since 2012, online and civil liberties groups and 30,000+ sites have driven more than 2.6 million emails and hundreds of thousands of calls, tweets and more to Congress opposing overly broad cybersecurity legislation. Congress has tried to pass CISA in one form or another 4 times, and they were beat back every time by people like you. It's clear Congress is completely out of touch with modern technology, so this week, as Congress rushes toward a vote on CISA, we are going to send them thousands of faxes, a technology from the 1980s that is hopefully antiquated enough for them to understand. Sending a fax is super easy — you can use this page to send a fax. Any tweet with the hashtag #faxbigbrother will get turned into a fax to Congress too, so what are you waiting for? Click here to send a fax now!
Paul Merrell

Activists send the Senate 6 million faxes to oppose cyber bill - CBS News - 0 views

  • Activists worried about online privacy are sending Congress a message with some old-school technology: They're sending faxes -- more than 6.2 million, they claim -- to express opposition to the Cybersecurity Information Sharing Act (CISA).Why faxes? "Congress is stuck in 1984 and doesn't understand modern technology," according to the campaign Fax Big Brother. The week-long campaign was organized by the nonpartisan Electronic Frontier Foundation, the group Access and Fight for the Future, the activist group behind the major Internet protests that helped derail a pair of anti-piracy bills in 2012. It also has the backing of a dozen groups like the ACLU, the American Library Association, National Association of Criminal Defense Lawyers and others.
  • CISA aims to facilitate information sharing regarding cyberthreats between the government and the private sector. The bill gained more attention following the massive hack in which the records of nearly 22 million people were stolen from government computers. "The ability to easily and quickly share cyber attack information, along with ways to counter attacks, is a key method to stop them from happening in the first place," Sen. Dianne Feinstein, D-California, who helped introduce CISA, said in a statement after the hack. Senate leadership had planned to vote on CISA this week before leaving for its August recess. However, the bill may be sidelined for the time being as the Republican-led Senate gives precedence to a legislative effort to defund Planned Parenthood. Even as the bill was put on the backburner, the grassroots campaign to stop it gained steam. Fight for the Future started sending faxes to all 100 Senate offices on Monday, but the campaign really took off after it garnered attention on the website Reddit and on social media. The faxed messages are generated by Internet users who visit faxbigbrother.com or stopcyberspying.com -- or who simply send a message via Twitter with the hashtag #faxbigbrother. To send all those faxes, Fight for the Future set up a dedicated server and a dozen phone lines and modems they say are capable of sending tens of thousands of faxes a day.
  • Fight for the Future told CBS News that it has so many faxes queued up at this point, that it may take months for Senate offices to receive them all, though the group is working on scaling up its capability to send them faster. They're also limited by the speed at which Senate offices can receive them.
  •  
    From a Fight for the Future mailing: "Here's the deal: yesterday the Senate delayed its expected vote on CISA, the Cybersecurity Information Sharing Act that would let companies share your private information--like emails and medical records--with the government. "The delay is good news; but it's a delay, not a victory. "We just bought some precious extra time to fight CISA, but we need to use it to go big like we did with SOPA or this bill will still pass. Even if we stop it in September, they'll try again after that. "The truth is that right now, things are looking pretty grim. Democrats and Republicans have been holding closed-door meetings to work out a deal to pass CISA quickly when they return from recess. "Right before the expected Senate vote on CISA, the Obama Administration endorsed the bill, which means if Congress passes it, the White House will definitely sign it. "We've stalled and delayed CISA and bills like it nearly half a dozen times, but this month could be our last chance to stop it for good." See also http://tumblr.fightforthefuture.org/post/125953876003/senate-fails-to-advance-cisa-before-recess-amid (;) http://www.cbsnews.com/news/activists-send-the-senate-6-million-faxes-to-oppose-cyber-bill/ (;) http://www.npr.org/2015/08/04/429386027/privacy-advocates-to-senate-cyber-security-bill (.)
Gary Edwards

Is Microsoft slow to the punch on SOA, or just waiting for the right moment? | Joe McKe... - 0 views

  • I agree with DonnieBoy. Microsoft will try to leverage their MSOffice monopoly to dominate the newly emerging marketplace of Web-Stack and Cloud Computing solutions. I also believe that for Microsoft, the final pieces of this puzzle fell into place on March 29th, 2008 with ISO approval of the MSOffice-OOXML document format.
  •  
    Extensive reply to Joe McKendrick's article about Microsoft and SOA.
Paul Merrell

Sun Modular Datacenter S20 - Overview - 0 views

  • Sun Modular Datacenter S20, widely known as Project Blackbox, is revolutionizing how companies, universities, and governments add datacenter capacity. With its high-density, eco-friendly design that enables rapid deployment, game-changing economics, and unimaginable mobility, Sun Modular Datacenter is reaching new customers world-wide who have been waiting for just this type of break-through solution.
  •  
    I was wrong. This page is the hub for Sun Blackbox containerized data center. Check the link titles to press coverage on the Perspectives page and you begin to get an idea why IBM was so desperate to force a hardware deal with Sun and Microsoft. Sun and Microsoft are ripping out and replacing IBM's hardware business.
Gary Edwards

The Time to Pounce Has Come : Is Microsoft slow to the punch on SOA, or just waiting fo... - 0 views

  • The Time to Pounce Has Come: I agree with DonnieBoy. Microsoft will try to leverage their MSOffice monopoly to dominate the newly emerging marketplace of Web-Stack and Cloud Computing solutions. I also believe that for Microsoft, the final pieces of this puzzle fell into place on March 29th, 2008 with ISO approval of the MSOffice-OOXML document format. For most businesses, Microsoft is the "client" in "client/server". The great transition from client/server to client/Web-Stack/server has been slow because Microsoft was uncertain as to how they could control this transition. Some light was shed on the nature of this "uncertainty" when the Combs vs. Microsoft antitrust case brought forth a 1998 eMail from Chairman Bill to the MSOffice development team. The issue for the good Chairman was that of controlling the formats and protocols used to connect MSOffice to a Web centric world. MSOffice support for Open Web formats and protocols like (X)HTML, CSS, and WebDAV was out of the question. Microsoft needed to figure out how to pull off this transition with proprietary formats and protocols. And avoid the wrath of antitrust in the process!
  •  
    Response to Joe McKendrick's SOA article.
Gary Edwards

The Monkey On Microsoft's Back - Forbes.com - 0 views

  • The new technology, dubbed TraceMonkey, promises to speed up Firefox's ability to deliver complex applications. The move heightens the threat posed by a nascent group of online alternatives to Microsoft's most profitable software: PC applications, like Microsoft Office, that allow Microsoft to burn hundreds of millions of dollars on efforts to seize control of the online world. Microsoft's Business Division, which gets 90% of its revenues from sales of Microsoft Office, spat out $12.4 billion in operating income for the fiscal year ending June 30. Google (nasdaq: GOOG - news - people ), however, is playing a parallel game, using profits from its online advertising business to fund alternatives to Microsoft's desktop offerings. Google already says it has "millions" of users for its free, Web-based alternative to desktop staples, including Microsoft's Word, Excel and PowerPoint software. The next version of Firefox, which could debut by the end of this year, promises to speed up such applications, thanks to a new technology built into the developer's version of the software last week. Right now, rich Web applications such as Google Gmail rely on a technology known as Javascript to turn them from lifeless Web pages into applications that respond as users mouse about a Web page. TraceMonkey aims to turn the most frequently used chunks of Javascript code embedded into Web pages into binary form--allowing computers to hustle through the most used bits of code--without waiting around to render all of the code into binary form.
  •  
    I did send a very lengthy comment to Brian Caulfield, the Forbes author of this article. Of course, I disagreed with his perspective. TraceMonkey is great, performing an acceleration of JavaScript in Firefox in much the same way that SquirrelFish accelerates WebKit browsers. What Brian misses though is that the RIA war that is taking place both inside and outside the browser (RIA = fully functional Web applications that WILL replace the "client/server" apps model)
Gary Edwards

When You're a WebKit Hammer, Everything Looks Like an Open Web Nail ... As it should! - 0 views

  • You’re still waiting for me to explain what I meant when I referred to JavaScript as a last resort. I hinted at it in the preceding paragraph. Not the part on JavaScript debugging, but my reference to CSS and HTML. These do a lot more than paint screens. They are a browser's client-side framework. Everything they do is handled as native code. In other words, they're fast. CSS3 and HTML5 are too inconsistently implemented (if at all) across browsers to design to unless you're specifically targeting Safari, iPhone, or other WebKit-based browsers.
    • Gary Edwards
       
      Tom makes the point that the use of AJAX JavaScript breaks Web interoperability. He further points out that HTML is a static layout language, where CSS is dynamic and adaptive. (Use HTML5/DOM for document structure, and CSS4 for presentation - layout, formatting and visual interface).

      It is the consistency of the WebKit document model across all WebKit browsers that makes for an interoperable Open Web future. I would not however discount the importance of Firefox and Opera embracing the WebKit document model (HTML5, CSS4, SVG/Canvas, JavaScript, DOM2). That's our guarantee that the future of the Open Web will actually be open.

      Tom goes on to suggest that instead of "AJAX", developers would be better off thinking in terms of "ACHJAX": Asynchronous CSS4 - HTML5 - JavaScript and XML ..... with the focus on getting as much done in CSS4 as possible.
  •  
    InfoWorld's Tom Yager makes the case for the WebKit visual document model over AJAX. The problem with AJAX as he sees it is that it's JavaScript heavy. And that breaks precious Web interoperability. He makes the point that if something can be done in CSS, it should. He also argues that WebKit is the best tool because the document model is that of advanced HTML5 and CSS3.

    "... These [WebKit] browsers also share a stellar accelerated JavaScript interpreter that makes the edit/run/debug cycle go faster. They are also the only browsers that deliver on CSS4 and HTML5 standards (with some elements that are proposed to the W3C standards body). Sites that are visually rich may start sprouting "best viewed with Safari" banners until other browsers catch up. The banner would also let users know that your site is optimized for iPhone....."

    Humm. Did you catch that? CSS4!!! I guess he's referring to the WebKit penchant for putting advanced graphical transitions and animations into CSS instead of relying on a device specific or OS specific API.

    Placing the visual interface instructions in the documents presentation layer (CSS4) is a revolutionary idea. The WebKit model will go a long way towards creating a global interoperability layer that rides above lower device, OS, browser and application specifics. So yes, by all means let's go with CSS4 :)

Gonzalo San Gil, PhD.

Privacy Badger | Electronic Frontier Foundation - 0 views

  •  
    [Privacy Badger blocks spying ads and invisible trackers.]
  •  
    I've been using it for about a month as a Chrome extension, which at least at the time was still in beta. It hasn't caused any problems on either the Linux or Windows boxes, and it appears to be working as intended on both systems. The sliders discussed in the article only appear if you are viewing a page with identified or suspected tracking cookies. Some of those it blocks on its own; for others, you either use a slider to set whether they will be blocked or wait until the program acquires enough data about that site to make a blocking decision. The program does not use a blacklist of sites, although it comes with a built-in whitelist of sites that honor the do-not-track browser setting. But once a tracking cookie is blocked, it's blocked for all sites you visit. So this isn't instant, complete tracking-cookie security; it's designed to cut down the number of sites whose tracking cookies follow you around the Web. But this is not a mature program, and its effectiveness will improve with each update.
Gonzalo San Gil, PhD.

MPAA Research: Blocking The Pirate Bay Works, So..... | TorrentFreak - 1 views

  •  
    " Ernesto on August 28, 2014 C: 61 News Hollywood has helped to get The Pirate Bay blocked in many countries, but not on its home turf. There are now various signs that this may change in the near future. Among other things, the MPAA has conducted internal research to show that site blocking is rather effective."
  •  
    Domain blocking is largely a non-starter in the U.S. because of the Constitution's First Amendment, although it has been allowed in some circumstances. Over-generalizing, but the more legal content a site has, the less susceptible it is to domain blocking. It's even more difficult at the ISP level because of statutory protections that immunize ISPs from private content-related suits, and major U.S. ISPs zealously protect those protections in Congress. At the request of Hollywood, President Obama convened a meeting that persuaded major ISPs to voluntarily block downloads of particular movies, using DRM filters. But my understanding is that users can still download them if they are using the Tor browser. I haven't checked, because there's nothing Hollywood releases that I can't wait to see until it's available on my cable television service. Even then, I mainly use the television to find something just interesting enough to persuade me to look up from my computer monitors for a moment, to reduce eye strain from monitor glare. I'm not a movie buff, nor am I enamored of thinly veiled propaganda, so Hollywood does not figure largely in my life. As yet, there is no comparable blocking of music downloads.
Paul Merrell

The Digital Hunt for Duqu, a Dangerous and Cunning U.S.-Israeli Spy Virus - The Intercept - 1 views

  • “Is this related to what we talked about before?” Bencsáth said, referring to a previous discussion they’d had about testing new services the company planned to offer customers. “No, something else,” Bartos said. “Can you come now? It’s important. But don’t tell anyone where you’re going.” Bencsáth wolfed down the rest of his lunch and told his colleagues in the lab that he had a “red alert” and had to go. “Don’t ask,” he said as he ran out the door. A while later, he was at Bartos’ office, where a triage team had been assembled to address the problem they wanted to discuss. “We think we’ve been hacked,” Bartos said.
  • They found a suspicious file on a developer’s machine that had been created late at night when no one was working. The file was encrypted and compressed so they had no idea what was inside, but they suspected it was data the attackers had copied from the machine and planned to retrieve later. A search of the company’s network found a few more machines that had been infected as well. The triage team felt confident they had contained the attack but wanted Bencsáth’s help determining how the intruders had broken in and what they were after. The company had all the right protections in place—firewalls, antivirus, intrusion-detection and -prevention systems—and still the attackers got in.
  • Bencsáth was a teacher, not a malware hunter, and had never done such forensic work before. At the CrySyS Lab, where he was one of four advisers working with a handful of grad students, he did academic research for the European Union and occasional hands-on consulting work for other clients, but the latter was mostly run-of-the-mill cleanup work—mopping up and restoring systems after random virus infections. He’d never investigated a targeted hack before, let alone one that was still live, and was thrilled to have the chance. The only catch was, he couldn’t tell anyone what he was doing. Bartos’ company depended on the trust of customers, and if word got out that the company had been hacked, they could lose clients. The triage team had taken mirror images of the infected hard drives, so they and Bencsáth spent the rest of the afternoon poring over the copies in search of anything suspicious. By the end of the day, they’d found what they were looking for—an “infostealer” string of code that was designed to record passwords and other keystrokes on infected machines, as well as steal documents and take screenshots. It also catalogued any devices or systems that were connected to the machines so the attackers could build a blueprint of the company’s network architecture. The malware didn’t immediately siphon the stolen data from infected machines but instead stored it in a temporary file, like the one the triage team had found. The file grew fatter each time the infostealer sucked up data, until at some point the attackers would reach out to the machine to retrieve it from a server in India that served as a command-and-control node for the malware.
  • ...1 more annotation...
  • Bencsáth took the mirror images and the company’s system logs with him, after they had been scrubbed of any sensitive customer data, and over the next few days scoured them for more malicious files, all the while being coy to his colleagues back at the lab about what he was doing. The triage team worked in parallel, and after several more days they had uncovered three additional suspicious files. When Bencsáth examined one of them—a kernel-mode driver, a program that helps the computer communicate with devices such as printers—his heart quickened. It was signed with a valid digital certificate from a company in Taiwan (digital certificates are documents ensuring that a piece of software is legitimate). Wait a minute, he thought. Stuxnet—the cyberweapon that was unleashed on Iran’s uranium-enrichment program—also used a driver that was signed with a certificate from a company in Taiwan. That one came from RealTek Semiconductor, but this certificate belonged to a different company, C-Media Electronics. The driver had been signed with the certificate in August 2009, around the same time Stuxnet had been unleashed on machines in Iran.
Paul Merrell

Microsoft to host data in Germany to evade US spying | Naked Security - 0 views

  • Microsoft's new plan to keep the US government's hands off its customers' data: Germany will be a safe harbor in the digital privacy storm. Microsoft on Wednesday announced that beginning in the second half of 2016, it will give foreign customers the option of keeping data in new European facilities that, at least in theory, should shield customers from US government surveillance. It will cost more, according to the Financial Times, though pricing details weren't forthcoming. Microsoft Cloud - including Azure, Office 365 and Dynamics CRM Online - will be hosted from new datacenters in the German regions of Magdeburg and Frankfurt am Main. Access to data will be controlled by what the company called a German data trustee: T-Systems, a subsidiary of the independent German company Deutsche Telekom. Without the permission of Deutsche Telekom or customers, Microsoft won't be able to get its hands on the data. If it does get permission, the trustee will still control and oversee Microsoft's access.
  • Microsoft CEO Satya Nadella dropped the word "trust" into the company's statement: Microsoft’s mission is to empower every person and every individual on the planet to achieve more. Our new datacenter regions in Germany, operated in partnership with Deutsche Telekom, will not only spur local innovation and growth, but offer customers choice and trust in how their data is handled and where it is stored.
  • On Tuesday, at the Future Decoded conference in London, Nadella also announced that Microsoft would, for the first time, be opening two UK datacenters next year. The company's also expanding its existing operations in Ireland and the Netherlands. Officially, none of this has anything to do with the long-drawn-out squabbling over the transatlantic Safe Harbor agreement, which the EU's highest court struck down last month, calling the agreement "invalid" because it didn't protect data from US surveillance. No, Nadella said, the new datacenters and expansions are all about giving local businesses and organizations "transformative technology they need to seize new global growth." But as Diginomica reports, Microsoft EVP of Cloud and Enterprise Scott Guthrie followed up his boss’s comments by saying that yes, the driver behind the new datacenters is to let customers keep data close: We can guarantee customers that their data will always stay in the UK. Being able to very concretely tell that story is something that I think will accelerate cloud adoption further in the UK.
  • ...2 more annotations...
  • Microsoft and T-Systems' lawyers may well think that storing customer data in a German trustee data center will protect it from the reach of US law, but for all we know, that could be wishful thinking. Forrester cloud computing analyst Paul Miller: To be sure, we must wait for the first legal challenge. And the appeal. And the counter-appeal. As with all new legal approaches, we don’t know it is watertight until it is challenged in court. Microsoft and T-Systems’ lawyers are very good and say it's watertight. But we can be sure opposition lawyers will look for all the holes. By keeping data offshore - particularly in Germany, which has strong data privacy laws - Microsoft could avoid the situation it's now facing with the US demanding access to customer emails stored on a Microsoft server in Dublin. The US has argued that Microsoft, as a US company, comes under US jurisdiction, regardless of where it keeps its data.
  • Running away to Germany isn't a groundbreaking move; other US cloud services providers have already pledged expansion of their EU presences, including Amazon's plan to open a UK datacenter in late 2016 that will offer what CTO Werner Vogels calls "strong data sovereignty to local users." Other big data operators that have followed suit: Salesforce, which has already opened datacenters in the UK and Germany and plans to open one in France next year, as well as new EU operations pledged for the new year by NetSuite and Box. Can Germany keep the US out of its datacenters? Can Ireland? Time, and court cases, will tell.
  •  
    The European Court of Justice's decision in the Safe Harbor case, together with Edward Snowden, is now officially downgrading the U.S. as a cloud data center location. NSA is good business for Europeans looking to displace American cloud service providers, as evidenced by Microsoft's decision. The legal test is whether Microsoft has "possession, custody, or control" of the data. From the info given in the article, it seems that Microsoft has done its best to dodge that bullet by moving data centers to Germany and placing the data under the control of a European company. Do ownership of the hardware and the profits from renting it mean that Microsoft still has "possession, custody, or control" of the data? The fine print of the agreement with Deutsche Telekom and the customer EULAs will get a thorough going-over by the Dept. of Justice for evidence of Microsoft "control" of the data. That will be the crucial legal issue. The data centers in Germany may pass the test. But the notion that data centers in the UK can offer privacy is laughable; the UK's legal authority for GCHQ makes the data even easier to get than it is for the NSA in the U.S., and it doesn't even require a court order.
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • ...6 more annotations...
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential inquiry into British surveillance in 2015, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Human Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Paul Merrell

Sick Of Facebook? Read This. - 2 views

  • In 2012, The Guardian reported on Facebook’s arbitrary and ridiculous nudity and violence guidelines which allow images of crushed limbs but – dear god spare us the image of a woman breastfeeding. Still, people stayed – and Facebook grew. In 2014, Facebook admitted to mind control games via positive or negative emotional content tests on unknowing and unwilling platform users. Still, people stayed – and Facebook grew. Following the 2016 election, Facebook responded to the Harpie shrieks from the corporate Democrats by setting up a so-called “fake news” task force to weed out those dastardly commies (or socialists or anarchists or leftists or libertarians or dissidents or…). And since then, I’ve watched my reach on Facebook drain like water in a bathtub – hard to notice at first and then a spastic swirl while people bicker about how to plug the drain. And still, we stayed – and the censorship tightened. Roughly a year ago, my show Act Out! reported on both the censorship we were experiencing but also the cramped filter bubbling that Facebook employs in order to keep the undesirables out of everyone’s news feed. Still, I stayed – and the censorship tightened. 2017 into 2018 saw more and more activist organizers, particularly black and brown, thrown into Facebook jail for questioning systemic violence and demanding better. In August, puss bag ass hat in a human suit Alex Jones was banned from Facebook – YouTube, Apple and Twitter followed suit shortly thereafter. Some folks celebrated. Some others of us skipped the party because we could feel what was coming.
  • On Thursday, October 11th of this year, Facebook purged more than 800 pages including The Anti-Media, Police the Police, Free Thought Project and many other social justice and alternative media pages. Their explanation rested on the painfully flimsy foundation of “inauthentic behavior.” Meanwhile, their fake-news checking team is stacked with the likes of the Atlantic Council and the Weekly Standard, neocon junk organizations that peddle such drivel as “The Character Assassination of Brett Kavanaugh.” Soon after, on the Monday before the Midterm elections, Facebook blocked another 115 accounts citing once again, “inauthentic behavior.” Then, in mid November, a massive New York Times piece chronicled Facebook’s long road to not only save its image amid rising authoritarian behavior, but “to discredit activist protesters, in part by linking them to the liberal financier George Soros.” (I consistently find myself waiting for those Soros and Putin checks in the mail that just never appear.)
  • What we need is an open source, non-surveillance platform. And right now, that platform is Minds. Before you ask, I’m not being paid to write that.
  • ...2 more annotations...
  • Fashioned as an alternative to the closed and creepy Facebook behemoth, Minds advertises itself as "an open source and decentralized social network for Internet freedom." Minds prides itself on being hands-off with regard to any content that falls in line with what's permitted by law, which has elicited critiques from some on the left who say Minds is a safe haven for fascists and right-wing extremists. Yet Minds founder Bill Ottman has himself stated openly that he wants ideas on content moderation and ways to make Minds a better place for social network users as well as radical content creators. What a few fellow journos and I are calling #MindsShift is an important step in not only moving away from our gagged existence on Facebook but in building a social network that can serve up the real news folks are now aching for.
  • To be clear, we aren’t advocating that you delete your Facebook account – unless you want to. For many, Facebook is still an important tool and our goal is to add to the outreach toolkit, not suppress it. We have set January 1st, 2019 as the ultimate date for this #MindsShift. Several outlets with a combined reach of millions of users will be making the move – and asking their readerships/viewerships to move with them. Along with fellow journalists, I am working with Minds to brainstorm new user-friendly functions and ways to make this #MindsShift a loud and powerful move. We ask that you, the reader, add to the conversation by joining the #MindsShift and spreading the word to your friends and family. (Join Minds via this link) We have created the #MindsShift open group on Minds.com so that you can join and offer up suggestions and ideas to make this platform a new home for radical and progressive media.
Paul Merrell

Opinion: Berkeley Can Become a City of Refuge | Opinion | East Bay Express - 0 views

  • The Berkeley City Council is poised to vote March 13 on the Surveillance Technology Use and Community Safety Ordinance, which will significantly protect people's right to privacy and safeguard the civil liberties of Berkeley residents in this age of surveillance and Big Data. The ordinance is based on an ACLU model that was first enacted by Santa Clara County in 2016. The Los Angeles Times has editorialized that the ACLU's model ordinance approach "is so pragmatic that cities, counties, and law enforcement agencies throughout California would be foolish not to embrace it." Berkeley's Peace and Justice and Police Review commissions agreed and unanimously approved a draft that will be presented to the council on Tuesday. The ordinance requires public notice and public debate prior to seeking funding, acquiring equipment, or otherwise moving forward with surveillance technology proposals. In neighboring Oakland, we saw the negative outcome that can occur from lack of such a discussion, when the city's administration pursued funding for, and began building, the citywide surveillance network known as the Domain Awareness Center ("DAC") without community input. Ultimately, the community rejected the project, and the fallout led to the establishment of a Privacy Advisory Commission and subsequent consideration of a similar surveillance ordinance to ensure proper vetting occurs up front, not after the fact.