Future of the Web: Group items tagged "integration"

Gary Edwards

Silicon Valley Veteran Bill Coleman on The Business Of Disruption . . . - 0 views

  • Cloud computing doesn't need government incentives because it is a disruptive technology
  •  
    Tom Foremski of Silicon Valley Watcher interviews Bill Coleman of VisiCorp-Sun-BEA fame with questions about the economy and disruptive technologies. Coleman references noted business guru Peter Drucker when he claims that a platform will be successful if it has three characteristics. First, it has to be able to commoditize a market. Secondly, it has to obey the 10x better/cheaper rule - providing at least ten times the value of what it's displacing. And thirdly, a platform must allow you to add value with custom additions.

    In the interview, Coleman backs up his assertions with bullseye examples. Clearly his passion is for Cloud Computing, especially the next generation.

    ......"As the cloud computing platform becomes more sophisticated, he predicts that there will be an acceleration in the use of the cloud driven by a "quadruple conversion." Video, audio, and IT data all become IP based, and productivity applications become integrated with social networks.

    "As we move forward from Web 2.0 to Web 3.0, all your productivity tools become integrated with your social networking, which becomes your business networking. Your mobile life and your online life will become the same. So now the client moves into the cloud and that's when we'll see a dramatic change in the cost structure of computing and of the capabilities you can have."....

    Good interview. I hope Tom publishes the rest of the session soon.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.
    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]
    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.
    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.
    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard?
    It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things.
    Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it.
    The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
    At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
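    As a hypothetical illustration of the kind of class-based extension described above (the class names here are invented for the sketch, not part of any standard):

        <!-- A generic XHTML paragraph: structurally, just "a paragraph" -->
        <p>Call me Ishmael.</p>

        <!-- The same markup carrying publisher-defined semantics via class -->
        <p class="epigraph">Call me Ishmael.</p>
        <div class="chapter-summary">
          <p>Chapter one introduces the narrator and his reasons for going to sea.</p>
        </div>

    Any XHTML-aware tool still handles these elements as ordinary paragraphs, while a publisher's own stylesheets and scripts can treat them as epigraphs or summaries.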
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
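    To make that anatomy concrete, unzipping a typical ePub reveals a layout along these lines (the mimetype file and META-INF/container.xml are required by the standard; the directory and chapter names vary by producer and are invented here):

        mimetype                   <- identifies the archive as an ePub
        META-INF/container.xml     <- points to the package (.opf) file
        OPS/content.opf            <- metadata plus a manifest of every file
        OPS/toc.ncx                <- table of contents for reader navigation
        OPS/chapter01.xhtml        <- the book's content, as XHTML
        OPS/chapter02.xhtml
        OPS/styles.css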
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
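    As an illustration of that breakdown, unpacking an .idml archive typically reveals a structure along these lines (names follow Adobe's documented IDML conventions, though exact contents vary by document; the story file name is invented):

        designmap.xml              <- root map tying the package together
        MasterSpreads/             <- master page definitions
        Spreads/                   <- the layout spreads
        Stories/Story_u123.xml     <- text content, one XML file per story
        Resources/Styles.xml       <- paragraph and character style definitions
        Resources/Graphic.xml      <- colours and swatches
        META-INF/container.xml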
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that produces a nice, familiar InDesign document fitting straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
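    The kind of cleanup just described can be pictured as a before-and-after pair. This is an illustrative sketch only; the exact class names InDesign emits depend on the document's styles and export settings:

        <!-- Before: presentation-oriented export, list items tagged as paragraphs -->
        <p class="bullet-list">First item</p>
        <p class="bullet-list">Second item</p>

        <!-- After: structural XHTML that any tool can recognize as a list -->
        <ul>
          <li>First item</li>
          <li>Second item</li>
        </ul>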
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
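    A heavily simplified sketch of such a transformation is shown below, assuming the chapters are namespaced XHTML. The ICML element names follow InCopy's conventions, but "ParagraphStyle/Body" is an assumed style name standing in for whatever the InDesign template defines, and a real script would also handle headings, lists, and inline emphasis:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Minimal XHTML-to-ICML sketch: map each XHTML paragraph to an
             ICML paragraph range. "ParagraphStyle/Body" is an assumed style
             name defined in the InDesign template. -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">

          <xsl:template match="/">
            <Document>
              <Story Self="story0">
                <xsl:apply-templates select="//h:p"/>
              </Story>
            </Document>
          </xsl:template>

          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange>
                <Content><xsl:value-of select="normalize-space(.)"/></Content>
              </CharacterStyleRange>
              <Br/>
            </ParagraphStyleRange>
          </xsl:template>

        </xsl:stylesheet>

    Run through xsltproc (for example, xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml), the output is a file InDesign can "place" into a template, as described above.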
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
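    For a sense of what the generated wrapper contains, a minimal content.opf package file looks roughly like this (the title is taken from the project described here; the identifier and file names are invented placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Minimal OPF sketch: Dublin Core metadata, a manifest of every
             file in the book, and a spine giving the reading order. -->
        <package xmlns="http://www.idpf.org/2007/opf"
                 version="2.0" unique-identifier="bookid">
          <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Book Publishing 1</dc:title>
            <dc:language>en</dc:language>
            <dc:identifier id="bookid">urn:uuid:0000-placeholder</dc:identifier>
          </metadata>
          <manifest>
            <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
            <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
            <item id="css" href="book.css" media-type="text/css"/>
          </manifest>
          <spine toc="ncx">
            <itemref idref="ch01"/>
          </spine>
        </package>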
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is the browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters | Motherboard - 0 views

  • Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.
    Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
  • So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumptively—China in 2015.
    On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.
    With the advent of the Internet of Things and cyber-physical systems in general, we've given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete. Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
  •  
    Bruce Schneier on the insecurity of the Internet of Things, and possible consequences.
Gary Edwards

Spritz Speed Reading Revolution - 0 views

  •  
    "Why it Works: Reading is inherently time consuming because your eyes have to move from word to word and line to line. Traditional reading also consumes huge amounts of physical space on a page or screen, which limits reading effectiveness on small displays. Scrolling, pinching, and resizing a reading area doesn't fix the problem and only frustrates people. Now, with compact text streaming from Spritz, content can be streamed one word at a time, without forcing your eyes to spend time moving around the page. Spritz makes streaming your content easy and more comfortable, especially on small displays. Our "Redicle" technology enhances readability even more by using horizontal lines and hash marks to direct your eyes to the red letter in each word, so you can focus on the content that interests you. Best of all, Spritz's patent-pending technology can integrate into photos, maps, videos, and websites to promote more effective communication."
Gonzalo San Gil, PhD.

[# ! (#Free) #Tech:] Manage your Linux Box with htop - Freedom Penguin - 0 views

  •  
    "January 14, 2016 Joe Collins 0 Comment How To There is one application that I simply must have on every Linux system I install and that's htop. It doesn't matter whether it's a virtual machine or installed on hardware. It doesn't matter what distribution it is or whether the system is running a GUI Desktop Environment or not, I gotta have htop. It has become such an integral part of "
Gonzalo San Gil, PhD.

Raids cast doubt on the integrity of TOR | ITworld - 1 views

  •  
    "Federal law enforcement agencies in the U.S. and Europe have shut down more than 400 Web sites using .onion addresses and made arrests of those who run them, which calls into question whether the anonymizing The Onion Router (Tor) network itself is still secure."
Gary Edwards

Spritz reader: Getting words into your brain faster - 1 views

  • Static blocks of text like the one you’re looking at now are an antiquated and inefficient way to get words into your head. That’s the contention of Boston-based startup Spritz, which has developed a speed-reading text box that shows no more than 13 characters at a time. The Spritz box flashes words at you in quick succession so you don’t have to move your eyes around a page, and in my very quick testing it allowed me to read at more than double my usual reading pace. Spritz has teamed up with Samsung to integrate its speed reading functionality with the upcoming Galaxy S5 smartphone. The written word, after 8,000 or so years, is still an extremely effective way to get a message from one mind into the minds of others. But even with the advent of the digital age and decades of usability work, font and layout development, we’re still nowhere near optimal efficiency with it yet.
  • Take this article – I’ve written it in easily digestible chunks, and we’ve presented it in nice, thin, 10 to 14 word columns that should make it easy to scan. But pay attention to what your eyes are doing while you try to read it. Chances are, even if you’re a quick reader, your eyes are jumping around all over the place. In fact, according to Boston-based startup Spritz, you spend as little as 20 percent of your reading time actually taking in the words you’re looking at, and as much as 80 percent physically moving your eyes around to find the right spot to read each word from. So, the Spritz team decided, why not eliminate that time altogether? The Spritz reader is a simple, small box that streams text at the reader, one word at a time. The words are presented in a large, very reader-friendly font, and centered around the "optimal recognition point" of each word. In fact, the box will only display a maximum of 13 characters, so larger words are broken up.
  • What’s really interesting is just how quickly this system can pipe information into your brain. I did a couple of online reading speed tests and found my average reading speed for regular blocks of text is around 330-350 words per minute. But I can comfortably follow a Spritz box at up to 500 words per minute without missing much, losing concentration or feeling any kind of eye strain. In short stints I can follow 800 words per minute, and the team says it’s easy to train yourself to go faster and retain more.
  • Spritz claims that information retention rates on "spritzed" content are equal to or higher than that of traditional text block reading, and that some of its testers are now comfortably ingesting content at 1000 words per minute with no loss of information retention. That’s Tolstoy’s 1,440 page behemoth War and Peace dispatched in a single 10 hour sitting, if you had the concentration for it, or Stieg Larsson's Girl with a Dragon Tattoo in two and a bit hours. Spritz is also clearly developed to excel on mobile and handheld reading devices, and as such, the company has announced that Spritz will make its mobile debut on the upcoming Samsung Galaxy S5 release. Smartwatch and Google glass-type implementations are also on the radar. The mobile angle will have to be strong as there are numerous free tools for desktop browsers that can replicate a similar reading experience for free. If you’re using a Chrome browser, check out Spreed as an example. Perhaps the most significant move for Spritz will be bringing this speed reading technology to bear on your Android e-book library. Anything that can help me get through my reading backlog quicker will be most welcome!
Gonzalo San Gil, PhD.

How to integrate Git into your everyday workflow | Opensource.com - 0 views

  •  
    "Read: Part 1: What is Git? Part 2: Getting started with Git Part 3: Creating your first Git repository Part 4: How to restore older file versions in Git Part 5: 3 graphical tools for Git"
Gonzalo San Gil, PhD.

hackerthreads.org * View topic - Securing Linux: methodology, firewalls, daemons, root ... - 0 views

  •  
    "Postby weazy » Fri May 30, 2003 5:19 pm Methodology Simplicity The simplicity methodology seeks to increase security by reducing vulnerability. In the rest of this tutorial you will learn to: 1. Reduce network access to your machine using a firewall (we'll teach you how to build your own) 2. Decrease the number of priveleged programs. You help yourself by decreasing priveleged programs because you reduce the ways ppl can gain access 3. Tighten configuration of those priveleged programs you want to keep 4. Reduce number of paths to root, that is restrict access of non-priveleged users 5. Deploy intrusion detection by using file integrity checking "
Paul Merrell

We're Halfway to Encrypting the Entire Web | Electronic Frontier Foundation - 0 views

  • The movement to encrypt the web has reached a milestone. As of earlier this month, approximately half of Internet traffic is now protected by HTTPS. In other words, we are halfway to a web safer from the eavesdropping, content hijacking, cookie stealing, and censorship that HTTPS can protect against. Mozilla recently reported that the average volume of encrypted web traffic on Firefox now surpasses the average unencrypted volume.
  • Google Chrome’s figures on HTTPS usage are consistent with that finding, showing that over 50% of all pages loaded are protected by HTTPS across different operating systems.
  • This milestone is a combination of HTTPS implementation victories: from tech giants and large content providers, from small websites, and from users themselves.
  • Starting in 2010, EFF members have pushed tech companies to follow crypto best practices. We applauded when Facebook and Twitter implemented HTTPS by default, and when Wikipedia and several other popular sites later followed suit. Google has also put pressure on the tech community by using HTTPS as a signal in search ranking algorithms and, starting this year, showing security warnings in Chrome when users load HTTP sites that request passwords or credit card numbers. EFF’s Encrypt the Web Report also played a big role in tracking and encouraging specific practices. Recently other organizations have followed suit with more sophisticated tracking projects. For example, Secure the News and Pulse track HTTPS progress among news media sites and U.S. government sites, respectively.
  • But securing large, popular websites is only one part of a much bigger battle. Encrypting the entire web requires HTTPS implementation to be accessible to independent, smaller websites. Let’s Encrypt and Certbot have changed the game here, making what was once an expensive, technically demanding process into an easy and affordable task for webmasters across a range of resource and skill levels.
    Let’s Encrypt is a Certificate Authority (CA) run by the Internet Security Research Group (ISRG) and founded by EFF, Mozilla, and the University of Michigan, with Cisco and Akamai as founding sponsors. As a CA, Let’s Encrypt issues and maintains digital certificates that help web users and their browsers know they’re actually talking to the site they intended to. CAs are crucial to secure, HTTPS-encrypted communication, as these certificates verify the association between an HTTPS site and a cryptographic public key. Through EFF’s Certbot tool, webmasters can get a free certificate from Let’s Encrypt and automatically configure their server to use it.
    Since we announced that Let’s Encrypt was the web’s largest certificate authority last October, it has exploded from 12 million certs to over 28 million. Most of Let’s Encrypt’s growth has come from giving previously unencrypted sites their first-ever certificates. A large share of these leaps in HTTPS adoption are also thanks to major hosting companies and platforms--like WordPress.com, Squarespace, and dozens of others--integrating Let’s Encrypt and providing HTTPS to their users and customers.
  • Unfortunately, you can only use HTTPS on websites that support it--and about half of all web traffic is still with sites that don’t. However, when sites partially support HTTPS, users can step in with the HTTPS Everywhere browser extension. A collaboration between EFF and the Tor Project, HTTPS Everywhere makes your browser use HTTPS wherever possible. Some websites offer inconsistent support for HTTPS, use unencrypted HTTP as a default, or link from secure HTTPS pages to unencrypted HTTP pages. HTTPS Everywhere fixes these problems by rewriting requests to these sites to HTTPS, automatically activating encryption and HTTPS protection that might otherwise slip through the cracks.
  • Our goal is a universally encrypted web that makes a tool like HTTPS Everywhere redundant. Until then, we have more work to do. Protect your own browsing and websites with HTTPS Everywhere and Certbot, and spread the word to your friends, family, and colleagues to do the same. Together, we can encrypt the entire web.
  •  
    HTTPS connections don't work for you if you don't use them. If you're not using HTTPS Everywhere in your browser, you should be; it's your privacy that is at stake. And every encrypted communication you make adds to the backlog of encrypted data that NSA and other internet voyeurs must process as encrypted traffic; because cracking encrypted messages is computer resource intensive, the voyeurs do not have the resources to crack more than a tiny fraction. HTTPS Everywhere is a free extension for Firefox, Chrome, and Opera. You can get it here. https://www.eff.org/HTTPS-everywhere
Paul Merrell

We finally gave Congress email addresses - Sunlight Foundation Blog - 0 views

  • On OpenCongress, you can now email your representatives and senators just as easily as you would a friend or colleague. We've added a new feature to OpenCongress. It's not flashy. It doesn't use D3 or integrate with social media. But we still think it's pretty cool. You might've already heard of it. Email. This may not sound like a big deal, but it's been a long time coming. A lot of people are surprised to learn that Congress doesn't have publicly available email addresses. It's the number one feature request that we hear from users of our APIs. Until recently, we didn't have a good response. That's because members of Congress typically put their feedback mechanisms behind captchas and zip code requirements. Sometimes these forms break; sometimes their requirements improperly lock out actual constituents. And they always make it harder to email your congressional delegation than it should be.
  • This is a real problem. According to the Congressional Management Foundation, 88% of Capitol Hill staffers agree that electronic messages from constituents influence their bosses' decisions. We think that it's inappropriate to erect technical barriers around such an essential democratic mechanism. Congress itself is addressing the problem. That effort has just entered its second decade, and people are feeling optimistic that a launch to a closed set of partners might be coming soon. But we weren't content to wait. So when the Electronic Frontier Foundation (EFF) approached us about this problem, we were excited to really make some progress. Building on groundwork first done by the Participatory Politics Foundation and more recent work within Sunlight, a network of 150 volunteers collected the data we needed from congressional websites in just two days. That information is now on Github, available to all who want to build the next generation of constituent communication tools. The EFF is already working on some exciting things to that end.
  • But we just wanted to be able to email our representatives like normal people. So now, if you visit a legislator's page on OpenCongress, you'll see an email address in the right-hand sidebar that looks like Sen.Reid@opencongress.org or Rep.Boehner@opencongress.org. You can also email myreps@opencongress.org to email both of your senators and your House representatives at once. The first time we get an email from you, we'll send one back asking for some additional details. This is necessary because our code submits your message by navigating those aforementioned congressional webforms, and we don't want to enter incorrect information. But for emails after the first one, all you'll have to do is click a link that says, "Yes, I meant to send that email."
  • One more thing: For now, our system will only let you email your own representatives. A lot of people dislike this. We do, too. In an age of increasing polarization, party discipline means that congressional leaders must be accountable to citizens outside their districts. But the unfortunate truth is that Congress typically won't bother reading messages from non-constituents — that's why those zip code requirements exist in the first place. Until that changes, we don't want our users to waste their time. So that's it. If it seems simple, it's because it is. But we think that unbreaking how Congress connects to the Internet is important. You should be able to send a call to action in a tweet, easily forward a listserv message to your representative and interact with your government using the tools you use to interact with everyone else.
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept - 0 views

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.)
Gonzalo San Gil, PhD.

10 alternative social networks for business professionals | ITworld - 1 views

  •  
    [LinkedIn may be No. 1 when it comes to professional social networking, but your quest to connect doesn't need to stop there. Here's a look at 10 social networks for professionals that offer a variety of features such as Facebook integration, career advice, resume critiques and more.]
Paul Merrell

Joint - Dear Colleague Letter: Electronic Book Readers - 0 views

  • U.S. Department of Justice, Civil Rights Division
    U.S. Department of Education, Office for Civil Rights
  •  
    June 29, 2010

    Dear College or University President:

    We write to express concern on the part of the Department of Justice and the Department of Education that colleges and universities are using electronic book readers that are not accessible to students who are blind or have low vision and to seek your help in ensuring that this emerging technology is used in classroom settings in a manner that is permissible under federal law. A serious problem with some of these devices is that they lack an accessible text-to-speech function. Requiring use of an emerging technology in a classroom environment when the technology is inaccessible to an entire population of individuals with disabilities - individuals with visual disabilities - is discrimination prohibited by the Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973 (Section 504) unless those individuals are provided accommodations or modifications that permit them to receive all the educational benefits provided by the technology in an equally effective and equally integrated manner. ...

    The Department of Justice recently entered into settlement agreements with colleges and universities that used the Kindle DX, an inaccessible, electronic book reader, in the classroom as part of a pilot study with Amazon.com, Inc. In summary, the universities agreed not to purchase, require, or recommend use of the Kindle DX, or any other dedicated electronic book reader, unless or until the device is fully accessible to individuals who are blind or have low vision, or the universities provide reasonable accommodation or modification so that a student can acquire the same information, engage in the same interactions, and enjoy the same services as sighted students with substantially equivalent ease of use. The texts of these agreements may be viewed on the Department of Justice's ADA Web site, www.ada.gov. (To find these settlements on www.ada.gov, search for "Kindle.") Consisten
Gonzalo San Gil, PhD.

Project Maelstrom - 0 views

  •  
    "Project Maelstrom aims on resolving this by attempting to create an open network of data sources, authentication methods, and applications. Unlike many other competing services, Maelstrom aims to create a comprehensive network of anything required for an individual web application to integrate with the internet as a whole. Just the connections, nothing more. We'll create the network; it'll be up to you to use it. We don't want to get in your way by attempting to compete. "
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project, and provide a complete port to WiNE to any software provider requesting a port. I wanted to break the monopolist's hold on the Windows Productivity Environment and the hundreds of millions of investment dollars and time that had been spent on application development forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret API's even as the OS continued to evolve. Maybe. But a full disclosure of the API's coupled with the community service "port to WiNE" requirement might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8
    Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly to an iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too.
    A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform, and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet, and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but were wailed on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 Billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive and cost-effective, process-by-process, value-added transition to a Cloud Productivity Platform and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in Peace, Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers' luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on one side of the Windows OS, and provisioning an open application developer layer on the other using open and totally transparent API's. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS API's would be developed privately, separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah, there was lots of anti-IBM "evil empire" stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows API's. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first Contact Management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Matteo Spreafico

Advocacy Group Asks DOJ To Probe Google Search Results - 2 views

  • The nonprofit advocacy group said it sent a letter to Christine Varney, Assistant Attorney General for the Antitrust Division, after news that the European Commission had received three complaints against Google alleging the company manipulated search engine results in an anticompetitive way.
  • "As part of your continued antitrust investigation we call on you to shine a light on Google’s black box, and require it to explain what’s behind search results," Simpson wrote.
  • "If, as it appears, Google is tweaking results to further its narrow agenda, this anticompetitive behavior must be stopped."
  •  
    If the evidence supports the allegations, this is a plausible antitrust theory: a company with a dominant market position leveraging that position into new markets via integration. In essence, this is the same theory that was applied against Microsoft's bundling and integration of Windows, Internet Explorer, and Windows Media Player.
Paul Merrell

BitTorrent Sync creates private, peer-to-peer Dropbox, no cloud required | Ars Technica - 6 views

  • BitTorrent today released folder syncing software that replicates files across multiple computers using the same peer-to-peer file sharing technology that powers BitTorrent clients. The free BitTorrent Sync application is labeled as being in the alpha stage, so it's not necessarily ready for prime-time, but it is publicly available for download and working as advertised on my home network. BitTorrent, Inc. (yes, there is a legitimate company behind BitTorrent) took to its blog to announce the move from a pre-alpha, private program to the publicly available alpha. Additions since the private alpha include one-way synchronization, one-time secrets for sharing files with a friend or colleague, and the ability to exclude specific files and directories.
  • BitTorrent Sync provides "unlimited, secure file-syncing," the company said. "You can use it for remote backup. Or, you can use it to transfer large folders of personal media between users and machines; editors and collaborators. It’s simple. It’s free. It’s the awesome power of P2P, applied to file-syncing." File transfers are encrypted, with private information never being stored on an external server or in the "cloud." "Since Sync is based on P2P and doesn’t require a pit-stop in the cloud, you can transfer files at the maximum speed supported by your network," BitTorrent said. "BitTorrent Sync is specifically designed to handle large files, so you can sync original, high quality, uncompressed files." (A toy sketch of the one-time secret idea mentioned above appears after the comments on this item.)
  •  
    Direct P2P encrypted file syncing, no cloud intermediary, which should translate to far more secure exchange of files, with less opportunity for snooping by governments or others, than with cloud-based services.
  •  
    Hey Paul, is there an open source document management system that I could hook BitTorrent Sync to?
  •  
    More detail please. What do you want to do with the doc management system? Platform? Server-side or stand-alone? Industrial strength and highly configurable, or lightweight and simple? What do you mean by "hook?" Not that I would be able to answer anyway. I really know very little about BitTorrent Sync. In fact, as far as I'd gotten before your question was to look at the FAQ. It's linked from . But there's a link to a forum on the same page. Giving the first page a quick scan confirms that this really is alpha-state software. But that would probably be a better place to ask. (Just give them more specific information about what you'd like to do.) There are other projects out there working on getting around the surveillance problem. I2P is one that is further along than BitTorrent Sync and quite a bit more flexible. See . (But I haven't used it, so caveat emptor.)
  •  
    There is a great list of PRISM Proof software at http://prism-break.org/. Includes a link to I2P. I want to replace Gmail though, but would like another Web-based system since I need multi-device access. Of course, I need to replace my Google Apps / Google Docs system. That's why I asked about a PRISM Proof sync-share-store DMS. My guess is that there are many users similarly seeking a PRISM Proof platform of communications, content and collaborative computing systems. BusinessInsider.com is crushed with articles about Google struggling to squirm out from under the NSA PRISM boot-on-the-back-of-their-neck situation. As if blaming the NSA makes up for the dragnet that they consented/allowed/conceded to cover their entire platform. Perhaps we should be watching Germany? There must be tons of startup operations underway, all seeking to replace Google, Amazon, Facebook, Microsoft, Skype and so many others. It's a great day for Libertyware :)
  •  
    Is the NSA involvement the "Kiss of Death"? Google seems to think so. I'm wondering what the impact would be if ZOHO were to announce a PRISM Proof productivity platform?
  •  
    It is indeed. The E.U. has far more protective digital privacy rights than we do (none). If you're looking for a Dropbox replacement (you should be), for a cloud-based solution take a look at Wuala. Unlike Dropbox, all of the encryption/decryption happens on your local machine; Wuala never sees your files unencrypted. Dropbox folks have admitted that there's no technical barrier to them looking at your files. Their encrypt/decrypt operations are done in the cloud (if they actually bother) and they have the key. Which makes it more chilling that the PRISM docs Snowden leaked make reference to Dropbox being the next cloud service NSA plans to add to their collection. Wuala also is located (as are its servers) in Switzerland, which has far stronger digital data privacy laws than the U.S. Plus the Swiss are well along the path to E.U. membership; they've ratified many of the E.U. treaties, including the treaty on Human Rights, which as I recall is where the digital privacy sections are. I've begun to migrate from Dropbox to Wuala. It seems to be neck and neck with Dropbox on features and supported platforms, with the advantage of a far more secure approach and 5 GB free. But I'd also love to see more approaches akin to I2P and BitTorrent Sync that provide the means to bypass the cloud (a minimal sketch of the client-side encryption idea appears after these comments). Don't depend on government to ensure digital privacy; route around the government voyeurs. Hmmm ... I wonder if the NSA has the computer capacity to handle millions of people switching to encrypted communication? :-) Thanks for the link to the software list.
  •  
    Re: Google. I don't know if it's the "kiss of death" but they're definitely going to take a hit, particularly outside the U.S. BTW, I'm remembering from a few years back when the ODF Foundation was still kicking. I did a fair bit of research on the bureaucratic forces in the E.U. that were pushing for the Open Document Exchange Formats. That grew out of a then-ongoing push to get all of the E.U. nations connected via a network that is not dependent on the Internet. It was fairly complete at the time down to the national level and was branching out to the local level, and the plan from there was to push connections to business and then to Joe Sixpack and wife. Interop was key, hence ODEF. The E.U. might not be that far away from an ability to sever the digital connections with the U.S. Say, a bunch of daisy-chained proxy anonymizers for communications with the U.S. Of course they'd have to block the UK from the network and treat it like it is the U.S. There's a formal signals intelligence service collaboration/integration dating back to WW 2, as I recall, among the U.S., the U.K., Canada, Australia, and New Zealand. Don't remember its name. But it's the same group of nations that were collaborating on Echelon. So the E.U. wouldn't want to let the UK fox inside their new chicken coop. Ah, it's just a fantasy. The U.S. and the E.U. are too interdependent. I have no idea how hard it would be for the Zoho folk to come up with desktop-side encryption/decryption. And I don't know whether their servers are located outside the reach of a U.S. court's search warrant. But I think Google is going to have to move in that direction fast if it wants to minimize the damage. Or get way out in front of the hounds chomping at the NSA's ankles and reduce the NSA to compost. OTOH, Google might be a government covert op. for all I know. :-) I'm really enjoying watching the NSA show. Who knows what facet of their Big Brother operation gets revealed next?
  •  
    ZOHO is an Indian company with USA marketing offices. No idea where the server farm is located, but they were not on the NSA list. I've known Raju Vegesna for years, mostly from the old Web 2.0 and Office 2.0 Conferences. Raju runs the USA offices in Santa Clara. I'll try to catch up with him on Thursday. How he could miss this once-in-a-lifetime moment to clean out Google, Microsoft and SalesForce.com is something I'd like to find out about. Thanks for the Wuala tip. You sent me that years ago, when I was working on research and design for the SurDocs project. Incredible that all our notes, research, designs and correspondence were left to rot in Google Wave! Too too funny. I recall telling Alex from SurDocs that he had to use a USA host, like Amazon, that could be trusted by USA customers to keep their docs safe and secure. Now look what I've done! I've tossed his entire company information set into the laps of the NSA and their cabal of connected corporatists :)
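The one-time sharing secrets mentioned in the item annotation above are essentially capability tokens: whoever holds the string can fetch the folder. Below is a toy Python sketch of that idea only; the key format, derivation scheme, and function names are illustrative assumptions, not BitTorrent Sync's actual protocol.

import base64
import hashlib
import secrets
import time

def new_folder_secret() -> str:
    # A random 160-bit token, Base32-encoded; holding it is holding access.
    return base64.b32encode(secrets.token_bytes(20)).decode()

def one_time_secret(folder_secret: str, ttl_seconds: int = 86400) -> tuple:
    # Derive a time-limited sharing token from the master secret by hashing
    # it together with an expiry timestamp (a hypothetical scheme).
    expires = int(time.time()) + ttl_seconds
    token = hashlib.sha256(f"{folder_secret}:{expires}".encode()).hexdigest()[:32].upper()
    return token, expires

if __name__ == "__main__":
    master = new_folder_secret()
    token, expires = one_time_secret(master)
    print("folder secret:  ", master)
    print("one-time secret:", token, "(expires", expires, ")")

The point of the sketch is the asymmetry: the master secret never leaves your machines, while the derived token can be handed to a colleague and simply stops working once the window closes.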
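The Wuala discussion above turns on where encryption happens: client-side, so the provider only ever stores ciphertext. Here is a minimal sketch of that client-side approach, assuming the third-party Python "cryptography" package (pip install cryptography); it illustrates the general technique, not Wuala's or Dropbox's actual code, and the file contents are hypothetical.

from cryptography.fernet import Fernet

def make_key() -> bytes:
    # Generated and stored locally (e.g., in the OS keyring); the cloud
    # provider never sees this key.
    return Fernet.generate_key()

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    # Only the returned ciphertext leaves the machine.
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = make_key()
    note = b"meet at the usual place"      # stands in for a local file's contents
    blob = encrypt_for_upload(note, key)   # all a Dropbox-style server would ever store
    assert decrypt_after_download(blob, key) == note
    print("ciphertext sample:", blob[:40])

A provider that holds only such blobs has nothing useful to hand over; the design question the comments raise is simply whether the key lives with the user (Wuala's claim) or with the service (Dropbox's model).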
Paul Merrell

Own Your Own Devices You Will, Under Rep. Farenthold's YODA Bill | Bloomberg BNA - 0 views

  • A bill introduced Sept. 18 would make clear that consumers actually own the electronic devices, and any accompanying software on those devices, that they purchase, according to sponsor Rep. Blake Farenthold (R-Texas). The You Own Devices Act (H.R. 5586) would amend the Copyright Act “to provide that the first sale doctrine applies to any computer program that enables a machine or other product to operate.” The bill, which is unlikely to receive attention during Congress's lame-duck legislative session, was well received by consumers' rights groups.
  • Section 109(a) of the Copyright Act, 17 U.S.C. §109(a), serves as the foundation for the first sale doctrine. H.R. 5586 would amend Section 109(a) by adding a provision covering “transfer of computer programs.” That provision would state: if a computer program enables any part of a machine or other product to operate, the owner of the machine or other product is entitled to transfer an authorized copy of the computer program, or the right to obtain such copy, when the owner sells, leases, or otherwise transfers the machine or other product to another person. The right to transfer provided under this subsection may not be waived by any agreement.
  • ‘Things’ Versus Software
    Farenthold had expressed concern during a Sept. 17 hearing on Section 1201 of the Digital Millennium Copyright Act over what he perceived was a muddling between patents and copyrights when it comes to consumer products. “Traditionally patent law has protected things and copyright law has protected artistic-type works,” he said. “But now more and more things have software in them and you are licensing that software when you purchase a thing.” Farenthold asked the witnesses if there was a way to draw a distinction in copyright “between software that is an integral part of a thing as opposed to an add-on app that you would put on your telephone.”
  • H.R. 5586 seeks to draw that distinction. “YODA would simply state that if you want to sell, lease, or give away your device, the software that enables it to work is transferred along with it, and that any right you have to security and bug fixing of that software is transferred as well,” Farenthold said in a statement issued Sept. 19.