Future of the Web: Group items tagged "business digital"

Gonzalo San Gil, PhD.

Studies on file sharing - La Quadrature du Net - 0 views

  •  
    "Contents 1 Studies 1.1 Evaluation of the effects of the HADOPI law 1.1.1 University of Delaware and Université de Rennes - 2014 - Graduated Response Policy and the Behavior of Digital Pirates: Evidence from the French Three-Strike (Hadopi) Law 1.1.2 M@rsouin - 2010 - Evaluation of the effects of the HADOPI law (FR) 1.2 People who share files are people who spend the more for culture 1.2.1 Munich School of Management and Copenhagen Business School - Piracy and Movie Revenues: Evidence from Megaupload 1.2.2 The American Assembly (Collumbia University) - Copy Culture in the USA and Germany 1.2.3 GFK (Society for Consumer Research) - Disappointed commissioner suppresses study showing pirates are cinema's best consumers 1.2.4 HADOPI - 2011 - January 2011 study on online cultural practices (FR) 1.2.5 University of Amsterdam - 2010 - Economic and cultural effects of unlawful file sharing 1.2.6 BBC - 2009 - "Pirates" spend more on music (FR) 1.2.7 IPSOS Germany - 2009 - Filesharers are better "consumers" of culture (FR) 1.2.8 Frank N. Magid Associates, Inc. - 2009 - P2P / Best consumers for Hollywood (EN) 1.2.9 Business School of Norway - 2009 - Those who share music spend ten times more money on music (NO) 1.2.10 Annelies Huygen, et al. (Dutch government investigation) - 2009 - Ups and downs - Economische en culturele gevolgen van file sharing voor muziek, film en games 1.2.11 M@rsouin - 2008 - P2P / buy more DVDs (FR) 1.2.12 Canadian Department of Industry - 2007 - P2P / achètent plus de musique (FR) 1.2.13 Felix Oberholzer-Gee (above) and Koleman Strumpf - 2004 -File sharing may boost CD sales 1.3 Economical effects of filesharing 1.3.1 University of Kansas School of Business - Using Markets to Measure the Impact of File Sharing o
Gonzalo San Gil, PhD.

Who's the Best Digital Music Distribution Company? - 0 views

  •  
    "I originally did this report for Ari's Take. Everything is current and has been fact checked by every company. This is the most comprehensive and accurate digital distribution comparison piece on the web. Who is the best digital distributor? Read on… I sat down with reps at 8 different digital distribution companies"
Paul Merrell

"In 10 Years, the Surveillance Business Model Will Have Been Made Illegal" - - 1 views

  • The opening panel of the Stigler Center’s annual antitrust conference discussed the source of digital platforms’ power and what, if anything, can be done to address the numerous challenges posed by their ability to shape opinions and outcomes.
  • Google CEO Sundar Pichai caused a worldwide sensation earlier this week when he unveiled Duplex, an AI-driven digital assistant able to mimic human speech patterns (complete with vocal tics) to such a convincing degree that it managed to have real conversations with ordinary people without them realizing they were actually talking to a robot.   While Google presented Duplex as an exciting technological breakthrough, others saw something else: a system able to deceive people into believing they were talking to a human being, an ethical red flag (and a surefire way to get to robocall hell). Following the backlash, Google announced on Thursday that the new service will be designed “with disclosure built-in.” Nevertheless, the episode created the impression that ethical concerns were an “after-the-fact consideration” for Google, despite the fierce public scrutiny it and other tech giants faced over the past two months. “Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill and a prominent critic of tech firms.   The controversial demonstration was not the only sign that the global outrage has yet to inspire the profound rethinking critics hoped it would bring to Silicon Valley firms. In Pichai’s speech at Google’s annual I/O developer conference, the ethical concerns regarding the company’s data mining, business model, and political influence were briefly addressed with a general, laconic statement: “The path ahead needs to be navigated carefully and deliberately and we feel a deep sense of responsibility to get this right.”
  • Google’s fellow FAANGs also seem eager to put the “techlash” of the past two years behind them. Facebook, its shares now fully recovered from the Cambridge Analytica scandal, is already charging full-steam ahead into new areas like dating and blockchain.   But the techlash likely isn’t going away soon. The rise of digital platforms has had profound political, economic, and social effects, many of which are only now becoming apparent, and their sheer size and power makes it virtually impossible to exist on the Internet without using their services. As Stratechery’s Ben Thompson noted in the opening panel of the Stigler Center’s annual antitrust conference last month, Google and Facebook—already dominating search and social media and enjoying a duopoly in digital advertising—own many of the world’s top mobile apps. Amazon has more than 100 million Prime members, for whom it is usually the first and last stop for shopping online.   Many of the mechanisms that allowed for this growth are opaque and rooted in manipulation. What are those mechanisms, and how should policymakers and antitrust enforcers address them? These questions, and others, were the focus of the Stigler Center panel, which was moderated by the Economist’s New York bureau chief, Patrick Foulis.
Paul Merrell

Google to Stop Selling Ads Based on Your Specific Web Browsing - WSJ - 2 views

  • Google plans to stop selling ads based on individuals’ browsing across multiple websites, a change that could hasten upheaval in the digital advertising industry. The Alphabet Inc. company said Wednesday that it plans next year to stop using or investing in tracking technologies that uniquely identify web users as they move from site to site across the internet. The decision, coming from the world’s biggest digital advertising company, could help push the industry away from the use of such individualized tracking, which has come under increasing criticism from privacy advocates and faces scrutiny from regulators. Google’s heft means the change could reshape the digital ad business, where many companies rely on tracking individuals to target their ads, measure the ads’ effectiveness and stop fraud. Google accounted for 52% of last year’s global digital ad spending of $292 billion, according to Jounce Media, a digital ad consultancy.
Paul Merrell

Google bulges old time news archive | The Register - 0 views

  • Google is redoubling efforts to offer a digital archive of the world's newspapers. Two years ago, the search giant began indexing the existing digital archives of papers like The New York Times and The Washington Post, and today, with a post to The Official Google Blog, the company said it's now working with other publishers to bring a much broader range of old newsprint into the project.
  • In addition to the old ads, you'll find new ads. Digitized papers will be joined by familiar AdSense text, and Google will split the revenue with the papers' publishers.
  •  
    There's a change in Google's business model indicated by that last paragraph: sharing Google ad revenues with publishers. Publishers have been suing Google in Europe and the U.S. for indexing their web site news content. Is sharing Google AdSense revenue with publishers the compromise that will bring the world an explosion of information previously unavailable online in easily searchable form? Most newspapers' archives are not available online, and far too many of those that are require a subscription to search even a single newspaper's archive (e.g., the New York Times). It sounds like Google may have its sights set on eroding the information subscription business model that the news business -- along with advertising -- has been built around for centuries. This announcement might mark a paradigm shift.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
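  •  
    A minimal sketch of that extensibility argument (not from the article; the class names and the use of Python with lxml are illustrative assumptions): class-labelled XHTML paragraphs remain machine-queryable in much the same way DocBook's dedicated elements are.

      # Hypothetical example: XHTML "class" attributes as lightweight semantics.
      from lxml import etree

      xhtml = b"""<html xmlns="http://www.w3.org/1999/xhtml">
        <body>
          <p class="epigraph">Tell all the truth but tell it slant.</p>
          <p>An ordinary body paragraph.</p>
          <p class="summary">A short chapter summary.</p>
        </body>
      </html>"""

      doc = etree.fromstring(xhtml)
      ns = {"x": "http://www.w3.org/1999/xhtml"}

      # Select the "semantic" paragraphs, much as a DocBook toolchain would
      # select <epigraph> or <summary> elements.
      epigraphs = doc.xpath("//x:p[@class='epigraph']", namespaces=ns)
      summaries = doc.xpath("//x:p[@class='summary']", namespaces=ns)
      print([p.text for p in epigraphs])   # ['Tell all the truth but tell it slant.']
      print([p.text for p in summaries])   # ['A short chapter summary.']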
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
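  •  
    A quick way to verify the "Web page in a wrapper" claim (a sketch; the file name and the entry names shown are typical rather than taken from the article):

      # List what is inside an ePub: it is just a zip archive.
      import zipfile

      with zipfile.ZipFile("book.epub") as epub:
          for name in epub.namelist():
              print(name)
          # Typical entries:
          #   mimetype                  (the string "application/epub+zip")
          #   META-INF/container.xml    (points at the package file)
          #   OEBPS/content.opf         (metadata and manifest)
          #   OEBPS/toc.ncx             (navigation)
          #   OEBPS/chapter01.xhtml     (the book's content, as XHTML)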
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
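  •  
    The same zip trick works on IDML (a sketch; the file name is hypothetical and the entry names are typical of IDML packages, not quoted from the article):

      # Unpack the anatomy of an InDesign Markup Language document.
      import zipfile

      print(zipfile.ZipFile("layout.idml").namelist())
      # Typically includes: designmap.xml, MasterSpreads/*.xml, Spreads/*.xml,
      # Resources/Styles.xml (defined styles), and Stories/*.xml (the content).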
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
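  •  
    A minimal sketch of that clean-up step, assuming (since the article does not give the exact markup) that the export tagged list items as <p class="bullet-item">; runs of such classed paragraphs are rewrapped as a genuine XHTML list:

      from lxml import etree

      XHTML = "http://www.w3.org/1999/xhtml"

      def rewrap_lists(root, item_class="bullet-item"):
          """Turn runs of <p class="bullet-item"> into <ul><li>...</li></ul>."""
          ns = {"x": XHTML}
          for p in root.xpath(f"//x:p[@class='{item_class}']", namespaces=ns):
              prev = p.getprevious()
              if prev is not None and prev.tag == f"{{{XHTML}}}ul":
                  ul = prev                       # continue the list already started
              else:
                  ul = etree.Element(f"{{{XHTML}}}ul")
                  p.addprevious(ul)
              li = etree.SubElement(ul, f"{{{XHTML}}}li")
              li.text = p.text                    # inline markup handling omitted
              p.getparent().remove(p)
          return root

      # usage: rewrap_lists(etree.parse("chapter.xhtml").getroot())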
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
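  •  
    The transformation step itself comes down to a few lines (a sketch with file names of our own invention; lxml's built-in XSLT engine stands in here for the xsltproc command the prototype used):

      from lxml import etree

      source    = etree.parse("chapter.xhtml")
      transform = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))
      icml      = transform(source)

      with open("chapter.icml", "w", encoding="utf-8") as out:
          out.write(str(icml))   # ready to be "placed" into an InDesign template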
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
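  •  
    For completeness, a hand-rolled version of the "wrapping" step (a sketch using only the standard library in place of eCub; the file names, metadata, and single-chapter manifest are all made up, and a real package would also want a navigation file and a stylesheet entry):

      import zipfile

      CONTAINER = """<?xml version="1.0"?>
      <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>"""

      OPF = """<?xml version="1.0"?>
      <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="c1" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
        </manifest>
        <spine>
          <itemref idref="c1"/>
        </spine>
      </package>"""

      with zipfile.ZipFile("book.epub", "w") as epub:
          # The mimetype entry must come first and must be stored uncompressed.
          epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
          epub.writestr("META-INF/container.xml", CONTAINER)
          epub.writestr("OEBPS/content.opf", OPF)
          epub.write("chapter01.xhtml", "OEBPS/chapter01.xhtml",
                     compress_type=zipfile.ZIP_DEFLATED)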
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-oriented application of XML, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" after Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

EU unveils landmark law curbing power of tech giants | News | DW | 15.12.2020 - 0 views

  • The European Union unveiled landmark legislation on Tuesday that lays out strict rules for tech giants to do business in the bloc. The draft legislation, dubbed the Digital Services Act (DSA) and the Digital Markets Act (DMA), outlines specific regulations that seek to limit the power of global internet firms on the European market. Companies including Google, Apple, Amazon, Facebook and others could face hefty penalties for violating the rules. EU antitrust czar Margrethe Vestager and EU digital chief Thierry Breton presented the draft on Tuesday, after the content of the new rules was leaked to the media on Monday.
  • What's in the draft laws? The dual legislation sets out a list of do's, don'ts and penalties for internet giants: Companies with over 45 million EU users would be designated as digital "gatekeepers" — making them subject to stricter regulations. Firms could be fined up to 10% of their annual turnover for violating competition rules. They could also be required to sell one of their businesses or parts of it (including rights or brands). Platforms that refuse to comply and "endanger people's life and safety" could have their service temporarily suspended "as a last resort." Companies would need to inform the EU ahead of any planned mergers or acquisitions. Certain kinds of data must be shared with regulators and rivals. Companies favoring their own services could be outlawed. Platforms would be more responsible for illegal, disturbing or misleading content.
  • Following the announcement on Tuesday, US internet giant Google criticized the draft legislation, saying it appeared to target specific firms.  "We will carefully study the proposals made by the European Commission over the next few days. However, we are concerned that they seem to specifically target a handful of companies," said Karan Bhatia, the vice president of government affairs and public affairs at Google. Facebook appeared to offer a more conciliatory tone, saying the legislation was "on the right track."
  • ...1 more annotation...
  • The draft still faces a long ratification process, including feedback from the EU's 27 member states and the European Parliament. Company lobbyists and trade associations will also influence the final law. The process is expected to take several months or even a year.
Gonzalo San Gil, PhD.

Spain's Ill-Conceived 'Google Tax' Law Likely To Cause Immense Damage To Digital Common... - 1 views

  •  
    [# ! another international ridicule] "from the digital-cluelessness dept Techdirt recently wrote about Spain's imminent and almost unbelievably foolish new copyright law designed to prop up old and failing business models in the publishing sector. Mike mentioned that it was potentially disastrous for things like fair use, Creative Commons and public domain material"
Paul Merrell

Mozilla Sets New Plans for Do Not Track Browser | Adweek - 0 views

  • Much to the disappointment of the digital advertising establishment, Mozilla is going ahead with plans to automatically block third-party cookie tracking in its Firefox browser. Mozilla first announced its Do Not Track browser in February, only to back off in May saying it needed to do more testing. But that didn't stop a growing chorus of loud protests from the advertising community, which argued that the browser would choke off the ad-supported Internet. The Interactive Advertising Bureau's general counsel Mike Zaneis called Mozilla's browser nothing less than a "nuclear first strike" against the ad community. No date has been set for when Firefox will turn on the feature, but advertisers, which have been regularly meeting with Mozilla and were hopeful for a compromise, are already lashing back at Mozilla.
  • "It's troubling," said Lou Mastria, the managing director for the Digital Advertising Alliance, which manages an online self-regulatory program called Ad Choices that provides consumers with the choice to opt-out of targeted ads. "They're putting this under the cloak of privacy, but it's disrupting a business model," Mastria said. Advertisers are worried that Mozilla's plans could be the death knell to thousands of small Web publishers that depend on third-party targeted ads to stay in business. Nearly 1,000 signed a petition urging Mozilla to change its plans.  "One publisher said that 20 percent of their business would go away. That's huge," said Mastria. "Mozilla is really picking business model winners and losers."
  • Not all cookies will be blocked under Mozilla's latest plans for its proposed browser; there will be exceptions. Through a partnership with the Center for Internet and Society at Stanford Law School, the two are launching a Cookie Clearinghouse. Overseen by a six-person panel, it will determine a list of undesirable cookies and then block those. "The Cookie Clearinghouse will create, maintain and publish objective information," Aleecia McDonald, director of privacy at CIS, said in a statement. "Web browser companies will be able to choose to adopt the lists we publish to provide new privacy options to their users." But others say the approach is far from objective. "What these organizations and the privacy groups that back them are really saying is 'let us choose for you because we know best,' " said Daniel Castro, a senior analyst with the Information Technology and Innovation Foundation. "The proponents of this model have claimed they are empowering users. ... This is basically Sarah Palin's 'Death Panels' but for the Internet."
  • ...1 more annotation...
  • Advertisers have so far resisted some of the Do Not Track proposals advocated by privacy groups, arguing that they are technological solutions that could quickly be rendered obsolete by the fast-moving Internet economy. When Microsoft launched its browser with Do Not Track enabled by default, advertisers said they would not honor it. Meanwhile, members of the World Wide Web Consortium's tracking group, represented by advertisers, privacy groups and other stakeholders, have been unable to reach consensus about a universal Do Not Track browser solution. In Congress, where baseline privacy legislation has moved at a glacial pace, Mozilla's news gave Sen. Jay Rockefeller (D-W.Va.) more ammunition for his Do Not Track Online Act. Introduced earlier this year, the bill hasn't gotten much traction and only has one co-sponsor, Sen. Richard Blumenthal (D-Conn.). "With major Web browsers now starting to provide privacy protections by default, it's even more important to give businesses the regulatory certainty they need and consumers the privacy protections they deserve," Rockefeller said in a statement. "I hope this will end the emerging back and forth so we can act quickly to pass new legislation."
  •  
    Cookie Clearinghouse. Overseen by a six-person panel, it will determine a list of undesirable cookies and then block those.
Gonzalo San Gil, PhD.

British reality show rigs teens' iPhones to record all their activity | Ars Technica - 0 views

  •  
    [# ! accustoming teens (young citizens) to being spied...] "A new reality series airing on Channel 4 used rigged iPhones to monitor all the digital activities of its teen characters, wrote the Columbia Journalism Review on Thursday. The system, referred to as a "digital rig" by the studio that developed it, had feeds monitored by a production team 13 hours a day, seven days a week."
Paul Merrell

Rural America and the 5G Digital Divide. Telecoms Expanding Their "Toxic Infrastructure... - 0 views

  • While there is considerable telecom hubris regarding the 5G rollout and increasing speculation that the next generation of wireless is not yet ready for Prime Time, the industry continues to make promises to Rural America that it has no intention of fulfilling. Decades-long promises to deliver digital Utopia to rural America by T-Mobile, Verizon and AT&T have never materialized.  
  • In 2017, the USDA reported that 29% of American farms had no internet access. The FCC says that 14 million rural Americans and 1.2 million Americans living on tribal lands do not have 4G LTE on their phones, and that 30 million rural residents do not have broadband service, compared to 2% of urban residents. It’s beginning to sound like a Third World country. Despite a $4.5 billion annual FCC subsidy to carriers to provide broadband service in rural areas, the FCC reports that "over 24 million Americans do not have access to high-speed internet service, the bulk of them in rural areas," while a Microsoft study found that "162 million people across the US do not have internet service at broadband speeds." At the same time, only three cable companies have access to 70% of the market in a sweetheart deal to hike rates as they avoid competition and the FCC looks the other way. The FCC believes that it would cost $40 billion to bring broadband access to 98% of the country, with expansion in rural America even more expensive. While the FCC has pledged a $2 billion, ten-year plan to identify rural wireless locations, only 4 million rural American businesses and homes will be targeted, a mere drop in the bucket. Which brings us to rural mapping: since the advent of the digital age, there have been no accurate maps identifying where broadband service is available in rural America and where it is not. The FCC has a long history of promulgating unreliable and unverified carrier-provided numbers, as the Commission has repeatedly "bungled efforts to produce accurate broadband maps" that would have facilitated rural coverage. During the Senate Commerce Committee hearing on April 10th regarding broadband mapping, critical testimony questioned whether the FCC and/or the telecom industry have either the commitment or the proficiency to provide 5G to rural America. Members of the Committee shared concerns that 5G might put rural America further behind the curve so as to never catch up with the rest of the country.
Paul Merrell

Privacy Shield Program Overview | Privacy Shield - 0 views

  • EU-U.S. Privacy Shield Program Overview The EU-U.S. Privacy Shield Framework was designed by the U.S. Department of Commerce and European Commission to provide companies on both sides of the Atlantic with a mechanism to comply with EU data protection requirements when transferring personal data from the European Union to the United States in support of transatlantic commerce. On July 12, the European Commission deemed the Privacy Shield Framework adequate to enable data transfers under EU law (see the adequacy determination). The Privacy Shield program, which is administered by the International Trade Administration (ITA) within the U.S. Department of Commerce, enables U.S.-based organizations to join the Privacy Shield Framework in order to benefit from the adequacy determination. To join the Privacy Shield Framework, a U.S.-based organization will be required to self-certify to the Department of Commerce (via this website) and publicly commit to comply with the Framework’s requirements. While joining the Privacy Shield Framework is voluntary, once an eligible organization makes the public commitment to comply with the Framework’s requirements, the commitment will become enforceable under U.S. law. All organizations interested in joining the Privacy Shield Framework should review its requirements in their entirety. To assist in that effort, Commerce’s Privacy Shield Team has compiled resources and addressed frequently asked questions.
  •  
    I got a notice from Dropbox tonight that it is now certified under this program. This program is fallout from an E.U. Court of Justice decision following the Snowden disclosures, holding that the then-existing U.S.-E.U. framework for protecting the rights of E.U. citizens' data was invalid because that framework did not adequately protect digital privacy rights. This new framework is intended to comply with the court's decision, but one need only look at section 5 of the agreement to see that it does not. Expect follow-on litigation. The agreement is at https://www.privacyshield.gov/servlet/servlet.FileDownload?file=015t00000004qAg Section 5 lets NSA continue to intercept and read data from E.U. citizens and also allows their data to be disclosed to U.S. law enforcement. And the agreement adds nothing to U.S. citizens' digital privacy rights. In my view, this framework is a stopgap measure that will only last as long as it takes for another case to reach the Court of Justice and be ruled upon. The ox that got gored by the Court of Justice ruling was U.S. companies' ability to store E.U. citizens' data outside the E.U. and to allow internet traffic from the E.U. to pass through the U.S. Microsoft had leadership that set up new server farms in Europe under the control of a business entity beyond the jurisdiction of U.S. courts. Other U.S. internet biggies didn't follow suit. This framework is their lifeline until the next ruling by the Court of Justice.
Gary Edwards

Box extends its enterprise playbook, but users are still at the center | CITEworld - 0 views

  • The 47,000 developers making almost two billion API calls to the Box platform per month are a good start, Levie says, but Box needs to go further and do more to customize its platform to help push this user-centric, everything-everywhere-always model at larger and larger enterprises. 
  • Box for Industries is comprised of three parts: A Box-tailored core service offering, a selection of partner apps, and the implementation services to combine the two of those into something that ideally can be used by any enterprise in any vertical. 
  • Box is announcing solutions for three specific industries: Retail, healthcare, and media/entertainment. For retail, that includes vendor collaboration (helping vendors work with manufacturers and distributors), digital asset management, and retail store enablement.
  • ...6 more annotations...
  • Ted Blosser, senior vice president of Box Platform, also took the stage to show off how managing digital assets benefits from a just-announced metadata template capability that lets you pre-define custom fields so a store's back-office can flag, say, a new jacket as "blue" or "red." Those metadata tags can be pushed to a custom app running on a retail associate's iPad, so you can sort by color, line, or inventory level. Metadata plus Box Workflows equals a powerful content platform for retail that keeps people in sync with their content across geographies and devices, or so the company is hoping. 
  • It's the same collaboration model that cloud storage vendors have been pushing, but customized for very specific verticals, which is exactly the sales pitch that Box wants you to come away with. And developers must be cheering -- Box is going to help them sell their apps to previously inaccessible markets. 
  • More on the standard enterprise side, the so-named Box + Office 365 (previewed a few months back) currently only supports the Windows desktop versions of the productivity suite, but Levie promises web and Mac integrations are on the way. It's pretty basic, but potentially handy for the enterprises that Box supports.
  • The crux of the Office 365 announcement is that people expect that their data will follow them from device to device and from app to app. If people want their Box files and storage in Jive, Box needs to support Jive. And if enterprises are using Microsoft Office 365 to work with their documents -- and they are -- then Box needs to support that too. It's easier than it used to be, Levie says, thanks to Satya Nadella's push for a more open Microsoft. 
  • "We are quite confident that this is the kind of future they're building towards," Levie says -- but just in case, he urged BoxWorks attendees to tweet at Nadella and encourage him to help Box speed development along. 
  • Box SVP of Enterprise Annie Pearl came on stage to discuss how Box Workflow can be used to improve the ways people work with their content in the real world of business. It's worth noting that Box had a workflow tool previously, but it was relatively primitive and seems to have only existed to tick the box -- it didn't really go beyond assigning tasks and soliciting approvals.
  •  
    This will be very interesting. Looks like Box is betting their future on the success of integrating Microsoft Office 365 into the Box Productivity Cloud Service, which competes directly with the Microsoft Office 365 - OneDrive Cloud Productivity Platform. Honestly, I don't see how this can ever work out for Box. Microsoft has them ripe for the plucking. And they have pulled it off on the eve of Box's expected IPO. "Box CEO Aaron Levie may not be able to talk about the cloud storage and collaboration company's forthcoming IPO, but he still took the stage at the company's biggest BoxWorks conference yet, with 5,000 attendees. Levie discussed the future of the business and made some announcements -- including the beta of a Box integration with the Windows version of Microsoft Office 365; the introduction of Box Workflow, a tool coming in 2015 for creating repeatable workflows on the platform; and the unveiling of Box for Industries, an initiative to tailor Box solutions for specific industry use-cases. And if that wasn't enough, Box also announced a partnership with service firm Accenture to push the platform in large enterprises. The unifying factor for the announcements made at BoxWorks, Levie said, is that users expect their data to follow them everywhere, at home and at work. That means that Box has to think about enterprise from the user outwards, putting them at the center of the appified universe -- in effect, building an ecosystem of tools that support the things employees already use."
Paul Merrell

BitTorrent Sync creates private, peer-to-peer Dropbox, no cloud required | Ars Technica - 6 views

  • BitTorrent today released folder syncing software that replicates files across multiple computers using the same peer-to-peer file sharing technology that powers BitTorrent clients. The free BitTorrent Sync application is labeled as being in the alpha stage, so it's not necessarily ready for prime-time, but it is publicly available for download and working as advertised on my home network. BitTorrent, Inc. (yes, there is a legitimate company behind BitTorrent) took to its blog to announce the move from a pre-alpha, private program to the publicly available alpha. Additions since the private alpha include one-way synchronization, one-time secrets for sharing files with a friend or colleague, and the ability to exclude specific files and directories.
  • BitTorrent Sync provides "unlimited, secure file-syncing," the company said. "You can use it for remote backup. Or, you can use it to transfer large folders of personal media between users and machines; editors and collaborators. It’s simple. It’s free. It’s the awesome power of P2P, applied to file-syncing." File transfers are encrypted, with private information never being stored on an external server or in the "cloud." "Since Sync is based on P2P and doesn’t require a pit-stop in the cloud, you can transfer files at the maximum speed supported by your network," BitTorrent said. "BitTorrent Sync is specifically designed to handle large files, so you can sync original, high quality, uncompressed files."
  •  
    Direct P2P encrypted file syncing, no cloud intermediate, which should translate to far more secure exchange of files, with less opportunity for snooping by governments or others, than with cloud-based services. 
  • ...5 more comments...
  •  
    Hey Paul, is there an open source document management system that I could hook the BitTorrent Sync to?
  •  
    More detail please. What do you want to do with the doc management system? Platform? Server-side or stand-alone? Industrial strength and highly configurable, or lightweight and simple? What do you mean by "hook?" Not that I would be able to answer anyway. I really know very little about BitTorrent Sync. In fact, as far as I'd gone before your question was to look at the FAQ. It's linked from . But there's a link to a forum on the same page. Giving the first page a quick scan confirms that this really is alpha-state software. But that would probably be a better place to ask. (Just give them more specific information about what you'd like to do.) There are other projects out there working on getting around the surveillance problem. I2P is one that is farther along than BitTorrent Sync and quite a bit more flexible. See . (But I haven't used it, so caveat emptor.)
  •  
    There is a great list of PRISM Proof software at http://prism-break.org/. Includes a link to I2P. I want to replace Gmail, though, and would like another Web-based system since I need multi-device access. Of course, I need to replace my Google Apps / Google Docs system. That's why I asked about a PRISM Proof sync-share-store DMS. My guess is that there are many users similarly seeking a PRISM Proof platform of communications, content and collaborative computing systems. BusinessInsider.com is crushed with articles about Google struggling to squirm out from under the NSA PRISM boot-on-the-back-of-their-neck situation. As if blaming the NSA makes up for the dragnet that they consented/allowed/conceded to cover their entire platform. Perhaps we should be watching Germany? There must be tons of startup operations underway, all seeking to replace Google, Amazon, Facebook, Microsoft, Skype and so many others. It's a great day for Libertyware :)
  •  
    Is the NSA involvement the "Kiss of Death"? Google seems to think so. I'm wondering what the impact would be if ZOHO were to announce a PRISM Proof productivity platform?
  •  
    It is indeed. The E.U. has far more protective digital privacy rights than we do (none). If you're looking for a Dropbox replacement (you should be), for a cloud-based solution take a look at Wuala. Unlike Dropbox, all of the encryption/decryption happens on your local machine; Wuala never sees your files unencrypted. Dropbox folks have admitted that there's no technical barrier to them looking at your files. Their encrypt/decrypt operations are done in the cloud (if they actually bother) and they have the key. Which makes it more chilling that the PRISM docs Snowden leaked make reference to Dropbox being the next cloud service NSA plans to add to their collection. Wuala also is located (as are its servers) in Switzerland, which has far stronger digital data privacy laws than the U.S. Plus the Swiss are well along the path to E.U. membership; they've ratified many of the E.U. treaties, including the treaty on Human Rights, which as I recall is where the digital privacy sections are. I've begun to migrate from Dropbox to Wuala. It seems to be neck and neck with Dropbox on features and supported platforms, with the advantage of a far more secure approach and 5 GB free. But I'd also love to see more approaches akin to I2P and BitTorrent Sync that provide the means to bypass the cloud. (A rough sketch of the client-side encryption idea appears at the end of this thread.) Don't depend on government to ensure digital privacy; route around the government voyeurs. Hmmm ... I wonder if the NSA has the computer capacity to handle millions of people switching to encrypted communication? :-) Thanks for the link to the software list.
  •  
    Re: Google. I don't know if it's the "kiss of death" but they're definitely going to take a hit, particularly outside the U.S. BTW, I'm remembering from a few years back when the ODF Foundation was still kicking. I did a fair bit of research on the bureaucratic forces in the E.U. that were pushing for the Open Document Exchange Formats. That grew out of a then-ongoing push to get all of the E.U. nations connected via a network that is not dependent on the Internet. It was fairly complete at the time down to the national level and was branching out to the local level, and the plan from there was to push connections to business and then to Joe Sixpack and wife. Interop was key, hence ODEF. The E.U. might not be that far away from an ability to sever the digital connections with the U.S. Say a bunch of daisy-chained proxy anonymizers for communications with the U.S. Of course they'd have to block the UK from the network and treat it like it is the U.S. There's a formal signals intelligence service collaboration/integration dating back to WW 2, as I recall, among the U.S., the U.K., Canada, Australia, and New Zealand. Don't remember its name. But it's the same group of nations that were collaborating on Echelon. So the E.U. wouldn't want to let the UK fox inside their new chicken coop. Ah, it's just a fantasy. The U.S. and the E.U. are too interdependent. I have no idea how hard it would be for the Zoho folk to come up with desktop-side encryption/decryption. And I don't know whether their servers are located outside the reach of a U.S. court's search warrant. But I think Google is going to have to move in that direction fast if it wants to minimize the damage. Or get way out in front of the hounds chomping at the NSA's ankles and reduce the NSA to compost. OTOH, Google might be a government covert op for all I know. :-) I'm really enjoying watching the NSA show. Who knows what facet of their Big Brother operation gets revealed next?
  •  
    ZOHO is an Indian company with USA marketing offices. No idea where the server farm is located, but they were not on the NSA list. I've known Raju Vegesna for years, mostly from the old Web 2.0 and Office 2.0 Conferences. Raju runs the USA offices in Santa Clara. I'll try to catch up with him on Thursday. How he could miss this once-in-a-lifetime moment to clean out Google, Microsoft and SalesForce.com is something I'd like to find out about. Thanks for the Wuala tip. You sent me that years ago, when I was working on research and design for the SurDocs project. Incredible that all our notes, research, designs and correspondence were left to rot in Google Wave! Too too funny. I recall telling Alex from SurDocs that he had to use a USA host, like Amazon, that could be trusted by USA customers to keep their docs safe and secure. Now look what I've done! I've tossed his entire company information set into the laps of the NSA and their cabal of connected corporatists :)
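A note on the mechanics discussed in this thread: BitTorrent Sync's own protocol is proprietary and not documented here, so the snippet below is only a minimal sketch of the general idea behind one-way folder sync - index the files on each side by a content hash, then transfer just the entries whose hashes differ. The paths, the choice of SHA-1, and the function names are illustrative assumptions, not anything taken from BitTorrent Sync.

```python
import hashlib
import os

def folder_index(root):
    """Map each file's path (relative to root) to the SHA-1 digest of its contents."""
    index = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    digest.update(chunk)
            index[os.path.relpath(path, root)] = digest.hexdigest()
    return index

def plan_one_way_sync(source_index, peer_index):
    """Files a one-way sync would push: present on the source, missing or changed on the peer."""
    return [rel for rel, digest in source_index.items()
            if peer_index.get(rel) != digest]

# Illustrative usage; in a real tool the peer's index would arrive over the network.
local = folder_index("/home/me/Sync")
peer = {}  # an empty peer still needs every file
print(plan_one_way_sync(local, peer))
```

Real sync tools typically refine this by hashing fixed-size chunks rather than whole files, so that only the changed pieces of a large file have to travel and transfers can resume or be fetched from several peers at once.
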
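On the Wuala-versus-Dropbox point in the comments above, the difference is architectural: who holds the key. Below is a minimal sketch of client-side encryption using the third-party Python cryptography package - the file is encrypted locally and only ciphertext ever lands in the synced folder, so the provider never sees the plaintext or the key. The file paths are illustrative; this is not Wuala's or Dropbox's code.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key never leaves your machines; store it somewhere safe, not in the synced folder.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_to_synced_folder(local_path, synced_path):
    """Encrypt a file locally so only ciphertext reaches the provider's servers."""
    with open(local_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open(synced_path, "wb") as f:
        f.write(ciphertext)

def decrypt_from_synced_folder(synced_path, local_path):
    """Reverse the process on any machine that holds the same key."""
    with open(synced_path, "rb") as f:
        plaintext = fernet.decrypt(f.read())
    with open(local_path, "wb") as f:
        f.write(plaintext)

# Illustrative usage:
# encrypt_to_synced_folder("taxes.pdf", "/home/me/Dropbox/taxes.pdf.enc")
# decrypt_from_synced_folder("/home/me/Dropbox/taxes.pdf.enc", "taxes-restored.pdf")
```

The trade-off is key management: every machine that needs the files must hold the same key, which is exactly the burden that providers doing server-side encryption take off the user - at the privacy cost described above.
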
Paul Merrell

'UK surveillance is worse than 1984' says UN privacy chief (Wired UK) - 0 views

  • The UN's newly appointed special rapporteur on privacy, Joseph Cannataci, has described digital surveillance in the UK as "worse" than anything imagined in George Orwell's totalitarian dystopia, 1984. Speaking to the Guardian, Cannataci -- who doesn't own a Facebook account or use Twitter -- lambasted the oversight of British digital surveillance as "a rather bad joke at its citizens' expense". Warning against the steady erosion of privacy and increasing levels of government intrusion, he also drew sinister parallels with Orwell's vision of a mass-surveilled society, adding that today's reality was far worse than the fiction: "At least Winston [a character in Orwell's 1984] was able to go out in the countryside and go under a tree and expect there wouldn't be any screen, as it was called. Whereas today there are many parts of the English countryside where there are more cameras than George Orwell could ever have imagined."
  • Cannataci, who holds posts as a professor of technology of law at the University of Groningen, and as head of the department of Information Policy and Governance at the University of Malta, also called for a "Geneva convention-style law" for the internet. "Some people may not want to buy into it. But you know, if one takes the attitude that some countries will not play ball, then, for example, the chemical weapons agreement would never have come about."
  • As part of his new role -- which elevates digital privacy to the same level of importance as other human rights -- Cannataci has vowed to begin systematically reviewing government policies and the business models of large corporations, which he accuses of "very often taking the data that you never even knew they were taking". Although the privacy chief admits that his mandate is more than likely "impossible to achieve in the next three years", he stressed the importance of a "longer-term view" in an effort to help protect people's data and safeguard their digital rights.
Gonzalo San Gil, PhD.

Angel Investors and Venture Capitalists Say They Will Stop Funding Some Internet Start-... - 1 views

  •  
    [ A large majority of the angel investors and venture capitalists who took part in a Booz & Company study say they will not put their money in digital content intermediaries (DCIs) if governments pass tough new rules allowing websites to be sued or fined for pirated digital content posted by users. More than 70% of angel investors reported they would be deterred from investing if anti-piracy regulations against "user uploaded" websites were increased. ...]
Gonzalo San Gil, PhD.

Why customers need to Run Simple in the Digital and Networked Economy | ZDNet - 0 views

  •  
    "Summary:Customers today are looking for ways how to transform their business to cater to our digitized, networked, and complex world. The need to deliver services rather than products, and everything needs to happen fast, so innovation with speed is key."
Gary Edwards

Introduction to OpenCalais | OpenCalais - 0 views

  •  
    "The free OpenCalais service and open API is the fastest way to tag the people, places, facts and events in your content.  It can help you improve your SEO, increase your reader engagement, create search-engine-friendly 'topic hubs' and streamline content operations - saving you time and money. OpenCalais is free to use in both commercial and non-commercial settings, but can only be used on public content (don't run your confidential or competitive company information through it!). OpenCalais does not keep a copy of your content, but it does keep a copy of the metadata it extracts there from. To repeat, OpenCalais is not a private service, and there is no secure, enterprise version that you can buy to operate behind a firewall. It is your responsibility to police the content that you submit, so make sure you are comfortable with our Terms of Service (TOS) before you jump in. You can process up to 50,000 documents per day (blog posts, news stories, Web pages, etc.) free of charge.  If you need to process more than that - say you are an aggregator or a media monitoring service - then see this page to learn about Calais Professional. We offer a very affordable license. OpenCalais' early adopters include CBS Interactive / CNET, Huffington Post, Slate, Al Jazeera, The New Republic, The White House and more. Already more than 30,000 developers have signed up, and more than 50 publishers and 75 entrepreneurs are using the free service to help build their businesses. You can read about the pioneering work of these publishers, entrepreneurs and developers here. To get started, scroll to the bottom section of this page. To build OpenCalais into an existing site or publishing platform (CMS), you will need to work with your developers.  Why OpenCalais Matters The reason OpenCalais - and so-called "Web 3.0" in general (concepts like the Semantic Web, Linked Data, etc.) - are important is that these technologies make it easy to automatically conne
Gonzalo San Gil, PhD.

The Copyright Lobotomy: How Intellectual Property Makes Us Pretend To Be Stupid | Techdirt - 0 views

  •  
    "from the or:-how-to-build-an-intellectual-cage dept Here are two words that have no business hanging out together: "used MP3s." If you know anything about how computers work, that concept is intellectually offensive. Same goes for "ebook lending", "digital rental" and a host of other terms that have emerged from the content industries' desperate scramble to do the impossible: adapt without changing. "