
Open Web: Group items tagged "success"


Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
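    As a concrete (and purely hypothetical) sketch of the class-based extensibility described above -- the class names "epigraph" and "summary" are invented for illustration, not taken from any published microformat:

      <!-- Plain XHTML: every element is standard, so any browser or Web tool can
           process it, while the class attribute layers publisher semantics on top. -->
      <div class="chapter">
        <p class="epigraph">An opening quotation; to the browser it is still just a paragraph.</p>
        <p>Ordinary body text needs no class at all.</p>
        <p class="summary">A chapter summary that a stylesheet, script, or XSLT template can single out.</p>
      </div>

    A CSS rule or an XSLT template can key off those class values, which is exactly the kind of extensibility that microformats exploit.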
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
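    To make that anatomy concrete, here is a hypothetical sketch of a minimal ePub 2 archive; the OEBPS folder and file names are conventional examples rather than requirements:

      mimetype                  -- the literal string "application/epub+zip"
      META-INF/container.xml    -- points to the package document
      OEBPS/content.opf         -- metadata, manifest, and reading order
      OEBPS/chapter01.xhtml     -- the book content, as ordinary XHTML

      <!-- META-INF/container.xml: the fixed entry point every reading system looks for. -->
      <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
        <rootfiles>
          <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
        </rootfiles>
      </container>

    Everything a reading system needs in order to find and render the book flows from that one small XML file.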
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
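    As a hypothetical before-and-after sketch of that clean-up (the class name shown is typical of a presentation-oriented export, not a documented Adobe vocabulary):

      <!-- Before: a "list" exported as styled paragraphs -->
      <p class="bullet-list-item">First point</p>
      <p class="bullet-list-item">Second point</p>

      <!-- After: proper XHTML structure that any downstream tool recognizes as a list -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>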
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
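    A minimal sketch of the kind of XHTML-to-ICML template involved (not the actual script; the ICML element and attribute names follow Adobe's published IDML/ICML format as best recalled here and should be checked against the specification before use):

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">

        <xsl:output method="xml" indent="yes"/>

        <!-- Wrap the converted content in a single Story, ICML's top-level container. -->
        <xsl:template match="/">
          <Story Self="story_main">
            <xsl:apply-templates select="//xhtml:body//xhtml:p"/>
          </Story>
        </xsl:template>

        <!-- Map each XHTML paragraph to a paragraph range whose InDesign style
             name mirrors the paragraph's class attribute (falling back to "Body"). -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange>
            <xsl:attribute name="AppliedParagraphStyle">
              <xsl:text>ParagraphStyle/</xsl:text>
              <xsl:choose>
                <xsl:when test="@class"><xsl:value-of select="@class"/></xsl:when>
                <xsl:otherwise>Body</xsl:otherwise>
              </xsl:choose>
            </xsl:attribute>
            <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
              <Content><xsl:value-of select="normalize-space(.)"/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Run with something like "xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml", the output can then be placed into an InDesign template, provided the named paragraph styles exist there.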
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
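    The wrapper itself is just more XML. A hypothetical, pared-down package document (content.opf) for a one-chapter book might look like the following; the file names and identifier are invented for illustration:

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- Minimal ePub 2 package document: metadata, a manifest of files, and the reading order. -->
      <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
        <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Book Publishing 1</dc:title>
          <dc:language>en</dc:language>
          <dc:identifier id="bookid">urn:uuid:00000000-0000-4000-8000-000000000000</dc:identifier>
        </metadata>
        <manifest>
          <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
          <item id="css"  href="book.css"        media-type="text/css"/>
          <item id="ncx"  href="toc.ncx"         media-type="application/x-dtbncx+xml"/>
        </manifest>
        <spine toc="ncx">
          <itemref idref="ch01"/>
        </spine>
      </package>

    A tool like eCub generates the wrapper and table of contents automatically, leaving only the CSS as hand work.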
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT transformation to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub, and HTML/CSS. While researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). As an afterthought, I was thinking that an alternative title for this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

CPU Wars - Intel to Play Fab for an ARM Chipmaker: Understanding What the Altera Deal M... - 0 views

  • Intel wants x86 to conquer all computing spaces -- including mobile -- and is trying to leverage its process lead to make that happen.  However, it's been slowed by a lack of inclusion of 4G cellular modems on-die and difficulties adapting to the mobile market's low component prices.  ARM, meanwhile, wants a piece of the PC and server markets, but has received a lukewarm response from consumers due to software compatibility concerns. The disappointing sales of (x86) tablet products using Microsoft Corp.'s (MSFT) Windows 8 and the flop of Windows RT (ARM) product in general somewhat unexpectedly had the net result of being a driver to maintain the status quo, allowing neither company to gain much ground.  For Intel, its partnership with Microsoft (the historic "Wintel" combo) has damaged its mobile efforts, as Windows 8 flopped in the tablet market.  Likewise ARM's efforts to score PC market share were stifled by the flop of Windows RT, which led to OEMs killing off ARM-based laptops and convertibles.
  • Both companies seem to have learned their lesson and are migrating away from Windows towards other platforms -- in ARM's case Chromebooks, and in Intel's case Android tablets/smartphones. But suffice it to say, ARM Holdings and Intel are still very much bitter enemies from a sales perspective.
  • III. Profit vs. Risk -- Understanding the Modern CPU Food Chain
  • ...16 more annotations...
  • Whether it's tablets or PCs, the processor is still one of the most expensive components onboard. Aside from the discrete GPU -- if a device has one -- the CPU has the greatest earning potential for a large company like Intel because the CPU is the most complex component. Other components like the power supply or memory tend to either be lower margin or have more competitors. The display, memory, and storage components are all sensitive to process, but see profit split between different parties (e.g. the company who makes the DRAM chips and the company who sells the stick of DRAM) and are primarily dependent on process technology. CPUs and GPUs remain the toughest products to make: it's not enough to simply have the best process; you must also have the best architecture and the best optimization of that architecture for the space you're competing in. There are essentially five points of potential profit on the processor food chain: (1) [CPU] fabrication, (2) [CPU] architecture design, (3) [CPU] optimization, (4) OEM, and (5) OS platform. Of these, the fabrication/OS point is the most profitable (but is dependent on the number of OEM adopters). The second most profitable niche is optimization (which again is dependent on OEM adopter market share), followed by OEM markups. In terms of expense, fabrication and operating system design require the greatest capital investment and carry the highest risk.
  • In terms of difficulty/risk, the fabrication and operating system are the most difficult/risky points. Hence, in terms of combined risk, cost, and profitability, the ranking of which points are "best" is arguably: (1) optimization, (2) architecture design, (3) OS platform, (4) OEM, (5) fabrication -- with the fabrication point coming last largely because it's so high risk. In other words, the last thing Intel wants is to settle into a niche of playing fab for everybody else's products, as that's an unsound approach. If you can't keep up in terms of chip design, you typically spin off your fabs and opt for a different architecture direction -- just look at Advanced Micro Devices, Inc.'s (AMD) spinoff of GlobalFoundries and its upcoming ARM products to see that.
  • IV. Top Firms' Role on That Food Chain
  • Apple has seen unbelievable profits due to this fundamental premise.  It controls the two most desirable points on the food chain -- OS and optimization -- while sharing some profit with its architecture designer (ARM Holdings) and a bit with the fabricator (Samsung Electronics Comp., Ltd. (KSC:005930)).  By choosing to play operating system maker, too, it adds to its profits, but also its risk.  Note that nearly every other first-party exclusive smartphone platform has failed or is about to fail (i.e. BlackBerry, Ltd. (TSE:BB) and the now-dead Palm).
  • Intel controls points 1, 2, and 5, currently, on the food chain. Compared to Apple, Intel's points of control offer less risk, but also slightly less profitability. Its architecture control may be at risk, but even so, it's currently the top in its most risky/expensive point of control (fabrication), whereas Apple's most risky/expensive point of control (OS development) is much less of a clear leader (as Android has surpassed Apple in market share). Hence Apple might be a better short-term investment, but Intel certainly appears a better long-term investment.
  • Samsung is another top company in terms of market dominance and profit.  It occupies points 1, 3, 4, and 5 -- sometimes.  Sometimes Samsung's devices use third-party optimization firms like Qualcomm Inc. (QCOM) and NVIDIA Corp. (NVDA), which hurts profitability by removing one of the most profitable roles.  But Samsung makes up for this by being one of the largest and most successful third party manufacturers.
  • Microsoft enjoys a lot of profit due to its OS dominance, as does Google Inc. (GOOG); but both companies are limited in controlling only one point which they monetize in different ways (Microsoft by direct sales; Google by giving away OS product for free in return for web services market share and by proxy search advertising revenue).
  • Qualcomm and NVIDIA are also quite profitable operating solely as optimizers, as is ARM Holdings who serves as architecture maker to Qualcomm, NVIDIA, Apple, and Samsung.
  • V. Four Scenarios in the x86 vs. ARM Competition
  • Scenario one is that x86 proves dominant in the mobile space, assuming a comparable process.
  • A second scenario is that x86 and ARM are roughly tied, assuming a comparable process.
  • A third scenario is that x86 is inferior to ARM at a comparable process, but comparable or superior to ARM when the x86 chip is built using a superior process.  From the benchmarks I've seen to date, I personally believe this is most likely.
  • A fourth scenario is that x86 is so drastically inferior to ARM architecturally that a process lead by Intel can't make up for it.
  • This is perhaps the most interesting scenario, in the sense of thinking of how Intel would react, if not overly likely. If Intel were faced with this scenario, I believe Intel would simply bite the bullet and start making ARM chips, leveraging its process lead to become the dominant ARM chipmaker. To make up for the revenue it lost, paying licensing fees to ARM Holdings, it could focus its efforts in the OS space (its Tizen Linux OS project with Samsung hints at that). Or it could look to make up for lost revenue by expanding its production of other basic process-sensitive components (e.g. DRAM). I think this would be Intel's best and most likely option in this scenario.
  • VI. Why Intel is Unlikely to Play Fab For ARM Chipmakers (Even if ARM is Better)
  • From Intel's point of view, there is an entrenched, but declining market for x86 chips because of Windows, and Intel will continue to support Atom chips (which will be required to run Windows 8 tablets), but growth on desktops will come from 64 bit desktop/server class non-Windows ARM devices - Chromebooks, Android laptops, possibly Apple's desktop products as well given they are going 64 bit ARM for their future iPhones. Even Windows has been trying to transition (unsuccessfully) to ARM. Again, the Windows server market is tied to x86, but Linux and FreeBSD servers will run on ARM as well, and ARM will take a chunk out of the server market when a decent 64bit ARM server chip is available as a result.
  •  
    Excellent article explaining the CPU war for the future of computing as Intel and ARM square off. Intel's x86 architecture dominates the era of client/server computing, with its famed WinTel alliance monopolizing desktop, notebook, and server implementations. But Microsoft was a no-show in the emerging mobile computing market, and now ARM is in position to transition from its mobile dominance to challenge the desktop, notebook, and server markets. WinTel lost their shot at the mobile computing market, and now their legacy platforms are in play. Good article!!! Well worth the read time.
Paul Merrell

Open letter to Google: free VP8, and use it on YouTube - Free Software Foundation - 0 views

  • Dear Google, With your purchase of On2, you now own both the world's largest video site (YouTube) and all the patents behind a new high performance video codec -- VP8. Just think what you can achieve by releasing the VP8 codec under an irrevocable royalty-free license and pushing it out to users on YouTube? You can end the web's dependence on patent-encumbered video formats and proprietary software (Flash).
  • This ability to offer a free format on YouTube, however, is only a tiny fraction of your real leverage. The real party starts when you begin to encourage users' browsers to support free formats. There are lots of ways to do this. Our favorite would be for YouTube to switch from Flash to free formats and HTML, offering users with obsolete browsers a plugin or a new browser (free software, of course). Apple has had the mettle to ditch Flash on the iPhone and the iPad -- albeit for suspect reasons and using abhorrent methods (DRM) -- and this has pushed web developers to make Flash-free alternatives of their pages. You could do the same with YouTube, for better reasons, and it would be a death-blow to Flash's dominance in web video.
  • If you care about free software and the free web (a movement and medium to which you owe your success) you must take bold action to replace Flash with free standards and free formats. Patented video codecs have already done untold harm to the web and its users, and this will continue until we stop it. Because patent-encumbered formats were costly to incorporate into browsers, a bloated, ill-suited piece of proprietary software (Flash) became the de facto standard for online video. Until we move to free formats, the threat of patent lawsuits and licensing fees hangs over every software developer, video creator, hardware maker, web site and corporation -- including you.
Gary Edwards

Interview: Paul Cotton on Microsoft Participation in the W3C HTML Working Group - W3C Blog - 1 views

  • As part of a series of interviews with W3C Members to learn more about their support for standards and participation in W3C, I'm talking to Paul Cotton from Microsoft and co-Chair of the W3C HTML Working Group.
    • Gary Edwards
       
       There's the W3C version of HTML5. And then there's the WebKit version. WebKit HTML5 is pushed forward by Google and Apple. The methodology is that the WebKit developers submit innovations and advances back to the W3C HTML5 groups as "proposals". The key is that WebKit does not wait for approval. They make the submission and move on. The problem is that waiting for a snake pit of corporate competitors to approve your proposals and include them in the next rev of the specification does not make business sense. Especially if the competitors are legacy-burdened monopolists like Microsoft and IBM. Google and Apple have to push WebKit HTML5 forward. Even Mozilla is now on the WebKit bandwagon! Nokia (Qt), RIM's BlackBerry, and Palm's webOS are also on board. The key to WebKit HTML5's success is the incredible market share of mobile-smartphone computing, and the pushback that mobile-web computing devices are exerting across the greater Web. Does Facebook wait for W3C HTML5? Or do they chase the iPhone with a WebKit HTML5 website configuration and enhancement? That's a rhetorical question :)
Gary Edwards

Windows 8: Microsoft's browser-based OS | ExtremeTech - 1 views

  • Microsoft’s browser-based operating system. Get this: The entire Metro interface — the complete Windows 8 front-end — is powered by Internet Explorer 10. Not the browser with a back button and an address bar, but the IE10 rendering engine Trident. To drive this point home, Metro-style apps in Windows 8 can be written in HTML, CSS, and JavaScript, and they will be just as “low-level” as their C++ and C# cousins. In other words, Windows 8 runs web apps natively.
  • To put this into contrast, think about the current state-of-the-art in Chrome, Firefox, and Internet Explorer 9. Chrome has glorified extensions and bookmarks, Firefox is working on an Open Web App Store, and IE9 has pinned sites. Windows 8 will have web apps that are first-class citizens, capable of using all of the same hardware resources as any other compiled program — and it will all be powered by Internet Explorer 10.
  • It’s the great Web App Dream: write once, run anywhere.
  • ...7 more annotations...
  • All three versions are fundamentally identical.
  • What if Windows 8 is actually a success on the tablet? If Windows 8 becomes ubiquitous, so does Internet Explorer 10 — and if IE10 can be found on hundreds of millions of devices, what platform do you think developers will choose?
  • This poses a tricky question, though. You see, not only does IE10 power Windows 8’s primary interface, but Internet Explorer 10 — the browser — is also available as a Metro-style app, and as a full-interface browser in the Explorer Desktop.
  • Do you write an app for tens of millions of iPhones and iPads, or do you write a single piece of HTML, CSS, and JavaScript that can run perfectly on every Windows 8, IE10-powered tablet, laptop, and desktop?
  • Those same web apps, with a little tweaking, will probably even work with Chrome and Firefox and Safari — but here’s an uncomfortable truth: if Windows 8 reaches 90% penetration of the computing market, why bother targeting a web browser at all? Just write a native, Metro-style web app instead.
  • Finally, add in the fact that IE10 will almost certainly come to Windows Phone 8 next year, and you will have a single app container — AppX — that runs across every damn computer form factor.
  • Microsoft, threatened by the idea of OS-agnostic web apps and browser-based operating systems from Google and Mozilla, has just taken the game to a whole new level — and, rather shockingly, given that Windows 8 started its development in mid-2009, it would seem that the lumbering behemoth might have actually out-maneuvered Google
  •  
    Excellent review of Windows 8, including some prescient thinking about what it means to have HTML+ Web Apps running natively on the Win8 OS platform. The author/reviewer Sebastian Anthony suggests why this breakthrough is a problem for Google, Apple, and Mozilla. I'm wondering, though: is this a problem for the Open Web future? Or is this a positive step towards an Open Web communications and collaborative computation platform that is used by all and owned by none? After nearly thirty years of a love-hate (and lately hate more than ever) relationship with Microsoft, Win8 and native HTML+ is for sure something to watch carefully.
cecilia marie

Computer Problem Solved - 1 views

I was having difficulties with a computer problem I am facing, and it really disturbs me. I cannot proceed well with my school work because it keeps showing up. Then I discovered Compu...

computer problem

started by cecilia marie on 08 Jul 11 no follow-up yet
Paul Merrell

Memo to Potential Whistleblowers: If You See Something, Say Something | Global Research - 0 views

  • Blowing the whistle on wrongdoing creates a moral frequency that vast numbers of people are eager to hear. We don’t want our lives, communities, country and world continually damaged by the deadening silences of fear and conformity. I’ve met many whistleblowers over the years, and they’ve been extraordinarily ordinary. None were applying for halos or sainthood. All experienced anguish before deciding that continuous inaction had a price that was too high. All suffered negative consequences as well as relief after they spoke up and took action. All made the world better with their courage. Whistleblowers don’t sign up to be whistleblowers. Almost always, they begin their work as true believers in the system that conscience later compels them to challenge. “It took years of involvement with a mendacious war policy, evidence of which was apparent to me as early as 2003, before I found the courage to follow my conscience,” Matthew Hoh recalled this week.“It is not an easy or light decision for anyone to make, but we need members of our military, development, diplomatic and intelligence community to speak out if we are ever to have a just and sound foreign policy.”
  • Hoh describes his record this way: “After over 11 continuous years of service with the U.S. military and U.S. government, nearly six of those years overseas, including service in Iraq and Afghanistan, as well as positions within the Secretary of the Navy’s Office as a White House Liaison, and as a consultant for the State Department’s Iraq Desk, I resigned from my position with the State Department in Afghanistan in protest of the escalation of war in 2009.” Another former Department of State official, the ex-diplomat and retired Army colonel Ann Wright, who resigned in protest of the Iraq invasion in March 2003, is crossing paths with Hoh on Friday as they do the honors at a ribbon-cutting — half a block from the State Department headquarters in Washington — for a billboard with a picture of Pentagon Papers whistleblower Daniel Ellsberg. Big-lettered words begin by referring to the years he waited before releasing the Pentagon Papers in 1971. “Don’t do what I did,” Ellsberg says on the billboard.  “Don’t wait until a new war has started, don’t wait until thousands more have died, before you tell the truth with documents that reveal lies or crimes or internal projections of costs and dangers. You might save a war’s worth of lives.
  • The billboard – sponsored by the ExposeFacts organization, which launched this week — will spread to other prominent locations in Washington and beyond. As an organizer for ExposeFacts, I’m glad to report that outreach to potential whistleblowers is just getting started. (For details, visit ExposeFacts.org.) We’re propelled by the kind of hopeful determination that Hoh expressed the day before the billboard ribbon-cutting when he said: “I trust ExposeFacts and its efforts will encourage others to follow their conscience and do what is right.” The journalist Kevin Gosztola, who has astutely covered a range of whistleblower issues for years, pointed this week to the imperative of opening up news media. “There is an important role for ExposeFacts to play in not only forcing more transparency, but also inspiring more media organizations to engage in adversarial journalism,” he wrote. “Such journalism is called for in the face of wars, environmental destruction, escalating poverty, egregious abuses in the justice system, corporate control of government, and national security state secrecy. Perhaps a truly successful organization could inspire U.S. media organizations to play much more of a watchdog role than a lapdog role when covering powerful institutions in government.”
  • ...2 more annotations...
  • Overall, we desperately need to nurture and propagate a steadfast culture of outspoken whistleblowing. A central motto of the AIDS activist movement dating back to the 1980s – Silence = Death – remains urgently relevant in a vast array of realms. Whether the problems involve perpetual war, corporate malfeasance, climate change, institutionalized racism, patterns of sexual assault, toxic pollution or countless other ills, none can be alleviated without bringing grim realities into the light. “All governments lie,” Ellsberg says in a video statement released for the launch of ExposeFacts, “and they all like to work in the dark as far as the public is concerned, in terms of their own decision-making, their planning — and to be able to allege, falsely, unanimity in addressing their problems, as if no one who had knowledge of the full facts inside could disagree with the policy the president or the leader of the state is announcing.” Ellsberg adds: “A country that wants to be a democracy has to be able to penetrate that secrecy, with the help of conscientious individuals who understand in this country that their duty to the Constitution and to the civil liberties and to the welfare of this country definitely surmount their obligation to their bosses, to a given administration, or in some cases to their promise of secrecy.”
  • Right now, our potential for democracy owes a lot to people like NSA whistleblowers William Binney and Kirk Wiebe, and EPA whistleblower Marsha Coleman-Adebayo. When they spoke at the June 4 news conference in Washington that launched ExposeFacts, their brave clarity was inspiring. Antidotes to the poisons of cynicism and passive despair can emerge from organizing to help create a better world. The process requires applying a single standard to the real actions of institutions and individuals, no matter how big their budgets or grand their power. What cannot withstand the light of day should not be suffered in silence. If you see something, say something.
  •  
    While some governments -- my own included -- attempt to impose an Orwellian Dark State of ubiquitous secret surveillance, secret wars, the rule of oligarchs, and public ignorance, the Edward Snowden leaks fanned the flames of the countering War on Ignorance that had been kept alive by civil libertarians. Only days after the U.S. Supreme Court denied review in a case where a reporter had been ordered to reveal his source of information for a book on the Dark State under the penalties for contempt of court (a long stretch in jail), a new web site is launched for communications between sources and journalists where the source's names never need to be revealed. This article is part of the publicity for that new weapon fielded by the civil libertarian side in the War Against Ignorance.  Hurrah!
Paul Merrell

DOJ Pushes to Expand Hacking Abilities Against Cyber-Criminals - Law Blog - WSJ - 0 views

  • The U.S. Department of Justice is pushing to make it easier for law enforcement to get warrants to hack into the computers of criminal suspects across the country. The move, which would alter federal court rules governing search warrants, comes amid increases in cases related to computer crimes. Investigators say they need more flexibility to get warrants to allow hacking in such cases, especially when multiple computers are involved or the government doesn’t know where the suspect’s computer is physically located. The Justice Department effort is raising questions among some technology advocates, who say the government should focus on fixing the holes in computer software that allow such hacking instead of exploiting them. Privacy advocates also warn government spyware could end up on innocent people’s computers if remote attacks are authorized against equipment whose ownership isn’t clear.
  • The government’s push for rule changes sheds light on law enforcement’s use of remote hacking techniques, which are being deployed more frequently but have been protected behind a veil of secrecy for years. In documents submitted by the government to the judicial system’s rule-making body this year, the government discussed using software to find suspected child pornographers who visited a U.S. site and concealed their identity using a strong anonymization tool called Tor. The government’s hacking tools—such as sending an email embedded with code that installs spying software — resemble those used by criminal hackers. The government doesn’t describe these methods as hacking, preferring instead to use terms like “remote access” and “network investigative techniques.” Right now, investigators who want to search property, including computers, generally need to get a warrant from a judge in the district where the property is located, according to federal court rules. In a computer investigation, that might not be possible, because criminals can hide behind anonymizing technologies. In cases involving botnets—groups of hijacked computers—investigators might also want to search many machines at once without getting that many warrants.
  • Some judges have already granted warrants in cases when authorities don’t know where the machine is. But at least one judge has denied an application in part because of the current rules. The department also wants warrants to be allowed for multiple computers at the same time, as well as for searches of many related storage, email and social media accounts at once, as long as those accounts are accessed by the computer being searched. “Remote searches of computers are often essential to the successful investigation” of computer crimes, Acting Assistant Attorney General Mythili Raman wrote in a letter to the judicial system’s rulemaking authority requesting the change in September. The government tries to obtain these “remote access warrants” mainly to “combat Internet anonymizing techniques,” the department said in a memo to the authority in March. Some groups have raised questions about law enforcement’s use of hacking technologies, arguing that such tools mean the government is failing to help fix software problems exploited by criminals. “It is crucial that we have a robust public debate about how the Fourth Amendment and federal law should limit the government’s use of malware and spyware within the U.S.,” said Nathan Wessler, a staff attorney at the American Civil Liberties Union who focuses on technology issues.
  • ...1 more annotation...
  • A Texas judge who denied a warrant application last year cited privacy concerns associated with sending malware when the location of the computer wasn’t known. He pointed out that a suspect opening an email infected with spyware could be doing so on a public computer, creating risk of information being collected from innocent people. A former computer crimes prosecutor serving on an advisory committee of the U.S. Judicial Conference, which is reviewing the request, said he was concerned that allowing the search of multiple computers under a single warrant would violate the Fourth Amendment’s protections against overly broad searches. The proposed rule is set to be debated by the Judicial Conference’s Advisory Committee on Criminal Rules in early April, after which it would be opened to public comment.
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • ...9 more annotations...
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  •  
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel, and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.
famsocial

Great Advice To Help You Generate Good Leads - 1 views

Lead generation is a process which isn't necessarily easy to figure out. Have you struggled to master it yourself? If so, this article has some great ideas which can help you turn lead generation i...

started by famsocial on 27 Jul 14 no follow-up yet
Paul Merrell

It's A-OK for FBI agents to silence web giants, says appeals court - The Register - 0 views

  • Gagging orders in the FBI's National Security Letters are all above board and constitutional, a California court has ruled. These security letters are typically sent to internet giants demanding information on whoever is behind a username or email address. Crucially, these requests include clauses that prevent the organizations from warning specific subscribers that they are under surveillance by the Feds. Cloudflare and Credo Mobile aren't happy with that, and – with the help of rights warriors at the EFF – challenged the gagging orders. Despite earlier successes in their legal battle, the 9th US Circuit Court of Appeals ruled [PDF] on Monday that the gagging orders do not trample on First Amendment rights.
  • The FBI dishes out thousands of National Security Letters (NSLs) every year; they can simply be issued by a special agent in charge in a bureau field office, and don’t require judicial review. They allow the Feds to obtain the name, address, and records of any services used – but not the contents of conversations – plus billing records of a person, and forbid the hosting company from telling the subject, meaning those under investigation can’t challenge the decision. It used to be the case that companies couldn’t even mention the existence of the NSL system for fear of prosecution. However, in 2013 a US district court in San Francisco ruled that such extreme gagging violated the First Amendment. That decision came after Google, and later others, started publishing the number of NSL orders that had been received, in defiance of the law. In 2015 the Obama administration amended the law to allow companies limited rights to disclose NSL orders, and to set a three-year limit for the gagging order. It also set up a framework for companies to challenge the legitimacy of NSL subpoenas, and it was these changes that caused the appeals court verdict in favor of the government.
Paul Merrell

Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy? - Lawfare - 0 views

  • “We are truly fucked.” That was Motherboard’s spot-on reaction to deep fake sex videos (realistic-looking videos that swap a person’s face into sex scenes actually involving other people). And that sleazy application is just the tip of the iceberg. As Julian Sanchez tweeted, “The prospect of any Internet rando being able to swap anyone’s face into porn is incredibly creepy. But my first thought is that we have not even scratched the surface of how bad ‘fake news’ is going to get.” Indeed. Recent events amply demonstrate that false claims—even preposterous ones—can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization. The resulting harms are significant for individuals, businesses, and democracy. Belated recognition of the problem has spurred a variety of efforts to address this most recent illustration of truth decay, and at first blush there seems to be reason for optimism. Alas, the problem may soon take a significant turn for the worse thanks to deep fakes. Get used to hearing that phrase. It refers to digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake. Think of it as a destructive variation of the Turing test: imitation designed to mislead and deceive rather than to emulate and iterate.
  • Fueled by artificial intelligence, digital impersonation is on the rise. Machine-learning algorithms (often neural networks) combined with facial-mapping software enable the cheap and easy fabrication of content that hijacks one’s identity—voice, face, body. Deep fake technology inserts individuals’ faces into videos without their permission. The result is “believable videos of people doing and saying things they never did.” Not surprisingly, this concept has been quickly leveraged to sleazy ends. The latest craze is fake sex videos featuring celebrities like Gal Gadot and Emma Watson. Although the sex scenes look realistic, they are not consensual cyber porn. Conscripting individuals (more often women) into fake porn undermines their agency, reduces them to sexual objects, engenders feelings of embarrassment and shame, and inflicts reputational harm that can devastate careers (especially for everyday people). Regrettably, cyber stalkers are sure to use fake sex videos to torment victims. What comes next? We can expect to see deep fakes used in other abusive, individually targeted ways, such as undermining a rival’s relationship with fake evidence of an affair or an enemy’s career with fake evidence of a racist comment.
Paul Merrell

The De-Americanization of Internet Freedom - Lawfare - 0 views

  • Why did the internet freedom agenda fail? Goldsmith’s essay tees up, but does not fully explore, a range of explanatory hypotheses. The most straightforward have to do with unrealistic expectations and unintended consequences. The idea that a minimally regulated internet would usher in an era of global peace, prosperity, and mutual understanding, Goldsmith tells us, was always a fantasy. As a project of democracy and human rights promotion, the internet freedom agenda was premised on a wildly overoptimistic view about the capacity of information flows, on their own, to empower oppressed groups and effect social change. Embracing this market-utopian view led the United States to underinvest in cybersecurity, social media oversight, and any number of other regulatory tools. In suggesting this interpretation of where U.S. policymakers and their civil society partners went wrong, Goldsmith’s essay complements recent critiques of the neoliberal strains in the broader human rights and transparency movements. Perhaps, however, the internet freedom agenda has faltered not because it was so naïve and unrealistic, but because it was so effective at achieving its realist goals. The seeds of this alternative account can be found in Goldsmith’s concession that the commercial non-regulation principle helped companies like Apple, Google, Facebook, and Amazon grab “huge market share globally.” The internet became an increasingly valuable cash cow for U.S. firms and an increasingly potent instrument of U.S. soft power over the past two decades; foreign governments, in due course, felt compelled to fight back. If the internet freedom agenda is understood as fundamentally a national economic project, rather than an international political or moral crusade, then we might say that its remarkable early success created the conditions for its eventual failure. Goldsmith’s essay also points to a third set of possible explanations for the collapse of the internet freedom agenda, involving its internal contradictions. Magaziner’s notion of a completely deregulated marketplace, if taken seriously, is incoherent. As Goldsmith and Tim Wu have discussed elsewhere, it takes quite a bit of regulation for any market, including markets related to the internet, to exist and to work. And indeed, even as Magaziner proposed “complete deregulation” of the internet, he simultaneously called for new legal protections against computer fraud and copyright infringement, which were soon followed by extensive U.S. efforts to penetrate foreign networks and to militarize cyberspace. Such internal dissonance was bound to invite charges of opportunism, and to render the American agenda unstable.