
Future of the Web: Group items tagged "Web run"


Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges: In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation consists of programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform: The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
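    To make the class-attribute idea concrete, here is a hypothetical sketch (the class names are invented for illustration, not drawn from the article): a publisher can layer its own semantics onto plain XHTML, microformat-style, without leaving the ubiquitous document type.

      <div class="chapter">
        <p class="epigraph">Every book is a quotation.</p>
        <p class="summary">This chapter surveys XML production workflows.</p>
        <p>Ordinary body text remains a plain paragraph.</p>
      </div>

    Generic Web software renders this as ordinary XHTML, while the publisher's own stylesheets and transformation scripts can key off the class names for semantic processing.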
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
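    Unzipping a typical ePub makes the anatomy plain. A simplified sketch of the standard component parts (the content file names are illustrative):

      mimetype                  -- the literal string "application/epub+zip"
      META-INF/container.xml    -- points to the package file below
      OEBPS/content.opf         -- metadata, manifest, and reading order (spine)
      OEBPS/toc.ncx             -- the navigation map / table of contents
      OEBPS/chapter01.xhtml     -- the book's content, as XHTML
      OEBPS/styles.css          -- presentation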
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form online into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
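    Inside the archive, the running text lives in story files whose markup looks roughly like this (a simplified sketch; real IDML carries many more attributes, and the style names here are illustrative):

      <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
        <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/[No character style]">
          <Content>The quick brown fox jumps over the lazy dog.</Content>
        </CharacterStyleRange>
      </ParagraphStyleRange>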
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that yields a nice, familiar InDesign document, one that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
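    A sketch of the kind of substitution involved (the class name is invented for illustration; InDesign's actual export classes vary):

      <!-- As exported: list items disguised as styled paragraphs -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- After clean-up: a structural XHTML list -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>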
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
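    The article does not reproduce the script, but the heart of such a transformation is compact. A minimal sketch, not the authors' actual code, assuming XHTML input in its usual namespace and an InDesign paragraph style named "Body Text" (both assumptions for illustration; a real script must also emit the ICML document wrapper and handle inline markup):

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <!-- Map each XHTML paragraph to an ICML paragraph range -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
            <Br/>
          </ParagraphStyleRange>
        </xsl:template>
        <!-- Drop stray text outside the templates we handle -->
        <xsl:template match="text()"/>
      </xsl:stylesheet>

    Run with something like xsltproc transform.xsl chapter.xhtml > chapter.icml, the output can then be placed into the InDesign template as described above.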
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT transform to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. The important point though is that XHTML is the browser-native application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gary Edwards

Google's ARC Beta runs Android apps on Chrome OS, Windows, Mac, and Linux | Ars Technica

  • So calling all developers: You can now (probably, maybe) run your Android apps on just about anything—Android, Chrome OS, Windows, Mac, and Linux—provided you fiddle with the ARC Welder and submit your app to the Chrome Web Store.
  • The App Runtime for Chrome and Native Client are hugely important projects because they potentially allow Google to push a "universal binary" strategy on developers. "Write your app for Android, and we'll make it run on almost every popular OS! (other than iOS)" Google Play Services support is a major improvement for ARC and signals just how ambitious this project is. Some day it will be a great sales pitch to convince developers to write for Android first, which gives them apps on all these desktop OSes for free.
  •  
    Thanks Marbux. ARC appears to be an extraordinary technology. Funny, but Florian has been pushing Native Client (NaCl) since it was first ported from Firefox to Chrome. Looks like he was right. "In September, Google launched ARC, the "App Runtime for Chrome," a project that allowed Android apps to run on Chrome OS. A few days later, a hack revealed the project's full potential: it enabled ARC on every "desktop" version of Chrome, meaning you could unofficially run Android apps on Chrome OS, Windows, Mac OS X, and Linux. ARC made Android apps run on nearly every computing platform (save iOS). ARC is an early beta though, so Google has kept the project's reach very limited: only a handful of apps have been ported to ARC, which have all been the result of close collaborations between Google and the app developer. Now though, Google is taking two big steps forward with the latest developer preview: it's allowing any developer to run their app on ARC via a new Chrome app packager, and it's allowing ARC to run on any desktop OS with a Chrome browser. ARC runs on Windows, Mac, Linux, and Chrome OS thanks to Native Client (abbreviated "NaCl"). NaCl is a Chrome sandboxing technology that allows Chrome apps and plugins to run at "near native" speeds, taking full advantage of the system's CPU and GPU. Native Client turns Chrome into a development platform: write to it, and it'll run on all desktop Chrome browsers. Google ported a full Android stack to Native Client, allowing Android apps to run on most major OSes. With the original ARC release, there was no official process to getting an Android app running on the Chrome platform (other than working with Google). Now Google has released the adorably-named ARC Welder, a Chrome app which will convert any Android app into an ARC-powered Chrome app. It's mainly for developers to package up an APK and submit it to the Chrome Web Store, but anyone can package and launch an APK from the app directly."
Gonzalo San Gil, PhD.

No, Department of Justice, 80 Percent of Tor Traffic Is Not Child Porn | WIRED [# ! Via...

  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography.
  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography. At the State of the Net conference in Washington on Tuesday, US assistant attorney general Leslie Caldwell discussed what she described as the dangers of encryption and cryptographic anonymity tools like Tor, and how those tools can hamper law enforcement. Her statements are the latest in a growing drumbeat of federal criticism of tech companies and software projects that provide privacy and anonymity at the expense of surveillance. And as an example of the grave risks presented by that privacy, she cited a study she said claimed an overwhelming majority of Tor’s anonymous traffic relates to pedophilia. “Tor obviously was created with good intentions, but it’s a huge problem for law enforcement,” Caldwell said in comments reported by Motherboard and confirmed to me by others who attended the conference. “We understand 80 percent of traffic on the Tor network involves child pornography.” That statistic is horrifying. It’s also baloney.
  • In a series of tweets that followed Caldwell’s statement, a Department of Justice flack said Caldwell was citing a University of Portsmouth study WIRED covered in December. He included a link to our story. But I made clear at the time that the study claimed 80 percent of traffic to Tor hidden services related to child pornography, not 80 percent of all Tor traffic. That is a huge, and important, distinction. The vast majority of Tor’s users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what’s often referred to as the “dark web,” use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software’s creators at the non-profit Tor Project. The University of Portsmouth study dealt exclusively with visits to hidden services. In contrast to Caldwell’s 80 percent claim, the Tor Project’s director Roger Dingledine pointed out last month that the study’s pedophilia findings refer to something closer to a single percent of Tor’s overall traffic.
  • ...1 more annotation...
  • So to whoever at the Department of Justice is preparing these talking points for public consumption: Thanks for citing my story. Next time, please try reading it.
  •  
    [# Via Paul Merrell's Diigo...] "That is a huge, and important, distinction. The vast majority of Tor's users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what's often referred to as the "dark web," use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software's creators at the non-profit Tor Project."
Gary Edwards

Marc Chung: Chrome's Process Model Explained

  •  
    One new feature I'm particularly excited about is process affinity. The online comic describes each tab as a separate running process. Why is this important? The short answer is robustness. A web application running in your browser is a lot like an application running on your operating system, with one important distinction: Modern operating systems[1] run applications in their own separate process space, while modern browsers[2] run web applications in the same process space. By running applications in separate processes, the OS can terminate a malicious (or poorly written) application without affecting the rest of the OS. The browser, on the other hand, can't do this. Consequently, a single rogue application can suck up mountains of memory and eventually crash your entire browser session, along with every other web application you were using at the time.
  •  
    Good discussion on why Chrome is a great web application foundation.
Gary Edwards

With faster Chrome browser, Google offers an Android alternative - CNET

  •  
    "On mobile devices, the Web hasn't lived up to its promise of a universal programming foundation. Google is trying to change that." Android hogged the spotlight at Google I/O, but performance improvements in Google's Chrome browser show that the company hasn't given up on trying to advance its other programming foundation -- the Web. The mobile version of Chrome has become much more responsive since 2013, said Paul Irish, a developer advocate on the Chrome team, speaking at the San Francisco conference. "We've improved the speed of animation by 75 percent and of scrolling 35 percent," Irish told developers Thursday. "We're committed to getting you 60 frames per second on the mobile Web." That performance is crucial for persuading people to use Web sites rather than native apps for things like posting on social networks, reading news, and playing games. It's also key to getting programmers to take the Web path when so many today focus on native apps written directly for Google's Android operating system and Apple's iOS competitor. The 60 frames-per-second rate refers to how fast the screen redraws when elements are in motion, either during games or when people are doing things like swiping among pages and dragging icons. The 60fps threshold is the minimum that game developers strive for, and to achieve it with no distracting stutters, a device must calculate how to update its entire screen every 16.7 milliseconds. Google, whose Android operating system initially lagged Apple's rival iOS significantly in this domain of responsiveness, has made great strides in improving its OS and its apps. But the mobile Web hasn't kept pace, and that means programmers have been more likely to aim for native apps rather than Web-based apps that can run on any device. ............................ Good review focused on the growing threat that native "paltform specific" apps are replacing Web apps as the developer's best choice. Florian thinks that native apps will win
Gary Edwards

Sun Labs Lively Kernel

  • Main features: The main features of the Lively Kernel include:
    - Small web programming environment and computing kernel, written entirely with JavaScript. In addition to its application execution capabilities, the platform can also function as an integrated development environment (IDE), making the whole system self-contained and able to improve and extend itself on the fly.
    - Programmatic access to the user interface. Our system provides programmatic access from JavaScript to the user interface via the Morphic user interface framework. The user interface is built around an event-based programming model familiar to most web developers.
    - Asynchronous networking. As in Ajax, you can use asynchronous HTTP to perform all the network operations asynchronously, without blocking the user interface.
  •  
    "The Sun Labs Lively Kernel is a new web programming environment developed at Sun Microsystems Laboratories. The Lively Kernel supports desktop-style applications with rich graphics and direct manipulation capabilities, but without the installation or upgrade hassles that conventional desktop applications have. The system is written entirely in the JavaScript programming language, a language supported by all the web browsers, with the intent that the system can run in commercial web browsers without installation or any plug-in components. The system leverages the dynamic characteristics of the JavaScript language to make it possible to create, modify and deploy applications on the fly, using tools built into the system itself. In addition to its application execution capabilities, the Lively Kernel can also function as an integrated development environment (IDE), making the whole system self-sufficient and able to improve and extend itself dynamically....." Too little too late? Interestingly, Lively Kernel is 100% JavaScript. Check out this "motivation" rational: "...The main goal of the Lively Kernel is to bring the same kind of simplicity, generality and flexibility to web programming that we have known in desktop programming for thirty years, but without the installation and upgrade hassles than conventional desktop applications have. The Lively Kernel places a special emphasis on treating web applications as real applications, as opposed to the document-oriented nature of most web applications today. In general, we want to put programming into web development, as opposed to the current weaving of HTML, XML and CSS documents that is also sometimes referred to as programming. ...." I agree with the Web document <> Web Application statement. I think the shift though is one where the RiA frames web documents in a new envirnement, blending in massive amounts of data, streaming media and graphics. The WebKit docuemnt model was designed for this p
Gary Edwards

Developer: Dump JavaScript for faster Web loading | CIO

  • Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states.
  • The browser "then replaces DOM elements with whatever data that was loaded as needed.
  • The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript."
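    The proposal was a discussion draft, so its exact markup was not fixed; the sketch below is an invented approximation of the idea rather than syntax from the proposal. An anchor points at an API endpoint, and the browser itself swaps the fixture's content for the fetched data, with no script involved:

      <a href="/api/articles/2" target="#article-body">Next article</a>

      <article id="article-body">
        <!-- Initial fixture content; the browser would replace this
             with the JSON/XML payload fetched from /api/articles/2 -->
        <h1>Welcome</h1>
      </article>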
  •  
    "A W3C (World Wide Web Consortium) mailing list post entitled "HTML6 proposal for single-page Web apps without JavaScript" details the proposal, dated March 20. "The overall purpose [of the plan] is to reduce response times when loading Web pages," said Web developer Bobby Mozumder, editor in chief of FutureClaw magazine, in an email. "This is the difference between a 300ms page load vs 10ms. The faster you are, the better people are going to feel about using your Website." The proposal cites a standard design pattern emerging via front-end JavaScript frameworks where content is loaded dynamically via JSON APIs. "This is the single-page app Web design pattern," said Mozumder. "Everyone's into it because the responsiveness is so much better than loading a full page -- 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. Since this is so common now, can we implement this directly in the browsers via HTML so users can dynamically run single-page apps without JavaScript?" Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states. The browser "then replaces DOM elements with whatever data that was loaded as needed." The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript." JavaScript frameworks and JavaScript are leveraged for loading now, but there are issues with these, Mozumder explained. "Should we force millions of Web developers to learn JavaScript, a framework, and an associated templating language if they want a speedy, responsive Web site out-of-the-box? This is a huge barrier for beginners, and right n
Gary Edwards

ES4 and the fight for the future of the Open Web - By Haavard

  • Here, we have no better theory to explain why Microsoft is enthusiastic to spread C# onto the web via Silverlight, but not to give C# a run for its money in the open web standards by supporting ES4 in IE. The fact is, and we've heard this over late night truth-telling meetings between Mozilla principals and friends at Microsoft, that Microsoft does not think the web needs to change much. Or as one insider said to a Mozilla figure earlier this year: "we could improve the web standards, but what's in it for us?"
  •  
    Microsoft opposes the stunning collection of EcmaScript standards improvements to JavaScript ES3 known as "ES4". Brendan Eich, author of JavaScript and lead Mozilla developer, claims that Microsoft is stalling the advance of JavaScript to protect their proprietary advantages with Silverlight - WPF technologies. Opera developer "Haavard" asks the question, "Why would Microsoft do this?" Brendan Eich explains: Indeed Microsoft does not desire serious change to ES3, and we heard this inside TG1 in April. The words were (from my notes) more like this: "Microsoft does not think the web needs to change much". Except, of course, via Silverlight and WPF, which if not matched by evolution of the open web standards, will spread far and wide on the Web, as Flash already has. And that change to the Web is apparently just fine and dandy according to Microsoft. First, Microsoft does not think the Web needs to change much, but then they give us Silverlight and WPF? An amazing contradiction if I ever saw one. It is obvious that Microsoft wants to lock the Web to their proprietary technologies again. They want Silverlight, not some new open standard which further threatens their locked-in position. They will use dirty tricks - lies and deception - to convince people that they are in the right. Excellent discussion on how Microsoft participates in open standards groups to delay, stall and dumb down the Open Web formats, protocols and interfaces their competitors use. With their applications and services, Microsoft offers users a Hobson's choice: use the stalled, limited and dumbed down Open Web standards, or use rich, fully featured and advanced but proprietary Silverlight-WPF technologies. Some choice.
Paul Merrell

Microsoft starts distributing open-source Drupal | The Open Road - The Business and Pol...

  • The single biggest distributor of Drupal just might be Microsoft. As I discovered from Dries Buytaert's blog on Wednesday, Microsoft's Web Application Installer comes with out-of-the-box support for Drupal, OScommerce, and other popular open-source Web applications. The Web Application Installer Beta is designed to help get you up and running with the most widely used Web applications freely available for your Windows Server. Web AI provides support for popular ASP.net and PHP Web applications, including Graffiti, DotNetNuke, WordPress, Drupal, OSCommerce, and more. With just a few simple clicks, Web AI will check your machine for the necessary prerequisites, download these applications from their source location in the community, walk you through basic configuration items, and then install them on your computer.
  •  
    Microsoft attempts to co-opt the FOSS web app scene with a new installer. Will this Microsoft action will cause the FOSS community to make it easier to install web apps on Linux? At present, some Linux distribution repositories include installer packages for a very few, very popular web applications such as Mediawiki. Many web apps require expertise with the LAMP stack to install and resolve often complex dependencies and configuration details, perhaps most importantly security details. Documentation tends to be very poor for FOSS web apps, assuming knowledge most software users lack. Will this Microsoft move trigger a web app installer war with the FOSS community? Stay tuned.
Gary Edwards

AppleInsider | Intel says iPhone not capable of 'full Internet'

  •  
    "If you want to run full internet, you're going to have to run an Intel-based architecture," Wall told the gathering of engineers. He said the "iPhone struggles" when tasked with running "any sort of application that requires any horse power." When i read this, the proverbial light went on. The WinTel empire was based on business software applications requiring a DOS-Windows OS running on Intel x86 architecture. The duopoly became legendary for it's efforts to maintain control over all things software. This statement however says something else. Intel believes that the Web platform is the platform of the future; not Windows! Intel is determined to promote this idea that the Web runs best on an Intel architecture. Interesting change of perspective for partner Microsoft.
Gary Edwards

Wary of Upsetting Mighty Microsoft, Acer Limits Use of Android to Phones, Not Netbooks.

  •  
    "For a netbook, you really need to be able to view a full Web for the total Internet experience, and Android is not that yet," Jim Wong, head of Acer's IT products, said Tuesday while introducing a new line of computers."

    Right. Android runs the WebKit/Chromium browser based on the same WebKit code base used by Apple iPhone/Safari, Google Chrome, Palm Pre, Nokia S60 and the Qt IDE, 280 Atlas WebKit IDE, the SproutCore-Cocoa project, KOffice, Sun's JavaFX, Adobe AIR, and Eclipse "Blinki", Eclipse SWT, Linux Midori, and the Windows CE Iris browser - to name but a few. Other Open Web browsers Opera and Mozilla Firefox have embraced the highly interactive and very visual WebKit document and application model. Add to this WebKit tsunami the many web sites, applications and services that adopted the WebKit document model to become iPhone ready.

    Finally there is this: any browser, application or web server seeking to pass the Acid3 test is in effect an effort to become fully WebKit compliant.

    Maybe Mr. Wong is talking about the 1998 Internet experience supported by IE8? Or maybe there is a secret OEM agreement lurking in the background here. The kind that was used by Microsoft to stop Netscape and Java way back when.

    The problem for Microsoft is that, when it comes to smartphones, countertops and netbooks at the edge of the Web, they are not competing against individual companies pushing device and/or platform specific services. This time they are competing against the next generation Open Web: a very visual and interactive Open Web defined by the surge the WebKit, Firefox and the many JavaScript communities are leading.

  •  
    The Information Week page bookmarked says "NON-WORKING URL! The URL (Web address) that has been entered is directing to a non-existent page" Try this instead http://www.informationweek.com/news/hardware/handheld/showArticle.jhtml?articleID=216403510 Acer To Use Android For Phones, Not Netbooks April 8, 2009
  •  
    Microsoft conspiracies have happened in the past and we should watch for them. However, another explanation is that Android does not (yet) support many browser plugins. No doubt that is what the Microsoft drones remind Acer each time they meet with them, along with a pitch for Silverlight 2 !! For me, Silverlight 2 is so rare that I would not, personally, make it a requirement for a "full web". A non-Android Linux distribution on a netbook that ran Adobe Flash, Acrobat Reader, OpenOffice.org and AIR when necessary would suit me fine. One day Android may do all these things too, but for now Google has bigger fish to fry!
Paul Merrell

We're Halfway to Encrypting the Entire Web | Electronic Frontier Foundation

  • The movement to encrypt the web has reached a milestone. As of earlier this month, approximately half of Internet traffic is now protected by HTTPS. In other words, we are halfway to a web safer from the eavesdropping, content hijacking, cookie stealing, and censorship that HTTPS can protect against. Mozilla recently reported that the average volume of encrypted web traffic on Firefox now surpasses the average unencrypted volume.
  • Google Chrome’s figures on HTTPS usage are consistent with that finding, showing that over 50% of all pages loaded are protected by HTTPS across different operating systems.
  • This milestone is a combination of HTTPS implementation victories: from tech giants and large content providers, from small websites, and from users themselves.
  • ...4 more annotations...
  • Starting in 2010, EFF members have pushed tech companies to follow crypto best practices. We applauded when Facebook and Twitter implemented HTTPS by default, and when Wikipedia and several other popular sites later followed suit. Google has also put pressure on the tech community by using HTTPS as a signal in search ranking algorithms and, starting this year, showing security warnings in Chrome when users load HTTP sites that request passwords or credit card numbers. EFF’s Encrypt the Web Report also played a big role in tracking and encouraging specific practices. Recently other organizations have followed suit with more sophisticated tracking projects. For example, Secure the News and Pulse track HTTPS progress among news media sites and U.S. government sites, respectively.
  • But securing large, popular websites is only one part of a much bigger battle. Encrypting the entire web requires HTTPS implementation to be accessible to independent, smaller websites. Let’s Encrypt and Certbot have changed the game here, making what was once an expensive, technically demanding process into an easy and affordable task for webmasters across a range of resource and skill levels. Let’s Encrypt is a Certificate Authority (CA) run by the Internet Security Research Group (ISRG) and founded by EFF, Mozilla, and the University of Michigan, with Cisco and Akamai as founding sponsors. As a CA, Let’s Encrypt issues and maintains digital certificates that help web users and their browsers know they’re actually talking to the site they intended to. CAs are crucial to secure, HTTPS-encrypted communication, as these certificates verify the association between an HTTPS site and a cryptographic public key. Through EFF’s Certbot tool, webmasters can get a free certificate from Let’s Encrypt and automatically configure their server to use it. Since we announced that Let’s Encrypt was the web’s largest certificate authority last October, it has exploded from 12 million certs to over 28 million. Most of Let’s Encrypt’s growth has come from giving previously unencrypted sites their first-ever certificates. A large share of these leaps in HTTPS adoption are also thanks to major hosting companies and platforms--like WordPress.com, Squarespace, and dozens of others--integrating Let’s Encrypt and providing HTTPS to their users and customers.
  • Unfortunately, you can only use HTTPS on websites that support it--and about half of all web traffic is still with sites that don’t. However, when sites partially support HTTPS, users can step in with the HTTPS Everywhere browser extension. A collaboration between EFF and the Tor Project, HTTPS Everywhere makes your browser use HTTPS wherever possible. Some websites offer inconsistent support for HTTPS, use unencrypted HTTP as a default, or link from secure HTTPS pages to unencrypted HTTP pages. HTTPS Everywhere fixes these problems by rewriting requests to these sites to HTTPS, automatically activating encryption and HTTPS protection that might otherwise slip through the cracks.
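    Under the hood, HTTPS Everywhere is driven by declarative XML rulesets. A minimal example in the extension's ruleset format (the host names are placeholders):

      <ruleset name="Example">
        <target host="example.com" />
        <target host="www.example.com" />
        <!-- Rewrite any http:// request for these hosts to https:// -->
        <rule from="^http:" to="https:" />
      </ruleset>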
  • Our goal is a universally encrypted web that makes a tool like HTTPS Everywhere redundant. Until then, we have more work to do. Protect your own browsing and websites with HTTPS Everywhere and Certbot, and spread the word to your friends, family, and colleagues to do the same. Together, we can encrypt the entire web.
  •  
    HTTPS connections don't work for you if you don't use them. If you're not using HTTPS Everywhere in your browser, you should be; it's your privacy that is at stake. And every encrypted communication you make adds to the backlog of encrypted data that NSA and other internet voyeurs must process as encrypted traffic; because cracking encrypted messages is computer resource intensive, the voyeurs do not have the resources to crack more than a tiny fraction. HTTPS Everywhere is a free extension for Firefox, Chrome, and Opera. You can get it here. https://www.eff.org/HTTPS-everywhere
Gary Edwards

Google Chrome: Bad news for Adobe « counternotions

  • Agree with much of what Kontra said and disagree with many who mentioned alternatives to JavaScript/Chrome. The main, simplest reason Adobe will be in a losing fight in terms of web platform? The Big Two - Google and Microsoft - will never make themselves dependent on or promote Adobe platform and strategy.
  • Luis, I think that’s already in play with HTML5. As I pointed out in Runtime wars (2): Apple’s answer to Flash, Silverlight and JavaFX, Apple and WHATWG are firmly progressing along those lines. Canvas is at the center of it. The glue language for all this, JavaScript, is getting a potent shot in the arm. The graphics layer, at the level of SVG, needs more work. And so on.
  •  
    "What's good for the Internet is good for Google, and the company says its strategic proposition for the newly introduced Chrome browser is: a better platform is needed to deliver a new generation of online applications......." This is one of the best explanations of why Google had to do Chrome i've seen thus far. Kontra also provided some excellent coverage concerning the Future of the Web in a two part article previously published. Here he nails the RiA space, comparing Google Chrome, Apollo (Adobe AiR/Flex/Flash) and Microsoft Silverlight. Chrome is clearly an Open Web play. Apollo and Sivlerlight are proprietary bound in some way. Although it must be said that Apollo implements the SAME WebKit layout engine / WebKit docuemtn model as Google Chrome, Apple Safari-iPhone, Nokia, RiM and the Iris "Smart Phone" browser. The WebKit model is based on advanced HTML, CSS, SVG and JavaScript. Where Adobe goes proprietary is in replacing SVG with the proprietary SWF. The differences between JavaScript and ActionScript are inconsequential to me, especially given the problems at Ecma. One other point not covered by Kontra is the fact that Apollo and Silverlight can run as either browser plugins or standalone runtimes. Wha tthey can't do though is run as sufing browsers. They are clearly for Web Applications. Chome on the other hand re-invents the browser to handle both surfing mode AND RiA. Plus, a Chrome RiA can also run as a plugin in other browsers (Opera and FireFox). Very cool. The last point is that i wouldn't totally discount Apple RiA. They too use WebKit. The differnece is tha tApple uses the SquirrelFish JavaScript JiT with the SproutCore-Cocoa developers framework. This approach is designed to bridge the gap between the OSX desktop/server Cocoa API, and the WebKit-SproutCore API. Chrome uses the V8 JiT. And Adobe uses Tamarin to compile JavaScript-ActionScript. Tamarin was donated to the Mozilla community. If there is anythin that will s
Gary Edwards

Running beyond the browser - 0 views

  •  
    Although there are many ways to slice this discussion, it might be useful to compare Adobe RIA and Microsoft Silverlight RIA in terms of web-ready, highly interactive documents. The Adobe RIA story is quite different from that of Silverlight. Both, however, exploit the shortcomings of browsers; shortcomings that are, in large part, I think, due to the disconnect the browser community has had with the W3C. The W3C forked off the HTML-CSS path, putting the bulk of their attention into XML, RDF and the Semantic Web. The web developer community stayed the course, pushing the HTML-CSS envelope with JavaScript and some rather stunning CSS magic.

    Adobe seems to have picked up the HTML-CSS-JavaScript trail with a Microsoft innovation to take advantage of browser cache, DHTML (Dynamic HTML). DHTML morphs into AJAX (which is so wild as to have difficulty scaling). And AJAX gets tamed by an Adobe-Apple sponsored WebKit. Most people see WebKit as a browser-specific layout engine and compare it to IE and Gecko on those terms. I would argue, however, that WebKit is both a document model and a document format. For sure it's a framework for very advanced HTML-CSS-DOM-JavaScript work.

    Because the Adobe AIR run-time is based on WebKit layout, WebKit documents can hit on all cylinders across any browser able to implement the AIR plug-in. Meaning, web developers and web content providers need only target the WebKit document model to attain the interactive access ubiquity all seek. Very cool. Let me also add that the WebKit HTML-CSS-DOM-JavaScript model is capable of "fixed/flow" representation. I'll explain the importance of "fixed/flow" in a moment, but think about how the iPhone renders a web page and you'll understand the "flow" side of this equation.
Gary Edwards

That Reinvention Of The Web Thing Opera Was Talking About? It's Called Opera Unite - 0 views

  •  
    This morning Opera unveiled a P2P-based technology called Opera Unite that essentially turns every computer running the Opera browser into a full-fledged Web server. Opera Unite can be used to directly share documents, music, photos, and videos, or to run websites or even chat rooms, without third-party requirements. The company extended the collaborative technology to a platform that comes with a set of open APIs, encouraging developers to create their own applications (known as Opera Unite services) on top of it, directly linking personal computers together, no matter which OS they are running and without the need to download additional software. Networking above and beyond the OS. Catch the video on this page! Although it doesn't explain much by way of the underlying technology, it's really well done and very stylish. It's interesting the way they paint "the Servers" as threatening and evil.
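    The serving half of that idea, exposing a folder on your own machine over HTTP, fits in a few lines of standard-library Python. This is a hedged sketch of the concept only, not Opera's implementation; Unite also handled discovery and reachability through Opera's proxy network, none of which appears here.

```python
# Conceptual sketch: share the current directory over HTTP, the way
# Opera Unite exposed documents, music, and photos from a user's PC.
import http.server
import socketserver

PORT = 8080  # arbitrary local port chosen for this demo

handler = http.server.SimpleHTTPRequestHandler  # serves the current directory

with socketserver.TCPServer(("", PORT), handler) as httpd:
    print(f"Sharing the current directory at http://localhost:{PORT}/")
    httpd.serve_forever()  # Ctrl-C to stop
```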
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept - 0 views

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. [A sketch of such a command appears after this list.] Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.)
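    As a concrete illustration of "running a command", here is a sketch that drives the Certbot client (the tool the EFF item earlier on this page recommends) from Python. The domain and e-mail address are placeholders, and the flags assume Certbot's standalone mode, in which the client briefly binds port 80 so the certificate authority can verify domain ownership; on a live site you would more likely use a webserver-specific plugin.

```python
# Hedged sketch: obtain a Let's Encrypt certificate by shelling out to
# the real certbot CLI. DOMAIN and EMAIL are placeholders.
import subprocess

DOMAIN = "example.com"       # placeholder domain
EMAIL = "admin@example.com"  # placeholder contact for expiry notices

subprocess.run(
    [
        "certbot", "certonly",
        "--standalone",        # certbot answers the challenge on port 80 itself
        "-d", DOMAIN,          # domain to prove ownership of
        "--non-interactive",   # fail rather than prompt
        "--agree-tos",
        "-m", EMAIL,
    ],
    check=True,  # raise CalledProcessError if issuance fails
)
```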
Gary Edwards

The Future of the Desktop - ReadWriteWeb by Nova Spivack - 0 views

  •  
    Excellent commentary from Nova Spivack; about as well-thought-out a discussion as I've ever seen concerning the future of the desktop. Nova sees the emergence of a WebOS, most likely based on JavaScript. This article set off a firestorm of controversy and discussion, but was quickly lost in the dark days of late August/September 2008, when news of the subsequent collapse of the world financial system and the fear-filled USA elections dominated everything. Too bad; this is great stuff.

    "Everything is moving to the cloud. As we enter the third decade of the Web we are seeing an increasing shift from native desktop applications towards Web-hosted clones that run in browsers. For example, a range of products such as Microsoft Office Live, Google Docs, Zoho, ThinkFree, DabbleDB, Basecamp, and many others now provide Web-based alternatives to the full range of familiar desktop office productivity apps. The same is true for an increasing range of enterprise applications, led by companies such as Salesforce.com, and this process seems to be accelerating. In addition, hosted remote storage for individuals and enterprises of all sizes is now widely available and inexpensive. As these trends continue, what will happen to the desktop and where will it live?"

    Spivack's key points:
    - Is the desktop of the future going to be just a web-hosted version of the same old-fashioned desktop metaphors we have today?
    - The desktop of the future is going to be a hosted web service.
    - The browser is going to swallow up the desktop.
    - The focus of the desktop will shift from information to attention.
    - Users are going to shift from acting as librarians to acting as daytraders.
    - The Webtop will be more social and will leverage and integrate collective intelligence.
    - The desktop of the future is going to have powerful semantic search and social search capabilities built in.
    - Interactive shared spaces will replace folders.
    - The Portable Desktop.
    - The Sma…
Paul Merrell

ExposeFacts - For Whistleblowers, Journalism and Democracy - 0 views

  • Launched by the Institute for Public Accuracy in June 2014, ExposeFacts.org represents a new approach for encouraging whistleblowers to disclose information that citizens need to make truly informed decisions in a democracy. From the outset, our message is clear: “Whistleblowers Welcome at ExposeFacts.org.” ExposeFacts aims to shed light on concealed activities that are relevant to human rights, corporate malfeasance, the environment, civil liberties and war. At a time when key provisions of the First, Fourth and Fifth Amendments are under assault, we are standing up for a free press, privacy, transparency and due process as we seek to reveal official information—whether governmental or corporate—that the public has a right to know. While no software can provide an ironclad guarantee of confidentiality, ExposeFacts—assisted by the Freedom of the Press Foundation and its “SecureDrop” whistleblower submission system—is utilizing the latest technology on behalf of anonymity for anyone submitting materials via the ExposeFacts.org website. As journalists we are committed to the goal of protecting the identity of every source who wishes to remain anonymous.
  • The seasoned editorial board of ExposeFacts will be assessing all the submitted material and, when deemed appropriate, will arrange for journalistic release of information. In exercising its judgment, the editorial board is able to call on the expertise of the ExposeFacts advisory board, which includes more than 40 journalists, whistleblowers, former U.S. government officials and others with wide-ranging expertise. We are proud that Pentagon Papers whistleblower Daniel Ellsberg was the first person to become a member of the ExposeFacts advisory board. The icon below links to a SecureDrop implementation for ExposeFacts overseen by the Freedom of the Press Foundation and is only accessible using the Tor browser. As the Freedom of the Press Foundation notes, no one can guarantee 100 percent security, but this provides a “significantly more secure environment for sources to get information than exists through normal digital channels, but there are always risks.” ExposeFacts follows all guidelines as recommended by Freedom of the Press Foundation, and whistleblowers should too; the SecureDrop onion URL should only be accessed with the Tor browser — and, for added security, while running the Tails operating system. Whistleblowers should not log in to SecureDrop from a home or office Internet connection, but rather from public wifi, preferably one you do not frequent. Whistleblowers should keep interaction with whistleblowing-related websites to a minimum unless they are using such secure software.
  •  
    A new resource site for whistle-blowers, somewhat in the tradition of WikiLeaks, but designed for encrypted communications between whistleblowers and journalists. This one has an impressive board of advisors that includes several names I know and tend to trust, among them former whistle-blowers Daniel Ellsberg, Ray McGovern, Thomas Drake, William Binney, and Ann Wright. Leaked records can only be dropped from a web browser running the Tor anonymizer software; the site uses the SecureDrop system originally developed by Aaron Swartz. They strongly recommend using the Tails secure operating system, which can be installed to a thumb drive and leaves no tracks on the host machine. https://tails.boum.org/index.en.html

    Curious, I downloaded Tails and installed it to a virtual machine. It's a heavily customized version of Debian. It has a very nice GNOME desktop and blocks any attempt to connect to an external network by means other than installed software that demands encrypted communications. For example, web sites can only be viewed via the Tor anonymizing proxy network. It does take longer for web pages to load because they are moving over a chain of proxies, but even so it's faster than pages loaded in the dial-up modem days, even for web pages that are loaded with graphics, JavaScript, and other cruft, e.g., about 2 seconds for New York Times pages. All cookies are treated by default as session cookies, so they disappear when you close the page or the browser.

    I love my Linux Mint desktop, but I am thinking hard about switching that box to Tails. I've been looking for methods to send a lot more encrypted stuff down the pipe for the NSA to store. Tails looks to make that not only easy, but unavoidable. From what I've gathered so far, if you want to install more software on Tails, it takes about an hour to create a customized version and then update your Tails installation from a new ISO file. Tails has a wonderful odor of having been designed for secure computing. Current…
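    For a sense of what "only via the Tor anonymizing proxy network" means in code, the sketch below routes an ordinary Python HTTP request through a locally running Tor client. It assumes Tor is listening on its default SOCKS port, 9050, and that the requests library was installed with SOCKS support (pip install "requests[socks]"); the check.torproject.org endpoint reports whether the request really arrived over Tor.

```python
# Sketch: send an HTTP request through a local Tor client's SOCKS proxy.
import requests

# socks5h (note the "h") makes DNS resolution happen inside Tor as well.
TOR_PROXY = "socks5h://127.0.0.1:9050"
proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

resp = requests.get(
    "https://check.torproject.org/api/ip", proxies=proxies, timeout=60
)
print(resp.json())  # e.g. {"IsTor": true, "IP": "..."}
```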
Gary Edwards

Can C.E.O. Satya Nadella Save Microsoft? | Vanity Fair - 0 views

  • The new world of computing is a radical break from the past. That’s because of the growth of mobile devices and cloud computing. In the old world, corporations owned and ran Windows P.C.’s and Windows servers in their own facilities, with the necessary software installed on them. Everyone used Windows, so everything was developed for Windows. It was a virtuous circle for Microsoft.
  • Now the processing power is in the cloud, and very sophisticated applications, from e-mail to tools you need to run a business, can be run by logging onto a Web site, not from pre-installed software. In addition, the way we work (and play) has shifted from P.C.’s to mobile devices—where Android and Apple’s iOS each outsell Windows by more than 10 to 1. Why develop software to run on Windows if no one is using Windows? Why use Windows if nothing you want can run on it? The virtuous circle has turned vicious.
  • Part of why Microsoft failed with devices is that competitors upended its business model. Google doesn’t charge for the operating system. That’s because Google makes its money on search. Apple can charge high prices because of the beauty and elegance of its devices, where the software and hardware are integrated in one gorgeous package. Meanwhile, Microsoft continued to force outside manufacturers, whose products simply weren’t as compelling as Apple’s, to pay for a license for Windows. And it didn’t allow Office to be used on non-Windows phones and tablets. “The whole philosophy of the company was Windows first,” says Heather Bellini, an analyst at Goldman Sachs. Of course it was: that’s how Microsoft had always made its money.
  • Right now, Windows itself is fragmented: applications developed for one Windows device, say a P.C., don’t even necessarily work on another Windows device. And if Microsoft develops a new killer application, it almost has to be released for Android and Apple phones, given their market dominance, thereby strengthening those eco-systems, too.
  • At its core, Azure uses Windows server technology. That helps existing Windows applications run seamlessly on Azure. Technologists sometimes call what Microsoft has done a “hybrid cloud” because companies can use Azure alongside their pre-existing on-site Windows servers. At the same time, Nadella also to some extent has embraced open-source software—free code that doesn’t require a license from Microsoft—so that someone could develop something using non-Microsoft technology, and it would run on Azure. That broadens Azure’s appeal.
  • “In some ways the way people think about Bill and Steve is almost a Rorschach test.” For those who romanticize the Gates era, Microsoft’s current predicament will always be Ballmer’s fault. For others, it’s not so clear. “He left Steve holding a big bag of shit,” the former executive says of Gates. In the year Ballmer officially took over, Microsoft was found to be a predatory monopolist by the U.S. government and was ordered to split into two; the cost of that to Gates and his company can never be calculated. In addition, the dotcom bubble had burst, causing Microsoft stock to collapse, which resulted in a simmering tension between longtime employees, whom the company had made rich, and newer ones, who had missed the gravy train.
  • Nadella lived this dilemma because his job at Microsoft included figuring out the cloud-based future while maintaining the highly profitable Windows server business. And so he did a bunch of things that were totally un-Microsoft-like. He went to talk to start-ups to find out why they weren’t using Microsoft. He put massive research-and-development dollars behind Azure, a cloud-based platform that Microsoft had developed in Skunk Works fashion, which by definition took resources away from the highly profitable existing business.
  • They even have a catchphrase: “Re-inventing productivity.”
  • Microsoft’s historical reluctance to open Windows and Office is why it was such a big deal when in late March, less than two months after becoming C.E.O., Nadella announced that Microsoft would offer Office for Apple’s iPad. A team at the company had been working on it for about a year. Ballmer says he would have released it eventually, but Nadella did it immediately. Nadella also announced that Windows would be free for devices smaller than nine inches, meaning phones and small tablets. “Now that we have 30 million users on the iPad using it, that is 30 million people who never used Office before [on an iPad,]” he says. “And to me that’s what really drives us.” These are small moves in some ways, and yet they are also big. “It’s the first time I have listened to a senior Microsoft executive admit that they are behind,” says one institutional investor. “The fact that they are giving away Windows, their bread and butter for 25 years—it is quite a fundamental change.”
  • And whoever does the best job of building the right software experiences to give both organizations and individuals time back so that they can get more out of their time, that’s the core of this company—that’s the soul. That’s what Bill started this company with. That’s the Office franchise. That’s the Windows franchise. We have to re-invent them. . . . That’s where this notion of re-inventing productivity comes from.”
  • Ballmer might be a complicated character, but he has nothing on Gates, whose contradictions have long fascinated Microsoft-watchers. He is someone who has no problem humiliating individuals—he might not even notice—but who genuinely cares deeply about entire populations and is deeply loyal. He is generous in the biggest ways imaginable, and yet in small things, like picking up a lunch tab, he can be shockingly cheap. He can’t make small talk and can come across as totally lacking in E.Q. “The rules of human life that allow you to get along are not complicated,” says one person who knows Gates. “He could write a book on it, but he can’t do it!”
  • At the Microsoft board meeting in late June 2013, Ballmer announced he had a handshake deal with Nokia’s management to buy the company, pending the Microsoft board’s approval, according to a source close to the events. Ballmer thought he had it and left before the post-board-meeting dinner to attend his son’s middle-school graduation. When he came back the next day, he found that the board had pulled a coup: they informed him they weren’t doing the deal, and it wasn’t up for discussion. For Ballmer, it seems, the unforgivable thing was that Gates had been part of the coup, which Ballmer saw as the ultimate betrayal.
  • what is scarce in all of this abundance is human attention
  • And the original idea of having great software people and broad software products and Office being the primary tool that people look to across all these devices, that’s as true today and as strong as ever.”
  • Meeting Room Plus
  • But he combines that with flashes of insight and humor that leave some wondering whether he can’t do it or simply chooses not to, or both. His most pronounced characteristic shouldn’t be simply labeled a competitive streak, because it is really a fierce, deep need to win. The dislike it bred among his peers in the industry is well known—“Silicon Bully” was the title of an infamous magazine story about him. And yet he left Microsoft for the philanthropic world, where there was no one to bully, only intractable problems to solve.
  • “The Irrelevance of Microsoft” is actually the title of a blog post by an analyst named Benedict Evans, who works at the Silicon Valley venture-capital firm Andreessen Horowitz. On his blog, Evans pointed out that Microsoft’s share of all computing devices that we use to connect to the Internet, including P.C.’s, phones, and tablets, has plunged from 90 percent in 2009 to just around 20 percent today. This staggering drop occurred not because Microsoft lost ground in personal computers, on which its software still dominates, but rather because it has failed to adapt its products to smartphones, where all the growth is, and tablets.
  • The board told Ballmer they wanted him to stay, he says, and they did eventually agree to a slightly different version of the deal. In September, Microsoft announced it was buying Nokia’s devices-and-services business for $7.2 billion. Why? The board finally realized the downside: without Nokia, Microsoft was effectively done in the smartphone business. But, for Ballmer, the damage was done, in more ways than one. He now says it became clear to him that despite the lack of a new C.E.O. he couldn’t stay. Cultural change, he decided, required a change at the top, and, he says,“there was too much water under the bridge with this board.” The feeling was mutual. As a source close to Microsoft says, no one, including Gates, tried to stop him from quitting.
  • in Wall Street’s eyes, Nadella can do no wrong. Microsoft’s stock has risen 30 percent since he became C.E.O., increasing its market value by $87 billion. “It’s interesting with Satya,” says one person who observes him with investors. “He is not a business guy or a financial analyst, but he finds a common language with investors, and in his short tenure, they leave going, Wow.” But the honeymoon is the easy part.
  • “He was so publicly and so early in life defined as the brilliant guy,” says a person who has observed him. “Anything that threatens that, he becomes narcissistic and defensive.” Or as another person puts it, “He throws hissy fits when he doesn’t get his way.”
  • Around three-quarters of Microsoft’s profits come from the two fabulously successful products on which the company was built: the Windows operating system, which essentially makes personal computers run, and Office, the suite of applications that includes Word, Excel, and PowerPoint. Financially speaking, Microsoft is still extraordinarily powerful. In the last 12 months the company reported sales of $86.83 billion and earnings of $22.07 billion; it has $85.7 billion of cash on its balance sheet. But the company is facing a confluence of threats that is all the more staggering given Microsoft’s sheer size. Competitors such as Google and Apple have upended Microsoft’s business model, making it unclear where Windows will fit in the world, and even challenging Office. In the Valley, there are two sayings that everyone regards as truth. One is that profits follow relevance. The other is that there’s a difference between strategic position and financial position. “It’s easy to be in denial and think the financials reflect the current reality,” says a close observer of technology firms. “They do not.”
  •  
    Awesome article describing the history of Microsoft as seen through the lives of its three CEOs: Bill Gates, Steve Ballmer, and Satya Nadella.
Gary Edwards

How the Web was almost won ... Tim O'Reilly 1998 | Salon - 0 views

  •  
    The Justice Department's antitrust suit and Judge Jackson's finding of fact have focused on how Microsoft used its operating system dominance to wrest control of the Web browser market from Netscape. Perhaps even more significant is the untold story of Microsoft's attempts to corner the Web server market. As someone whose company competes directly with Microsoft (we sell a Web server called WebSite that runs on Windows NT, and we are active in promoting Perl, Linux and other open-source technologies), I've been privy to some of the not-so-small details that have guided the course of this recent history. And it seems to me that if it weren't for the work of a small group of independent open-source software developers, the Justice Department intervention might have come too late, not just for Netscape but for the Web as a whole.