
Future of the Web: Group items tagged "domains"


Gary Edwards

With faster Chrome browser, Google offers an Android alternative - CNET - 0 views

  •  
    "On mobile devices, the Web hasn't lived up to its promise of a universal programming foundation. Google is trying to change that." Android hogged the spotlight at Google I/O, but performance improvements in Google's Chrome browser show that the company hasn't given up on trying to advance its other programming foundation -- the Web. The mobile version of Chrome has become much more responsive since 2013, said Paul Irish, a developer advocate on the Chrome team, speaking at the San Francisco conference. "We've improved the speed of animation by 75 percent and of scrolling 35 percent," Irish told developers Thursday. "We're committed to getting you 60 frames per second on the mobile Web." That performance is crucial for persuading people to use Web sites rather than native apps for things like posting on social networks, reading news, and playing games. It's also key to getting programmers to take the Web path when so many today focus on native apps written directly for Google's Android operating system and Apple's iOS competitor. The 60 frames-per-second rate refers to how fast the screen redraws when elements are in motion, either during games or when people are doing things like swiping among pages and dragging icons. The 60fps threshold is the minimum that game developers strive for, and to achieve it with no distracting stutters, a device must calculate how to update its entire screen every 16.7 milliseconds. Google, whose Android operating system initially lagged Apple's rival iOS significantly in this domain of responsiveness, has made great strides in improving its OS and its apps. But the mobile Web hasn't kept pace, and that means programmers have been more likely to aim for native apps rather than Web-based apps that can run on any device.
    Good review focused on the growing threat that native "platform-specific" apps are replacing Web apps as the developer's best choice. Florian thinks that native apps will win.
Gonzalo San Gil, PhD.

Demonoid Frustrates Censors With Domain Name Switch | TorrentFreak [#Note...] - 1 views

    • Gonzalo San Gil, PhD.
       
      #gatestothecountry
    • Gonzalo San Gil, PhD.
       
      # ! the never ending - expensive & useless- # ! cat-and-mouse game...
Gonzalo San Gil, PhD.

Will Molecular Biology's Most Important Discovery In Years Be Ruined By Patents? | Tech... - 1 views

  •  
    "from the GNU-Emacs-for-DNA dept Techdirt readers hardly need to be reminded that, far from promoting innovation, patents can shut it down, either directly, through legal action, or indirectly through the chill they cast on work in related areas. But not all patents are created equal. Some are so slight as to be irrelevant, while others have such a wide reach that they effectively control an entire domain. Patents on a new biological technique based on a mechanism found in nature, discussed in a long and fascinating piece in the Boston Review, definitely fall into the second category. Here's the article's explanation of the underlying mechanism, known as CRISPR-Cas: "
Gonzalo San Gil, PhD.

Warner Pays $14 Million For Illegitimate "Happy Birthday" Claims - TorrentFreak - 0 views

  •  
    "After raking in dozens of millions in licensing fees, Warner/Chappell has admitted that it doesn't own the rights to the song "Happy Birthday". The music company has agreed to set aside a $14 million settlement fund for people who paid to use Happy Birthday in public. In addition, the court has been asked to enter the song into the public domain." (Ernesto, TorrentFreak, February 10, 2016)
Gonzalo San Gil, PhD.

Learn how to calculate ROI for open hardware projects | Opensource.com - 0 views

  •  
    "Free and open source software advocates have courageously blazed a trail that is now being followed by those interested in open source for physical objects. It's called free and open source hardware (FOSH), and we're seeing an exponential rise in the number of free designs for hardware released under open source licenses, Creative Commons licenses, or placed in the public domain."
Paul Merrell

Google to block Flash on Chrome, only 10 websites exempt - CNET - 0 views

  • The inexorable slide into a world without Flash continues, with Google revealing plans to phase out support for Adobe's Flash Player in its Chrome browser for all but a handful of websites. And the company expects the changes to roll out by the fourth quarter of 2016. While it says Flash might have "historically" been a good way to present rich media online, Google is now much more partial to HTML5, thanks to faster load times and lower power use. As a result, Flash will still come bundled with Chrome, but "its presence will not be advertised by default." Where the Flash Player is the only option for viewing content on a site, users will need to actively switch it on for individual sites. Enterprise Chrome users will also have the option of switching Flash off altogether. Google will maintain support in the short-term for the top 10 domains using the player, including YouTube, Facebook, Yahoo, Twitch and Amazon. But this "whitelist" is set to be periodically reviewed, with sites removed if they no longer warrant an exception, and the exemption list will expire after a year. A spokesperson for Adobe said it was working with Google in its goal of "an industry-wide transition to Open Web standards," including the adoption of HTML5. "At the same time, given that Flash continues to be used in areas such as education, web gaming and premium video, the responsible thing for Adobe to do is to continue to support Flash with updates and fixes, as we help the industry transition," Adobe said in an emailed statement. "Looking ahead, we encourage content creators to build with new web standards."
Gonzalo San Gil, PhD.

Download & Streaming : Audio Archive : Internet Archive - 0 views

  •  
    "Download or listen to free music and audio This library contains recordings ranging from alternative news programming, to Grateful Dead concerts, to Old Time Radio shows, to book and poetry readings, to original music uploaded by our users. Many of these audios and MP3s are available for free download. Check our FAQ for more information. Contribute Your Audio Please feel free to upload your audio (Uploaders, please set a Creative Commons license as part of the upload process, so people know what they can do with your audio - thanks!) "
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is still underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
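  • A rough illustration of that extensibility (the class names are hypothetical publisher conventions, not part of any standard): ordinary XHTML elements carry the extra semantics without leaving the XHTML document type.
    <!-- Plain XHTML enriched with publisher-defined class attributes.
         The element set stays standard XHTML; "epigraph" and "summary"
         are illustrative conventions in the spirit of microformats. -->
    <div class="chapter">
      <p class="epigraph">Everything should be made as simple as possible, but not simpler.</p>
      <h2>Getting Started</h2>
      <p class="summary">This chapter introduces the basic workflow.</p>
      <p>Ordinary body paragraphs need no special treatment.</p>
    </div>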
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
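  • A sketch of one of those standard component parts: every ePub archive carries a fixed entry-point file, META-INF/container.xml, which points to the package file that declares the book's metadata and lists its XHTML content. The package path shown here is illustrative.
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- META-INF/container.xml: the fixed entry point inside the zip archive.
         It does nothing but point to the package (.opf) file; everything else,
         including the XHTML content, is declared from there. -->
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>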
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
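  • Schematically, that cleanup looked something like the following; the exported markup is illustrative of the presentation-oriented pattern described above, not InDesign's exact output.
    <!-- As exported: list items tagged as paragraphs with a class attribute -->
    <p class="bullet-item">First point</p>
    <p class="bullet-item">Second point</p>

    <!-- After cleanup: proper XHTML list structure, independent of any one tool -->
    <ul>
      <li>First point</li>
      <li>Second point</li>
    </ul>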
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the overall XML family of specifications, and thus is very well supported in a wide variety of tools. Our prototype used a command-line XSLT processor called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
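  • The script itself is not reproduced here; the following is a heavily reduced sketch of the kind of templates such a transform contains, assuming XHTML input and a simplified, illustrative subset of ICML output (real ICML requires considerably more boilerplate, and the style names are placeholders).
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal XHTML-to-ICML sketch, not the authors' script.
         Run with, e.g.:  xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:x="http://www.w3.org/1999/xhtml">
      <xsl:output method="xml" indent="yes"/>

      <!-- Wrap the whole document in a single story. -->
      <xsl:template match="/">
        <Document DOMVersion="7.0" Self="doc">
          <Story Self="story">
            <xsl:apply-templates select="//x:body/*"/>
          </Story>
        </Document>
      </xsl:template>

      <!-- Map each XHTML paragraph to a styled paragraph range. -->
      <xsl:template match="x:p">
        <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
          <CharacterStyleRange>
            <Content><xsl:value-of select="normalize-space(.)"/></Content>
            <Br/>
          </CharacterStyleRange>
        </ParagraphStyleRange>
      </xsl:template>

      <!-- Headings get their own paragraph style. -->
      <xsl:template match="x:h1 | x:h2">
        <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Heading">
          <CharacterStyleRange>
            <Content><xsl:value-of select="normalize-space(.)"/></Content>
            <Br/>
          </CharacterStyleRange>
        </ParagraphStyleRange>
      </xsl:template>
    </xsl:stylesheet>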
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
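  • In essence, the wrapper a tool like eCub generates is a package file sitting beside the untouched XHTML chapters. The sketch below is illustrative (titles, identifiers and file names are placeholders), not eCub's exact output.
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- OEBPS/content.opf (illustrative): lists every file in the book and the
         reading order; the XHTML chapters themselves are unchanged Web content. -->
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Book Publishing 1</dc:title>
        <dc:language>en</dc:language>
        <dc:identifier id="bookid">urn:uuid:placeholder</dc:identifier>
      </metadata>
      <manifest>
        <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
        <item id="ch02" href="chapter02.xhtml" media-type="application/xhtml+xml"/>
        <item id="css" href="book.css" media-type="text/css"/>
        <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
      </manifest>
      <spine toc="ncx">
        <itemref idref="ch01"/>
        <itemref idref="ch02"/>
      </spine>
    </package>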
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. The important point, though, is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of its application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Rapid - Press Releases - EUROPA - 0 views

  • MEMO/09/15 Brussels, 17th January 2009
  • The European Commission can confirm that it has sent a Statement of Objections (SO) to Microsoft on 15th January 2009. The SO outlines the Commission’s preliminary view that Microsoft’s tying of its web browser Internet Explorer to its dominant client PC operating system Windows infringes the EC Treaty rules on abuse of a dominant position (Article 82).
  • In the SO, the Commission sets out evidence and outlines its preliminary conclusion that Microsoft’s tying of Internet Explorer to the Windows operating system harms competition between web browsers, undermines product innovation and ultimately reduces consumer choice. The SO is based on the legal and economic principles established in the judgment of the Court of First Instance of 17 September 2007 (case T-201/04), in which the Court of First Instance upheld the Commission's decision of March 2004 (see IP/04/382), finding that Microsoft had abused its dominant position in the PC operating system market by tying Windows Media Player to its Windows PC operating system (see MEMO/07/359).
  • The evidence gathered during the investigation leads the Commission to believe that the tying of Internet Explorer with Windows, which makes Internet Explorer available on 90% of the world's PCs, distorts competition on the merits between competing web browsers insofar as it provides Internet Explorer with an artificial distribution advantage which other web browsers are unable to match. The Commission is concerned that through the tying, Microsoft shields Internet Explorer from head to head competition with other browsers which is detrimental to the pace of product innovation and to the quality of products which consumers ultimately obtain. In addition, the Commission is concerned that the ubiquity of Internet Explorer creates artificial incentives for content providers and software developers to design websites or software primarily for Internet Explorer which ultimately risks undermining competition and innovation in the provision of services to consumers.
  • Microsoft has 8 weeks to reply to the SO, and will then have the right to be heard in an Oral Hearing should it wish to do so. If the preliminary views expressed in the SO are confirmed, the Commission may impose a fine on Microsoft, require Microsoft to cease the abuse and impose a remedy that would restore genuine consumer choice and enable competition on the merits.
  • A Statement of Objections is a formal step in Commission antitrust investigations in which the Commission informs the parties concerned in writing of the objections raised against them. The addressee of a Statement of Objections can reply in writing to the Statement of Objections, setting out all facts known to it which are relevant to its defence against the objections raised by the Commission. The party may also request an oral hearing to present its comments on the case. The Commission may then take a decision on whether conduct addressed in the Statement of Objections is compatible or not with the EC Treaty’s antitrust rules. Sending a Statement of Objections does not prejudge the final outcome of the procedure. In the March 2004 Decision the Commission ordered Microsoft to offer to PC manufacturers a version of its Windows client PC operating system without Windows Media Player. Microsoft, however, retained the right to also offer a version with Windows Media Player (see IP/04/382).
  •  
    It's official, hot off the presses (wasn't there a few minutes ago). We're now into a process where DG Competition will revisit its previous order requiring Microsoft to market two versions of Windows, one with Media Player and one without. DG Competition staff were considerably outraged that Microsoft took advantage of a bit of under-specification in the previous order and sold the two versions at the same price. That detail will not be neglected this time around. Moreover, given the ineffectiveness of the previous order in restoring competition among media players, don't be surprised if this results in an outright ban on bundling MSIE with Windows.
Paul Merrell

Sun, Microsoft tout fruits of cooperation - CNET News - 0 views

  • The software will be incorporated into future versions of the companies' products--likely in 2006, Ballmer said. For now, it's the most concrete example of cooperation between the companies whose fierce competition was blunted somewhat by a 2004 agreement to settle legal issues, share patents and make their software interoperable.
  • Next up will be cooperation in a number of other domains: storage software and hardware; unified systems management; Web services standards for messaging and event-tracking; and Windows terminal services that let PCs act like thin clients by leaving the heavy lifting of computing to central servers.
  •  
    From 2005, a year after Sun and Microsoft became partners in Microsoft's assault on the Web.
Paul Merrell

The New York Times Archives + Amazon Web Services = TimesMachine - Open - Code - New Yo... - 0 views

  • TimesMachine is a collection of full-page image scans of the newspaper from 1851–1922 (i.e., the public domain archives). Organized chronologically and navigated by a simple calendar interface, TimesMachine provides a unique way to traverse the historical archives of The New York Times.
  • Using Amazon Web Services, Hadoop and our own code, we ingested 405,000 very large TIFF images, 3.3 million articles in SGML and 405,000 XML files mapping articles to rectangular regions in the TIFFs. This data was converted to a more web-friendly 810,000 PNG images (thumbnails and full images) and 405,000 JavaScript files — all of it ready to be assembled into a TimesMachine. By leveraging the power of AWS and Hadoop, we were able to utilize hundreds of machines concurrently and process all the data in less than 36 hours.
Gonzalo San Gil, PhD.

Free Online Class Shakes Up Photo Education | Raw File - 1 views

  •  
    [On the ground floor of a converted, Victorian-era cinema in Coventry, England, Jonathan Worth delivers a world-class photography lecture anyone can attend at any time, from anywhere, for free. The green-tiled building stands on an otherwise typical city center street. From here, alongside teaching assistant Matt Johnston and boss Jonathan Shaw, Worth corrals 28 attending students in addition to the few thousand clocking in from across the globe. ...]
David Corking

UK National Portrait Gallery threatens Wikipedia over scans of its public domain art - ... - 0 views

  • If you take public money to buy art, you should make that art available to the public using the best, most efficient means possible. If you believe the public wants to subsidize the creation of commercial art-books, then get out of the art-gallery business, start a publisher and hit the government up for some free tax-money.
    • David Corking
       
      Hear, hear.
  •  
    This is how I would like my taxes used.
  •  
    Analysis from the "open source" novelist
Paul Merrell

Microsoft offers free repository for agency data -- Government Computer News - 0 views

  • Microsoft has set up a repository in which government agencies may upload and store their public-facing datasets so that they can be reused by other parties. Agency developers can upload their data to this repository, called the Open Government Data Initiative (OGDI), through Microsoft's Azure, the company's cloud-computing offering.
  • Since taking the role of federal chief information officer, Vivek Kundra has urged agencies to make more of their data open to the public in easy-to-use formats. To this end, the General Services Administration, on behalf of Kundra, is setting up a repository of government feeds, to be called Data.gov. Data.gov will both serve as a repository for data and as an index for government data located elsewhere, Kundra told GCN. OGDI came about as a way to introduce Azure to the federal information technology community, said Susie Adams, Microsoft Federal chief technology officer. "The government wants to store all this data, what with Kundra talking about Data.gov. We asked if you were to use Azure as data source, [what would you need to do]?"
  • In addition to Microsoft's effort, at least one other company has volunteered to rehost government data for wider use. Amazon is offering to store public-domain datasets for users of its Elastic Compute Cloud service.
Paul Merrell

Google Says Website Encryption Will Now Influence Search Rankings - 0 views

  • Google will begin using website encryption, or HTTPS, as a ranking signal – a move which should prompt website developers who have dragged their heels on increased security measures, or who debated whether their website was “important” enough to require encryption, to make a change. Initially, HTTPS will only be a lightweight signal, affecting fewer than 1% of global queries, says Google. That means that the new signal won’t carry as much weight as other factors, including the quality of the content, the search giant noted, as Google means to give webmasters time to make the switch to HTTPS. Over time, however, encryption’s effect on search ranking may strengthen, as the company places more importance on website security. Google also promises to publish a series of best practices around TLS (HTTPS is also known as HTTP over TLS, or Transport Layer Security) so website developers can better understand what they need to do in order to implement the technology and what mistakes they should avoid. These tips will include things like what certificate type is needed, how to use relative URLs for resources on the same secure domain, best practices around allowing for site indexing, and more.
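  • One of those tips, using relative URLs for resources on the same secure domain, looks roughly like this in practice (illustrative markup, not Google's own example):
    <!-- Hard-coded scheme: triggers mixed-content warnings once the page moves to HTTPS -->
    <img src="http://www.example.com/images/logo.png" alt="logo" />

    <!-- Relative to the same secure domain: inherits whatever scheme the page was served over -->
    <img src="/images/logo.png" alt="logo" />
    <script src="/js/app.js"></script>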
  • In addition, website developers can test their current HTTPS-enabled website using the Qualys Lab tool, says Google, and can direct further questions to Google’s Webmaster Help Forums where the company is already in active discussions with the broader community. The announcement has drawn a lot of feedback from website developers and those in the SEO industry – for instance, Google’s own blog post on the matter, shared in the early morning hours on Thursday, is already nearing 1,000 comments. For the most part, the community seems to support the change, or at least acknowledge that they felt that something like this was in the works and are not surprised. Google itself has been making moves to better secure its own traffic in recent months, which have included encrypting traffic between its own servers. Gmail now always uses an encrypted HTTPS connection which keeps mail from being snooped on as it moves from a consumer’s machine to Google’s data centers.
  • While HTTPS and site encryption have been a best practice in the security community for years, the revelation that the NSA has been tapping the cables, so to speak, to mine user information directly has prompted many technology companies to consider increasing their own security measures, too. Yahoo, for example, also announced in November its plans to encrypt its data center traffic. Now Google is helping to push the rest of the web to do the same.
  •  
    The Internet continues to harden in the wake of the NSA revelations. This is a nice nudge by Google.
Gonzalo San Gil, PhD.

Apple Patents Technology to Legalize P2P Sharing | TorrentFreak * - 1 views

  •  
    "This means that transferring files between devices is only possible if these support Apple's licensing scheme. That's actually a step backwards from the DRM-free music that's sold in most stores today." [* What 'Apple's licensing scheme' -closed source- can hide?]
  •  
    A business method software patent combining old elements that are all prior art, including DRM. Yech! "... a patent that makes it possible to license P2P sharing" really puts a spin on reality. If the methods were in the public domain, anyone could use them without a license. That's equivalent to saying "a government-granted monopoly with the power but no responsibility to collect money from anyone who wants to invade the monopoly's protected rights" and presenting that fact as some sort of tremendous philanthropic act by Apple. On software patent claims as prior art and obvious, see my legal memo on that topic here. http://goo.gl/5X8Kg9
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept - 0 views

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.)
Paul Merrell

Facebook and Corporate "Friends" Threat Exchange? | nsnbc international - 0 views

  • Facebook teamed up with several corporate “friends” to adapt Facebook’s in-house software to identify cyber threats and their source with other corporations. Countering cyber threats sounds positive, but there are serious questions about transparency when smaller, independent media fall victim to major corporations’ unwillingness to reveal the source of attacks that resulted in websites being closed for hours or days. Transparency, yes, but for whom? Among the companies Facebook is teaming up with are Pinterest, Tumblr, Twitter, Yahoo, Dropbox and Bit.ly, reports Susanne Posel at Occupy Corporatism. The stated goal of “Threat Exchange” is to locate malware, the source domains, the IP addresses which are involved as well as the nature of the malware itself.
  • While the platform may be useful for major corporations, which can afford to buy the privilege of joining the club, the initiative does little to nothing to protect smaller, independent media from being targeted with impunity. The development prompts the question “Cyber security for whom?” The question is especially pertinent because identifying a site as containing malware, whether it is correct or not, will result in the site being added to Google’s so-called “Safe Browsing List”.
  • An article written by nsnbc editor-in-chief Christof Lehmann entitled “Censorship Alert: The Alternative Media are getting harassed by the NSA” provides several examples which raise serious questions about the lack of transparency when independent media demand information about either real or alleged malware content on their websites. Alleged malware content in a JavaScript file that had been inserted via the third-party advertising company MadAdsMedia resulted in the nsnbc website being closed down and added to Google’s Safe Browsing list. nsnbc’s request for detailed information about the alleged malware and, most importantly, about its source was rejected. MadAdsMedia’s response to a renewed request was to stop serving advertisements to nsnbc from one day to the next, stating that nsnbc could contact another company, YieldSelect, which is run by the same company. Shell Games? SiteLock, which partners with most Western-based web hosting providers, including BlueHost, Hostgator and many others, contacted nsnbc warning about an alleged malware threat. SiteLock refused to provide detailed information.
  • ...1 more annotation...
  • BlueHost refused to help the International Middle East Media Center (IMEMC) during a denial-of-service (DoS) attack. Asked for help, BlueHost reportedly said that IMEMC should deal with the issue itself, which was impossible without BlueHost’s cooperation. The news agency’s website was down for days because BlueHost reportedly just shut down IMEMC’s server and told the editor-in-chief, Saed Bannoura, to “go somewhere else”. The question is whether “transparency” can be the privilege of major corporations or whether there is a need for legislation that forces all corporations to provide detailed information that enables media and other internet users to pursue real or alleged malware threats, cyber attacks and so forth, criminally and legally. That is, also when the alleged or real threat involves major corporations.
Paul Merrell

WikiLeaks republishes all Sony hacking scandal documents | Technology | The Guardian - 0 views

  • Julian Assange says data ‘belongs in the public domain’ and says hacked files shed light on extent of cooperation between government and Hollywood