
Future of the Web: Group items tagged "declaration"



XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges. In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform. The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
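    As a rough illustration of that kind of class-based extension (the class names below are invented for this example, not taken from any published microformat), a recipe might be tagged in plain XHTML like so:

        <div class="recipe">
          <h2 class="title">Basic Bread</h2>
          <ul class="ingredients">
            <li class="ingredient">500 g flour</li>
            <li class="ingredient">325 ml water</li>
          </ul>
          <p class="instructions">Mix, knead, rest, and bake.</p>
        </div>

    Any XHTML-aware tool still treats this as ordinary markup, while a publisher's own scripts can use the class values as semantic hooks for search, reuse, or conversion.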
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
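    Concretely, unzipping a typical ePub reveals a mimetype file containing the string application/epub+zip, a META-INF/container.xml pointing at the package (.opf) file that holds the book's metadata and manifest, and the XHTML chapter files themselves. The container.xml is nothing more than a pointer (the OEBPS/content.opf path shown here is a common convention, not a requirement):

        <?xml version="1.0" encoding="UTF-8"?>
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <!-- points the reading system at the package file -->
            <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>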
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
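    A minimal before-and-after sketch of that clean-up (the class name is a stand-in for whatever the export actually emits):

        <!-- As exported: list items flattened into styled paragraphs -->
        <p class="bullet-item">Prefer existing tools</p>
        <p class="bullet-item">Prefer free software</p>

        <!-- After clean-up: the structure is carried by the markup itself -->
        <ul>
          <li>Prefer existing tools</li>
          <li>Prefer free software</li>
        </ul>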
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
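    The script itself is not reproduced here, but a heavily simplified sketch of the kind of XSLT template it contains might look like the following; the ICML story wrapper is omitted, and the style names ("ParagraphStyle/Body", "CharacterStyle/Default") are assumptions standing in for whatever the InDesign template defines:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:h="http://www.w3.org/1999/xhtml">

          <!-- Map each XHTML paragraph to an InCopy paragraph style range.
               Inline formatting (em, strong, links) is ignored in this sketch. -->
          <xsl:template match="h:p">
            <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
              <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/Default">
                <Content><xsl:value-of select="."/></Content>
                <Br/>
              </CharacterStyleRange>
            </ParagraphStyleRange>
          </xsl:template>

          <!-- Suppress stray text from elements not handled above -->
          <xsl:template match="text()"/>

        </xsl:stylesheet>

    With a stylesheet along these lines, a single xsltproc invocation (file names assumed for illustration) does the conversion: xsltproc -o chapter.icml xhtml2icml.xsl chapter.xhtml.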
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files. Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to

Spies and internet giants are in the same business: surveillance. But we can stop them ... - 0 views

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.

Obama lawyers asked secret court to ignore public court's decision on spying | US news ... - 0 views

  • The Obama administration has asked a secret surveillance court to ignore a federal court that found bulk surveillance illegal and to once again grant the National Security Agency the power to collect the phone records of millions of Americans for six months. The legal request, filed nearly four hours after Barack Obama vowed to sign a new law banning precisely the bulk collection he asks the secret court to approve, also suggests that the administration may not necessarily comply with any potential court order demanding that the collection stop.
  • But Carlin asked the Fisa court to set aside a landmark declaration by the second circuit court of appeals. Decided on 7 May, the appeals court ruled that the government had erroneously interpreted the Patriot Act’s authorization of data collection as “relevant” to an ongoing investigation to permit bulk collection. Carlin, in his filing, wrote that the Patriot Act provision remained “in effect” during the transition period. “This court may certainly consider ACLU v Clapper as part of its evaluation of the government’s application, but second circuit rulings do not constitute controlling precedent for this court,” Carlin wrote in the 2 June application. Instead, the government asked the court to rely on its own body of once-secret precedent stretching back to 2006, which Carlin called “the better interpretation of the statute”.
  • But the Fisa court must first decide whether the new bulk-surveillance request is lawful. On Friday, the conservative group FreedomWorks filed a rare motion before the Fisa court, asking it to reject the government’s surveillance request as a violation of the fourth amendment’s prohibition on unreasonable searches and seizures. Fisa court judge Michael Mosman gave the justice department until this coming Friday to respond – and explicitly barred the government from arguing that FreedomWorks lacks the standing to petition the secret court.

Google Open Sources Google XML Pages - O'Reilly News - 0 views

  • At OSCON 2008, Gonsalves made the announcement that, after several years of consideration, Google was releasing Google XML Pages (or GXP) under the Apache Open Source License.
  • Originally developed as a Python interpreter that produced Java source code, gxp was rewritten in 2006-7 to be a completely Java based application. The idea behind gxp is fairly simple (and is one that is used, in slightly different fashion, for Microsoft's XAML and Silverlight) - a web designer can declare a number of XML namespaces that define specific libraries on an XHTML or GXP container element, intermixing GXP and XHTML code in order to perform conditional logic, invoke server components, define state variables or create template modules. This GXP code is then parsed and used to generate the relevant Java code, which in turn is compiled into a server module invoked from within a Java servlet engine such as Tomcat or Jetty and cached on the server.
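    Going by the project's published examples (reproduced here from memory, so treat the namespace URI and element names as illustrative rather than authoritative), a GXP template is an XML file along these general lines:

        <gxp:template name="com.example.HelloUser"
            xmlns="http://www.w3.org/1999/xhtml"
            xmlns:gxp="http://google.com/2001/gxp">

          <!-- Declared parameter becomes an argument of the generated Java code -->
          <gxp:param name="userName" type="String"/>

          <div class="greeting">
            <gxp:if cond="userName != null">
              Hello, <gxp:eval expr="userName"/>!
            </gxp:if>
          </div>

        </gxp:template>

    As the announcement describes, the GXP compiler turns a template like this into Java source, which is then compiled and invoked from within a servlet engine such as Tomcat or Jetty.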

Microsoft breaks IE8 interoperability promise | The Register - 0 views

  • In March, Microsoft announced that their upcoming Internet Explorer 8 would: "use its most standards compliant mode, IE8 Standards, as the default." Note the last word: default. Microsoft argued that, in light of their newly published interoperability principles, it was the right thing to do. This declaration heralded an about-face and was widely praised by the web standards community; people were stunned and delighted by Microsoft's promise. This week, the promise was broken. It lasted less than six months. Now that IE8 beta 2 is released, we know that many, if not most, pages viewed in IE8 will not be shown in standards mode by default.
  • How many pages are affected by this change? Here's the back of my envelope: The PC market can be split into two segments — the enterprise market and the home market. The enterprise market accounts for around 60 per cent of all PCs sold, while the home market accounts for the remaining 40 per cent. Within enterprises, intranets are used for all sorts of things and account for, perhaps, 80 per cent of all page views. Thus, intranets account for about half of all page views on PCs!
  •  
    Article by Hakon Lie of Opera Software. Also note that according to the European Commission, "As for the tying of separate software products, in its Microsoft judgment of 17 September 2007, the Court of First Instance confirmed the principles that must be respected by dominant companies. In a complaint by Opera, a competing browser vendor, Microsoft is alleged to have engaged in illegal tying of its Internet Explorer product to its dominant Windows operating system. The complaint alleges that there is ongoing competitive harm from Microsoft's practices, in particular in view of new proprietary technologies that Microsoft has allegedly introduced in its browser that would reduce compatibility with open internet standards, and therefore hinder competition. In addition, allegations of tying of other separate software products by Microsoft, including desktop search and Windows Live have been brought to the Commission's attention. The Commission's investigation will therefore focus on allegations that a range of products have been unlawfully tied to sales of Microsoft's dominant operating system." http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/08/19&format=HTML&aged=0&language=EN&guiLanguage=en

XForms for HTML: W3C Working Draft 19 December 2008 - 0 views

  • Abstract: XForms for HTML provides a set of attributes and script methods that can be used by the tags or elements of an HTML or XHTML web page to simplify the integration of data-intensive interactive processing capabilities from XForms. The semantics of the attributes are mapped to the rich XForms model-view-controller-connector architecture, thereby allowing web application authors a smoother, selective migration path to the higher-order behaviors available from the full element markup available in modules of XForms.
  • This document describes XForms for HTML, which provides a set of attributes and script methods encompassing a useful subset of XForms functionality and mapping that functionality to syntactic constructs that are familiar to authors of HTML and XHTML web pages. The intent of this module is to simplify the means by which web page authors gain access to the rich functionality available from the hybrid execution model of XForms, which combines declarative constructs with event-driven imperative processing. These attributes and script methods increase the initial consumability of XForms by allowing injection of rich semantics directly into the host language markup. In turn, the behaviors of the attributes and script methods are mapped to the XForms model-view-controller-connector architecture so that applications manifest behaviors consistent with having used XForms markup elements. This allows authors to gradually address greater application complexity as it arises in the software lifecycle by opportunistically, i.e. as the need arises, switching from the attributes and script methods of this specification to the corresponding XForms markup elements. This gradual adoption strategy is being further supported by the modularization of XForms into components that can be consumed incrementally by authors and implementers.

W3C releases Working Draft for Widgets 1.0: APIs and Events - 0 views

  • This specification defines a set of APIs and events for the Widgets 1.0 Family of Specifications that enable baseline functionality for widgets. The APIs and events defined by this specification define, amongst other things, the means to: access the metadata declared in a widget's configuration document; receive events related to changes in the view state of a widget; determine the locale under which a widget is currently running; be notified of events relating to the widget being updated; invoke a widget to open a URL on the system's default browser; request the user's attention in a device-independent manner; and check whether any additional APIs requested via the configuration document's feature element have successfully loaded.
  • This specification defines a set of APIs and events for widgets that enable baseline functionality for widgets. Widgets are full-fledged client-side applications that are authored using Web standards. They are typically downloaded and installed on a client machine or device where they typically run as stand-alone applications outside of a Web browser. Examples range from simple clocks, stock tickers, news casters, games and weather forecasters, to complex applications that pull data from multiple sources to be "mashed-up" and presented to a user in some interesting and useful way
  • This specification is part of the Widgets 1.0 family of specifications, which together standardize widgets as a whole. The Widgets 1.0: Packaging and Configuration [Widgets-Packaging] specification standardizes a Zip-based packaging format, an XML-based configuration document format and a series of steps that user agents follow when processing and verifying various aspects of widgets. The Widgets 1.0: Digital Signature [Widgets-DigSig] specification defines a means for widgets to be digitally signed using a custom profile of the XML-Signature Syntax and Processing Specification. The Widgets 1.0: Automatic Updates [Widgets-Updates] specification defines a version control model that allows widgets to be kept up-to-date over [HTTP].

UN Report Finds Mass Surveillance Violates International Treaties and Privacy Rights - ... - 0 views

  • The United Nations’ top official for counter-terrorism and human rights (known as the “Special Rapporteur”) issued a formal report to the U.N. General Assembly today that condemns mass electronic surveillance as a clear violation of core privacy rights guaranteed by multiple treaties and conventions. “The hard truth is that the use of mass surveillance technology effectively does away with the right to privacy of communications on the Internet altogether,” the report concluded. Central to the Rapporteur’s findings is the distinction between “targeted surveillance” — which “depend[s] upon the existence of prior suspicion of the targeted individual or organization” — and “mass surveillance,” whereby “states with high levels of Internet penetration can [] gain access to the telephone and e-mail content of an effectively unlimited number of users and maintain an overview of Internet activity associated with particular websites.” In a system of “mass surveillance,” the report explained, “all of this is possible without any prior suspicion related to a specific individual or organization. The communications of literally every Internet user are potentially open for inspection by intelligence and law enforcement agencies in the States concerned.”
  • Mass surveillance thus “amounts to a systematic interference with the right to respect for the privacy of communications,” it declared. As a result, “it is incompatible with existing concepts of privacy for States to collect all communications or metadata all the time indiscriminately.” In concluding that mass surveillance impinges core privacy rights, the report was primarily focused on the International Covenant on Civil and Political Rights, a treaty enacted by the General Assembly in 1966, to which all of the members of the “Five Eyes” alliance are signatories. The U.S. ratified the treaty in 1992, albeit with various reservations that allowed for the continuation of the death penalty and which rendered its domestic law supreme. With the exception of the U.S.’s Persian Gulf allies (Saudi Arabia, UAE and Qatar), virtually every major country has signed the treaty. Article 17 of the Covenant guarantees the right of privacy, the defining protection of which, the report explained, is “that individuals have the right to share information and ideas with one another without interference by the State, secure in the knowledge that their communication will reach and be read by the intended recipients alone.”
  • The report’s key conclusion is that this core right is impinged by mass surveillance programs: “Bulk access technology is indiscriminately corrosive of online privacy and impinges on the very essence of the right guaranteed by article 17. In the absence of a formal derogation from States’ obligations under the Covenant, these programs pose a direct and ongoing challenge to an established norm of international law.” The report recognized that protecting citizens from terrorism attacks is a vital duty of every state, and that the right of privacy is not absolute, as it can be compromised when doing so is “necessary” to serve “compelling” purposes. It noted: “There may be a compelling counter-terrorism justification for the radical re-evaluation of Internet privacy rights that these practices necessitate. ” But the report was adamant that no such justifications have ever been demonstrated by any member state using mass surveillance: “The States engaging in mass surveillance have so far failed to provide a detailed and evidence-based public justification for its necessity, and almost no States have enacted explicit domestic legislation to authorize its use.”
  • Instead, explained the Rapporteur, states have relied on vague claims whose validity cannot be assessed because of the secrecy behind which these programs are hidden: “The arguments in favor of a complete abrogation of the right to privacy on the Internet have not been made publicly by the States concerned or subjected to informed scrutiny and debate.” About the ongoing secrecy surrounding the programs, the report explained that “states deploying this technology retain a monopoly of information about its impact,” which is “a form of conceptual censorship … that precludes informed debate.” A June report from the High Commissioner for Human Rights similarly noted “the disturbing lack of governmental transparency associated with surveillance policies, laws and practices, which hinders any effort to assess their coherence with international human rights law and to ensure accountability.” The rejection of the “terrorism” justification for mass surveillance as devoid of evidence echoes virtually every other formal investigation into these programs. A federal judge last December found that the U.S. Government was unable to “cite a single case in which analysis of the NSA’s bulk metadata collection actually stopped an imminent terrorist attack.” Later that month, President Obama’s own Review Group on Intelligence and Communications Technologies concluded that mass surveillance “was not essential to preventing attacks” and information used to detect plots “could readily have been obtained in a timely manner using conventional [court] orders.”
  • That principle — that the right of internet privacy belongs to all individuals, not just Americans — was invoked by NSA whistleblower Edward Snowden when he explained in a June, 2013 interview at The Guardian why he disclosed documents showing global surveillance rather than just the surveillance of Americans: “More fundamentally, the ‘US Persons’ protection in general is a distraction from the power and danger of this system. Suspicionless surveillance does not become okay simply because it’s only victimizing 95% of the world instead of 100%.” The U.N. Rapporteur was clear that these systematic privacy violations are the result of a union between governments and tech corporations: “States increasingly rely on the private sector to facilitate digital surveillance. This is not confined to the enactment of mandatory data retention legislation. Corporates [sic] have also been directly complicit in operationalizing bulk access technology through the design of communications infrastructure that facilitates mass surveillance. ”
  • The report was most scathing in its rejection of a key argument often made by American defenders of the NSA: that mass surveillance is justified because Americans are given special protections (the requirement of a FISA court order for targeted surveillance) which non-Americans (95% of the world) do not enjoy. Not only does this scheme fail to render mass surveillance legal, but it itself constitutes a separate violation of international treaties (emphasis added): The Special Rapporteur concurs with the High Commissioner for Human Rights that where States penetrate infrastructure located outside their territorial jurisdiction, they remain bound by their obligations under the Covenant. Moreover, article 26 of the Covenant prohibits discrimination on grounds of, inter alia, nationality and citizenship. The Special Rapporteur thus considers that States are legally obliged to afford the same privacy protection for nationals and non-nationals and for those within and outside their jurisdiction. Asymmetrical privacy protection regimes are a clear violation of the requirements of the Covenant.
  • Three Democratic Senators on the Senate Intelligence Committee wrote in The New York Times that “the usefulness of the bulk collection program has been greatly exaggerated” and “we have yet to see any proof that it provides real, unique value in protecting national security.” A study by the centrist New America Foundation found that mass metadata collection “has had no discernible impact on preventing acts of terrorism” and, where plots were disrupted, “traditional law enforcement and investigative methods provided the tip or evidence to initiate the case.” It labeled the NSA’s claims to the contrary as “overblown and even misleading.” While worthless in counter-terrorism policies, the UN report warned that allowing mass surveillance to persist with no transparency creates “an ever present danger of ‘purpose creep,’ by which measures justified on counter-terrorism grounds are made available for use by public authorities for much less weighty public interest purposes.” Citing the UK as one example, the report warned that, already, “a wide range of public bodies have access to communications data, for a wide variety of purposes, often without judicial authorization or meaningful independent oversight.”
  • The latest finding adds to the growing number of international formal rulings that the mass surveillance programs of the U.S. and its partners are illegal. In January, the European parliament’s civil liberties committee condemned such programs in “the strongest possible terms.” In April, the European Court of Justice ruled that European legislation on data retention contravened EU privacy rights. A top secret memo from the GCHQ, published last year by The Guardian, explicitly stated that one key reason for concealing these programs was fear of a “damaging public debate” and specifically “legal challenges against the current regime.” The report ended with a call for far greater transparency along with new protections for privacy in the digital age. Continuation of the status quo, it warned, imposes “a risk that systematic interference with the security of digital communications will continue to proliferate without any serious consideration being given to the implications of the wholesale abandonment of the right to online privacy.” The urgency of these reforms is underscored, explained the Rapporteur, by a conclusion of the United States Privacy and Civil Liberties Oversight Board that “permitting the government to routinely collect the calling records of the entire nation fundamentally shifts the balance of power between the state and its citizens.”

Pro-Privacy Senator Wyden on Fighting the NSA From Inside the System | WIRED - 1 views

  •  
    "Senator Ron Wyden thought he knew what was going on. The Democrat from Oregon, who has served on the Senate Select Committee on Intelligence since 2001, thought he knew the nature of the National Security Agency's surveillance activities. As a committee member with a classified clearance, he received regular briefings to conduct oversight."
  •  
    I'm a retired lawyer in Oregon and a devout civil libertarian. Wyden is one of my senators. I have been closely following this government digital surveillance stuff since the original articles in 1988 that first broke the story on the Five Eyes' Echelon surveillance system. E.g., http://goo.gl/mCxs6Y While I will grant that Wyden has bucked the system gently (he's far more a drag anchor than a propeller), he has shown no political courage on the NSA stuff whatsoever. In the linked article, he admits keeping his job as a Senator was more important to him than doing anything *effective* to stop the surveillance in its tracks. His "working from the inside" line notwithstanding, he allowed creation of a truly Orwellian state to develop without more than a few ineffective yelps that were never listened to because he lacked the courage to take a stand and bring down the house that NSA built with documentary evidence. It took a series of whistleblowers culminating in Edward Snowden's courageous willingness to spend the rest of his life in prison to bring the public to its currently educated state. Wyden on the other hand, didn't even have the courage to lay it all out in the public Congressional record when he could have done so at any time without risking more than his political career because of the Constitution's Speech and Debate Clause that absolutely protects Wyden from criminal prosecution had he done so. I don't buy arguments that fear of NSA blackmail can excuse politicians from doing their duty. That did not stop the Supreme Court from unanimously laying down an opinion, in Riley v. California, that brings to an end the line of case decisions based on Smith v. Maryland that is the underpinning of the NSA/DoJ position on access to phone metadata without a warrant. http://scholar.google.com/scholar_case?case=9647156672357738355 Elected and appointed government officials owe a duty to the citizens of this land to protect and defend the Constitution that legallh

FBI Now Holding Up Michael Horowitz' Investigation into the DEA | emptywheel - 0 views

  • Man, at some point Congress is going to have to declare the FBI legally contemptuous and throw them in jail. They continue to refuse to cooperate with DOJ’s Inspector General, as they have been for basically 5 years. But in Michael Horowitz’ latest complaint to Congress, he adds a new spin: FBI is not only obstructing his investigation of the FBI’s management impaired surveillance, now FBI is obstructing his investigation of DEA’s management impaired surveillance. I first reported on DOJ IG’s investigation into DEA’s dragnet databases last April. At that point, the only dragnet we knew about was Hemisphere, which DEA uses to obtain years of phone records as well as location data and other details, before it then parallel constructs that data out of a defendant’s reach.
  • But since then, we’ve learned of what the government claims to be another database — that used to identify Shantia Hassanshahi in an Iranian sanctions case. After some delay, the government revealed that this was another dragnet, including just international calls. It claims that this database was suspended in September 2013 (around the time Hemisphere became public) and that it is no longer obtaining bulk records for it. According to the latest installment of Michael Horowitz’ complaints about FBI obstruction, he tried to obtain records on the DEA databases on November 20, 2014 (of note, during the period when the government was still refusing to tell even Judge Rudolph Contreras what the database implicating Hassanshahi was). FBI slow-walked production, but promised to provide everything to Horowitz by February 13, 2015. FBI has decided it has to keep reviewing the emails in question to see if there is grand jury, Title III electronic surveillance, and Fair Credit Reporting Act materials, which are the same categories of stuff FBI has refused in the past. So Horowitz is pointing to the language tied to DOJ’s appropriations for FY 2015 which (basically) defunded FBI obstruction. Only FBI continues to obstruct.
  • There’s one more question about this. As noted, this investigation is supposed to be about DEA’s databases. We’ve already seen that FBI uses Hemisphere (when I asked FBI for comment in advance of this February 4, 2014 article on FBI obstinance, Hemisphere was the one thing they refused all comment on). And obviously, FBI accessed another DEA database to go after Hassanshahi. So that may be the only reason why Horowitz needs the FBI’s cooperation to investigate the DEA’s dragnets. Plus, assuming FBI is parallel constructing these dragnets just like DEA is, I can understand why they’d want to withhold grand jury information, which would make that clear. Still, I can’t help but wonder — as I have in the past — whether these dragnets are all connected, a constantly moving shell game. That might explain why FBI is so intent on obstructing Horowitz again.
  •  
    Marcy Wheeler's speculation that various government databases simply move to another agency when they're brought to light is not without precedent. When Congress shut down DARPA's Total Information Awareness program, most of its software programs and databases were just moved to NSA.

Germany Fires Verizon Over NSA Spying - 0 views

  • Germany announced Thursday it is canceling its contract with Verizon Communications over concerns about the role of U.S. telecom corporations in National Security Agency spying. “The links revealed between foreign intelligence agencies and firms after the N.S.A. affair show that the German government needs a high level of security for its essential networks,” declared Germany’s Interior Ministry in a statement released Thursday. The Ministry said it is engaging in a communications overhaul to strengthen privacy protections as part of the process of severing ties with Verizon. The announcement follows revelations, made possible by NSA whistleblower Edward Snowden, that Germany is a prime target of NSA spying. This includes surveillance of German Chancellor Angela Merkel’s mobile phone communications, as well as a vast network of centers that secretly collect information across the country. Yet, many have accused Germany of being complicit in NSA spying, in addition to being targeted by it. The German government has refused to grant Snowden political asylum, despite his contribution to the public record about U.S. spying on Germany.
4More

Carriers Tell U.S. 'No' to Plans for Internet Fast Lanes - 1 views

  •  
    [# Another little freedom battle won by citizens...] "In recent letters, AT&T, Comcast and Verizon said they have no plans to seek deals with content providers that would give faster Internet performance in exchange for special payments."
  •  
    "In recent letters, AT&T, Comcast and Verizon said they have no plans to seek deals with content providers that would give faster Internet performance in exchange for special payments." [ # How Good it would be # ! ... if it were #true... # ! #Time Will '#Tell' # ! And, if real, it will be thanks to citizens' #coordinated #struggle...]
  •  
    Too early to declare victory. The battle isn't over until the FCC adopts regulations *forbidding* the carriers from charging extra for faster data transmission. Company statements built on weasel words like "have no plans" leave the carriers a wide-open door to change their minds after a regulation permitting the surcharges is adopted. It could be a ploy to dampen the number of emails the FCC, the White House, and Congress are receiving. In public interest law matters, what the corporate side says is irrelevant and frequently a lie. What matters is the wording of the final rule.
3More

The Linux desktop battle (and why it matters) - TechRepublic - 2 views

  •  
    Jack Wallen ponders the problem with the ever-lagging acceptance of the Linux desktop and poses a radical solution.
  •  
    "Jack Wallen ponders the problem with the ever-lagging acceptance of the Linux desktop and poses a radical solution. Linux desktop I have been using Ubuntu Unity for a very long time. In fact, I would say that this is, by far, the longest I've stuck with a single desktop interface. Period. That doesn't mean I don't stop to smell the desktop roses along the Linux path. In fact, I've often considered other desktops as a drop-in replacement for Unity. GNOME and Budgie have vied for my attention of late. Both are solid takes on the desktop that offer a minimalistic, modern look and feel (something I prefer) and help me get my work done with an efficiency other desktops can't match. What I see across the Linux landscape, however, often takes me by surprise. While Microsoft and Apple continue to push the idea of the user interface forward, a good amount of the Linux community seems bent on holding us in a perpetual state of "90s computing." Consider Xfce, Mate, and Cinnamon -- three very popular Linux desktop interfaces that work with one very common thread... not changing for the sake of change. Now, this can be considered a very admirable cause when it's put in place to ensure that user experience (UX) is as positive as possible. What this idea does, however, is deny the idea that change can affect an even more efficient and positive UX. When I spin up a distribution that makes use of Xfce, Mate, or Cinnamon, I find the environments work well and get the job done. At the same time, I feel as if the design of the desktops is trapped in the wrong era. At this point, you're certainly questioning the validity and path of this post. If the desktops work well and help you get the job done, what's wrong? It's all about perception. Let me offer you up a bit of perspective. The only reason Apple managed to rise from the ashes and become one of the single most powerful forces in technology is because they understood the concept of perception. They re-invented th
2More

Report: Germany Spied on FBI, US Companies, French Minister - 0 views

  • German public radio station rbb-Inforadio reported Wednesday that the country's foreign intelligence agency spied on the FBI and U.S. arms companies, adding to a growing list of targets among friendly nations the agency allegedly eavesdropped on. The station claimed that Germany's BND also spied on the International Criminal Court in The Hague, the World Health Organization, French Foreign Minister Laurent Fabius and even a German diplomat who headed an EU observer mission to Georgia from 2008 to 2011. It provided no source for its report, but the respected German weekly Der Spiegel also reported at the weekend that the BND targeted phone numbers and email addresses of officials in the United States, Britain, France, Switzerland, Greece, the Vatican and other European countries, as well as at international aid groups such as the Red Cross. The claims are particularly sensitive in Germany because the government reacted with anger two years ago to reports that the U.S. eavesdropped on German targets, including Chancellor Angela Merkel, who declared at the time that "spying among friends, that's just wrong." German lawmakers have broadened a probe into the U.S. National Security Agency's activities in the country to include the work of the BND.
6More

Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments - 0 views

  • In September of last year, we noted that Facebook representatives were meeting with the Israeli government to determine which Facebook accounts of Palestinians should be deleted on the ground that they constituted “incitement.” The meetings — called for and presided over by one of the most extremist and authoritarian Israeli officials, pro-settlement Justice Minister Ayelet Shaked — came after Israel threatened Facebook that its failure to voluntarily comply with Israeli deletion orders would result in the enactment of laws requiring Facebook to do so, upon pain of being severely fined or even blocked in the country. The predictable results of those meetings are now clear and well-documented. Ever since, Facebook has been on a censorship rampage against Palestinian activists who protest the decades-long, illegal Israeli occupation, all directed and determined by Israeli officials. Indeed, Israeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders
  • Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government.
  • What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced? As the ACLU’s Jennifer Granick told the Times: It’s not a law that appears to be written or designed to deal with the special situations where it’s lawful or appropriate to repress speech. … This sanctions law is being used to suppress speech with little consideration of the free expression values and the special risks of blocking speech, as opposed to blocking commerce or funds as the sanctions was designed to do. That’s really problematic.
  • ...3 more annotations...
  • As is always true of censorship, there is one, and only one, principle driving all of this: power. Facebook will submit to and obey the censorship demands of governments and officials who actually wield power over it, while ignoring those who do not. That’s why declared enemies of the U.S. and Israeli governments are vulnerable to censorship measures by Facebook, whereas U.S. and Israeli officials (and their most tyrannical and repressive allies) are not.
  • All of this illustrates that the same severe dangers from state censorship are raised at least as much by the pleas for Silicon Valley giants to more actively censor “bad speech.” Calls for state censorship may often be well-intentioned — a desire to protect marginalized groups from damaging “hate speech” — yet, predictably, they are far more often used against marginalized groups: to censor them rather than protect them. One need merely look at how hate speech laws are used in Europe, or on U.S. college campuses, to see that the censorship victims are often critics of European wars, or activists against Israeli occupation, or advocates for minority rights.
  • It’s hard to believe that anyone’s ideal view of the internet entails vesting power in the U.S. government, the Israeli government, and other world powers to decide who may be heard on it and who must be suppressed. But increasingly, in the name of pleading with internet companies to protect us, that’s exactly what is happening.
1More

Trump's Blocking of Twitter Users Is Unconstitutional, Judge Says - The New York Times - 0 views

  • Apart from the man himself, perhaps nothing has defined President Trump’s political persona more than Twitter. But on Wednesday, one of Mr. Trump’s Twitter habits — his practice of blocking critics on the service, preventing them from engaging with his account — was declared unconstitutional by a federal judge in Manhattan. Judge Naomi Reice Buchwald, addressing a novel issue about how the Constitution applies to social media platforms and public officials, found that the president’s Twitter feed is a public forum. As a result, she ruled that when Mr. Trump or an aide blocked seven plaintiffs from viewing and replying to his posts, he violated the First Amendment. If the principle undergirding Wednesday’s ruling in Federal District Court stands, it is likely to have implications far beyond Mr. Trump’s feed and its 52 million followers, said Jameel Jaffer, the Knight First Amendment Institute’s executive director and the counsel for the plaintiffs. Public officials throughout the country, from local politicians to governors and members of Congress, regularly use social media platforms like Twitter and Facebook to interact with the public about government business.
5More

Time to 'Break Facebook Up,' Sanders Says After Leaked Docs Show Social Media Giant 'Tr... - 0 views

  • After NBC News on Wednesday published a trove of leaked documents that show how Facebook "treated user data as a bargaining chip with external app developers," White House hopeful Sen. Bernie Sanders declared that it is time "to break Facebook up."
  • When British investigative journalist Duncan Campbell first shared the trove of documents with a handful of media outlets including NBC News in April, journalists Olivia Solon and Cyrus Farivar reported that "Facebook CEO Mark Zuckerberg oversaw plans to consolidate the social network's power and control competitors by treating its users' data as a bargaining chip, while publicly proclaiming to be protecting that data." With the publication Wednesday of nearly 7,000 pages of records—which include internal Facebook emails, web chats, notes, presentations, and spreadsheets—journalists and the public can now have a closer look at exactly how the company was using the vast amount of data it collects when it came to bargaining with third parties.
  • The document dump comes as Facebook and Zuckerberg are facing widespread criticism over the company's political advertising policy, which allows candidates for elected office to lie in the ads they pay to circulate on the platform. It also comes as 47 state attorneys general, led by Letitia James of New York, are investigating the social media giant for antitrust violations.
  • ...2 more annotations...
  • According to Solon and Farivar of NBC: Taken together, they show how Zuckerberg, along with his board and management team, found ways to tap Facebook users' data—including information about friends, relationships, and photos—as leverage over the companies it partnered with. In some cases, Facebook would reward partners by giving them preferential access to certain types of user data while denying the same access to rival companies. For example, Facebook gave Amazon special access to user data because it was spending money on Facebook advertising. In another case the messaging app MessageMe was cut off from access to data because it had grown too popular and could compete with Facebook.
  • The call from Sanders (I-Vt.) Wednesday to break up Facebook follows similar but less definitive statements from the senator. One of Sanders' rivals in the 2020 Democratic presidential primary race, Sen. Elizabeth Warren (D-Mass.), released her plan to "Break Up Big Tech" in March. Zuckerberg is among the opponents of Warren's proposal, which also targets other major technology companies like Amazon and Google.
2More

Comcast asks the FCC to prohibit states from enforcing net neutrality | Ars Technica - 0 views

  • Comcast met with Federal Communications Commission Chairman Ajit Pai's staff this week in an attempt to prevent states from issuing net neutrality rules. As the FCC prepares to gut its net neutrality rules, broadband providers are worried that states might enact their own laws to prevent ISPs from blocking, throttling, or discriminating against online content.
  • Comcast Senior VP Frank Buono and a Comcast attorney met with Pai Chief of Staff Matthew Berry and Senior Counsel Nicholas Degani on Monday, the company said in an ex parte filing that describes the meeting. Comcast urged Pai's staff to reverse the FCC's classification of broadband as a Title II common carrier service, a move that would eliminate the legal authority the FCC uses to enforce net neutrality rules. Pai has said he intends to do just that, so Comcast will likely get its wish on that point. But Comcast also wants the FCC to go further by making a declaration that states cannot impose their own regulations on broadband. The filing said: We also emphasized that the Commission's order in this proceeding should include a clear, affirmative ruling that expressly confirms the primacy of federal law with respect to BIAS [Broadband Internet Access Service] as an interstate information service, and that preempts state and local efforts to regulate BIAS either directly or indirectly.
3More

Judge "Disturbed" To Learn Google Tracks 'Incognito' Users, Demands Answers | ZeroHedge - 1 views

  • A US District Judge in San Jose, California says she was "disturbed" over Google's data collection practices, after learning that the company still collects and uses data from users in its Chrome browser's so-called 'incognito' mode - and has demanded an explanation "about what exactly Google does," according to Bloomberg.
  • In a class-action lawsuit that describes the company's private browsing claims as a "ruse" - and "seeks $5,000 in damages for each of the millions of people whose privacy has been compromised since June of 2016" - US District Judge Lucy Koh said she finds it "unusual" that the company would make the "extra effort" to gather user data if it doesn't actually use the information for targeted advertising or to build user profiles. Koh has a long history with the Alphabet Inc. subsidiary, previously forcing the Mountain View, California-based company to disclose its scanning of emails for the purposes of targeted advertising and profile building. In this case, Google is accused of relying on pieces of its code within websites that use its analytics and advertising services to scrape users’ supposedly private browsing history and send copies of it to Google’s servers. Google makes it seem like private browsing mode gives users more control of their data, Amanda Bonn, a lawyer representing users, told Koh. In reality, “Google is saying there’s basically very little you can do to prevent us from collecting your data, and that’s what you should assume we’re doing,” Bonn said. Andrew Schapiro, a lawyer for Google, argued the company’s privacy policy “expressly discloses” its practices. “The data collection at issue is disclosed,” he said. Another lawyer for Google, Stephen Broome, said website owners who contract with the company to use its analytics or other services are well aware of the data collection described in the suit. -Bloomberg
  • Koh isn't buying it - arguing that the company is effectively tricking users into believing that their information is not being transmitted to the company. "I want a declaration from Google on what information they’re collecting on users to the court’s website, and what that’s used for," Koh demanded. The case is Brown v. Google, 20-cv-03664, U.S. District Court, Northern District of California (San Jose), via Bloomberg.