Future of the Web / Group items tagged "control"


Gary Edwards

Skynet rising: Google acquires 512-qubit quantum computer; NSA surveillance to be turne... - 0 views

  •  
    "The ultimate code breakers" If you know anything about encryption, you probably also realize that quantum computers are the secret KEY to unlocking all encrypted files. As I wrote about last year here on Natural News, once quantum computers go into widespread use by the NSA, the CIA, Google, etc., there will be no more secrets kept from the government. All your files - even encrypted files - will be easily opened and read.

    Until now, most people believed this day was far away. Quantum computing is an "impractical pipe dream," we've been told by scowling scientists and "flat Earth" computer engineers. "It's not possible to build a 512-qubit quantum computer that actually works," they insisted. Don't tell that to Eric Ladizinsky, co-founder and chief scientist of a company called D-Wave. Because Ladizinsky's team has already built a 512-qubit quantum computer. And they're already selling them to wealthy corporations, too.

    DARPA, Northrop Grumman and Goldman Sachs: In case you're wondering where Ladizinsky came from, he's a former employee of Northrop Grumman Space Technology (yes, a weapons manufacturer) where he ran a multi-million-dollar quantum computing research project for none other than DARPA - the same group working on AI-driven armed assault vehicles and battlefield robots to replace human soldiers.

    .... When groundbreaking new technology is developed by smart people, it almost immediately gets turned into a weapon. Quantum computing will be no different. This technology grants God-like powers to police state governments that seek to dominate and oppress the People.

    ..... Google acquires "Skynet" quantum computers from D-Wave: According to an article published in Scientific American, Google and NASA have now teamed up to purchase a 512-qubit quantum computer from D-Wave. The computer is called "D-Wave Two" because it's the second generation of the system. The first system was a 128-qubit computer. Gen two
  •  
    Normally, I'd be suspicious of anything published by Infowars because its editors are willing to publish really over-the-top stuff, but: [i] this is subject matter I've maintained an interest in over the years, and I was aware that working quantum computers were imminent; and [ii] the pedigree of this particular information does not trace to Scientific American, as stated in the article.

    I've known Scientific American to publish at least one soothing and lengthy article on the subject of chlorinated dioxin hazard -- my specialty as a lawyer was litigating against chemical companies that generated dioxin pollution -- that was generated by known closet chemical industry advocates long since discredited, was totally lacking in scientific validity, and was contrary to established scientific knowledge. So publication in Scientific American doesn't pack a lot of weight with me. But checking the linked Scientific American article, I note that it was reprinted by permission from Nature, a peer-reviewed scientific journal and news organization that I trust much more.

    That said, the InfoWars version is a rewrite that contains lots of sensationalist information not in the Nature/Scientific American version, so heightened caution is still in order. Check the reprinted Nature version before getting too excited:

    "The D-Wave computer is not a 'universal' computer that can be programmed to tackle any kind of problem. But scientists have found they can usefully frame questions in machine-learning research as optimisation problems.

    "D-Wave has battled to prove that its computer really operates on a quantum level, and that it is better or faster than a conventional computer. Before striking the latest deal, the prospective customers set a series of tests for the quantum computer. D-Wave hired an outside expert in algorithm-racing, who concluded that the speed of the D-Wave Two was above average overall, and that it was 3,600 times faster than a leading conventional comput
Paul Merrell

The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters | Motherboard - 0 views

  • Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.

    Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
  • So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumptively—China in 2015.

    On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.

    With the advent of the Internet of Things and cyber-physical systems in general, we've given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete. Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are literally endless. The Internet of Things will allow for attacks we can’t even imagine.
  •  
    Bruce Schneier on the insecurity of the Internet of Things, and possible consequences.
Gary Edwards

The Time to Pounce Has Come : Is Microsoft slow to the punch on SOA, or just waiting fo... - 0 views

  • The Time to Pounce Has Come

    I agree with DonnieBoy. Microsoft will try to leverage their MSOffice monopoly to dominate the newly emerging marketplace of Web-Stack and Cloud Computing solutions. I also believe that for Microsoft, the final pieces of this puzzle fell into place on March 29th, 2008, with ISO approval of the MSOffice-OOXML document format. For most businesses, Microsoft is the "client" in "client/server".

    The great transition from client/server to client/ Web-Stack /server has been slow because Microsoft was uncertain as to how they could control this transition. Some light was shed on the nature of this "uncertainty" when the Comes v. Microsoft antitrust case brought forth a 1998 eMail from Chairman Bill to the MSOffice development team. The issue for the good Chairman was that of controlling the formats and protocols used to connect MSOffice to a Web-centric world. MSOffice support for Open Web formats and protocols like (X)HTML, CSS, and WebDAV was out of the question. Microsoft needed to figure out how to pull off this transition with proprietary formats and protocols - and avoid the wrath of antitrust in the process!
  •  
    Gary Edwards' response to Joe McKendrick's SOA article.
Paul Merrell

It's the business processes that are bound to MSOffice - Windows' dominance stifles dem... - 0 views

  • 15 years of workgroup oriented business process automation based on the MSOffice productivity environment has had an impact. Microsoft pretty much owns the "client" in "client/server" because so many of these day-to-day business processes are bound to the MSOffice productivity environment in some way.
  • The good news is that there is a great transition underway. The world is slowly but inexorably moving from "client/server" systems to an emerging architecture one might describe as "client/ WebStack-Cloud-RIA /server". The reason for the great transition is simple; the productivity advantages of putting the Web in the center of information systems and workflows are extraordinary.
  • Now the bad news. Microsoft fully understands this and has spent years preparing for a very controlled transition. They are ready. The pieces are finally falling into place for a controlled transition connecting legacy MSOffice-bound business processes to the Microsoft WebStack-Cloud-RIA model (Exchange-SharePoint-SQL Server-Mesh-Silverlight).
  • ...2 more annotations...
  • Anyone with a pulse knows that the Web is the future. Yet look at how much time and effort has been spent on formats, protocols and interfaces that at best would "break" the Web and at worst seem determined to refight the 1995 office desktop wars. In Massachusetts, while the war between ODF and OOXML raged, Exchange and SharePoint servers were showing up everywhere. It was as if the outcome of the desktop office format decision didn't matter to the Web future.
  • And if we don't successfully re-purpose MSOffice to the Open Web (and, for that matter, OpenOffice)? The Web will break. The great transition will be directed to the MS WebStack-Cloud-RIA model. Web-enhanced business processes will be entangled with proprietary formats, protocols and interfaces. The barriers to this emerging desktop-Web-device platform of business processes and systems will prove even more impenetrable than the 1995 desktop productivity environment. Linux will not penetrate the business desktop arena. And we will all wonder what it was that we were doing as this unfolded before our eyes.
Paul Merrell

U.S. Embedded Spyware Overseas, Report Claims - NYTimes.com - 0 views

  • The United States has found a way to permanently embed surveillance and sabotage tools in computers and networks it has targeted in Iran, Russia, Pakistan, China, Afghanistan and other countries closely watched by American intelligence agencies, according to a Russian cybersecurity firm. In a presentation of its findings at a conference in Mexico on Monday, Kaspersky Lab, the Russian firm, said that the implants had been placed by what it called the “Equation Group,” which appears to be a veiled reference to the National Security Agency and its military counterpart, United States Cyber Command.
  • It linked the techniques to those used in Stuxnet, the computer worm that disabled about 1,000 centrifuges in Iran’s nuclear enrichment program. It was later revealed that Stuxnet was part of a program code-named Olympic Games and run jointly by Israel and the United States. Kaspersky’s report said that Olympic Games had similarities to a much broader effort to infect computers well beyond those in Iran. It detected particularly high infection rates in computers in Iran, Pakistan and Russia, three countries whose nuclear programs the United States routinely monitors.
  • Some of the implants burrow so deep into the computer systems, Kaspersky said, that they infect the “firmware,” the embedded software that preps the computer’s hardware before the operating system starts. It is beyond the reach of existing antivirus products and most security controls, Kaspersky reported, making it virtually impossible to wipe out.
  • ...1 more annotation...
  • In many cases, it also allows the American intelligence agencies to grab the encryption keys off a machine, unnoticed, and unlock scrambled contents. Moreover, many of the tools are designed to run on computers that are disconnected from the Internet, which was the case in the computers controlling Iran’s nuclear enrichment plants.
Paul Merrell

U.S. military closer to making cyborgs a reality - CNNPolitics.com - 0 views

  • The U.S. military is spending millions on an advanced implant that would allow a human brain to communicate directly with computers. If it succeeds, cyborgs will be a reality. The Pentagon's research arm, the Defense Advanced Research Projects Agency (DARPA), hopes the implant will allow humans to directly interface with computers, which could benefit people with aural and visual disabilities, such as veterans injured in combat. The goal of the proposed implant is to "open the channel between the human brain and modern electronics," according to DARPA's program manager, Phillip Alvelda.
  • DARPA sees the implant as providing a foundation for new therapies that could help people with deficits in sight or hearing by "feeding digital auditory or visual information into the brain." A spokesman for DARPA told CNN that the program is not intended for military applications.
  • But some experts see such an implant as having the potential for numerous applications, including military ones, in the field of wearable robotics -- which aims to augment and restore human performance. Conor Walsh, a professor of mechanical and biomedical engineering at Harvard University, told CNN that the implant would "change the game," adding that "in the future, wearable robotic devices will be controlled by implants." Walsh sees the potential for wearable robotic devices or exoskeletons in everything from helping a medical patient recover from a stroke to enhancing soldiers' capabilities in combat. The U.S. military is currently developing a battery-powered exoskeleton, the Tactical Assault Light Operator Suit, to provide superior protection from enemy fire and in-helmet technologies that boost the user's communications ability and vision. The suit's development is being overseen by U.S. Special Operations Command. In theory, the proposed neural implant would allow the military member operating the suit to more effectively control the armored exoskeleton while deployed in combat.
  • ...1 more annotation...
  • In its announcement, DARPA acknowledged that an implant is still a long way off, with breakthroughs in neuroscience, synthetic biology, low-power electronics, photonics and medical-device manufacturing needed before the device could be used. DARPA plans to recruit a diverse set of experts in an attempt to accelerate the project's development, according to its statement announcing the project.
  •  
    Let's assume for the moment that DARPA's goal is realizable and brain implants for communication with computers become common. How long will it take for FBI, NSA, et ilk to get legislation or a court order allowing them to conduct mass surveillance of people's brains? Not long, I suspect.
Gonzalo San Gil, PhD.

Learn Git and GitHub Through Videos | FOSS Force - 1 views

  •  
    The Video Screening Room: These days, GitHub is pretty much the warehouse district where nearly all open source projects are stored and maintained. There are some tricks to navigating the site, which can easily be mastered by watching tutorial videos. If you’re an open source enthusiast, you need to be advocating for interested [...]
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths

    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.

    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]

    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges

    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.

    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.

    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform

    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it.

    The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
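As a concrete illustration of that class-based extensibility, a few lines of Python can recover the semantic layer from otherwise plain XHTML paragraphs. The class names "epigraph" and "summary" below are hypothetical, not drawn from any particular publisher's scheme:

```python
import xml.etree.ElementTree as ET

# To a browser these are all plain paragraphs; the class attribute
# carries the publisher's own semantics (names here are hypothetical).
xhtml = """<div>
  <p class="epigraph">Tell all the truth but tell it slant.</p>
  <p>Ordinary body text.</p>
  <p class="summary">Chapter recap goes here.</p>
</div>"""

root = ET.fromstring(xhtml)
# Recover the semantic layer by filtering on class, microformat-style.
by_role = {p.get("class"): p.text for p in root.iter("p") if p.get("class")}
print(by_role)
```

A browser simply renders all three as paragraphs; the extra meaning is visible only to tools that inspect the class attribute, which is exactly the microformats pattern.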
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
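That anatomy is easy to verify with Python's standard zipfile module. The sketch below builds a minimal ePub-like archive in memory; the file names are illustrative, and a conforming ePub additionally requires a META-INF/container.xml and the mimetype entry stored uncompressed as the archive's first file:

```python
import io
import zipfile

# Build a minimal ePub-like archive in memory (illustrative file names;
# a real ePub also needs META-INF/container.xml and an uncompressed
# "mimetype" entry stored first).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")
    z.writestr("OEBPS/content.opf", "<package/>")        # metadata about the book
    z.writestr("OEBPS/chapter1.xhtml",                   # the content: plain XHTML
               "<html><body><p>Hello</p></body></html>")

# Anatomically it is just a zip: standard tools can list it and pull
# the XHTML back out.
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    chapter = z.read("OEBPS/chapter1.xhtml").decode()

print(names)
print(chapter)
```

Renaming any .epub file to .zip and unpacking it shows the same structure, which is a handy way to see the XHTML a given export tool actually produces.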
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps

    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
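The cleanup described here can be sketched as a pair of textual substitutions, which is essentially what a search-and-replace pass does. The class name "list-item" is an assumption for illustration; the actual class names in InDesign's export may differ:

```python
import re

# InDesign's export tags bullets as paragraphs with a class attribute;
# the cleanup turns runs of those into a real XHTML list.
# (The class name "list-item" is assumed for illustration.)
exported = (
    '<p>Intro.</p>'
    '<p class="list-item">First point</p>'
    '<p class="list-item">Second point</p>'
    '<p>Outro.</p>'
)

# Step 1: each tagged paragraph becomes a list item.
xhtml = re.sub(r'<p class="list-item">(.*?)</p>', r'<li>\1</li>', exported)
# Step 2: wrap each maximal run of items in a single <ul>.
xhtml = re.sub(r'(?:<li>.*?</li>)+',
               lambda m: '<ul>' + m.group(0) + '</ul>', xhtml)

print(xhtml)
```

The result no longer depends on a stylesheet knowing what "list-item" means: any XHTML consumer recognizes the list structure directly.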
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
  • Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (XSL Transformations) to do the work. XSLT is a W3C standard in the XML family, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
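The mapping the script performs can be sketched in miniature. The Python fragment below is a simplified stand-in for the real XSLT, with placeholder style names; it assumes the ICML story elements ParagraphStyleRange, CharacterStyleRange, and Content, and maps each XHTML block element to a named InDesign paragraph style:

```python
import xml.etree.ElementTree as ET

# Map XHTML block elements to InDesign paragraph style names.
# These style names are placeholders; a real template defines its own.
STYLE_MAP = {"h1": "ChapterTitle", "h2": "Heading", "p": "Body"}

def xhtml_to_icml_ranges(xhtml_body: str) -> str:
    """Emit an ICML-style story fragment for a snippet of XHTML content."""
    root = ET.fromstring(f"<body>{xhtml_body}</body>")
    parts = []
    for el in root:
        style = STYLE_MAP.get(el.tag, "Body")
        text = "".join(el.itertext())  # flatten inline markup for brevity
        parts.append(
            f'<ParagraphStyleRange AppliedParagraphStyle='
            f'"ParagraphStyle/{style}">'
            f'<CharacterStyleRange AppliedCharacterStyle='
            f'"CharacterStyle/$ID/[No character style]">'
            f"<Content>{text}</Content>"
            f"</CharacterStyleRange></ParagraphStyleRange>"
        )
    return "\n".join(parts)
```

The real transformation also has to carry inline character styling, footnotes, and images through, which is where most of the 500 lines go; but the core idea, as above, is a one-to-one structural mapping.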
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
  • Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
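The “wrapping” step can be illustrated with nothing but the standard library. This is a sketch, not a complete ePub builder — it omits the OPF package and table of contents that a tool like eCub generates — but it shows the container format’s two fixed requirements: an uncompressed mimetype entry first, then META-INF/container.xml pointing at the package file:

```python
import zipfile

def wrap_epub(path: str, chapters: dict) -> None:
    """Wrap XHTML chapter files in a minimal ePub container.

    Sketch only: a complete ePub also needs an OPF package file and a
    table of contents, which dedicated tools generate for you.
    """
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr(
            "META-INF/container.xml",
            '<?xml version="1.0"?>\n'
            '<container version="1.0" xmlns="urn:oasis:names:tc:'
            'opendocument:xmlns:container">\n'
            '  <rootfiles>\n'
            '    <rootfile full-path="OEBPS/content.opf" '
            'media-type="application/oebps-package+xml"/>\n'
            '  </rootfiles>\n'
            '</container>',
        )
        # Each chapter is just the XHTML we already have.
        for name, xhtml in chapters.items():
            z.writestr(f"OEBPS/{name}", xhtml,
                       compress_type=zipfile.ZIP_DEFLATED)
```

Since the chapters are already clean XHTML, the entire “conversion” reduces to packaging plus a CSS stylesheet — which is exactly why starting from Web-native content makes ebook production almost free.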
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Memo to Potential Whistleblowers: If You See Something, Say Something | Global Research - 0 views

  • Blowing the whistle on wrongdoing creates a moral frequency that vast numbers of people are eager to hear. We don’t want our lives, communities, country and world continually damaged by the deadening silences of fear and conformity. I’ve met many whistleblowers over the years, and they’ve been extraordinarily ordinary. None were applying for halos or sainthood. All experienced anguish before deciding that continuous inaction had a price that was too high. All suffered negative consequences as well as relief after they spoke up and took action. All made the world better with their courage. Whistleblowers don’t sign up to be whistleblowers. Almost always, they begin their work as true believers in the system that conscience later compels them to challenge. “It took years of involvement with a mendacious war policy, evidence of which was apparent to me as early as 2003, before I found the courage to follow my conscience,” Matthew Hoh recalled this week. “It is not an easy or light decision for anyone to make, but we need members of our military, development, diplomatic and intelligence community to speak out if we are ever to have a just and sound foreign policy.”
  • Hoh describes his record this way: “After over 11 continuous years of service with the U.S. military and U.S. government, nearly six of those years overseas, including service in Iraq and Afghanistan, as well as positions within the Secretary of the Navy’s Office as a White House Liaison, and as a consultant for the State Department’s Iraq Desk, I resigned from my position with the State Department in Afghanistan in protest of the escalation of war in 2009.” Another former Department of State official, the ex-diplomat and retired Army colonel Ann Wright, who resigned in protest of the Iraq invasion in March 2003, is crossing paths with Hoh on Friday as they do the honors at a ribbon-cutting — half a block from the State Department headquarters in Washington — for a billboard with a picture of Pentagon Papers whistleblower Daniel Ellsberg. Big-lettered words begin by referring to the years he waited before releasing the Pentagon Papers in 1971. “Don’t do what I did,” Ellsberg says on the billboard. “Don’t wait until a new war has started, don’t wait until thousands more have died, before you tell the truth with documents that reveal lies or crimes or internal projections of costs and dangers. You might save a war’s worth of lives.”
  • The billboard – sponsored by the ExposeFacts organization, which launched this week — will spread to other prominent locations in Washington and beyond. As an organizer for ExposeFacts, I’m glad to report that outreach to potential whistleblowers is just getting started. (For details, visit ExposeFacts.org.) We’re propelled by the kind of hopeful determination that Hoh expressed the day before the billboard ribbon-cutting when he said: “I trust ExposeFacts and its efforts will encourage others to follow their conscience and do what is right.” The journalist Kevin Gosztola, who has astutely covered a range of whistleblower issues for years, pointed this week to the imperative of opening up news media. “There is an important role for ExposeFacts to play in not only forcing more transparency, but also inspiring more media organizations to engage in adversarial journalism,” he wrote. “Such journalism is called for in the face of wars, environmental destruction, escalating poverty, egregious abuses in the justice system, corporate control of government, and national security state secrecy. Perhaps a truly successful organization could inspire U.S. media organizations to play much more of a watchdog role than a lapdog role when covering powerful institutions in government.”
  • ...2 more annotations...
  • Overall, we desperately need to nurture and propagate a steadfast culture of outspoken whistleblowing. A central motto of the AIDS activist movement dating back to the 1980s – Silence = Death – remains urgently relevant in a vast array of realms. Whether the problems involve perpetual war, corporate malfeasance, climate change, institutionalized racism, patterns of sexual assault, toxic pollution or countless other ills, none can be alleviated without bringing grim realities into the light. “All governments lie,” Ellsberg says in a video statement released for the launch of ExposeFacts, “and they all like to work in the dark as far as the public is concerned, in terms of their own decision-making, their planning — and to be able to allege, falsely, unanimity in addressing their problems, as if no one who had knowledge of the full facts inside could disagree with the policy the president or the leader of the state is announcing.” Ellsberg adds: “A country that wants to be a democracy has to be able to penetrate that secrecy, with the help of conscientious individuals who understand in this country that their duty to the Constitution and to the civil liberties and to the welfare of this country definitely surmount their obligation to their bosses, to a given administration, or in some cases to their promise of secrecy.”
  • Right now, our potential for democracy owes a lot to people like NSA whistleblowers William Binney and Kirk Wiebe, and EPA whistleblower Marsha Coleman-Adebayo. When they spoke at the June 4 news conference in Washington that launched ExposeFacts, their brave clarity was inspiring. Antidotes to the poisons of cynicism and passive despair can emerge from organizing to help create a better world. The process requires applying a single standard to the real actions of institutions and individuals, no matter how big their budgets or grand their power. What cannot withstand the light of day should not be suffered in silence. If you see something, say something.
  •  
    While some governments -- my own included -- attempt to impose an Orwellian Dark State of ubiquitous secret surveillance, secret wars, the rule of oligarchs, and public ignorance, the Edward Snowden leaks fanned the flames of the countering War on Ignorance that had been kept alive by civil libertarians. Only days after the U.S. Supreme Court denied review in a case where a reporter had been ordered to reveal his source of information for a book on the Dark State under the penalties for contempt of court (a long stretch in jail), a new web site is launched for communications between sources and journalists where the source's names never need to be revealed. This article is part of the publicity for that new weapon fielded by the civil libertarian side in the War Against Ignorance.  Hurrah!
Paul Merrell

American Surveillance Now Threatens American Business - The Atlantic - 0 views

  • What does it look like when a society loses its sense of privacy? In the almost 18 months since the Snowden files first received coverage, writers and critics have had to guess at the answer. Does a certain trend, consumer complaint, or popular product epitomize some larger shift? Is trust in tech companies eroding—or is a subset just especially vocal about it? Polling would make those answers clear, but polling so far has been… confused. A new study, conducted by the Pew Internet Project last January and released last week, helps make the average American’s view of his or her privacy a little clearer. And their confidence in their own privacy is ... low. The study's findings—and the statistics it reports—stagger. Vast majorities of Americans are uncomfortable with how the government uses their data, how private companies use and distribute their data, and what the government does to regulate those companies. No summary can equal a recounting of the findings. Americans are displeased with government surveillance en masse:
  • A new study finds that a vast majority of Americans trust neither the government nor tech companies with their personal data.
  • According to the study, 70 percent of Americans are “at least somewhat concerned” with the government secretly obtaining information they post to social networking sites. Eighty percent of respondents agreed that “Americans should be concerned” with government surveillance of telephones and the web. They are also uncomfortable with how private corporations use their data: Ninety-one percent of Americans believe that “consumers have lost control over how personal information is collected and used by companies,” according to the study. Eighty percent of Americans who use social networks “say they are concerned about third parties like advertisers or businesses accessing the data they share on these sites.” And even though they’re squeamish about the government’s use of data, they want it to regulate tech companies and data brokers more strictly: 64 percent wanted the government to do more to regulate private data collection. Since June 2013, American politicians and corporate leaders have fretted over how much the leaks would cost U.S. businesses abroad.
  • ...3 more annotations...
  • “It’s clear the global community of Internet users doesn’t like to be caught up in the American surveillance dragnet,” Senator Ron Wyden said last month. At the same event, Google chairman Eric Schmidt agreed with him. “What occurred was a loss of trust between America and other countries,” he said, according to the Los Angeles Times. “It's making it very difficult for American firms to do business.” But never mind the world. Americans don’t trust American social networks. More than half of the poll’s respondents said that social networks were “not at all secure.” Only 40 percent of Americans believe email or texting is at least “somewhat” secure. Indeed, Americans trusted most of all communication technologies where some protections have been enshrined into the law (though the report didn’t ask about snail mail). That is: Talking on the telephone, whether on a landline or cell phone, is the only kind of communication that a majority of adults believe to be “very secure” or “somewhat secure.”
  • (That may seem a bit incongruous, because making a telephone call is one area where you can be almost sure you are being surveilled: The government has requisitioned mass call records from phone companies since 2001. But Americans appear, when discussing security, to differentiate between the contents of the call and data about it.) Last month, Ramsey Homsany, the general counsel of Dropbox, said that one big thing could take down the California tech scene. “We have built this incredible economic engine in this region of the country,” said Homsany in the Los Angeles Times, “and [mistrust] is the one thing that starts to rot it from the inside out.” According to this poll, the mistrust has already begun corroding—and is already, in fact, well advanced. We’ve always assumed that the great hurt to American business will come globally—that citizens of other nations will stop using tech companies’ services. But the new Pew data shows that Americans suspect American businesses just as much. And while, unlike citizens of other nations, they may not have other places to turn, they may stop putting sensitive or delicate information online.
Paul Merrell

China Pressures U.S. Companies to Buckle on Strong Encryption and Surveillance - 0 views

  • Before Chinese President Xi Jinping visits President Obama, he and Chinese executives have some business in Seattle: pressing U.S. tech companies, hungry for the Chinese market, to comply with the country’s new stringent and suppressive Internet policies. The New York Times reported last week that Chinese authorities sent a letter to some U.S. tech firms seeking a promise they would not harm China’s national security. That might require such things as forcing users to register with their real names, storing Chinese citizens’ data locally where the government can access it, and building government “back doors” into encrypted communication products for better surveillance. China’s new national security law calls for systems that are “secure and controllable”, which industry groups told the Times in July means companies will have to hand over encryption keys or even source code to their products. Among the big names joining Xi at Wednesday’s U.S.-China Internet Industry Forum: Apple, Google, Facebook, IBM, and Microsoft.
  • The meeting comes as U.S. law enforcement officials have been pressuring companies to give them a way to access encrypted communications. The technology community has responded by pointing out that any sort of hole for law enforcement weakens the entire system to attack from outside bad actors—such as China, which has been tied to many instances of state-sponsored hacking into U.S systems. In fact, one argument privacy advocates have repeatedly made is that back doors for law enforcement would set a dangerous precedent when countries like China want the same kind of access to pursue their own domestic political goals. But here, potentially, the situation has been reversed, with China using its massive economic leverage to demand that sort of access right now. Human rights groups are urging U.S. companies not to give in.
Paul Merrell

The Strongest Link: Libraries and Linked Data - 2 views

  • Abstract Since 1999 the W3C has been working on a set of Semantic Web standards that have the potential to revolutionize web search. Also known as Linked Data, the Machine-Readable Web, the Web of Data, or Web 3.0, the Semantic Web relies on highly structured metadata that allow computers to understand the relationships between objects. Semantic web standards are complex, and difficult to conceptualize, but they offer solutions to many of the issues that plague libraries, including precise web search, authority control, classification, data portability, and disambiguation. This article will outline some of the benefits that linked data could have for libraries, will discuss some of the non-technical obstacles that we face in moving forward, and will finally offer suggestions for practical ways in which libraries can participate in the development of the semantic web.
  •  
    See also Wikipedia on Linked Data: http://en.wikipedia.org/wiki/Linked_Data
Paul Merrell

Remaining Snowden docs will be released to avert 'unspecified US war' - ‪Cryp... - 1 views

  • All the remaining Snowden documents will be released next month, according to whistle-blowing site Cryptome, which said in a tweet that the release of the info by unnamed third parties would be necessary to head off an unnamed "war". Cryptome said it would "aid and abet" the release of "57K to 1.7M" new documents that had been "withheld for national security-public debate [sic]". The site clarified that it will not be publishing the documents itself. Transparency activists would welcome such a release, but such a move would be heavily criticised by intelligence agencies and military officials, who argue that Snowden's dump of secret documents has set US and allied (especially British) intelligence efforts back by years.
  • As things stand, the flow of Snowden disclosures is controlled by those who have access to the Snowden archive, which might possibly include Snowden confidants such as Glenn Greenwald and Laura Poitras. In some cases, even when these people release information to mainstream media organisations, it is then suppressed by these organisations after negotiation with the authorities. (In one such case, some key facts were later revealed by the Register.) "July is when war begins unless headed off by Snowden full release of crippling intel. After war begins not a chance of release," Cryptome tweeted on its official feed. "Warmongerers are on a rampage. So, yes, citizens holding Snowden docs will do the right thing," it said.
  • "For more on Snowden docs release in July watch for Ellsberg, special guest and others at HOPE, July 18-20: http://www.hope.net/schedule.html," it added. HOPE (Hackers On Planet Earth) is a well-regarded and long-running hacking conference organised by 2600 magazine. Previous speakers at the event have included Kevin Mitnick, Steve Wozniak and Jello Biafra. In other developments, Cryptome has started a Kickstarter fund to release its entire archive in the form of a USB stick archive. It wants to raise $100,000 to help it achieve its goal. More than $14,000 has already been raised. The funding drive follows a dispute between Cryptome and its host Network Solutions, which is owned by web.com. Access to the site was blocked following a malware infection last week. Cryptome founder John Young criticised the host, claiming it had over-reacted and had been slow to restore access to the site, which Cryptome criticised as a form of censorship. In response, Cryptome plans to more widely distribute its content across multiple sites as well as releasing the planned USB stick archive. ®
  •  
    Can't happen soon enough. 
Gonzalo San Gil, PhD.

Minister: Sue Mums, Dads, Students To Send Anti-Piracy Message | TorrentFreak - 0 views

  •  
    " Andy on August 1, 2014 C: 53 Breaking Just as discussion moves away from the punitive measures that did little to curtail piracy in the last decade, an Australian minister has urged a return. Communications Minister Malcolm Turnbull says that in order to send a clear message, rightsholders need to "roll up their sleeves" and strategically sue some "moms, dads and students.""
Paul Merrell

Social Media Giants Choking Independent News Site Traffic to a Trickle - 0 views

  • Several prominent figures, including Web inventor Tim Berners-Lee, warned the EU Parliament that its proposed censorship measure would begin transforming the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.
  • For much of the year, independent media has felt the sting of increased social media censorship, as the “revolving door” between U.S. intelligence agencies and social-media companies has manifested in a crackdown on news that challenges official government narratives. With many notable independent news websites having shut down since then as a result, those that remain afloat are being censored like never before, with social media traffic from Facebook and Twitter completely cut off in some cases. Among such websites, social media censorship by the most popular social networks is now widely regarded to be the worst it has ever been – a chilling reality for any who seek fact-based perspectives on major world events that differ from those to be found on well-known corporate-media outlets that consistently toe the government line. Last August, MintPress reported that a new Google algorithm targeting “fake news” had quashed traffic to many independent news and advocacy sites, with sites such as the American Civil Liberties Union, Democracy Now, and WikiLeaks, seeing their returns from Google searches experience massive drops. The World Socialist Website, one of the affected pages, reported a 67 percent decrease in Google returns while MintPress experienced an even larger decrease of 76 percent in Google search returns. The new algorithm targeted online publications on both sides of the political spectrum critical of U.S. imperialism, foreign wars, and other long-standing government policies. Now, less than a year later, the situation has become even more dire. Several independent media pages have reported that their social media traffic has sharply declined since March and – in some cases – stopped almost entirely since June began. For instance, independent media website Antimedia – a page with over 2 million likes and follows – saw its traffic drop from around 150,000 page views per day earlier this month to around 12,000 as of this week. 
As a reference, this time last year Antimedia’s traffic stood at nearly 300,000 a day.
Gonzalo San Gil, PhD.

Creative Commons licences under scrutiny: What does "noncommercial" mean? | Ars Technic... - 0 views

  •  
    "Commercial v. noncommercial use of CC licences. Where's the line of demarcation? David Kravets (US) - Sep 19, 2016 7:35 am UTC"
Paul Merrell

Civil Rights Groups, Funded by Telecoms, Back Donald Trump's Plan to Kill Net Neutrality - 0 views

  • Leading civil rights groups who for many years have been heavily bankrolled by the telecom industry are signaling their support for Donald Trump’s promised rollback of the Obama administration’s net neutrality rules, which prevent internet service providers from prioritizing some content providers over others. The Obama administration’s Federal Communications Commission established net neutrality by reclassifying high-speed internet as a regulated phone-like telecommunications service, as opposed to a mostly unregulated information service. The re-classification was cheered by advocates for a free and open internet. But now Trump’s new FCC Chairman Ajit Pai, a former Verizon attorney, is pushing to repeal the net neutrality reform by rolling back that re-classification — and he’s getting help not only from a legion of telecom lobbyists, but from civil rights groups. In a little-noticed joint letter released last week, the NAACP, Asian Americans Advancing Justice, OCA (formerly known as the Organization for Chinese Americans), the National Urban League, and other civil rights organizations sharply criticized the “jurisdictional and classification problems that plagued the last FCC” — a reference to the legal mechanism used by the Obama administration to accomplish net neutrality. Instead of classifying broadband as a public utility, the letter states, open internet rules should be written by statute. What does that mean? It means the Republican-led Congress should take control of the process — the precise approach that is favored by industry.
Gary Edwards

Meet OX Text, a collaborative, non-destructive alternative to Google Docs - Tech News a... - 0 views

  • The German software-as-a-service firm Open-Xchange, which provides apps that telcos and other service providers can bundle with their connectivity or hosting products, is adding a cloud-based office productivity toolset called OX Documents to its OX App Suite lineup. Open-Xchange has around 70 million users through its contracts with roughly 80 providers such as 1&1 Internet and Strato. Its OX App Suite takes the form of a virtual desktop of sorts, that lets users centralize their email and file storage accounts and view all sorts of documents through a unified portal. However, as of an early April release it will also include OX Text, a non-destructive, collaborative document editor that rivals Google Docs, and that has an interesting heritage of its own.
  • The team that created the HTML5- and JavaScript-based OX Text includes some of the core developers behind OpenOffice, the free alternative to Microsoft Office that passed from Sun Microsystems to Oracle before morphing into LibreOffice. The German developers we’re talking about hived off the project before LibreOffice happened, and ended up getting hired by Open-Xchange. “To them it was a once in a lifetime event, because we allowed them to start from scratch,” Open-Xchange CEO Rafael Laguna told me. “We said we wanted a fresh office productivity suite that runs inside the browser. In terms of the architecture and principles for the product, we wanted to make it fully round-trip capable, meaning whatever file format we run into needs to be retained.”
  • This is an extremely handy formatting and version control feature. Changes made to a document in OX Text get pushed through to Open-Xchange’s backend, where a changelog is maintained. “Power” Word features such as Smart Art or Charts, which are not necessarily supported by other productivity suites, are replaced with placeholders during editing and are there, as before, when the edited document is eventually downloaded. As the OX Text blurb says, “OX Text never damages your valuable work even if it does not understand it”.
  • ...1 more annotation...
  • “[This avoids] the big disadvantage of anything other than Microsoft Office,” Laguna said. “If you use OpenOffice with a .docx file, the whole document is converted, creating artefacts, then you convert it back. That’s one of the major reasons not everyone is using OpenOffice, and the same is true for Google Apps.” OX Text will be available as an extension to OX App Suite, which also includes calendaring and other productivity tools. However, it will also come out as a standalone product under both commercial licenses – effectively support-based subscriptions for Open-Xchange’s service provider customers – and open-source licenses, namely the GNU General Public License 2 and the Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License, which will allow free personal, non-commercial use. A demo of App Suite, including the OX Text functionality, is available on Open-Xchange’s website.
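The non-destructive round-trip principle Laguna describes can be sketched in a few lines. This is a hypothetical illustration only (the document model and every name here are invented, not Open-Xchange's actual code or API): elements the editor does not understand are carried through as opaque placeholders, so an edit-and-export cycle returns them unchanged rather than converting and damaging them.

```python
# Hypothetical sketch of round-trip-safe editing: unsupported elements
# become opaque placeholders that edits never touch, and export re-emits
# them verbatim, so the editor "never damages work it does not understand."

SUPPORTED = {"paragraph"}  # element types this toy editor can edit

def load(doc):
    """Wrap each element; unsupported ones become opaque placeholders."""
    return [
        {"kind": "editable", "el": el} if el["type"] in SUPPORTED
        else {"kind": "placeholder", "raw": el}  # kept, never modified
        for el in doc
    ]

def edit_paragraphs(model, fn):
    """Apply an edit only to elements the editor understands."""
    for node in model:
        if node["kind"] == "editable":
            node["el"]["text"] = fn(node["el"]["text"])

def export(model):
    """Re-emit everything; placeholders come back exactly as loaded."""
    return [n["el"] if n["kind"] == "editable" else n["raw"] for n in model]

doc = [
    {"type": "paragraph", "text": "hello"},
    {"type": "smartart", "xml": "<complex-chart-data/>"},  # not understood
    {"type": "paragraph", "text": "world"},
]
model = load(doc)
edit_paragraphs(model, str.upper)  # edit only what we understand
result = export(model)  # the smartart element survives untouched
```

A real implementation would additionally push each edit to a backend changelog, as the article describes, but the placeholder pass-through above is the core of the "full round-trip" guarantee.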
Paul Merrell

Theresa May warns Yahoo that its move to Dublin is a security worry | Technology | The ... - 0 views

  • Theresa May summoned the internet giant Yahoo for an urgent meeting on Thursday to raise security concerns after the company announced plans to move to Dublin where it is beyond the reach of Britain's surveillance laws. By making the Irish capital rather than London the centre of its European, Middle East and Africa operations, Yahoo cannot be forced to hand over information demanded by Scotland Yard and the intelligence agencies through "warrants" issued under Britain's controversial anti-terror laws. Yahoo has had longstanding concerns about securing the privacy of its hundreds of millions of users – anxieties that have been heightened in recent months by revelations from the whistleblower Edward Snowden.
  • In February, the Guardian revealed that Britain's eavesdropping centre GCHQ intercepted and stored the images of millions of people using Yahoo webcams, regardless of whether they were suspects. The data included a large quantity of sexually explicit pictures. The company said this represented "a whole new level of violation of our users' privacy". The home secretary called the meeting with Yahoo to express the fears of Britain's counter-terrorism investigators. They can force companies based in the UK to provide information on their servers by seeking warrants under the Regulation of Investigatory Powers Act 2000 (Ripa).
  • The Guardian has been told that Charles Farr, the head of the office for security and counter-terrorism (OSCT) within the Home Office, has been pressing May to talk to Yahoo because of anxiety in Scotland Yard's counter-terrorism command about the effect the move to Dublin could have on their inquiries. Farr, a former senior intelligence officer, coordinates the work of Scotland Yard and the security service MI5 to prevent terrorist attacks in the UK. "There are concerns in the Home Office about how Ripa will apply to Yahoo once it has moved its headquarters to Dublin," said a Whitehall source. "The home secretary asked to see officials from Yahoo because in Dublin they don't have equivalent laws to Ripa. This could particularly affect investigations led by Scotland Yard and the National Crime Agency. They regard this as a very serious issue."
  • ...3 more annotations...
  • The move to make Dublin the centre of its headquarters for Europe, the Middle East and Africa (EMEA) was announced last month and will take effect from Friday. In a statement at the time, Yahoo said Dublin was a natural home for the company and that it would be incorporated under Irish law. The firm insisted the move was driven by "business needs … we believe it is in the best interest of our users. Dublin is already the European home to many of the world's leading global technology brands." However, the firm has been horrified by some of the surveillance programmes revealed by Snowden and is understood to be relieved that it will be beyond the immediate reach of UK surveillance laws.
  • Following the Guardian's disclosures about snooping on Yahoo webcams, the company said it was "committed to preserving our users' trust and security" and would "continue our efforts to expand encryption across all of our services." It said GCHQ's activity was "completely unacceptable … we strongly call on the world's governments to reform surveillance law." Explaining the move to Dublin, the company said: "The principal change is that Yahoo EMEA, as the new provider of services to our European users, will replace Yahoo UK Ltd as the data controller responsible for handling your personal information. Yahoo EMEA will be responsible for complying with Irish privacy and data protection laws, which are based on the European data protection directive." Emma Carr, deputy director of Big Brother Watch, said: "It should not come as a surprise if companies concerned about maintaining their users' trust to hold their information start to move to countries with more rigorous oversight processes, particularly where courts oversee requests for information. Surveillance laws have a direct impact on our economy, and Yahoo's decision should ring an alarm in Parliament that ignoring the serious questions about surveillance being debated around the world will only harm Britain's digital economy."
  • From Friday, investigators may have to seek information through the more drawn-out process of approaching Yahoo under a Mutual Legal Assistance Treaty between Ireland and the UK.
Paul Merrell

Exclusive: Inside America's Plan to Kill Online Privacy Rights Everywhere | The Cable - 0 views

  • The United States and its key intelligence allies are quietly working behind the scenes to kneecap a mounting movement in the United Nations to promote a universal human right to online privacy, according to diplomatic sources and an internal American government document obtained by The Cable. The diplomatic battle is playing out in an obscure U.N. General Assembly committee that is considering a proposal by Brazil and Germany to place constraints on unchecked internet surveillance by the National Security Agency and other foreign intelligence services. American representatives have made it clear that they won't tolerate such checks on their global surveillance network. The stakes are high, particularly in Washington -- which is seeking to contain an international backlash against NSA spying -- and in Brasilia, where Brazilian President Dilma Rousseff is personally involved in monitoring the U.N. negotiations.
  • The Brazilian and German initiative seeks to apply the right to privacy, which is enshrined in the International Covenant on Civil and Political Rights (ICCPR), to online communications. Their proposal, first revealed by The Cable, affirms a "right to privacy that is not to be subjected to arbitrary or unlawful interference with their privacy, family, home, or correspondence." It notes that while public safety may "justify the gathering and protection of certain sensitive information," nations "must ensure full compliance" with international human rights laws. A final version of the text is scheduled to be presented to U.N. members on Wednesday evening, and the resolution is expected to be adopted next week. A draft of the resolution, which was obtained by The Cable, calls on states "to respect and protect the right to privacy," asserting that the "same rights that people have offline must also be protected online, including the right to privacy." It also requests that the U.N. high commissioner for human rights, Navi Pillay, present the U.N. General Assembly next year with a report on the protection and promotion of the right to privacy, a provision that will ensure the issue remains on the front burner.
  • Publicly, U.S. representatives say they're open to an affirmation of privacy rights. "The United States takes very seriously our international legal obligations, including those under the International Covenant on Civil and Political Rights," Kurtis Cooper, a spokesman for the U.S. mission to the United Nations, said in an email. "We have been actively and constructively negotiating to ensure that the resolution promotes human rights and is consistent with those obligations." But privately, American diplomats are pushing hard to kill a provision of the Brazilian and German draft which states that "extraterritorial surveillance" and mass interception of communications, personal information, and metadata may constitute a violation of human rights. The United States and its allies, according to diplomats, outside observers, and documents, contend that the Covenant on Civil and Political Rights does not apply to foreign espionage.
  • ...6 more annotations...
  • In recent days, the United States circulated to its allies a confidential paper highlighting American objectives in the negotiations, "Right to Privacy in the Digital Age -- U.S. Redlines." It calls for changing the Brazilian and German text so "that references to privacy rights are referring explicitly to States' obligations under ICCPR and remove suggestion that such obligations apply extraterritorially." In other words: America wants to make sure it preserves the right to spy overseas. The U.S. paper also calls on governments to promote amendments that would weaken Brazil's and Germany's contention that some "highly intrusive" acts of online espionage may constitute a violation of freedom of expression. Instead, the United States wants to limit the focus to illegal surveillance -- which the American government claims it never, ever does. Collecting information on tens of millions of people around the world is perfectly acceptable, the Obama administration has repeatedly said. It's authorized by U.S. statute, overseen by Congress, and approved by American courts.
  • "Recall that the USG's [U.S. government's] collection activities that have been disclosed are lawful collections done in a manner protective of privacy rights," the paper states. "So a paragraph expressing concern about illegal surveillance is one with which we would agree." The privacy resolution, like most General Assembly decisions, is neither legally binding nor enforceable by any international court. But international lawyers say it is important because it creates the basis for an international consensus -- referred to as "soft law" -- that over time will make it harder and harder for the United States to argue that its mass collection of foreigners' data is lawful and in conformity with human rights norms. "They want to be able to say ‘we haven't broken the law, we're not breaking the law, and we won't break the law,'" said Dinah PoKempner, the general counsel for Human Rights Watch, who has been tracking the negotiations. The United States, she added, wants to be able to maintain that "we have the freedom to scoop up anything we want through the massive surveillance of foreigners because we have no legal obligations."
  • The United States negotiators have been pressing their case behind the scenes, raising concerns that the assertion of extraterritorial human rights could constrain America's effort to go after international terrorists. But Washington has remained relatively muted about their concerns in the U.N. negotiating sessions. According to one diplomat, "the United States has been very much in the backseat," leaving it to its allies, Australia, Britain, and Canada, to take the lead. There is no extraterritorial obligation on states "to comply with human rights," explained one diplomat who supports the U.S. position. "The obligation is on states to uphold the human rights of citizens within their territory and areas of their jurisdictions."
  • The position, according to Jamil Dakwar, the director of the American Civil Liberties Union's Human Rights Program, has little international backing. The International Court of Justice, the U.N. Human Rights Committee, and the European Court have all asserted that states do have an obligation to comply with human rights laws beyond their own borders, he noted. "Governments do have obligation beyond their territories," said Dakwar, particularly in situations, like the Guantanamo Bay detention center, where the United States exercises "effective control" over the lives of the detainees. Both PoKempner and Dakwar suggested that courts may also judge that the U.S. dominance of the Internet places special legal obligations on it to ensure the protection of users' human rights.
  • "It's clear that when the United States is conducting surveillance, these decisions and operations start in the United States, the servers are at NSA headquarters, and the capabilities are mainly in the United States," he said. "To argue that they have no human rights obligations overseas is dangerous because it sends a message that there is a void in terms of human rights protection outside a country's territory. It's going back to the idea that you can create a legal black hole where there is no applicable law." There were signs emerging on Wednesday that America may have been gaining ground in pressing the Brazilians and Germans to back off one of their toughest provisions. In an effort to address the concerns of the U.S. and its allies, Brazil and Germany agreed to soften the language suggesting that mass surveillance may constitute a violation of human rights. Instead, it simply expresses deep "concern at the negative impact" that extraterritorial surveillance "may have on the exercise and enjoyment of human rights." The U.S., however, has not yet indicated it would support the revised proposal.
  • The concession "is regrettable. But it’s not the end of the battle by any means," said Human Rights Watch’s PoKempner. She added that there will soon be another opportunity to corral America's spies: a U.N. discussion on possible human rights violations as a result of extraterritorial surveillance will soon be taken up by the U.N. High commissioner.
  •  
    Woo-hoo! Go get'em, U.N.