
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently gives the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently under public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare UK and Russian law: namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework does.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier decisions of the European Commission of Human Rights (the court of first instance until 1998) which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision, which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals,” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an insurmountable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation, David Anderson, revealed in his essential 2015 inquiry into British surveillance, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes; it also suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Fundamental Rights discussing Snowden, and in his separate opinion, Judge Dedov goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Gonzalo San Gil, PhD.

Chile's New Copyright Legislation Would Make Creative Commons Licensing Impossible For ... - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! Copyright to stifle the right to creation... # ! Think that, in fact, every work is always a 'derivative' work... :/
  •  
    "Techdirt has written many times about the way in which copyright only ever seems to get stronger, and how different jurisdictions point to other examples of excessive copyright to justify making their own just as bad. In Chile, there's an interesting example of that kind of copyright ratchet being applied in the same country but to different domains. It concerns audiovisual works, and aims to give directors, screenwriters and others new rights to "match" those that others enjoy"
Gonzalo San Gil, PhD.

How Google Does Open Source - Datamation - 0 views

  •  
    "TORONTO - Marc Merlin has been working as an engineer at Google since 2002 and has seen (and done) a lot of open source and Linux work during that time. Speaking at the LinuxCon North America event this week, Merlin provided a standing room only audience with an overview how Google uses and contributes to open source."
Paul Merrell

Google to block Flash on Chrome, only 10 websites exempt - CNET - 0 views

  • The inexorable slide into a world without Flash continues, with Google revealing plans to phase out support for Adobe's Flash Player in its Chrome browser for all but a handful of websites. And the company expects the changes to roll out by the fourth quarter of 2016. While it says Flash might have "historically" been a good way to present rich media online, Google is now much more partial to HTML5, thanks to faster load times and lower power use. As a result, Flash will still come bundled with Chrome, but "its presence will not be advertised by default." Where the Flash Player is the only option for viewing content on a site, users will need to actively switch it on for individual sites. Enterprise Chrome users will also have the option of switching Flash off altogether. Google will maintain support in the short term for the top 10 domains using the player, including YouTube, Facebook, Yahoo, Twitch and Amazon. But this "whitelist" is set to be periodically reviewed, with sites removed if they no longer warrant an exception, and the exemption list will expire after a year. A spokesperson for Adobe said it was working with Google in its goal of "an industry-wide transition to Open Web standards," including the adoption of HTML5. "At the same time, given that Flash continues to be used in areas such as education, web gaming and premium video, the responsible thing for Adobe to do is to continue to support Flash with updates and fixes, as we help the industry transition," Adobe said in an emailed statement. "Looking ahead, we encourage content creators to build with new web standards."
Gonzalo San Gil, PhD.

Easy, Linux-compatible password management with KeePassX | Opensource.com - 0 views

  •  
    "According to the U.S. Computer Emergency Readiness Team, "Passwords are a common form of authentication and are often the only barrier between a user and your personal information. There are several programs attackers can use to help guess or 'crack' passwords, but by choosing good passwords and keeping them confidential, you can make it more difficult for an unauthorized person to access your information.""
Gary Edwards

Blog | Spritz - 0 views

  • Therein lies one of the biggest problems with traditional RSVP. Each time you see text that is not centered properly on the ORP position, your eyes naturally will look for the ORP to process the word and understand its meaning. This requisite eye movement creates a “saccade”, a physical eye movement caused by your eyes taking a split second to find the proper ORP for a word. Every saccade has a penalty in both time and comprehension, especially when you start to speed up reading. Some saccades are considered by your brain to be “normal” during reading, such as when you move your eye from left to right to go from one ORP position to the next ORP position while reading a book. Other saccades are not normal to your brain during reading, such as when you move your eyes right to left to spot an ORP. This eye movement is akin to trying to read a line of text backwards. In normal reading, you won’t saccade right-to-left unless you encounter a word that your brain doesn’t already know and you go back for another look; those saccades will increase based on the difficulty of the text being read and the percentage of words within it that you already know. And the math doesn’t look good, either. If you determined the length of all the words in a given paragraph, you would see that, depending on the language you’re reading, there is a low (less than 15%) probability of two adjacent words being the same length and not requiring a saccade when they are shown to you one at a time. This means you move your eyes on a regular basis with traditional RSVP! In fact, you still move them with almost every word. In general, left-to-right saccades contribute to slower reading due to the increased travel time for the eyeballs, while right-to-left saccades are discombobulating for many people, especially at speed. It’s like reading a lot of text that contains words you don’t understand, only you DO understand the words! The experience is frustrating to say the least.
  • In addition to saccading, another issue with RSVP is associated with “foveal vision,” the area in focus when you look at a sentence. This distance defines the number of letters on which your eyes can sharply focus as you read. Its companion is called “parafoveal vision” and refers to the area outside foveal vision that cannot be seen sharply.
  •  
    "To understand Spritz, you must understand Rapid Serial Visual Presentation (RSVP). RSVP is a common speed-reading technique used today. However, RSVP was originally developed for psychological experiments to measure human reactions to content being read. When RSVP was created, there wasn't much digital content and most people didn't have access to it anyway. The internet didn't even exist yet. With traditional RSVP, words are displayed either left-aligned or centered. Figure 1 shows an example of a center-aligned RSVP, with a dashed line on the center axis. When you read a word, your eyes naturally fixate at one point in that word, which visually triggers the brain to recognize the word and process its meaning. In Figure 1, the preferred fixation point (character) is indicated in red. In this figure, the Optimal Recognition Position (ORP) is different for each word. For example, the ORP is only in the middle of a 3-letter word. As the length of a word increases, the percentage that the ORP shifts to the left of center also increases. The longer the word, the farther to the left of center your eyes must move to locate the ORP."
Gonzalo San Gil, PhD.

GNU's Framework for Secure Peer-to-Peer Networking GNU's Framework for Secure Peer-to-P... - 0 views

  •  
    "Philosophy The foremost goal of the GNUnet project is to become a widely used, reliable, open, non-discriminating, egalitarian, unfettered and censorship-resistant system of free information exchange. We value free speech above state secrets, law-enforcement or intellectual property. GNUnet is supposed to be an anarchistic network, where the only limitation for peers is that they must contribute enough back to the network such that their resource consumption does not have a significant impact on other users. GNUnet should be more than just another file-sharing network. The plan is to offer many other services and in particular to serve as a development platform for the next generation of decentralized Internet protocols."
Paul Merrell

The Government Can No Longer Track Your Cell Phone Without a Warrant | Motherboard - 0 views

  • The government and police regularly use location data pulled off of cell phone towers to put criminals at the scenes of crimes—often without a warrant. Well, an appeals court ruled today that the practice is unconstitutional, in one of the strongest judicial defenses of technology privacy rights we've seen in a while.  The United States Court of Appeals for the Eleventh Circuit ruled that the government illegally obtained and used Quartavious Davis's cell phone location data to help convict him in a string of armed robberies in Miami and unequivocally stated that cell phone location information is protected by the Fourth Amendment. "In short, we hold that cell site location information is within the subscriber’s reasonable expectation of privacy," the court ruled in an opinion written by Judge David Sentelle. "The obtaining of that data without a warrant is a Fourth Amendment violation."
  • In Davis's case, police used his cell phone's call history against him to put him at the scene of several armed robberies. They obtained a court order—which does not require the government to show probable cause—not a warrant, to do so. From now on, that'll be illegal. The decision applies only in the Eleventh Circuit, but sets a strong precedent for future cases.
  • Indeed, the decision alone is a huge privacy win, but Sentelle's strong language supporting cell phone users' privacy rights is perhaps the most important part of the opinion. Sentelle pushed back against several of the federal government's arguments, including one that suggested that, because cell phone location data based on a caller's closest cell tower isn't precise, it should be readily collectable.  "The United States further argues that cell site location information is less protected than GPS data because it is less precise. We are not sure why this should be significant. We do not doubt that there may be a difference in precision, but that is not to say that the difference in precision has constitutional significance," Sentelle wrote. "That information obtained by an invasion of privacy may not be entirely precise does not change the calculus as to whether obtaining it was in fact an invasion of privacy." The court also cited the infamous US v. Jones Supreme Court decision that held that attaching a GPS to a suspect's car is a "search" under the Fourth Amendment. Sentelle suggested a cell phone user has an even greater expectation of location privacy with his or her cell phone use than a driver does with his or her car. A car, Sentelle wrote, isn't always with a person, while a cell phone, these days, usually is.
  • "One’s cell phone, unlike an automobile, can accompany its owner anywhere. Thus, the exposure of the cell site location information can convert what would otherwise be a private event into a public one," he wrote. "In that sense, cell site data is more like communications data than it is like GPS information. That is, it is private in nature rather than being public data that warrants privacy protection only when its collection creates a sufficient mosaic to expose that which would otherwise be private." Finally, the government argued that, because Davis made outgoing calls, he "voluntarily" gave up his location data. Sentelle rejected that, too, citing a prior decision by a Third Circuit Court. "The Third Circuit went on to observe that 'a cell phone customer has not ‘voluntarily’ shared his location information with a cellular provider in any meaningful way.' That circuit further noted that 'it is unlikely that cell phone customers are aware that their cell phone providers collect and store historical location information,'” Sentelle wrote.
  • "Therefore, as the Third Circuit concluded, 'when a cell phone user makes a call, the only information that is voluntarily and knowingly conveyed to the phone company is the number that is dialed, and there is no indication to the user that making that call will also locate the caller,'" he continued.
  •  
    Another victory for civil libertarians against the surveillance state. Note that this is another decision drawing guidance from the Supreme Court's decision in U.S. v. Jones (handed down shortly before the Edward Snowden leaks came to light), which called for re-examination of the Third Party Doctrine, an older doctrine holding that data given to or generated by third parties is not protected by the Fourth Amendment.
Gonzalo San Gil, PhD.

Digital Content Online Should Be Free, Children Say | TorrentFreak - 0 views

  •  
    " Andy on June 20, 2014 C: 67 Breaking A new survey of young children and adults has found consensus on what should be charged for content online. In both groups, 49% said that people should be able to download content they want for free, with a quarter of 16-24 year olds stating that file-sharing was the only way they could afford to obtain it."
Gonzalo San Gil, PhD.

Most Top Films Are Not Available on Netflix, Research Finds | TorrentFreak - 0 views

    • Gonzalo San Gil, PhD.
       
      # Another sample of the cultural market manipulation...
  •  
    [Ernesto on September 26, 2014: "A new study published by research firm KPMG reveals that only 16% of the most popular and critically acclaimed films are available via Netflix and other on-demand subscription services."] [# ! ...The '#Magnificent' #Entertainment #Industry #Strategy: a new envelopment for '#Their' #censorship. # ! Wanna #Thrive? #monetize (fairly and proportionally) #Free #FileSharing (through ISPs, non-invasive advertising and, of course, a little help from #governments, that already collect a lot of aficionados'/consumers'/taxpayers'/citizens' #Money...)]
Gonzalo San Gil, PhD.

Who Does That Server Really Serve? - GNU Project - Free Software Foundation - 0 views

  •  
    "by Richard Stallman (The first version was published in Boston Review.) On the Internet, proprietary software isn't the only way to lose your freedom. Service as a Software Substitute, or SaaSS, is another way to let someone else have power over your computing."
Gonzalo San Gil, PhD.

Open source to make caring for your health feel wonderful | Opensource.com - 0 views

  •  
    [Interview with Juhan Sonin of Involution Studios] "Juhan Sonin wants to influence the world from protein, to policy, to pixel. And, he believes the only way to do that is with open source principles guiding the way. Juhan is the Creative Director at Involution Studios, a design firm educating and empowering people to feel wonderful by creating, developing, and licensing their work for the public."
Gonzalo San Gil, PhD.

The Art of Unblocking Websites Without Committing Crimes | TorrentFreak - 1 views

  •  
    " Andy on September 23, 2014 C: 31 Breaking Last month UK police took down several torrent site proxies and arrested their owner. Now a UK developer has created a new & free service that not only silently unblocks any website without falling foul of the law, but one that will eventually become available to all under a GPL 3.0 license."
Gonzalo San Gil, PhD.

EU digital single market: Death by compromise - POLITICO (*) - 0 views

  •  
    "A user's guide to the Commission's latest brainstorm." By Ryan Heath and Zoya Sheftalovich, 6/5/15, 5:30 AM CET (updated 6/5/15, 12:07 PM CET). The long-awaited, much-ballyhooed Digital Single Market strategy is set to be published at noon Wednesday by the European Commission. Reaction will be quick, loud and vociferous, but look for clues to the answer to one key question: Will this document really change anything? [*The structure of media supply/demand remains vertical: users will only access what the big companies offer. There must be a way -via watermarking, perhaps- to allow people to consume whatever they want, and fairly monetize it later... If not, content will be kept restricted to the editors' will: that is censorship and restriction in the age of abundance and freedom]
Gonzalo San Gil, PhD.

11 Tactics Used By The Mainstream Media To Manufacture Consent For The Oligarchy | Worl... - 1 views

  •  
    "The mainstream media is aging and collapsing under the weight of its own hubris and arrogance. Now entirely formulaic in presentation and predictable in substance, the 'major' outlets of news, which are monopolized under only a small handful of corporations, serve the purpose of misleading the public on important issues and manufacturing consent for government and the oligarchs"
Gonzalo San Gil, PhD.

Thanks for the Really Counter-Productive DMCA Complaints | TorrentFreak - 1 views

  •  
    " Andy on May 24, 2015 C: 0 Opinion While plenty of sites infringe copyright on a regular basis, this site is never one of them. However, that doesn't stop copyright holders from trying to disappear our pages from Google search. Sadly, the latest sorry efforts only attracted our attention to even bigger mistakes, such as the Web Sheriff trying to take down a movie's own Kickstarter page."
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much.
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project, and provide a complete port to WiNE to any software provider requesting a port. I wanted to break the monopolist's hold on the Windows Productivity Environment and the hundreds of millions of investment dollars and time that had been spent on application development, forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret API's even as the OS continued to evolve. Maybe. But a full disclosure of the API's coupled with the community service "port to WiNE" requirement might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly to an iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform, and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but were wailed on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive and cost-effective, process-by-process value-added transition to a Cloud Productivity Platform, and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in Peace, Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers' luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on the one side of the Windows OS, and provisioning an open application developers layer on the other using open and totally transparent API's. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS API's would be developed privately and separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah. There was lots of anti-IBM, evil-empire stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows API's. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first Contact Management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
  • The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
  • In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
  • The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML?
  • At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • n contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
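As a rough illustration of the class-based extension described above, an ordinary XHTML paragraph can be given publisher-defined semantics without leaving the document type. This is a minimal sketch; the "epigraph" and "summary" class names are hypothetical examples a publisher might define, not part of any published microformat:

    <!-- Plain XHTML: a paragraph is just a paragraph -->
    <p>Call me Ishmael.</p>

    <!-- The same markup, semantically extended through the class
         attribute; generic Web software simply ignores the names,
         while the publisher's own stylesheets and transformation
         scripts can key on them -->
    <p class="epigraph">Call me Ishmael.</p>
    <div class="summary">
      <p>A one-paragraph synopsis of the chapter.</p>
    </div>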
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
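Unzipping an ePub file makes this anatomy concrete. The file names below follow the usual Open Container Format conventions (the chapter and stylesheet names are invented for the sketch):

    mimetype                 the literal string "application/epub+zip"
    META-INF/container.xml   points at the package file below
    OEBPS/content.opf        package metadata and manifest
    OEBPS/toc.ncx            table of contents for reader navigation
    OEBPS/chapter01.xhtml    the book's content, as ordinary XHTML
    OEBPS/styles.css         presentation

The container.xml pointer is a one-element XML file, roughly:

    <container version="1.0"
        xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="OEBPS/content.opf"
                  media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>

Everything of editorial substance lives in the XHTML files; the rest is wrapper.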
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
  • To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
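The before-and-after of that cleanup looks roughly like this (the "bullet" class name is illustrative; the exact names InDesign emits vary):

    <!-- Presentation-oriented export: list items tagged as paragraphs -->
    <p class="bullet">First item</p>
    <p class="bullet">Second item</p>

    <!-- After cleanup: structural XHTML that any software can interpret -->
    <ul>
      <li>First item</li>
      <li>Second item</li>
    </ul>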
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
  • Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, so transformation from one to the other is, as they say, "trivial." We chose to use XSLT (Extensible Stylesheet Language Transformations) to do the work. XSLT is part of the XML family of W3C specifications and is very well supported in a wide variety of tools. Our prototype used xsltproc, a nearly ubiquitous command-line XSLT processor that we found already installed as part of Mac OS X (contemporary Linux distributions also include it as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
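  • By way of illustration, here is a minimal sketch in the spirit of that script, not the script itself: the stylesheet name, DOMVersion, and style names are placeholders, "ParagraphStyle/Body" must match a style defined in the InDesign template, and a production ICML file carries an <?aid ...?> processing instruction and further attributes omitted here.

```xml
<?xml version="1.0"?>
<!-- html2icml.xsl (hypothetical name): maps XHTML paragraphs to ICML ranges -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:h="http://www.w3.org/1999/xhtml">
  <xsl:output method="xml" indent="yes"/>

  <!-- Wrap everything in a skeletal ICML document/story structure -->
  <xsl:template match="/">
    <Document DOMVersion="6.0">
      <Story Self="story0">
        <xsl:apply-templates select="//h:p"/>
      </Story>
    </Document>
  </xsl:template>

  <!-- Turn each XHTML paragraph into a styled ICML paragraph range -->
  <xsl:template match="h:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
        <Content><xsl:value-of select="."/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
```

With hypothetical file names, it would be run as xsltproc html2icml.xsl chapter.xhtml > chapter.icml; the real script is an elaboration of this pattern, with additional templates for headings, lists, and inline emphasis.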
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate them from the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
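  • The "wrapper" boils down to a one-line mimetype file (containing application/epub+zip, stored uncompressed) plus a couple of small XML files alongside the content. A minimal sketch, assuming the conventional OEBPS/ layout of ePub 2; eCub's generated output may be arranged differently, and the placeholder title and identifier would of course be filled in:

```xml
<!-- META-INF/container.xml: tells the reading system where the manifest lives -->
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>
```

```xml
<!-- OEBPS/content.opf (skeleton): metadata, manifest of files, reading order -->
<package version="2.0" xmlns="http://www.idpf.org/2007/opf"
         unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Title Here</dc:title>
    <dc:identifier id="bookid">urn:uuid:PLACEHOLDER</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
    <item id="css"  href="book.css"        media-type="text/css"/>
    <item id="ncx"  href="toc.ncx"         media-type="application/x-dtbncx+xml"/>
  </manifest>
  <spine toc="ncx">
    <itemref idref="ch01"/>
  </spine>
</package>
```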
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Our system relies on the Web as a content management platform rather than as a public-facing website, though a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff.

    My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point, though, is that XHTML is a browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, now known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

High Court Rules UK's Surveillance Powers Violate Human Rights - 0 views

  • The UK's High Court found the rushed Data Retention and Investigatory Powers Act (DRIPA) to be illegal under the European Convention on Human Rights and the EU Charter of Fundamental Rights, both of which require respect for private and family life; the latter also requires protection of personal data. DRIPA was challenged by two members of Parliament (MPs), Labour's Tom Watson and the Conservative David Davis, who argued that the surveillance of communications wasn't limited to serious crimes, that individual notices for data collection were kept secret, and that no provision existed to protect those who need professional confidentiality, such as lawyers and journalists. DRIPA was pushed through in three days last year after the European Court of Justice ruled that the EU data retention powers were disproportionate, which invalidated the previous data retention law in the UK. The High Court ruled that sections 1 and 2 of DRIPA were unlawful because they fail to provide precise policies to ensure that data is accessed only for the purpose of investigating serious crimes. Another major point against DRIPA was that it didn't require judicial approval, which would have limited access to only the data strictly necessary for investigations.
  • DRIPA passed in only three days, but the Court allowed it to remain in force for another nine months, to give the UK government time to draft new legislation. Although this almost doubles the law's lifespan, it may be better in the long term, as it gives members of Parliament enough time to debate its successor without having to rush yet another law for fear that the government's surveillance powers will expire. This court ruling arrived at the right time, as the UK government is currently preparing the draft Investigatory Powers Bill (called the Snooper's Charter by many), which further expands the government's surveillance powers and may even request encryption backdoors. It also joins other recent reviews of the government's surveillance laws that called for much stricter oversight by judges rather than by the government's own members. "Campaigners, MPs across the political spectrum, the Government's own reviewer of terrorism legislation are all calling for judicial oversight and clearer safeguards," said James Welch, Legal Director for Liberty, a human rights organization.
  •  
    The Dark State takes another hit.
Gonzalo San Gil, PhD.

Where is My European Union? - 0 views

    • Gonzalo San Gil, PhD.
       
      [# ! Via Cristina Draghis -> FB's Wake up: The World Needs You...]
  •  
    [As Greece prepares for a monumental decision, there is only one certainty: the European Ideal has been irrevocably damaged...]