
Future of the Web: Group items tagged "fundamental"


Gonzalo San Gil, PhD.

The Copyright Monopoly's Fundamental Problem Remains The Same... | TorrentFreak - 1 views

  •  
    " Rick Falkvinge on April 20, 2014 C: 24 Opinion The fundamental problem with the copyright monopoly today is that it can't coexist with private communications as a concept. Our sharing of culture and knowledge happens as part of the private correspondence that leaves our computer, and therefore, the monopoly cannot be enforced as long as private correspondence exists."
Gonzalo San Gil, PhD.

RIAA: The Pirate Bay Assaults Fundamental Human Rights | TorrentFreak - 0 views

  •  
    " Ernesto on October 28, 2014 C: 50 Breaking The RIAA has just submitted its latest list of "rogue" websites to the U.S. Government. The report includes many of the usual suspects and also calls out websites who claim that they're protecting the Internet from censorship, specifically naming The Pirate Bay. "We must end this assault on our humanity and the misappropriation of fundamental human rights," RIAA writes." [# ! Funny # ! ... coming from those who #scorn #culture, keep #prices artificially # ! high, treat all Pe@ple as #Thieves, and #lobby #politics to # ! #manipulate #laws for the (#extreme) #benefit of just a #few...]
Gonzalo San Gil, PhD.

The Universal Declaration of Human Rights - 3 views

  •  
    [PREAMBLE Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world, Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind, and the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want has been proclaimed as the highest aspiration of the common people, Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law, Whereas it is essential to promote the development of friendly relations between nations, Whereas the peoples of the United Nations have in the Charter reaffirmed their faith in fundamental human rights, in the dignity and worth of the human person and in the equal rights of men and women and have determined to promote social progress and better standards of life in larger freedom, Whereas Member States have pledged themselves to achieve, in co-operation with the United Nations, the promotion of universal respect for and observance of human rights and fundamental freedoms, Whereas a common understanding of these rights and freedoms is of the greatest importance for the full realization of this pledge, Now, Therefore THE GENERAL ASSEMBLY proclaims THIS UNIVERSAL DECLARATION OF HUMAN RIGHTS as a common standard of achievement for all peoples and all nations, to the end that every individual and every organ of society, keeping this Declaration constantly in mind, shall strive by teaching and education to promote respect for these rights and freedoms and by progressive measures, national and international, to secure their universal and effective recognition and observance, both among the peoples of Member States themselves and among the peoples of territories
  •  
    The Declaration is an important document but only aspirational in nature. It was hamstrung from the beginning by omission of mandated procedures by which an aggrieved person could seek its enforcement or protection.
  •  
    Oh.. of course, Paul. This is Just a Reminder... ... of the other ways to do the things... For Every@ne. Perhaps One Day... :)
Gonzalo San Gil, PhD.

Linux Newbie Administrator Guide [# ! Lead 'Note'...] - 1 views

  •  
    [# ! every day is a new beginning...] "Table of Contents 1. For the Undecided (Linux Benefits) 1.1 Fundamentally, why Linux? 1.2 Is Linux for me? 1.3 Linux is difficult for newbies. 1.4 What are the benefits of Linux?"
Paul Merrell

EU-US Personal Data Privacy Deal 'Cracked Beyond Repair' - 0 views

  • Privacy Shield is the proposed new deal between the EU and the US that is supposed to safeguard all personal data on EU citizens held on computer systems in the US from being subject to mass surveillance by the US National Security Agency. The data can refer to any transaction — web purchases, cars or clothing — involving an EU citizen whose data is held on US servers. Privacy groups say Privacy Shield — which replaces the Safe Harbor agreement ruled unlawful in October 2015 — does not meet strict EU standards on the use of personal data. Monique Goyens, Director General of the European Consumer Organization (BEUC) told Sputnik: “We consider that the shield is cracked beyond repair and is unlikely to stand scrutiny by the European Court of Justice. A fundamental problem remains that the US side of the shield is made of clay, not iron.”
  • The agreement has been under negotiation for months, ever since the European Court of Justice ruled in October 2015 that the previous EU-US data agreement — Safe Harbor — was invalid. The issue arises from the strict EU laws — enshrined in the Charter of Fundamental Rights of the European Union — that guarantee citizens the privacy of their personal data.
  • The Safe Harbor agreement was a quasi-judicial understanding under which the US agreed to ensure that EU citizens’ data on US servers would be held and protected under the same restrictions as it would be under EU law and directives. The data covers a huge array of information — from Internet and communications usage to sales transactions, imports and exports.
  • The case arose when Maximillian Schrems, a Facebook user, lodged a complaint with the Irish Data Protection Commissioner, arguing that — in the light of the revelations by ex-CIA contractor Edward Snowden of mass surveillance by the US National Security Agency (NSA) — the transfer of data from Facebook’s Irish subsidiary onto the company’s servers in the US does not provide sufficient protection of his personal data. The court ruled that: “the Safe Harbor Decision denies the national supervisory authorities their powers where a person calls into question whether the decision is compatible with the protection of the privacy and of the fundamental rights and freedoms of individuals.”
  •  
    Off we go for another trip to the European Court of Justice.
Gary Edwards

Tech Execs Express Extreme Concern That NSA Surveillance Could Lead To 'Breaking' The I... - 0 views

  • We need to look the world's dangers in the face. And we need to resolve that we will not allow the dangers of the world to freeze this country in its tracks. We need to recognize that antiquated laws will not keep the public safe. We need to recognize that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies and even the intelligence community itself. At the end of the day, we need to recognize... the one asset that the US has which is even stronger than our military might is our moral authority. And this decline in trust, has not only effected people's trust in American technology products. It has effected people's willingness to trust the leadership of the United States. If we are going to win the war on terror. If we are going to keep the public safe. If we are going to improve American competitiveness, we need Congress to stay on the path it's set. We need Congress to finish in December the job the President put before Congress in January.
  •  
    "Nothing necessarily earth-shattering was said by anyone, but it did involve a series of high powered tech execs absolutely slamming the NSA and the intelligence community, and warning of the vast repercussions from that activity, up to and including potentially splintering or "breaking" the internet by causing people to so distrust the existing internet, that they set up separate networks on their own. The execs repeated the same basic points over and over again. They had been absolutely willing to work with law enforcement when and where appropriate based on actual court orders and review -- but that the government itself completely poisoned the well with its activities, including hacking into the transmission lines between overseas datacenters. Thus, as Eric Schmidt noted, if the NSA and other law enforcement folks are "upset" about Google and others suddenly ramping up their use of encryption and being less willing to cooperate with the government, they only have themselves to blame for completely obliterating any sense of trust. Microsoft's Brad Smith, towards the end, made quite an impassioned plea -- it sounded more like a politician's stump speech -- about the need for rebuilding trust in the internet. It's at about an hour and 3 minutes into the video. He points out that while people had expected Congress to pass the USA Freedom Act, the rise of ISIS and other claimed threats has some people scared, but, he notes: We need to look the world's dangers in the face. And we need to resolve that we will not allow the dangers of the world to freeze this country in its tracks. We need to recognize that antiquated laws will not keep the public safe. We need to recognize that laws that the rest of the world does not respect will ultimately undermine the fundamental ability of our own legal processes, law enforcement agencies and even the intelligence community itself. At the end of the day, we need to recognize... the one asset that the US has which is even stron
Gonzalo San Gil, PhD.

Why TPP Threatens To Undermine One Of The Fundamental Principles Of Science | Techdirt - 1 views

  •  
    "from the and-that's-a-fact dept Last week, we wrote that among the final obstacles to completing the TPP agreement was the issue of enhanced protection for drugs. More specifically, the fight is over an important new class of medicines called "biologics," which are produced from living organisms, and tend to be more complex and expensive to devise."
Paul Merrell

Reset The Net - Privacy Pack - 1 views

  • This June 5th, I pledge to take strong steps to protect my freedom from government mass surveillance. I expect the services I use to do the same.
  • Fight for the Future and Center for Rights will contact you about future campaigns. Privacy Policy
  •  
    I wound up joining this campaign at the urging of the ACLU after checking the Privacy Policy. The Reset the Net campaign seems to be endorsed by a lot of change-oriented groups, from the ACLU to Greenpeace to the Pirate Party. A fair number of groups with a Progressive agenda, but certainly not limited to them. The right answer to that situation is to urge other groups to endorse, not to avoid the campaign. Single-issue coalition-building is all about focusing on an area of agreement rather than worrying about who you are rubbing elbows with.  I have been looking for a bipartisan group that's tackling government surveillance issues via mass actions but has no corporate sponsors. This might be the one. The reason: Corporate types like Google have no incentive to really butt heads with the government voyeurs. They are themselves engaged in massive surveillance of their users and certainly will not carry the battle for digital privacy over to the private sector. But this *is* a battle over digital privacy and legally defining user privacy rights in the private sector is just as important as cutting back on government surveillance. As we have learned through the Snowden disclosures, what the private internet companies have, the NSA can and does get.  The big internet services successfully pushed in the U.S. for authorization to publish more numbers about how many times they pass private data to the government, but went no farther. They wanted to be able to say they did something, but there's a revolving door of staffers between NSA and the big internet companies and the internet service companies' data is an open book to the NSA.   The big internet services are not champions of their users' privacy. If they were, they would be featuring end-to-end encryption with encryption keys unique to each user and unknown to the companies.  Like some startups in Europe are doing. E.g., the Wuala.com filesync service in Switzerland (first 5 GB of storage free). Compare tha
Paul Merrell

German Parliament Says No More Software Patents | Electronic Frontier Foundation - 0 views

  • The German Parliament recently took a huge step that would eliminate software patents (PDF) when it issued a joint motion requiring the German government to ensure that computer programs are only covered by copyright. Put differently, in Germany, software cannot be patented. The Parliament's motion follows a similar announcement made by New Zealand's government last month (PDF), in which it determined that computer programs were not inventions or a manner of manufacture and, thus, cannot be patented.
  • The crux of the German Parliament's motion rests on the fact that software is already protected by copyright, and developers are afforded "exploitation rights." These rights, however, become confused when broad, abstract patents also cover general aspects of computer programs. These two intellectual property systems are at odds. The clearest example of this clash is with free software. The motion recognizes this issue and therefore calls upon the government "to preserve the precedence of copyright law so that software developers can also publish their work under open source license terms and conditions with legal security." The free software movement relies upon the fact that software can be released under a copyright license that allows users to share it and build upon others' works. Patents, as Parliament finds, inhibit this fundamental spread.
  • Just like in the New Zealand order, the German Parliament carved out one type of software that could be patented, when: the computer program serves merely as a replaceable equivalent for a mechanical or electro-mechanical component, as is the case, for instance, when software-based washing machine controls can replace an electromechanical program control unit consisting of revolving cylinders which activate the control circuits for the specific steps of the wash cycle. This allows for software that is tied to (and controls part of) another invention to be patented. In other words, if a claimed process is purely a computer program, then it is not patentable. (New Zealand's order uses a similar washing machine example.) The motion ends by calling upon the German government to push for this approach to be standard across all of Europe. We hope policymakers in the United States will also consider fundamental reform that deals with the problems caused by low-quality software patents. Ultimately, any real reform must address this issue.
  •  
    Note that an unofficial translation of the parliamentary motion is linked from the article. This adds substantially to the pressure internationally to end software patents because Germany has been the strongest defender of software patents in Europe. The same legal grounds would not apply in the U.S. The strongest argument for non-patentability in the U.S., in my opinion, is that software patents embody both prior art and obviousness. A general purpose computer can accomplish nothing unforeseen by the prior art of the computing device. And it is impossible for software to do more than cause different sequences of bit register states to be executed. This is the province of "skilled artisans" using known methods to produce predictable results. There is a long line of Supreme Court decisions holding that an "invention" with such traits is non-patentable. I have summarized that argument with citations at . 
Gonzalo San Gil, PhD.

Net Neutrality: BEREC on the Right Path, Let's Keep the Pressure on | La Quadrature du Net - 0 views

  •  
    "Paris, 30 September 2016 - Net Neutrality is one of central challenge in the application of fundamental rights in the digital space. Too often it has been only considered as a technical or commercial issue, but it has serious impact on the real e"
Gonzalo San Gil, PhD.

Time to #Fixcopyright and Free the Panorama Across EU - infojustice - 0 views

  •  
    "[Anna Mazgal, Communia Association, Link (CC-0)] Freedom of panorama is a fundamental element of European cultural heritage and visual history. Rooted in freedom of expression, it allows painters, photographers, filmmakers, journalists and tourists alike to document public spaces, create masterpieces of art and memories of beautiful places, and freely share it with others. Within the Best Case Scenarios for Copyright series we present Portugal as the best example for freedom of panorama. Below you can find the basic facts and for more evidence check the Best Case Scenario for Copyright - Freedom of Panorama in Portugal legal study. EU, it's time to #fixcopyright!"
Gonzalo San Gil, PhD.

ACTA: Total Victory for Citizens and Democracy! | La Quadrature du Net - 0 views

  •  
    [Submitted on 4 Jul 2012 - 10:40 ACTA Karel De Gucht press release - Strasbourg, July 4th 2012 - The European Parliament rejected ACTA by a huge majority, killing it for good. This is a major victory for the multitude of connected citizens and organizations who worked hard for years, but also a great hope on a global scale for a better democracy. On the ruins of ACTA we must now build a positive copyright reform, taking into account our rights instead of attacking them. The ACTA victory must resonate as a wake up call for lawmakers: Fundamental freedoms as well as the free and open Internet must prevail over private interests. ...]
Gonzalo San Gil, PhD.

EU Court of Justice: Censorship in Name of Copyright Violates Fundamental Rights | La Q... - 2 views

  •  
    [Paris, November 24th, 2011 - The European Court of Justice just rendered a historic decision in the Scarlet Extended case, which is crucial for the future of rights and freedoms on the Internet. The Court ruled that forcing Internet service providers to monitor and censor their users' communications violated EU law, and in particular the right to freedom of communication. At a time of all-out offensive in the war against culture sharing online, this decision suggests that censorship measures requested by the entertainment industry are disproportionate means to enforce an outdated copyright regime. Policy-makers across Europe must take this decision into account by refusing new repressive schemes, such as the Anti-Counterfeiting Trade Agreement (ACTA), and engage in a much needed reform of copyright.]
Gonzalo San Gil, PhD.

European Commission Public Consultation on Copyright: La Quadrature du Net's Answer | L... - 0 views

  •  
    "Paris, 14 February 2014 - The European Commission's public consultation on copyright reform is open until 5 March [The European Commission extended the deadline by a month]. This consultation represents an important opportunity for European citizens to demand that access to culture and knowledge be recognised as their fundamental right. It also allows the interests of authors and creators to be defended against those of the cultural industries, major distributors and intermediaries, and heirs of rightholders who currently receive the greatest share of income from copyrighted works. La Quadrature du Net therefore calls on the maximum number of citizens and organisations to reply to the consultation and support a positive reform of copyright."
Gonzalo San Gil, PhD.

ISOC members @IGF 2013 - 0 views

  •  
    "ISOC members @IGF 2013 Each year, the Internet Governance Forum (IGF) provides all stakeholders a unique opportunity to discuss openly critical emerging Internet-related issues. This year's overarching IGF theme is: "Building Bridges" - Enhancing Multistakeholder Cooperation for Growth and Sustainable Development" As part of its engagement at the IGF, the Internet Society strongly supports the fundamentals of the open and sustainable Internet: -Open Global standards for unleashed innovation; -Open to Everyone: a freedom-enhancer for every Internet user; -Open for Business and Economic progress; -Open and Multistakeholder governance for transparent inclusion. Encouraging An Ongoing Dialogue Internet Society Members are actively engaged in the IGF. They also have a unique perspective on what is going on at the regional and local levels. "
Gonzalo San Gil, PhD.

End Global Surveillance and Protect the Free Internet - oiga.me - 2 views

  •  
    "A message to governments of the World Governments have a moral obligation and a duty to protect their citizens' fundamental freedoms against aggressions by public and private entities. We expect them to protect the decentralized architecture of a Free Internet as a common good."
Gonzalo San Gil, PhD.

The Net vs. The Power of Narratives | TorrentFreak - 1 views

  •  
    " By Rick Falkvinge on April 29, 2012 C: 76 Opinion The net changes the world's power structures in a much more fundamental way than changing the way a few groups of entrepreneurs are able to make money. The net is the greatest equalizer that humankind has ever invented. It is either the greatest invention since the printing press, or the greatest invention since written language. The battles we see are not a result of loss of money; they are caused by a loss of the power of narratives."
Gary Edwards

» 21 Facts About NSA Snooping That Every American Should Know Alex Jones' Inf... - 0 views

  •  
    NSA-PRISM-Echelon in a nutshell. The list below is a short sample. Each fact is documented, and well worth the time reading. "The following are 21 facts about NSA snooping that every American should know…"
    #1 According to CNET, the NSA told Congress during a recent classified briefing that it does not need court authorization to listen to domestic phone calls…
    #2 According to U.S. Representative Loretta Sanchez, members of Congress learned "significantly more than what is out in the media today" about NSA snooping during that classified briefing.
    #3 The content of all of our phone calls is being recorded and stored. The following is from a transcript of an exchange between Erin Burnett of CNN and former FBI counterterrorism agent Tim Clemente which took place just last month…
    #4 The chief technology officer at the CIA, Gus Hunt, made the following statement back in March… "We fundamentally try to collect everything and hang onto it forever."
    #5 During a Senate Judiciary Oversight Committee hearing in March 2011, FBI Director Robert Mueller admitted that the intelligence community has the ability to access emails "as they come in"…
    #6 Back in 2007, Director of National Intelligence Michael McConnell told Congress that the president has the "constitutional authority" to authorize domestic spying without warrants no matter what the law says.
    #7 The Director of National Intelligence James Clapper recently told Congress that the NSA was not collecting any information about American citizens. When the media confronted him about his lie, he explained that he "responded in what I thought was the most truthful, or least untruthful manner".
    #8 The Washington Post is reporting that the NSA has four primary data collection systems… MAINWAY, MARINA, METADATA, PRISM
    #9 The NSA knows pretty much everything that you are doing on the Internet. The following is a short excerpt from a recent Yahoo article…
    #10 The NSA is suppose
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
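To make the "zip archive in disguise" point concrete, here is a minimal sketch in Python that lists an ePub's component parts; the file name book.epub (and the commented chapter path) are illustrative placeholders rather than files referenced by the article.

```python
import zipfile

# An ePub is an ordinary zip archive, so the standard library can open it.
# "book.epub" is a placeholder file name used purely for illustration.
with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)   # e.g. mimetype, META-INF/container.xml, the package
                      # metadata file, and the XHTML chapter files

    # The content itself is plain XHTML and can be read as text, e.g.:
    # print(epub.read("OEBPS/chapter1.xhtml").decode("utf-8"))
```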
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
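The annotation above describes doing this cleanup by hand with search-and-replace. Purely as an illustrative alternative, the same regrouping can be scripted; the sketch below assumes Python with lxml installed, and the class value "list-item" plus the helper name are hypothetical stand-ins for whatever the Digital Editions export actually emits.

```python
from lxml import etree

XHTML = "http://www.w3.org/1999/xhtml"

def regroup_list_items(parent, item_class="list-item"):
    """Replace each run of consecutive <p class="list-item"> children of
    `parent` with one <ul> containing an <li> per paragraph. The class
    name is an assumption; adjust it to match the export's actual markup."""
    run = []
    for child in list(parent) + [None]:       # None is a sentinel that flushes the last run
        is_item = (child is not None
                   and child.tag == f"{{{XHTML}}}p"
                   and child.get("class") == item_class)
        if is_item:
            run.append(child)
            continue
        if run:
            ul = etree.Element(f"{{{XHTML}}}ul")
            parent.insert(list(parent).index(run[0]), ul)  # put <ul> where the run began
            for p in run:
                li = etree.SubElement(ul, f"{{{XHTML}}}li")
                li.text = p.text
                li.extend(list(p))            # keep inline markup such as links
                parent.remove(p)
            run = []

# Usage sketch: parse a chapter, fix the <body>, and write it back out.
# tree = etree.parse("chapter.xhtml")
# regroup_list_items(tree.find("{http://www.w3.org/1999/xhtml}body"))
# tree.write("chapter.xhtml", xml_declaration=True, encoding="utf-8")
```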
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
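The prototype ran its transformation with xsltproc from the command line. For readers who prefer to stay in a scripting environment, here is an equivalent, hedged sketch using Python's lxml library; the stylesheet and file names are hypothetical stand-ins for the script and files described above.

```python
from lxml import etree

# Hypothetical file names standing in for the prototype's actual artifacts.
stylesheet = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))
source = etree.parse("chapter.xhtml")

# Apply the transform; the result serializes according to the stylesheet's
# <xsl:output> settings, just as it would when run through xsltproc.
icml = stylesheet(source)
with open("chapter.icml", "w", encoding="utf-8") as out:
    out.write(str(icml))
```

The resulting .icml file can then be placed into the InDesign template exactly as the annotation describes.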
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
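eCub handles the packaging automatically; purely to illustrate what the "wrapper" involves, here is a minimal, hedged Python sketch of the container rules. The file names are illustrative, and a real ePub additionally needs a valid OPF package file (metadata, manifest, spine) and a table of contents, which is exactly what eCub generates.

```python
import zipfile

with zipfile.ZipFile("book.epub", "w", zipfile.ZIP_DEFLATED) as epub:
    # The mimetype entry must be the first item in the archive and stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)

    # container.xml points the reading system at the OPF package file.
    epub.writestr("META-INF/container.xml",
        '<?xml version="1.0"?>\n'
        '<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
        '  <rootfiles>\n'
        '    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>\n'
        '  </rootfiles>\n'
        '</container>\n')

    # The book's content: the same clean XHTML chapters discussed above.
    epub.write("chapter1.xhtml", "OEBPS/chapter1.xhtml")
    # OEBPS/content.opf and the navigation document still need to be added
    # for the file to validate; eCub produces those automatically.
```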
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

6 Anti-NSA Technological innovations that May Just Change the World | StormCloudsGathering - 2 views

  • Rather than grovel and beg for the U.S. government to respect our privacy, these innovators have taken matters into their own hands, and their work may change the playing field completely.
  • People used to assume that the United States government was held in check by the constitution, which prohibits unreasonable searches and seizures and which demands due process in criminal investigations, but such illusions have evaporated in recent years. It turns out that the NSA considers itself above the law in every respect and feels entitled to spy on anyone anywhere in the world without warrants, and without any real oversight. Understandably these revelations shocked the average citizen who had been conditioned to take the government's word at face value, and the backlash has been considerable. The recent "Today We Fight Back" campaign to protest the NSA's surveillance practices shows that public sentiment is in the right place. Whether these kinds of petitions and protests will have any real impact on how the U.S. government operates is questionable (to say the least), however some very smart people have decided not to wait around and find out. Instead they're focusing on making the NSA's job impossible. In the process they may fundamentally alter the way the internet operates.