Future of the Web: Group items tagged "provides"


Gary Edwards

Can Cloud Computing Achieve Interoperable Platforms? - 0 views

  •  
    Response from Microsoft's Dare Obasanjo to Tim Bray's blog post "Get in the Cloud": "When it comes to cloud computing platforms, you have all of the same problems described above and a few extra ones. The key wrinkle with cloud computing platforms is that there is no standardization of the APIs and platform technologies that underlie these services. The APIs provided by Amazon's cloud computing platform (EC2/S3/EBS/etc) are radically different from those provided by Google App Engine (Datastore API/Python runtime/Images API/etc). For zero lock-in to occur in this space, there need to be multiple providers of the same underlying APIs. Otherwise, migrating between cloud computing platforms will be more like switching your application from Ruby on Rails and MySQL to Django and PostgreSQL (i.e. a complete rewrite)...." Although cloud computing vendors are not explicitly trying to lock in customers to their platform, the fact is that today if a customer has heavily invested in either platform, there isn't a straightforward way for customers to extricate themselves from the platform and switch to another vendor. In addition, there is not a competitive marketplace of vendors providing standard/interoperable platforms, as there is with email hosting or Web hosting providers.
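The lock-in Obasanjo describes can at least be narrowed at the application layer by hiding provider-specific APIs behind a thin interface. The sketch below is a minimal illustration of that idea, not something from the cited post: it assumes Python with the boto3 library for Amazon S3, and the class and function names are made up for the example; a Google-side backend would simply be a second subclass written against Google's storage SDK.

```python
# Minimal sketch of provider-neutral blob storage (illustrative names, assumes boto3).
from abc import ABC, abstractmethod

import boto3  # Amazon's SDK; a Google backend would use google-cloud-storage instead


class BlobStore(ABC):
    """Interface the application codes against, independent of any one vendor."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3BlobStore(BlobStore):
    """Amazon S3 backend; only this class knows about boto3."""

    def __init__(self, bucket: str) -> None:
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


def archive_report(store: BlobStore, name: str, payload: bytes) -> None:
    # Application logic depends only on BlobStore, so moving providers means
    # writing one new subclass rather than rewriting the application.
    store.put(f"reports/{name}", payload)
```

Of course, this only softens lock-in for storage-shaped services; as the post argues, differences in runtimes and datastore semantics can still make a full migration amount to a rewrite.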
Gonzalo San Gil, PhD.

Top 10 VPN Service providers which ensure that your browsing stays private and anonymous - 0 views

  •  
    "Techworm brings you the top 10 VPN service providers who provide privacy and security"
Gonzalo San Gil, PhD.

letsencrypt | How It Works - 0 views

  •  
    "Anyone who has gone through the trouble of setting up a secure website knows what a hassle getting and maintaining a certificate can be. Let's Encrypt automates away the pain and lets site operators turn on and manage HTTPS with simple commands. No validation emails, no complicated configuration editing, no expired certificates breaking your website. And of course, because Let's Encrypt provides certificates for free, no need to arrange payment. This page describes how to carry out the most common certificate management functions using the Let's Encrypt client. You're welcome to use any compatible client, but we only provide instructions for using the client that we provide. If you'd like to know more about how this works behind the scenes, check out our technical overview."
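For readers who want a concrete sense of those "simple commands," here is a hedged sketch that drives the client (now distributed as certbot) from Python; the domain, e-mail address, and webroot path are placeholders, and the flags shown are the commonly documented webroot-mode options rather than anything taken from this page.

```python
# Hedged sketch: obtain and renew a Let's Encrypt certificate by shelling out
# to the certbot client. The domain, e-mail, and webroot path are placeholders.
import subprocess

DOMAIN = "example.com"
EMAIL = "admin@example.com"
WEBROOT = "/var/www/html"


def obtain_certificate() -> None:
    # Webroot mode proves control of the domain by serving a challenge file
    # from WEBROOT; --non-interactive and --agree-tos keep the run scriptable.
    subprocess.run(
        [
            "certbot", "certonly",
            "--webroot", "-w", WEBROOT,
            "-d", DOMAIN,
            "-m", EMAIL,
            "--agree-tos",
            "--non-interactive",
        ],
        check=True,
    )


def renew_certificates() -> None:
    # Safe to run from cron; certbot only renews certificates that are near expiry.
    subprocess.run(["certbot", "renew", "--quiet"], check=True)


if __name__ == "__main__":
    obtain_certificate()
```

In practice the same two commands are usually run straight from a shell or a cron job; wrapping them in Python here just keeps the example self-contained.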
Gonzalo San Gil, PhD.

On net neutrality, Internet providers are betrayed by one of their own | Ars Technica - 2 views

  •  
    "They're not happy anymore, especially not after Wheeler yesterday all but confirmed at the Consumer Electronics Show (CES) that he will propose reclassifying Internet providers as common carriers in order to impose net neutrality rules. This would expose broadband to some of the FCC's strongest powers contained in Title II of the Communications Act, usually reserved for wireline phone service." [# ! The saddest part of this story is that, once more, it is clearly shown that in #Internet issues #citizens are the least #important.... (# ! and it remains to be seen whether, finally, Internet providers are reclassified as "common carriers in order to impose net neutrality rules".)]
Paul Merrell

Legislative Cyber Threats: CISA's Not The Only One | Just Security - 0 views

  • If anyone in the United States Senate had any doubts that the proposed Cyber Information Sharing Act (CISA) was universally hated by a range of civil society groups, a literal blizzard of faxes should’ve cleared up the issue by now. What’s not getting attention is a CISA “alternative” introduced last week by Sens. Mark Warner (D-Va) and Susan Collins (R-Me). Dubbed the “FISMA Reform Act,” the authors make the following claims about the bill:  This legislation would allow the Secretary of Homeland Security to operate intrusion detection and prevention capabilities on all federal agencies on the .gov domain. The bipartisan bill would also direct the Secretary of Homeland Security to conduct risk assessments of any network within the government domain. The bill would allow the Secretary of Homeland Security to operate defensive countermeasures on these networks once a cyber threat has been detected. The legislation would strengthen and streamline the authority Congress gave to DHS last year to issue binding operational directives to federal agencies, especially to respond to substantial cyber security threats in emergency circumstances.
  • The bill would require the Office of Management and Budget to report to Congress annually on the extent to which OMB has exercised its existing authority to enforce government wide cyber security standards. On the surface, it actually sounds like a rational response to the disastrous OPM hack. Unfortunately, the Warner-Collins bill has some vague or problematic language and non-existent definitions that make it potentially just as dangerous for data security and privacy as CISA. The bill would allow the Secretary of Homeland Security to carry out cyber security activities “in conjunction with other agencies and the private sector” [for] “assessing and fostering the development of information security technologies and capabilities for use across multiple agencies.” While the phrase “information sharing” is not present in this subsection, “security technologies and capabilities” is more than broad — and vague — enough to allow it.
  • The bill would also allow the secretary to “acquire, intercept, retain, use, and disclose communications and other system traffic that are transiting to or from or stored on agency information systems and deploy countermeasures with regard to the communications and system traffic.”
  • The bill also allows the head of a federal agency or department “to disclose to the Secretary or a private entity providing assistance to the Secretary…information traveling to or from or stored on an agency information system, notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary.” (Emphasis added.) So confidential, proprietary or other information otherwise precluded from disclosure under laws like HIPAA or the Privacy Act gets waived if the Secretary of DHS or an agency head feels that your email needs to be shared with a government-contracted outfit like the Hacking Team for analysis. And the bill explicitly provides for just this kind of cyber threat analysis outsourcing:
  • (3) PRIVATE ENTITIES. — The Secretary may enter into contracts or other agreements, or otherwise request and obtain the assistance of, private entities that provide electronic communication or information security services to acquire, intercept, retain, use, and disclose communications and other system traffic in accordance with this subsection. The bill further states that the content of your communications will be retained only if the communication is associated with a known or reasonably suspected information security threat, and communications and system traffic will not be subject to the operation of a countermeasure unless associated with the threats. (Emphasis added.) “Reasonably suspected” is about as squishy a definition as one can find.
  •  
    "The bill also allows the head of a federal agency or department "to disclose to the Secretary or a private entity providing assistance to the Secretary…information traveling to or from or stored on an agency information system, notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary."" Let's see: if your information is intercepted by the NSA and stored on its "information system" in Bluffdale, Utah, then it can be disclosed to the Secretary of DHS or any private entity providing him/her with assistance, "notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary." And if NSA just happens to be intercepting every digital bit of data generated or received in the entire world, including the U.S., then it's all in play, "notwithstanding any other law that would otherwise restrict or prevent agency heads from disclosing such information to the Secretary.". Sheesh! Our government voyeurs never stop trying to get more nude pix and videos to view.  
Paul Merrell

Microsoft to host data in Germany to evade US spying | Naked Security - 0 views

  • Microsoft's new plan to keep the US government's hands off its customers' data: Germany will be a safe harbor in the digital privacy storm. Microsoft on Wednesday announced that beginning in the second half of 2016, it will give foreign customers the option of keeping data in new European facilities that, at least in theory, should shield customers from US government surveillance. It will cost more, according to the Financial Times, though pricing details weren't forthcoming. Microsoft Cloud - including Azure, Office 365 and Dynamics CRM Online - will be hosted from new datacenters in the German regions of Magdeburg and Frankfurt am Main. Access to data will be controlled by what the company called a German data trustee: T-Systems, a subsidiary of the independent German company Deutsche Telekom. Without the permission of Deutsche Telekom or customers, Microsoft won't be able to get its hands on the data. If it does get permission, the trustee will still control and oversee Microsoft's access.
  • Microsoft CEO Satya Nadella dropped the word "trust" into the company's statement: Microsoft’s mission is to empower every person and every individual on the planet to achieve more. Our new datacenter regions in Germany, operated in partnership with Deutsche Telekom, will not only spur local innovation and growth, but offer customers choice and trust in how their data is handled and where it is stored.
  • On Tuesday, at the Future Decoded conference in London, Nadella also announced that Microsoft would, for the first time, be opening two UK datacenters next year. The company's also expanding its existing operations in Ireland and the Netherlands. Officially, none of this has anything to do with the long-drawn-out squabbling over the transatlantic Safe Harbor agreement, which the EU's highest court struck down last month, calling the agreement "invalid" because it didn't protect data from US surveillance. No, Nadella said, the new datacenters and expansions are all about giving local businesses and organizations "transformative technology they need to seize new global growth." But as Diginomica reports, Microsoft EVP of Cloud and Enterprise Scott Guthrie followed up his boss’s comments by saying that yes, the driver behind the new datacenters is to let customers keep data close: We can guarantee customers that their data will always stay in the UK. Being able to very concretely tell that story is something that I think will accelerate cloud adoption further in the UK.
  • Microsoft and T-Systems' lawyers may well think that storing customer data in a German trustee data center will protect it from the reach of US law, but for all we know, that could be wishful thinking. Forrester cloud computing analyst Paul Miller: To be sure, we must wait for the first legal challenge. And the appeal. And the counter-appeal. As with all new legal approaches, we don’t know it is watertight until it is challenged in court. Microsoft and T-Systems’ lawyers are very good and say it's watertight. But we can be sure opposition lawyers will look for all the holes. By keeping data offshore - particularly in Germany, which has strong data privacy laws - Microsoft could avoid the situation it's now facing with the US demanding access to customer emails stored on a Microsoft server in Dublin. The US has argued that Microsoft, as a US company, comes under US jurisdiction, regardless of where it keeps its data.
  • Running away to Germany isn't a groundbreaking move; other US cloud services providers have already pledged expansion of their EU presences, including Amazon's plan to open a UK datacenter in late 2016 that will offer what CTO Werner Vogels calls "strong data sovereignty to local users." Other big data operators that have followed suit: Salesforce, which has already opened datacenters in the UK and Germany and plans to open one in France next year, as well as new EU operations pledged for the new year by NetSuite and Box. Can Germany keep the US out of its datacenters? Can Ireland? Time, and court cases, will tell.
  •  
    The Court of Justice of the European Union's decision in the Safe Harbor case --- and Edward Snowden --- are now officially downgrading the U.S. as a cloud data center location. NSA is good business for Europeans looking to displace American cloud service providers, as evidenced by Microsoft's decision. The legal test is whether Microsoft has "possession, custody, or control" of the data. From the info given in the article, it seems that Microsoft has done its best to dodge that bullet by moving data centers to Germany and placing their data under the control of a European company. Do ownership of the hardware and profits from its rent mean that Microsoft still has "possession, custody, or control" of the data? The fine print of the agreement with Deutsche Telekom and the customer EULAs will get a thorough going-over by the Dept. of Justice for evidence of Microsoft "control" of the data. That will be the crucial legal issue. The data centers in Germany may pass the test. But the notion that data centers in the UK can offer privacy is laughable; the UK's legal authority for GCHQ makes it even easier for GCHQ to get the data than it is for the NSA in the U.S. It doesn't even require a court order. 
Paul Merrell

European Human Rights Court Deals a Heavy Blow to the Lawfulness of Bulk Surveillance |... - 0 views

  • In a seminal decision updating and consolidating its previous jurisprudence on surveillance, the Grand Chamber of the European Court of Human Rights took a sideways swing at mass surveillance programs last week, reiterating the centrality of “reasonable suspicion” to the authorization process and the need to ensure interception warrants are targeted to an individual or premises. The decision in Zakharov v. Russia — coming on the heels of the European Court of Justice’s strongly-worded condemnation in Schrems of interception systems that provide States with “generalised access” to the content of communications — is another blow to governments across Europe and the United States that continue to argue for the legitimacy and lawfulness of bulk collection programs. It also provoked the ire of the Russian government, prompting an immediate legislative move to give the Russian constitution precedence over Strasbourg judgments. The Grand Chamber’s judgment in Zakharov is especially notable because its subject matter — the Russian SORM system of interception, which includes the installation of equipment on telecommunications networks that subsequently enables the State direct access to the communications transiting through those networks — is similar in many ways to the interception systems currently enjoying public and judicial scrutiny in the United States, France, and the United Kingdom. Zakharov also provides a timely opportunity to compare the differences between UK and Russian law: Namely, Russian law requires prior independent authorization of interception measures, whereas neither the proposed UK law nor the existing legislative framework do.
  • The decision is lengthy and comprises a useful restatement and harmonization of the Court’s approach to standing (which it calls “victim status”) in surveillance cases, which is markedly different from that taken by the US Supreme Court. (Indeed, Judge Dedov’s separate but concurring opinion notes the contrast with Clapper v. Amnesty International.) It also addresses at length issues of supervision and oversight, as well as the role played by notification in ensuring the effectiveness of remedies. (Marko Milanovic discusses many of these issues here.) For the purpose of the ongoing debate around the legitimacy of bulk surveillance regimes under international human rights law, however, three particular conclusions of the Court are critical.
  • The Court took issue with legislation permitting the interception of communications for broad national, military, or economic security purposes (as well as for “ecological security” in the Russian case), absent any indication of the particular circumstances under which an individual’s communications may be intercepted. It said that such broadly worded statutes confer an “almost unlimited degree of discretion in determining which events or acts constitute such a threat and whether that threat is serious enough to justify secret surveillance” (para. 248). Such discretion cannot be unbounded. It can be limited through the requirement for prior judicial authorization of interception measures (para. 249). Non-judicial authorities may also be competent to authorize interception, provided they are sufficiently independent from the executive (para. 258). What is important, the Court said, is that the entity authorizing interception must be “capable of verifying the existence of a reasonable suspicion against the person concerned, in particular, whether there are factual indications for suspecting that person of planning, committing or having committed criminal acts or other acts that may give rise to secret surveillance measures, such as, for example, acts endangering national security” (para. 260). This finding clearly constitutes a significant threshold which a number of existing and pending European surveillance laws would not meet. For example, the existence of individualized reasonable suspicion runs contrary to the premise of signals intelligence programs where communications are intercepted in bulk; by definition, those programs collect information without any consideration of individualized suspicion. Yet the Court was clearly articulating the principle with national security-driven surveillance in mind, and with the knowledge that interception of communications in Russia is conducted by Russian intelligence on behalf of law enforcement agencies.
  • This element of the Grand Chamber’s decision distinguishes it from prior jurisprudence of the Court, namely the decisions of the Third Section in Weber and Saravia v. Germany (2006) and of the Fourth Section in Liberty and Ors v. United Kingdom (2008). In both cases, the Court considered legislative frameworks which enable bulk interception of communications. (In the German case, the Court used the term “strategic monitoring,” while it referred to “more general programmes of surveillance” in Liberty.) In the latter case, the Fourth Section sought to depart from earlier European Commission of Human Rights — the court of first instance until 1998 — decisions which developed the requirements of the law in the context of surveillance measures targeted at specific individuals or addresses. It took note of the Weber decision which “was itself concerned with generalized ‘strategic monitoring’, rather than the monitoring of individuals” and concluded that there was no “ground to apply different principles concerning the accessibility and clarity of the rules governing the interception of individual communications, on the one hand, and more general programmes of surveillance, on the other” (para. 63). The Court in Liberty made no mention of any need for any prior or reasonable suspicion at all.
  • In Weber, reasonable suspicion was addressed only at the post-interception stage; that is, under the German system, bulk intercepted data could be transmitted from the German Federal Intelligence Service (BND) to law enforcement authorities without any prior suspicion. The Court found that the transmission of personal data without any specific prior suspicion, “in order to allow the institution of criminal proceedings against those being monitored” constituted a fairly serious interference with individuals’ privacy rights that could only be remedied by safeguards and protections limiting the extent to which such data could be used (para. 125). (In the context of that case, the Court found that Germany’s protections and restrictions were sufficient.) When you compare the language from these three cases, it would appear that the Grand Chamber in Zakharov is reasserting the requirement for individualized reasonable suspicion, including in national security cases, with full knowledge of the nature of surveillance considered by the Court in its two recent bulk interception cases.
  • The requirement of reasonable suspicion is bolstered by the Grand Chamber’s subsequent finding in Zakharov that the interception authorization (e.g., the court order or warrant) “must clearly identify a specific person to be placed under surveillance or a single set of premises as the premises in respect of which the authorisation is ordered. Such identification may be made by names, addresses, telephone numbers or other relevant information” (para. 264). In making this finding, it references paragraphs from Liberty describing the broad nature of the bulk interception warrants under British law. In that case, it was this description that led the Court to find the British legislation possessed insufficient clarity on the scope or manner of exercise of the State’s discretion to intercept communications. In one sense, therefore, the Grand Chamber seems to be retroactively annotating the Fourth Section’s Liberty decision so that it might become consistent with its decision in Zakharov. Without this revision, the Court would otherwise appear to depart to some extent — arguably, purposefully — from both Liberty and Weber.
  • Finally, the Grand Chamber took issue with the direct nature of the access enjoyed by Russian intelligence under the SORM system. The Court noted that this contributed to rendering oversight ineffective, despite the existence of a requirement for prior judicial authorization. Absent an obligation to demonstrate such prior authorization to the communications service provider, the likelihood that the system would be abused through “improper action by a dishonest, negligent or overly zealous official” was quite high (para. 270). Accordingly, “the requirement to show an interception authorisation to the communications service provider before obtaining access to a person’s communications is one of the important safeguards against abuse by the law-enforcement authorities” (para. 269). Again, this requirement arguably creates an unconquerable barrier for a number of modern bulk interception systems, which rely on the use of broad warrants to authorize the installation of, for example, fiber optic cable taps that facilitate the interception of all communications that cross those cables. In the United Kingdom, as the Independent Reviewer of Terrorism Legislation David Anderson revealed in his essential inquiry into British surveillance in 2015, there are only 20 such warrants in existence at any time. Even if these 20 warrants are served on the relevant communications service providers upon the installation of cable taps, the nature of bulk interception deprives this of any genuine meaning, making the safeguard an empty one. Once a tap is installed for the purposes of bulk interception, the provider is cut out of the equation and can no longer play the role the Court found so crucial in Zakharov.
  • The Zakharov case not only levels a serious blow at bulk, untargeted surveillance regimes, it suggests the Grand Chamber’s intention to actively craft European Court of Human Rights jurisprudence in a manner that curtails such regimes. Any suggestion that the Grand Chamber’s decision was issued in ignorance of the technical capabilities or intentions of States and the continued preference for bulk interception systems should be dispelled; the oral argument in the case took place in September 2014, at a time when the Court had already indicated its intention to accord priority to cases arising out of the Snowden revelations. Indeed, the Court referenced such forthcoming cases in the fact sheet it issued after the Zakharov judgment was released. Any remaining doubt is eradicated through an inspection of the multiple references to the Snowden revelations in the judgment itself. In the main judgment, the Court excerpted text from the Director of the European Union Agency for Human Rights discussing Snowden, and in the separate opinion issued by Judge Dedov, he goes so far as to quote Edward Snowden: “With each court victory, with every change in the law, we demonstrate facts are more convincing than fear. As a society, we rediscover that the value of the right is not in what it hides, but in what it protects.”
  • The full implications of the Zakharov decision remain to be seen. However, it is likely we will not have to wait long to know whether the Grand Chamber intends to see the demise of bulk collection schemes; the three UK cases (Big Brother Watch & Ors v. United Kingdom, Bureau of Investigative Journalism & Alice Ross v. United Kingdom, and 10 Human Rights Organisations v. United Kingdom) pending before the Court have been fast-tracked, indicating the Court’s willingness to continue to confront the compliance of bulk collection schemes with human rights law. It is my hope that the approach in Zakharov hints at the Court’s conviction that bulk collection schemes lie beyond the bounds of permissible State surveillance.
Gonzalo San Gil, PhD.

ISP Vows to Protect Users From a Piracy Witch Hunt - TorrentFreak - 0 views

  •  
    " By Ernesto on April 22, 2016 C: 14 Breaking Swedish Internet service provider Bahnhof says it will do everything in its power to prevent copyright holders from threatening its subscribers. The provider is responding to a recent case in which a competing ISP was ordered to expose alleged BitTorrent pirates, reportedly without any thorough evidence."
Paul Merrell

U.S. knocks plans for European communication network | Reuters - 0 views

  • The United States on Friday criticized proposals to build a European communication network to avoid emails and other data passing through the United States, warning that such rules could breach international trade laws. In its annual review of telecommunications trade barriers, the office of the U.S. Trade Representative said impediments to cross-border data flows were a serious and growing concern. It was closely watching new laws in Turkey that led to the blocking of websites and restrictions on personal data, as well as calls in Europe for a local communications network following revelations last year about U.S. digital eavesdropping and surveillance. "Recent proposals from countries within the European Union to create a Europe-only electronic network (dubbed a 'Schengen cloud' by advocates) or to create national-only electronic networks could potentially lead to effective exclusion or discrimination against foreign service suppliers that are directly offering network services, or dependent on them," the USTR said in the report.
  • Germany and France have been discussing ways to build a European network to keep data secure after the U.S. spying scandal. Even German Chancellor Angela Merkel's cell phone was reportedly monitored by American spies. The USTR said proposals by Germany's state-backed Deutsche Telekom to bypass the United States were "draconian" and likely aimed at giving European companies an advantage over their U.S. counterparts. Deutsche Telekom has suggested laws to stop data traveling within continental Europe being routed via Asia or the United States and scrapping the Safe Harbor agreement that allows U.S. companies with European-level privacy standards access to European data. (www.telekom.com/dataprotection) "Any mandatory intra-EU routing may raise questions with respect to compliance with the EU's trade obligations with respect to Internet-enabled services," the USTR said. "Accordingly, USTR will be carefully monitoring the development of any such proposals."
  • U.S. tech companies, the leaders in an e-commerce marketplace estimated to be worth up to $8 trillion a year, have urged the White House to undertake reforms to calm privacy concerns and fend off digital protectionism.
  •  
    High comedy from the office of the U.S. Trade Representative. The USTR's press release is here along with a link to its report. http://www.ustr.gov/about-us/press-office/press-releases/2014/March/USTR-Targets-Telecommunications-Trade-Barriers The USTR is upset because the E.U. is aiming to build a digital communications network that does not route internal digital traffic outside the E.U., to limit the NSA's ability to surveil Europeans' communications. Part of the plan is to build an E.U.-centric cloud that is not susceptible to U.S. court orders. This plan does not, of course, sit well with U.S.-based cloud service providers. Where the comedy comes in is that the USTR is making threats to go to the World Trade Organization to block the E.U. move under the authority of the General Agreement on Trade in Services (GATS). But that treaty provides, in Article XIV, that: "Subject to the requirement that such measures are not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where like conditions prevail, or a disguised restriction on trade in services, nothing in this Agreement shall be construed to prevent the adoption or enforcement by any Member of measures: ... (c) necessary to secure compliance with laws or regulations which are not inconsistent with the provisions of this Agreement including those relating to: ... (ii) the protection of the privacy of individuals in relation to the processing and dissemination of personal data and the protection of confidentiality of individual records and accounts[.]" http://www.wto.org/english/docs_e/legal_e/26-gats_01_e.htm#articleXIV The E.U., in its Treaty on Human Rights, has very strong privacy protections for digital communications. The USTR undoubtedly knows all this, and that the WTO Appellate Panel's judges are of the European mold, sticklers for protection of human rights and most likely do not appreciate being subjects o
Paul Merrell

The FCC is about to kill the free Internet | PandoDaily - 0 views

  • The Federal Communications Commission is poised to ruin the free Internet on a technicality. The group is expected to introduce new net neutrality laws that would allow companies to pay for better access to consumers through deals similar to the one struck by Netflix and Comcast earlier this year. The argument is that those deals don’t technically fall under the net neutrality umbrella, so these new rules won’t apply to them even though they directly affect the Internet. At least the commission is being upfront about its disinterest in protecting the free Internet.
  • The Verge notes that the proposed rules will offer some protections to consumers: The Federal Communications Commission’s proposal for new net neutrality rules will allow internet service providers to charge companies for preferential treatment, effectively undermining the concept of net neutrality, according to The Wall Street Journal. The rules will reportedly allow providers to charge for preferential treatment so long as they offer that treatment to all interested parties on “commercially reasonable” terms, with the FCC deciding whether the terms are reasonable on a case-by-case basis. Providers will not be able to block individual websites, however. The goal of net neutrality rules is to prevent service providers from discriminating between different content, allowing all types of data and all companies’ data to be treated equally. While it appears that outright blocking of individual services won’t be allowed, the Journal reports that some forms of discrimination will be allowed, though that will apparently not include slowing down websites.
  • Re/code summarizes the discontent with these proposed rules: Consumer groups have complained about that plan because they’re worried that Wheeler’s rules may not hold up in court either. A federal appeals court rejected two previous versions of net neutrality rules after finding fault in the FCC’s legal reasoning. During the latest smackdown, however, the court suggested that the FCC had some authority to impose net neutrality rules under a section of the law that gives the agency the ability to regulate the deployment of broadband lines. Internet activists would prefer that the FCC just re-regulate Internet lines under old rules designed for telephone networks, which they say would give the agency clear authority to police Internet lines. Wheeler has rejected that approach for now. Phone and cable companies, including Comcast, AT&T and Verizon, have vociferously fought that idea over the past few years.
  • The Chicago Tribune reports on the process directing these rules: The five-member regulatory commission may vote as soon as May to formally propose the rules and collect public comment on them. Virtually all large Internet service providers, such as Verizon Communications Inc. and Time Warner Cable Inc., have pledged to abide by the principles of open Internet reinforced by these rules. But critics have raised concerns that, without a formal rule, the voluntary pledges could be pulled back over time and also leave the door open for deals that would give unequal treatment to websites or services.
  • I wrote about the European Union’s attempts to defend the free Internet: The legislation is meant to provide access to online services ‘without discrimination, restriction or interference, independent of the sender, receiver, type, content, device, service or application.’ For example, ISPs would be barred from slowing down or ‘throttling’ the speed at which one service’s videos are delivered while allowing other services to stream at normal rates. To bastardize Gertrude Stein: a byte is a byte is a byte. Such restrictions would prevent deals like the one Comcast recently made with Netflix, which will allow the service’s videos to reach consumers faster than before. Comcast is also said to be in talks with Apple for a deal that would allow videos from its new streaming video service to reach consumers faster than videos from competitors. The Federal Communications Commission’s net neutrality laws don’t apply to those deals, according to FCC Chairman Tom Wheeler, so they are allowed to continue despite the threat they pose to the free Internet.
  •  
    Cute. Deliberately not using the authority the court of appeals said it could use to impose net neutrality. So Europe can have net neutrality but not in the U.S.
Paul Merrell

Joint - Dear Colleague Letter: Electronic Book Readers - 0 views

  • U.S. Department of Justice, Civil Rights Division; U.S. Department of Education, Office for Civil Rights
  •  
    June 29, 2010 Dear College or University President: We write to express concern on the part of the Department of Justice and the Department of Education that colleges and universities are using electronic book readers that are not accessible to students who are blind or have low vision and to seek your help in ensuring that this emerging technology is used in classroom settings in a manner that is permissible under federal law. A serious problem with some of these devices is that they lack an accessible text-to-speech function. Requiring use of an emerging technology in a classroom environment when the technology is inaccessible to an entire population of individuals with disabilities - individuals with visual disabilities - is discrimination prohibited by the Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973 (Section 504) unless those individuals are provided accommodations or modifications that permit them to receive all the educational benefits provided by the technology in an equally effective and equally integrated manner. ... The Department of Justice recently entered into settlement agreements with colleges and universities that used the Kindle DX, an inaccessible, electronic book reader, in the classroom as part of a pilot study with Amazon.com, Inc. In summary, the universities agreed not to purchase, require, or recommend use of the Kindle DX, or any other dedicated electronic book reader, unless or until the device is fully accessible to individuals who are blind or have low vision, or the universities provide reasonable accommodation or modification so that a student can acquire the same information, engage in the same interactions, and enjoy the same services as sighted students with substantially equivalent ease of use. The texts of these agreements may be viewed on the Department of Justice's ADA Web site, www.ada.gov. (To find these settlements on www.ada.gov, search for "Kindle.") Consisten
Alyn William

How Twitter Use Has Changed, From 2009 to 2010 - 1 views

  •  
    The number of Twitter users who provide personal information in their bios has more than doubled since last year; 82% of Twitter users now provide a name, compared with only 33% in 2009. And 73% provide location information, compared to 44% in 2009. Forty five percent give a website address, up from 22% a year ago. This seems to indicate that Twitter users are becoming more comfortable disclosing personal information on the site, but more importantly perhaps, that having a profile on Twitter is becoming increasingly important.
Gary Edwards

Official Google Webmaster Central Blog: Introducing Rich Snippets - 0 views

  •  
    Google "Rich Snippets" is a new presentation of HTML snippets that applies Google's algorithms to highlight structured data embedded in web pages. Rich Snippets give end-users convenient summary information about their search results at a glance. Google is currently supporting a very limited subset of data about reviews and people. When searching for a product or service, users can easily see reviews and ratings, and when searching for a person, they'll get help distinguishing between people with the same name. It's a simple change to the display of search results, yet our experiments have shown that users find the new data valuable. For this to work though, both Web-masters and Web-workers have to annotate their pages with structured data in a standard format. Rich Snippets supports microformats and RDFa. Existing Web data can be wrapped with some additional tags to accomplish this. Notice that Google avoids mention of RDF and the W3C's vision of a "Semantic Web" where Web objects are fully described in machine-readable semantics. Over at the WHATWG group, where work on HTML5 continues, Google's Ian Hickson has been fighting RDFa and the Semantic Web in what looks to be an effort to protect the infamous Google algorithms. RDFa provides a means for Web-workers, knowledge-workers, line-of-business managers and document-generating end-users to enrich their HTML+ with machine semantics. The idea being that the document experts creating Web content can best describe to search engine and content management machines the objects-of-information used. The Google algorithms provide a proprietary semantics of this same content. The best solution to the tsunami of content the Web has wrought would be to combine end-user semantic expertise with Google algorithms. Let's hope Google stays the RDFa course and comes around to recognize the full potential of organizing the world's information with the input of content providers. One thing the world desperatel
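To make the annotation step concrete, the sketch below checks which structured data a page actually exposes before a search engine ever sees it. It is only an illustration: the choice of Python with the third-party requests and extruct libraries is mine, not anything mentioned in the post, and the URL is a placeholder.

```python
# Hedged sketch: list the microformat and RDFa annotations embedded in a page.
# Library choice (requests + extruct) and the URL are illustrative assumptions.
import requests
import extruct

URL = "https://example.com/product-review"  # placeholder

html = requests.get(URL, timeout=10).text

# Pull out the structured data that rich-snippet-style features consume.
data = extruct.extract(html, syntaxes=["microformat", "rdfa"])

for syntax, items in data.items():
    print(f"{syntax}: {len(items)} annotation(s) found")
```

If the page is marked up correctly, the extracted items mirror the review or person data the snippet is meant to surface; if nothing comes back, the markup is not in a form structured-data parsers recognize.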
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  •  
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico.  One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200. 
Gonzalo San Gil, PhD.

ISP Provides Free VPN to Protect Customer Privacy | TorrentFreak - 1 views

  •  
    " Andy on November 17, 2014 C: 32 News A leading Swedish Internet service provider is taking a novel approach to protect customer privacy. Faced with a legal requirement to log subscriber activities, from next week ISP Bahnhof will give all of its customers a free, no-logging VPN service" [# ! #Technology #watching for #HumanRights]
Gonzalo San Gil, PhD.

How to speed up your internet connection on Linux | HowtoForge - 0 views

  •  
    "...there isn't a way to transform a slow internet connection into a lighting-speed one if your provider is just not giving you enough bandwidth, no matter what you do. This post is only aiming to provide generic advice on how to make things a little bit better if possible, and if applicable to each case."
Paul Merrell

F.C.C. Backs Opening Net Rules for Debate - NYTimes.com - 0 views

  • On Thursday, the Federal Communications Commission voted 3-2 to open for public debate new rules meant to guarantee an open Internet. Before the plan becomes final, though, the chairman of the commission, Tom Wheeler, will need to convince his colleagues and an array of powerful lobbying groups that the plan follows the principle of net neutrality, the idea that all content running through the Internet’s pipes is treated equally. While the rules are meant to prevent Internet providers from knowingly slowing data, they would allow content providers to pay for a guaranteed fast lane of service. Some opponents of the plan, those considered net neutrality purists, argue that allowing some content to be sent along a fast lane would essentially discriminate against other content.
  • “We are dedicated to protecting and preserving an open Internet,” Mr. Wheeler said immediately before the commission vote. “What we’re dealing with today is a proposal, not a final rule. We are asking for specific comment on different approaches to accomplish the same goal, an open Internet.”
  • Mr. Wheeler argued on Thursday that the proposal did not allow a fast lane. But the proposed rules do not address the connection between an Internet service provider, which sells a connection to consumers, and the operators of backbone transport networks that connect various parts of the Internet’s central plumbing. That essentially means that as long as an Internet service provider like Comcast or Verizon does not slow the service that a consumer buys, the provider can give faster service to a company that pays to get its content to consumers unimpeded.
  • The plan will be open for comment for four months, beginning immediately.
  • The public will have until July 15 to submit initial comments on the proposal to the commission, and until Sept. 10 to file comments replying to the initial discussions.
  •  
    I'll need to read the proposed rule, but this doesn't sound good. The FCC majority tries to spin this as options still being open, but I don't recall ever seeing formal regulations changed substantially from their proposed form. If there were to be substantial change, another proposal and comment period would be likely. The public cannot comment on what has not been proposed, so substantial departure from the proposal, absent a new proposal and comment period, would offend basic principles of public notice and comment rulemaking under the Administrative Procedure Act. The proverbial elephant in the room that the press hasn't picked up on yet is the fight that is going on behind the scenes in the Dept. of Justice. If the Antitrust Division gets its way, DoJ's public comments on the proposed rule could blow this show out of the water. The ISPs are regulated utility monopolies in vast areas of the U.S. with market consolidation at or near the limits of what the anti-trust folk will tolerate. And leveraging one monopoly (service to subscribers) to impose another (fees for internet-based businesses to gain high-speed access) is directly counter to the Sherman Act's section 2.   http://www.law.cornell.edu/uscode/text/15/2
Paul Merrell

FCC 'very much' eyeing Web rules shakeup | TheHill - 0 views

  • The head of the Federal Communications Commission was quick to reassure lawmakers on Wednesday that his agency is seriously considering using the authority it has to regulate phone lines on Internet service providers. “Title II is very much on the table,” Chairman Tom Wheeler said during a House Small Business Committee hearing on Wednesday, referring to the section of the Communications Act that some have urged the agency to turn to for stronger rules. “I will assure you that Title II is very much a topic of conversation and on the table and something that we’ve specially asked for comment on,” he added. In its controversial proposal on net neutrality — the notion that Internet service companies like Comcast or Cox should be banned from slowing or blocking access to some websites — the agency specifically asked whether it should reclassify broadband Internet as a “telecommunications service” and open it up to Title II rules, instead of an “information service.”
  • The plan Wheeler proposed earlier this year would not rely on that authority, but would instead allow for companies to make “commercially reasonable” deals to speed up users’ service on a particular website. Critics have said that would lead to “fast lanes” on the Internet, with quicker speeds for wealthy companies and slower service everywhere else.Supporters of strong rules have told the FCC that the stronger legal backing is the best way to prevent companies from slowing users’ service or blocking their access to particular websites.Critics, however, have said that the rules were designed for telephone monopolies and would lead to utility-style regulation on the Internet. In their comments to the FCC, cable companies have said that reclassifying broadband service to use the tough rules would likely be a violation of the law, which could tie the new rules up in court for years to come.
  •  
    Of course Comcast et ilk don't want Title II regulation. "Hey, just because we've divvied up the turf so that we've got geographical monopolies doesn't mean we shouldn't be able to leverage our monopolies into new monopolies." But the big cable companies got where they are by buying up community-granted and regulated monopoly utility companies. As part of consolidating those markets, the soon-to-be-ginormous cable companies lobbied to get community regulation weakened, and here we are at the FCC. The cable companies now act as ISPs too, which is straightforward telecommunications provider service, and these guys want to be able to charge the big internet content companies a premium for fast service after their ISP customers have already paid for fast service? So they can slow down the competition for their own content services. Heck, yes, FCC. No one forced Comcast and crew to become telecommunications providers. Make 'em live with telecommunications regulation like all the other telcos. They are government-created monopolies and they should be regulated as such.
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project and provide a complete port to WiNE to any software provider requesting a port. I wanted to break the monopolist's hold on the Windows Productivity Environment and the hundreds of millions of investment dollars and time that had been spent on application development forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret APIs even as the OS continued to evolve. Maybe. But a full disclosure of the APIs coupled with the community service "port to WiNE" requirement might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly into iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform, and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet, and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but was whaled on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive and cost-effective, process-by-process value-added transition to a Cloud Productivity Platform, and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in peace, Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on the one side of the Windows OS, and provisioning an open application developers layer on the other using open and totally transparent APIs. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS APIs would be developed privately, separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah. There was lots of anti-IBM, evil-empire stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows APIs. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first Contact Management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
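To make that contrast concrete, here is a rough sketch of the same short passage in a DocBook-flavoured encoding and in plain XHTML. The element choices are illustrative only, not drawn from any particular schema customization used in the project the article describes:

    <!-- DocBook-style markup: the tags say what each piece of content is -->
    <section>
      <title>Why start with the Web?</title>
      <epigraph>
        <para>An opening quotation.</para>
      </epigraph>
      <para>Ordinary body text of the section.</para>
    </section>

    <!-- Plain XHTML: the same content, but every block is just a heading or a paragraph -->
    <div>
      <h2>Why start with the Web?</h2>
      <p>An opening quotation.</p>
      <p>Ordinary body text of the section.</p>
    </div>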
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
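A minimal sketch of that kind of class-based extension; the class names here (epigraph, chapter-summary) are invented for illustration, not taken from any established microformat vocabulary:

    <!-- To a browser these are ordinary paragraphs; to a stylesheet or an XSLT
         transform, the class values carry the publisher's own semantics. -->
    <div class="chapter">
      <p class="epigraph">An opening quotation, distinguished only by its class.</p>
      <p>An ordinary body paragraph, tagged as nothing more than a paragraph.</p>
      <p class="chapter-summary">A closing summary, findable and recombinable by any
         tool that understands the convention.</p>
    </div>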
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
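As a rough illustration of that anatomy (these are not the actual files from the project described here), an ePub archive contains a mimetype file, a META-INF/container.xml that points at a package document, and the XHTML content itself. A skeletal container.xml and package document might look like this, with placeholder titles and identifiers:

    <!-- META-INF/container.xml: tells the reading system where the package file lives -->
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>

    <!-- content.opf (a separate file in the archive): metadata, manifest, reading order -->
    <package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Placeholder Title</dc:title>
        <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
        <dc:language>en</dc:language>
      </metadata>
      <manifest>
        <item id="ch01" href="chapter01.xhtml" media-type="application/xhtml+xml"/>
        <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
      </manifest>
      <spine toc="ncx">
        <itemref idref="ch01"/>
      </spine>
    </package>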
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
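For a sense of what the unpacked content looks like, here is a rough sketch of the paragraph markup InDesign's story files use (ICML files use the same vocabulary); the style name is a placeholder for whatever the template actually defines:

    <!-- One styled paragraph in InDesign's story markup: a paragraph style range
         containing a character style range containing the text content. -->
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
        <Content>Ordinary body paragraph, ready to be placed into a template.</Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>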
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
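A before-and-after sketch of the kind of cleanup described, with an invented class name standing in for whatever the Digital Editions export actually emits:

    <!-- Before: a bulleted list flattened into class-tagged paragraphs -->
    <p class="bullet-item">First point</p>
    <p class="bullet-item">Second point</p>

    <!-- After: the same content as structural XHTML, with no dependence on any
         particular software to interpret the class names -->
    <ul>
      <li>First point</li>
      <li>Second point</li>
    </ul>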
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
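The article does not reproduce the actual 500-line script, so the fragment below is only a minimal sketch of the general approach, assuming the paragraph/character-range vocabulary shown earlier and a placeholder paragraph style name. It could be run with an XSLT processor such as xsltproc (for example, xsltproc xhtml2icml.xsl chapter.xhtml > chapter.icml, where both file names are placeholders); a real ICML file also needs wrapper elements and processing instructions that InDesign expects, which are omitted here:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch: map each XHTML body paragraph to an ICML-style
         paragraph range. Inline markup (emphasis, links) is flattened to
         plain text; a fuller script would handle it with extra templates. -->
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:xh="http://www.w3.org/1999/xhtml">

      <xsl:output method="xml" indent="yes"/>

      <!-- One story per input document -->
      <xsl:template match="/">
        <Story>
          <xsl:apply-templates select="//xh:body/xh:p"/>
        </Story>
      </xsl:template>

      <!-- Each XHTML paragraph becomes a styled paragraph range -->
      <xsl:template match="xh:p">
        <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
          <CharacterStyleRange AppliedCharacterStyle="CharacterStyle/$ID/[No character style]">
            <Content><xsl:value-of select="normalize-space(.)"/></Content>
            <Br/>
          </CharacterStyleRange>
        </ParagraphStyleRange>
      </xsl:template>

    </xsl:stylesheet>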
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files. Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML (InDesign Markup Language). The important point though is that XHTML is a browser-native XML document type, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also based on SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to