
Home / Future of the Web / Group items tagged: face


Gonzalo San Gil, PhD.

Rightscorp Threatens Every ISP in the United States - TorrentFreak [# ! Link Note...] - 1 views

  •  
    "Andy on August 12, 2016 - Following a court win by its client BMG over Cox Communications this week, Rightscorp has issued an unprecedented warning to every ISP in the United States today. Boasting a five-year trove of infringement data against Internet users, Rightscorp warned ISPs that they can either cooperate or face the consequences."
Paul Merrell

'Pardon Snowden' Campaign Takes Off As Sanders, Ellsberg, And Others Join - 0 views

  • Prominent activists, lawmakers, artists, academics, and other leading voices in civil society, including Sen. Bernie Sanders (I-Vt.), are joining the campaign to get a pardon for National Security Agency (NSA) whistleblower Edward Snowden. “The information disclosed by Edward Snowden has allowed Congress and the American people to understand the degree to which the NSA has abused its authority and violated our constitutional rights,” Sanders wrote for the Guardian on Wednesday. “Now we must learn from the troubling revelations Mr. Snowden brought to light. Our intelligence and law enforcement agencies must be given the tools they need to protect us, but that can be done in a way that does not sacrifice our rights.” Pentagon Papers whistleblower Daniel Ellsberg, who co-founded the public interest journalism advocacy group Freedom of the Press Foundation, where Snowden is a board member, also wrote, “Ed Snowden should be freed of the legal burden hanging over him. They should remove the indictment, pardon him if that’s the way to do it, so that he is no longer facing prison.” Snowden faces charges under the Espionage Act after he released classified NSA files to media outlets in 2013 exposing the U.S. government’s global mass surveillance operations. He fled to Hong Kong, then Russia, where he has been living under political asylum for the past three years.
  • The Pardon Snowden campaign, supported by the American Civil Liberties Union (ACLU), Amnesty International, and Human Rights Watch (HRW), urges people around the world to write to Obama throughout his last four months in the White House.
  •  
    If you want to take part, the action page is at https://www.pardonsnowden.org/
Paul Merrell

Google, ACLU call to delay government hacking rule | TheHill - 0 views

  • A coalition of 26 organizations, including the American Civil Liberties Union (ACLU) and Google, signed a letter Monday asking lawmakers to delay a measure that would expand the government’s hacking authority. The letter asks Senate Majority Leader Mitch McConnell (R-Ky.), Minority Leader Harry Reid (D-Nev.), House Speaker Paul Ryan (R-Wis.), and House Minority Leader Nancy Pelosi (D-Calif.) to further review proposed changes to Rule 41 and delay their implementation until July 1, 2017. The Department of Justice’s alterations to the rule would allow law enforcement to use a single warrant to hack multiple devices beyond the jurisdiction in which the warrant was issued. The FBI used such a tactic to apprehend users of the child pornography dark website Playpen: it took control of the site for two weeks and, after securing two warrants, installed malware on Playpen users’ computers to acquire their identities. But the signatories of the letter — which include advocacy groups, companies and trade associations — are raising questions about the effects of the change.
  •  
    ".. no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." Fourth Amendment. The changes to Rule 41 ignore the particularity requirement by allowing the government to search computers that are not particularly identified, in multiple locations that are not particularly identified; in other words, a general warrant of precisely the kind the particularity requirement was adopted to outlaw.
Gonzalo San Gil, PhD.

Ten Years in Jail For UK Internet Pirates: How the New Bill Reads - TorrentFreak - 0 views

  •  
    "By Andy on December 4, 2016 - The Digital Economy Bill is currently at the report stage. It hasn't yet become law and could still be amended. However, as things stand those who upload any amount of infringing content to the Internet could face up to 10 years in jail. With the latest bill now published, we take a look at how file-sharers could be affected."
Paul Merrell

What to Do About Lawless Government Hacking and the Weakening of Digital Security | Ele... - 0 views

  • In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. To give a simple example, even when chasing a fleeing murder suspect, the police have a duty not to endanger bystanders. The government should pay the same care to our safety in pursuing threats online, but right now we don’t have clear, enforceable rules for government activities like hacking and "digital sabotage." And this is no abstract question—these actions increasingly endanger everyone’s security.
  • The problem became especially clear this year during the San Bernardino case, involving the FBI’s demand that Apple rewrite its iOS operating system to defeat security features on a locked iPhone. Ultimately the FBI exploited an existing vulnerability in iOS and accessed the contents of the phone with the help of an "outside party." Then, with no public process or discussion of the tradeoffs involved, the government refused to tell Apple about the flaw. Despite the obvious fact that the security of the computers and networks we all use is both collective and interwoven—other iPhones used by millions of innocent people presumably have the same vulnerability—the government chose to withhold information Apple could have used to improve the security of its phones. Other examples include intelligence activities like Stuxnet and Bullrun, and law enforcement investigations like the FBI’s mass use of malware against Tor users engaged in criminal behavior. These activities are often disproportionate to stopping legitimate threats, resulting in unpatched software for millions of innocent users, overbroad surveillance, and other collateral effects.  That’s why we’re working on a positive agenda to confront governmental threats to digital security. Put more directly, we’re calling on lawyers, advocates, technologists, and the public to demand a public discussion of whether, when, and how governments can be empowered to break into our computers, phones, and other devices; sabotage and subvert basic security protocols; and stockpile and exploit software flaws and vulnerabilities.  
  • Smart people in academia and elsewhere have been thinking and writing about these issues for years. But it’s time to take the next step and make clear, public rules that carry the force of law to ensure that the government weighs the tradeoffs and reaches the right decisions. This long post outlines some of the things that can be done. It frames the issue, then describes some of the key areas where EFF is already pursuing this agenda—in particular formalizing the rules for disclosing vulnerabilities and setting out narrow limits for the use of government malware. Finally it lays out where we think the debate should go from here.   
  •  
    "In our society, the rule of law sets limits on what government can and cannot do, no matter how important its goals. "
  •  
    It's not often that I disagree with EFF's positions, but on this one I do. The government should be prohibited from exploiting computer vulnerabilities and should be required to immediately report all vulnerabilities discovered to the relevant developers of hardware or software. It's been one long slippery slope since the Supreme Court first approved wiretapping in Olmstead v. United States, 277 US 438 (1928), https://goo.gl/NJevsr. Left undecided to this day is whether we have a right to whisper privately, a right that is undeniable. All communications intercept cases since Olmstead fly directly in the face of that right.
Paul Merrell

Rand Paul Is Right: NSA Routinely Monitors Americans' Communications Without Warrants - 0 views

  • On Sunday’s Face the Nation, Sen. Rand Paul was asked about President Trump’s accusation that President Obama ordered the NSA to wiretap his calls. The Kentucky senator expressed skepticism about the mechanics of Trump’s specific charge, saying: “I doubt that Trump was a target directly of any kind of eavesdropping.” But he then made a broader and more crucial point about how the U.S. government spies on Americans’ communications — a point that is deliberately obscured and concealed by U.S. government defenders. Paul explained how the NSA routinely and deliberately spies on Americans’ communications — listens to their calls and reads their emails — without a judicial warrant of any kind: The way it works is, the FISA court, through Section 702, wiretaps foreigners and then [NSA] listens to Americans. It is a backdoor search of Americans. And because they have so much data, they can tap — type Donald Trump into their vast resources of people they are tapping overseas, and they get all of his phone calls. And so they did this to President Obama. They — 1,227 times eavesdrops on President Obama’s phone calls. Then they mask him. But here is the problem. And General Hayden said this the other day. He said even low-level employees can unmask the caller. That is probably what happened to Flynn. They are not targeting Americans. They are targeting foreigners. But they are doing it purposefully to get to Americans.
  • Paul’s explanation is absolutely correct. That the NSA is empowered to spy on Americans’ communications without a warrant — in direct contravention of the core Fourth Amendment guarantee that “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause” — is the dirty little secret of the U.S. Surveillance State. As I documented at the height of the controversy over the Snowden reporting, top government officials — including President Obama — constantly deceived (and still deceive) the public by falsely telling them that their communications cannot be monitored without a warrant. Responding to the furor created over the first set of Snowden reports about domestic spying, Obama sought to reassure Americans by telling Charlie Rose: “What I can say unequivocally is that if you are a U.S. person, the NSA cannot listen to your telephone calls … by law and by rule, and unless they … go to a court, and obtain a warrant, and seek probable cause.” The right-wing chairman of the House Intelligence Committee at the time, GOP Rep. Mike Rogers, echoed Obama, telling CNN the NSA “is not listening to Americans’ phone calls. If it did, it is illegal. It is breaking the law.” Those statements are categorically false. A key purpose of the new 2008 FISA law — which then-Senator Obama voted for during the 2008 general election after breaking his primary-race promise to filibuster it — was to legalize the once-controversial Bush/Cheney warrantless eavesdropping program, which the New York Times won a Pulitzer Prize for exposing in 2005. The crux of the Bush/Cheney controversy was that they ordered NSA to listen to Americans’ international telephone calls without warrants — which was illegal at the time — and the 2008 law purported to make that type of domestic warrantless spying legal.
Gonzalo San Gil, PhD.

How to Build Your Twitter Brand in 60 Days - 0 views

  •  
    "One of my favorite things in social media is coming across new faces and new brands. I am intrigued with how they are using Twitter, and if they get it much faster than I did. I didn't know how to tweet effectively for my first six months on Twitter back in 2009. Yes, it's true. However, I recently sat down with businesswoman Jody Barrett. She joined Twitter last summer, but didn't step up efforts to grow her account until December 1st, 2013. This woman is funny, direct, and very aware of the power of social media. She gets it, big time, and it didn't take her six months to understand."
Gonzalo San Gil, PhD.

Should anyone worry about the US ceding control of the Internet? - 0 views

  •  
    "When domestic and international organizations with key stakes in governing the Internet meet in Singapore starting Sunday, they'll face a major task that begins with the question: What's next? "
Gonzalo San Gil, PhD.

The original proposal of the WWW, HTMLized - 1 views

  •  
    [... The problems of information loss may be particularly acute at CERN, but in this case (as in certain others), CERN is a model in miniature of the rest of world in a few years time. CERN meets now some problems which the rest of the world will have to face soon. In 10 years, there may be many commercial solutions to the problems above, while today we need something to allow us to continue. ...]
Gonzalo San Gil, PhD.

Guide to Creative Commons » OAPEN-UK - 1 views

  •  
    "An output of the OAPEN-UK project, this guide explores concerns expressed in public evidence given by researchers, learned societies and publishers to inquiries in the House of Commons and the House of Lords, and also concerns expressed by researchers working with the OAPEN-UK project. We have also identified a number of common questions and have drafted answers, which have been checked by experts including Creative Commons. The guide has been edited by active researchers, to make sure that it is relevant and useful to academics faced with making decisions about publishing."
Paul Merrell

U.S. knocks plans for European communication network | Reuters - 0 views

  • The United States on Friday criticized proposals to build a European communication network to avoid emails and other data passing through the United States, warning that such rules could breach international trade laws. In its annual review of telecommunications trade barriers, the office of the U.S. Trade Representative said impediments to cross-border data flows were a serious and growing concern. It was closely watching new laws in Turkey that led to the blocking of websites and restrictions on personal data, as well as calls in Europe for a local communications network following revelations last year about U.S. digital eavesdropping and surveillance. "Recent proposals from countries within the European Union to create a Europe-only electronic network (dubbed a 'Schengen cloud' by advocates) or to create national-only electronic networks could potentially lead to effective exclusion or discrimination against foreign service suppliers that are directly offering network services, or dependent on them," the USTR said in the report.
  • Germany and France have been discussing ways to build a European network to keep data secure after the U.S. spying scandal. Even German Chancellor Angela Merkel's cell phone was reportedly monitored by American spies. The USTR said proposals by Germany's state-backed Deutsche Telekom to bypass the United States were "draconian" and likely aimed at giving European companies an advantage over their U.S. counterparts. Deutsche Telekom has suggested laws to stop data traveling within continental Europe being routed via Asia or the United States, and scrapping the Safe Harbor agreement that allows U.S. companies with European-level privacy standards access to European data. (www.telekom.com/dataprotection) "Any mandatory intra-EU routing may raise questions with respect to compliance with the EU's trade obligations with respect to Internet-enabled services," the USTR said. "Accordingly, USTR will be carefully monitoring the development of any such proposals."
  • U.S. tech companies, the leaders in an e-commerce marketplace estimated to be worth up to $8 trillion a year, have urged the White House to undertake reforms to calm privacy concerns and fend off digital protectionism.
  •  
    High comedy from the office of the U.S. Trade Representative. The USTR's press release is here along with a link to its report. http://www.ustr.gov/about-us/press-office/press-releases/2014/March/USTR-Targets-Telecommunications-Trade-Barriers The USTR is upset because the E.U. is aiming to build a digital communications network that does not route internal digital traffic outside the E.U., to limit the NSA's ability to surveil Europeans' communications. Part of the plan is to build an E.U.-centric cloud that is not susceptible to U.S. court orders. This plan does not, of course, sit well with U.S.-based cloud service providers. Where the comedy comes in is that the USTR is making threats to go to the World Trade Organization to block the E.U. move under the authority of the General Agreement on Trade in Services (GATS). But that treaty provides, in article XIV, that: "Subject to the requirement that such measures are not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between countries where like conditions prevail, or a disguised restriction on trade in services, nothing in this Agreement shall be construed to prevent the adoption or enforcement by any Member of measures: ... (c) necessary to secure compliance with laws or regulations which are not inconsistent with the provisions of this Agreement including those relating to: ... (ii) the protection of the privacy of individuals in relation to the processing and dissemination of personal data and the protection of confidentiality of individual records and accounts[.]" http://www.wto.org/english/docs_e/legal_e/26-gats_01_e.htm#articleXIV The E.U., in its Treaty on Human Rights, has very strong privacy protections for digital communications. The USTR undoubtedly knows all this, and that the WTO Appellate Panel's judges are of the European mold, sticklers for protection of human rights and most likely do not appreciate being subjects o
Gonzalo San Gil, PhD.

Connected Continent: a single telecom market for growth & jobs - Digital Agenda for Eur... - 0 views

  •  
    "In the face of the deep crisis affecting its economy and society, Europe needs to tap into new sources of growth in areas that will reinforce its competitiveness, drive innovation and create new job opportunities."
Paul Merrell

Internet Giants Erect Barriers to Spy Agencies - NYTimes.com - 0 views

  • As fast as it can, Google is sealing up cracks in its systems that Edward J. Snowden revealed the N.S.A. had brilliantly exploited. It is encrypting more data as it moves among its servers and helping customers encode their own emails. Facebook, Microsoft and Yahoo are taking similar steps.
  • After years of cooperating with the government, the immediate goal now is to thwart Washington — as well as Beijing and Moscow. The strategy is also intended to preserve business overseas in places like Brazil and Germany that have threatened to entrust data only to local providers. Google, for example, is laying its own fiber optic cable under the world’s oceans, a project that began as an effort to cut costs and extend its influence, but now has an added purpose: to assure that the company will have more control over the movement of its customer data.
  • A year after Mr. Snowden’s revelations, the era of quiet cooperation is over. Telecommunications companies say they are denying requests to volunteer data not covered by existing law. A.T.&T., Verizon and others say that compared with a year ago, they are far more reluctant to cooperate with the United States government in “gray areas” where there is no explicit requirement for a legal warrant.
  • Eric Grosse, Google’s security chief, suggested in an interview that the N.S.A.'s own behavior invited the new arms race. “I am willing to help on the purely defensive side of things,” he said, referring to Washington’s efforts to enlist Silicon Valley in cybersecurity efforts. “But signals intercept is totally off the table,” he said, referring to national intelligence gathering. “No hard feelings, but my job is to make their job hard,” he added.
  • In Washington, officials acknowledge that covert programs are now far harder to execute because American technology companies, fearful of losing international business, are hardening their networks and saying no to requests for the kind of help they once quietly provided. Robert S. Litt, the general counsel of the Office of the Director of National Intelligence, which oversees all 17 American spy agencies, said on Wednesday that it was “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily” with American spy agencies.
  • Many point to an episode in 2012, when Russian security researchers uncovered a state espionage tool, Flame, on Iranian computers. Flame, like the Stuxnet worm, is believed to have been produced at least in part by American intelligence agencies. It was created by exploiting a previously unknown flaw in Microsoft’s operating systems. Companies argue that others could have later taken advantage of this defect. Worried that such an episode undercuts confidence in its wares, Microsoft is now fully encrypting all its products, including Hotmail and Outlook.com, by the end of this year with 2,048-bit encryption, a stronger protection that would take a government far longer to crack. The software is protected by encryption both when it is in data centers and when data is being sent over the Internet, said Bradford L. Smith, the company’s general counsel.
  • Mr. Smith also said the company was setting up “transparency centers” abroad so that technical experts of foreign governments could come in and inspect Microsoft’s proprietary source code. That will allow foreign governments to check to make sure there are no “back doors” that would permit snooping by United States intelligence agencies. The first such center is being set up in Brussels. Microsoft has also pushed back harder in court. In a Seattle case, the government issued a “national security letter” to compel Microsoft to turn over data about a customer, along with a gag order to prevent Microsoft from telling the customer it had been compelled to provide its communications to government officials. Microsoft challenged the gag order as violating the First Amendment. The government backed down.
  • Hardware firms like Cisco, which makes routers and switches, have found their products a frequent subject of Mr. Snowden’s disclosures, and their business has declined steadily in places like Asia, Brazil and Europe over the last year. The company is still struggling to convince foreign customers that their networks are safe from hackers — and free of “back doors” installed by the N.S.A. The frustration, companies here say, is that it is nearly impossible to prove that their systems are N.S.A.-proof.
  • In one slide from the disclosures, N.S.A. analysts pointed to a sweet spot inside Google’s data centers, where they could catch traffic in unencrypted form. Next to a quickly drawn smiley face, an N.S.A. analyst, referring to an acronym for a common layer of protection, had noted, “SSL added and removed here!”
  • Facebook and Yahoo have also been encrypting traffic among their internal servers. And Facebook, Google and Microsoft have been moving to more strongly encrypt consumer traffic with so-called Perfect Forward Secrecy, specifically devised to make it more labor intensive for the N.S.A. or anyone to read stored encrypted communications. One of the biggest indirect consequences from the Snowden revelations, technology executives say, has been the surge in demands from foreign governments that saw what kind of access to user information the N.S.A. received — voluntarily or surreptitiously. Now they want the same.
  • The latest move in the war between intelligence agencies and technology companies arrived this week, in the form of a new Google encryption tool. The company released a user-friendly email encryption method to replace the clunky and often mistake-prone encryption schemes the N.S.A. has readily exploited. But the best part of the tool was buried in Google’s code, which included a jab at the N.S.A.'s smiley-face slide. The code included the phrase: “ssl-added-and-removed-here-; - )”
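The Perfect Forward Secrecy mentioned above boils down to ephemeral key exchange: each session derives its key from throwaway secrets, so recorded traffic cannot be decrypted later even if a long-term key leaks. A minimal sketch of the idea in Python; the tiny prime, the generator, and the exchange itself are illustrative stand-ins and not anything these companies actually deploy (real systems use vetted Diffie-Hellman groups or elliptic curves inside TLS):

```python
import secrets

# Toy ephemeral Diffie-Hellman illustrating Perfect Forward Secrecy:
# every session uses fresh one-time exponents, and discarding them after
# the session is what keeps past traffic safe. The modulus below is a
# tiny stand-in for a real 2048-bit group; nothing here is secure.
P = 0xFFFFFFFB  # 2**32 - 5, a small prime used only for illustration
G = 5           # generator for the toy group

def session_key() -> int:
    """Run one ephemeral key exchange and return the shared secret."""
    a = secrets.randbelow(P - 2) + 1  # Alice's one-time secret exponent
    b = secrets.randbelow(P - 2) + 1  # Bob's one-time secret exponent
    A = pow(G, a, P)                  # public value Alice sends
    B = pow(G, b, P)                  # public value Bob sends
    shared_alice = pow(B, a, P)
    shared_bob = pow(A, b, P)
    assert shared_alice == shared_bob  # both sides derive the same key
    return shared_alice                # a and b are then thrown away

# Successive sessions use independent exponents, hence independent keys.
k1, k2 = session_key(), session_key()
print(k1, k2)
```

Because no long-lived secret ever enters the computation, compromising a server's signing key later yields nothing that decrypts these sessions; that is the property the article says the N.S.A. finds labor intensive to defeat.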
Gonzalo San Gil, PhD.

MPAA Complained So We Seized Your Funds, PayPal Says | TorrentFreak - 1 views

    • Gonzalo San Gil, PhD.
       
      # ! ... and what happens with the hundreds of -non-IP-Infringement-related projects...?
  •  
    [Andy on May 17, 2015 - Developers considering adding a torrent search engine to their portfolio should proceed with caution, especially if they value their income streams. Following a complaint from the MPAA, one developer is now facing a six-month wait for PayPal to unfreeze thousands in funds, the vast majority related to other projects. ...]
Gonzalo San Gil, PhD.

Grooveshark Faces a $736,050,000.00 Hammer… | Digital Music News - 0 views

    • Gonzalo San Gil, PhD.
       
      What nonsense, to call a (supposedly) 'democratic' 'IP Protection' action a 'legal jihad' - with all its negative connotations... # ! :( (Another identification of sharing with terrorism... not The Faith...)
  •  
    When Universal Music Group declared 'legal jihad' against Grooveshark, it turns out they actually meant it. Now, after flattening Grooveshark and its principals on grounds of willful copyright infringement, the parties enter the phase of figuring out just how brutal this punishment will be. [# ! It's just a matter of culture flow -thought, socialization, VALUES- control...]
Paul Merrell

Republicans seek fast-track repeal of net neutrality | Ars Technica - 0 views

  • Republicans in Congress yesterday unveiled a new plan to fast track repeal of the Federal Communications Commission's net neutrality rules. Introduced by Rep. Doug Collins (R-Ga.) and 14 Republican co-sponsors, the "Resolution of Disapproval" would use Congress' fast track powers under the Congressional Review Act to cancel the FCC's new rules.
  • Saying the resolution "would require only a simple Senate majority to pass under special procedural rules of the Congressional Review Act," Collins' announcement called it "the quickest way to stop heavy-handed agency regulations that would slow Internet speeds, increase consumer prices and hamper infrastructure development, especially in his Northeast Georgia district." Republicans can use this method to bypass Democratic opposition in the Senate by requiring just a simple majority rather than 60 votes to overcome a filibuster, but "it would still face an almost certain veto from President Obama," National Journal wrote. "Other attempts to fast-track repeals of regulations in the past have largely been unsuccessful." This isn't the only Republican effort to overturn the FCC's net neutrality rules. Another, titled the "Internet Freedom Act," would wipe out the new net neutrality regime. Other Republican proposals would enforce some form of net neutrality rules while limiting the FCC's power to regulate broadband.
  • The FCC's rules also face lawsuits from industry consortiums that represent broadband providers. USTelecom filed suit yesterday just after the publication of the rules in the Federal Register. Today, the CTIA Wireless Association, National Cable & Telecommunications Association (NCTA), and American Cable Association (ACA) all filed lawsuits to overturn the FCC's Open Internet Order. The CTIA and NCTA are the most prominent trade groups representing the cable and wireless industries. The ACA, which represents smaller providers, said it supports net neutrality rules but opposes the FCC's decision to reclassify broadband as a common carrier service. However, a previous court decision ruled that the FCC could not impose the rules without reclassifying broadband.
Gary Edwards

Be Prepared: 5 Android Apps You Must Have in Case of Emergency - Best Android Apps - 2 views

  •  
    "WHEN CALAMITY STRIKES - The unpredictable nature of most emergencies often puts us in dangerous situations. But today's technology has come far enough to inform, alert, secure and guide us through dangerous situations, if we happen to face any. Emergencies may come in the form of environmental or circumstantial problems. Therefore it is always better to accompany yourself with at least 5 apps that can help you in case of any emergency. Here is the list of top 5 emergency apps for Android."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
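  • The “plays seamlessly” claim is easy to verify: because valid XHTML is well-formed XML, any generic XML parser can consume it with no HTML-specific tooling. A minimal check using Python’s standard library (the sample markup here is our own, purely for illustration):

```python
import xml.etree.ElementTree as ET

# A small, valid XHTML document. Because it is well-formed XML, the
# stock XML parser loads it; no browser or HTML library is involved.
xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Sample Chapter</title></head>
  <body><p>A paragraph is just an XML element.</p></body>
</html>"""

root = ET.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}
title = root.find(".//x:title", ns).text
print(title)  # -> Sample Chapter
```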
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
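  • As a sketch of that extensibility, here is how a publisher-defined class vocabulary (the names “epigraph” and “summary” are hypothetical, not from the article) can be layered onto plain XHTML paragraphs and then queried with ordinary XML tooling:

```python
import xml.etree.ElementTree as ET

# Plain XHTML paragraphs, extended microformat-style with a
# hypothetical publisher vocabulary in the "class" attribute.
chapter = """<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">All the world's a stage.</p>
  <p>Ordinary body text.</p>
  <p class="summary">This chapter argued that tagging pays off.</p>
</div>"""

ns = {"x": "http://www.w3.org/1999/xhtml"}
root = ET.fromstring(chapter)

# The semantic layer is recoverable with ordinary XML queries.
epigraphs = [p.text for p in root.findall("x:p", ns)
             if p.get("class") == "epigraph"]
print(epigraphs)  # -> ["All the world's a stage."]
```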
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
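  • The kind of search-and-replace clean-up described here can be sketched as follows; the class name “bullet-item” is an assumption for illustration, since the article does not name the exact classes InDesign emits:

```python
import re

# Presentation-oriented export: list items tagged as paragraphs with a
# class attribute ("bullet-item" is a made-up name for illustration).
exported = (
    '<p class="bullet-item">First point</p>\n'
    '<p class="bullet-item">Second point</p>'
)

# Search-and-replace the pseudo-list into a proper XHTML list.
items = re.sub(r'<p class="bullet-item">(.*?)</p>', r'<li>\1</li>', exported)
cleaned = "<ul>\n" + items + "\n</ul>"
print(cleaned)
```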
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
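  • A toy version of that transformation logic, written in Python rather than XSLT for brevity: each XHTML block element becomes an InCopy-style ParagraphStyleRange whose style name the InDesign template would define. Real ICML requires additional wrapper elements and attributes; this only illustrates the tag-for-tag character of the mapping, and the style names are assumptions:

```python
import xml.etree.ElementTree as ET

# Assumed mapping from XHTML tags to template-defined paragraph styles.
STYLE_MAP = {"h1": "Heading1", "p": "Body"}

def to_icml_story(xhtml_fragment):
    """Wrap each XHTML block element in an InCopy-style range."""
    root = ET.fromstring(xhtml_fragment)
    story = ET.Element("Story")
    for el in root:
        rng = ET.SubElement(
            story, "ParagraphStyleRange",
            AppliedParagraphStyle="ParagraphStyle/" + STYLE_MAP.get(el.tag, "Body"))
        char = ET.SubElement(rng, "CharacterStyleRange")
        ET.SubElement(char, "Content").text = el.text
    return ET.tostring(story, encoding="unicode")

icml = to_icml_story("<body><h1>Chapter One</h1><p>It begins.</p></body>")
print(icml)
```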
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
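  • The “XHTML in a wrapper” point can be demonstrated with nothing but the standard library: an ePub is a zip archive whose first entry is an uncompressed mimetype file, plus a META-INF/container.xml pointing at an OPF manifest, plus the XHTML content itself. A real ePub also needs a complete OPF package file and a table of contents; this sketch shows only the wrapping step:

```python
import io
import zipfile

container = """<?xml version="1.0"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

chapter = """<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Chapter 1</title></head>
<body><p>Web-native book content.</p></body>
</html>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    # The mimetype entry must come first and be stored uncompressed.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml", container)
    z.writestr("chapter1.xhtml", chapter)

names = zipfile.ZipFile(buf).namelist()
print(names)  # mimetype is first, as the ePub container spec requires
```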
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS.

    Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to.

    The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002, also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML.

    Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Piracy: Hollywood's Losing a Few Pounds, Who Cares? - TorrentFreak - 0 views

  •  
    " Andy on August 27, 2015 C: 72 Breaking Following news this week that a man is facing a custodial sentence after potentially defrauding the movie industry out of £120m, FACT Director General Kieron Sharp has been confronted with an uncomfortable truth. According to listeners contacting the BBC, the public has little sympathy with Hollywood."