
Future of the Web / Group items tagged "child"


Paul Merrell

Google Wants to Write Your Social Media Messages For You - Search Engine Watch (#SEW) - 0 views

  •  
    Visions of endless conversations between different people's bots with no human participation. Then a human being reads a reply and files a libel lawsuit against the human whose bot posted the reply. Can the defendant obtain dismissal on the grounds that she did not write the message herself, that her Google autoresponder did, and that therefore if anyone is liable it is Google? Our Brave New (technological) World does and will pose many novel legal issues. My favorite so far: assume that genetics has progressed to the point that, unknown to Bill Gates, someone steals a bit of his DNA and implants it in a mother-to-be's egg. Is Bill Gates, as the biological father, liable for child support? Is that child an heir to Bill Gates' fortune? The current state of the law in the U.S. would suggest that the answer to both questions is almost certainly "yes." The child itself is blameless, and Bill Gates is his biological father.
Gonzalo San Gil, PhD.

No, Department of Justice, 80 Percent of Tor Traffic Is Not Child Porn | WIRED [# ! Via... - 0 views

  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography. At the State of the Net conference in Washington on Tuesday, US assistant attorney general Leslie Caldwell discussed what she described as the dangers of encryption and cryptographic anonymity tools like Tor, and how those tools can hamper law enforcement. Her statements are the latest in a growing drumbeat of federal criticism of tech companies and software projects that provide privacy and anonymity at the expense of surveillance. And as an example of the grave risks presented by that privacy, she cited a study she said claimed an overwhelming majority of Tor’s anonymous traffic relates to pedophilia. “Tor obviously was created with good intentions, but it’s a huge problem for law enforcement,” Caldwell said in comments reported by Motherboard and confirmed to me by others who attended the conference. “We understand 80 percent of traffic on the Tor network involves child pornography.” That statistic is horrifying. It’s also baloney.
  • In a series of tweets that followed Caldwell’s statement, a Department of Justice flack said Caldwell was citing a University of Portsmouth study WIRED covered in December. He included a link to our story. But I made clear at the time that the study claimed 80 percent of traffic to Tor hidden services related to child pornography, not 80 percent of all Tor traffic. That is a huge, and important, distinction. The vast majority of Tor’s users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what’s often referred to as the “dark web,” use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software’s creators at the non-profit Tor Project. The University of Portsmouth study dealt exclusively with visits to hidden services. In contrast to Caldwell’s 80 percent claim, the Tor Project’s director Roger Dingledine pointed out last month that the study’s pedophilia findings refer to something closer to a single percent of Tor’s overall traffic.
  • ...1 more annotation...
  • So to whoever at the Department of Justice is preparing these talking points for public consumption: Thanks for citing my story. Next time, please try reading it.
  •  
    [# Via Paul Merrell's Diigo...] "That is a huge, and important, distinction. The vast majority of Tor's users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what's often referred to as the "dark web," use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software's creators at the non-profit Tor Project."
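A quick back-of-the-envelope check of the figures quoted above, sketched in Python. The two inputs are simply the numbers reported in the article: the Tor Project's 1.5% traffic share for hidden services and the Portsmouth study's 80% figure for hidden-service visits.

```python
# Rough arithmetic only: combining the two reported figures shows why the
# correct reading is "on the order of one percent of all Tor traffic",
# not the DOJ's "80 percent of traffic on the Tor network".
hidden_service_share = 0.015  # hidden services' share of all Tor traffic (Tor Project)
abusive_share_of_hs = 0.80    # share of hidden-service visits per the Portsmouth study

print(f"{hidden_service_share * abusive_share_of_hs:.1%}")  # -> 1.2%
```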
Paul Merrell

AG Barr asks Facebook to postpone encrypted messaging plans - 0 views

  • Attorney General William Barr asks Facebook CEO Mark Zuckerberg to hold off on his plans to encrypt the company’s three messaging services until officials can determine it will not reduce public safety in a letter dated Oct. 4. Barr’s request is backed by officials in the U.K. and Australia. BuzzFeed News first reported the story after obtaining a draft of the open letter on Thursday. The letter, which the DOJ sent to CNBC Thursday, builds on concerns about Facebook’s plans to integrate and encrypt its messaging services across Messenger, Instagram and WhatsApp. A New York Times investigation published Saturday found that encrypted technology helps predators share child pornography online in a way that makes it much harder for law enforcement to track down.
  •  
    The text of the Attorney General's letter to Zuckerberg is here. Note the strong DoJ concern about child sex abusers. Yes, the same DoJ that let serial pederast Jeffrey Epstein off with a 13-month sentence in a county jail, where he was allowed to leave for 12 hours every day. The same DoJ that frames Muslims who lack the mental capacity to resist, in order to charge them as "terrorists." My point being that "child abuse" and "terrorists" are not real concerns for our illustrious leaders. It also bears noting that what government officials are after (without saying so) is the ability to intercept and decode messages en masse as they transit the Internet. With snail mail interception, that requires an individualized search warrant signed by a judge based on probable cause to believe that the mail contains evidence of a crime. But these folks want to read everything transmitted. Might one reasonably suspect that they have no respect for our Constitution?
Paul Merrell

Google, Facebook made secret deal to divvy up market, Texas alleges - POLITICO - 1 views

  • Google and Facebook, the No. 1 and No. 2 players in online advertising, made a secret illegal pact in 2018 to divide up the market for ads on websites and apps, according to an antitrust suit filed Wednesday against the search giant. The suit — filed by Texas and eight other states — alleges that the companies colluded to fix prices and divvy up the market for mobile advertising between them.
  • The allegation that Google teamed up with Facebook to suppress competition mirrors a major claim in a separate antitrust suit the Justice Department filed against the company in October: that Google teamed up with Apple to help ensure the continued dominance of its search engine. Such allegations provide some of the strongest ammunition yet to advocates who argue that the major U.S. tech companies have gotten too big and are using their power — sometimes in conjunction with each other — to control markets. Many of the details about the Google-Facebook agreement, including its specific language, are redacted from the complaint. But the states say it “fixes prices and allocates markets between Google and Facebook as competing bidders in the auctions for publishers’ web display and in-app advertising inventory.”
  • The complaint alleges that the agreement was prompted by Facebook’s move in 2017 to use “header bidding” — a technology popular with website publishers that helped them increase the money they made from advertising. While Facebook sells ads on its own platform, it also operates a network to let advertisers offer ads on third-party apps and mobile websites.
  • ...1 more annotation...
  • Google was concerned about the move to header bidding, the complaint alleges, because it posed an “existential threat” to its own advertising exchange and limited the ability of the search giant to use information from its ad-buying and selling tools to its advantage. Those tools let Google cherry pick the highest value advertising spots and ads, according to the complaint. Within months of Facebook’s announcement, Google approached it to open negotiations, the complaint alleged, and the two companies eventually cut a deal: Facebook would cut back on the use of header bidding and use Google’s ad server. In exchange, the complaint alleges that Google gave Facebook advantages in its auctions.
Gonzalo San Gil, PhD.

France Implements Administrative Net Censorship | La Quadrature du Net - 0 views

  •  
    "Paris, February 6, 2015 - After review by the French Cabinet last Wednesday, the implementation decree for the administrative blocking of pedopornographic and terrorist websites was published today. This decree implements the provisions of to the Loppsi Act (15 March 2011) and the "Terrorism" Act (13 November 2014), both of which La Quadrature du Net opposed. It gives the government the power to directly order French telecom operators to block access to websites deemed to convey content relating to child abuse or terrorism, without any court order."
Paul Merrell

U.S., allies urge Facebook for backdoor to encryption as they fight child abuse - Reuters - 1 views

  • The United States, the United Kingdom and Australia have called on Facebook Inc to not go ahead with end-to-end encryption across its messaging services unless law enforcement officials have backdoor access, saying encryption hindered the fight against child abuse and terrorism.
  • The United States and United Kingdom also signed a special data agreement that would fast track requests from law enforcement to technology companies for information about the communications of terrorists and child predators. Law enforcement could get information in weeks or even days instead of the current wait of six months to two years. The latest tug-of-war between governments and tech companies over user data could also impact Apple Inc, Alphabet Inc’s Google and Microsoft Corp, as well as smaller encrypted chat apps like Signal.
Paul Merrell

Race to Introduce Fascist Internet Regulations in Russia Continues - Now under the Bann... - 0 views

  • Russian lawmaker Vitaly Milonov, on Monday, proposed a bill aimed to ban children under the age of 14 from social media. Although the bill is touted under the banner of child protection, it also aims to introduce the mandatory submission of passport data. In January Russia introduced semi-fascist regulations to severely curb the rights of bloggers and independent media.
  • Vitaly Milonov, generally known for being ultra-conservative, introduced the controversial bill on Monday. Although touted under the banner of protecting children and limiting their access to social media, the bill has far deeper implications. Parents could very well self-regulate their children’s access to social media. The bill, however, implies that it would become mandatory for social media users to submit their passport data. Moreover, the bill also proposes that the use of pseudonyms be banned. The proposed legislation also aims to introduce strict rules requiring two-party consent before the publication of screenshots of online correspondence. The bill reads, in part: “Social networks create a special virtual world where a person spends significant part of their life, contacting other people and essentially doing everything that they would do in real world. This world can’t be left unregulated by law. Especially now, when growing number of users are falling victim to different types of fraud.” Even though Milonov is generally viewed as ultra-conservative, about 62 percent of Russians support a ban on social networks for children, while 39 percent support using passport data to create an online account, according to a poll by the state-funded pollster VTsIOM released Monday.
  • Social media has come under intense scrutiny in Russia in recent months. Disturbingly, very few Russians have received independent information about the not-so-overtly advertised implications of this scrutiny, of the proposed bill, and of plans to create a "Russian internet" to filter "unwanted foreign content." Russia also cracks down on independent bloggers and journalists. On January 1, 2016, the Russian Federation implemented amendments to laws that further censor the internet and potentially independent media. These laws are being sold under the guise of empowering internet users and protecting personal information. The amendments follow legislation from 2014 that infringed on the rights of bloggers.
Paul Merrell

Microsoft debuts early test version of Oxite open source blogging engine | Open Source ... - 0 views

  • WordPress has more competition. Microsoft’s Codeplex team has developed an open source blogging engine that can support simple blogs and large web sites such as its own MIX Online.
  • “Oxite was developed carefully and painstakingly to be a great blogging platform, or a starting point for your own web site project with CMS needs,” according to Microsoft.com.
  • Oxite offers support for multiple blogs per site. “Oxite includes the ability to create and edit an arbitrary set of pages on your site. Want an ‘about’ page? You got it. Need a special page about your dogs, with sub-pages for each of those special animals? Yep, no worries,” Microsoft continues. “The ability to add pages as a child of another page is all built in. The web-based editing and creation interface lets you put whatever HTML you want onto your pages, and the built-in authentication system means that only you will be able to edit them.”
  •  
    The ability to create child pages of a parent page is something I haven't seen before in a blogging app. There are a ton of CMS that offer such features, but blogs have been an exception.
Gonzalo San Gil, PhD.

Net Censorship Comes Before the EU Parliament | La Quadrature du Net - 1 views

  •  
    [ Last Spring, the European Commissioner for Home Affairs, Cecilia Malmström, presented a proposal for a directive to combat child exploitation. Unfortunately, this very important and sensitive matter is used to introduce dangerous provisions regarding Internet blocking, which could pave the way for a wider censorship of the Internet in Europe. The EU Parliament must absolutely reject this Trojan horse and uphold the fundamental rights of EU citizens ...]
Gonzalo San Gil, PhD.

Achieving Impossible Things with Free Culture and Commons-Based Enterprise : Terry Hanc... - 0 views

  •  
    "Author: Terry Hancock Keywords: free software; open source; free culture; commons-based peer production; commons-based enterprise; Free Software Magazine; Blender Foundation; Blender Open Movies; Wikipedia; Project Gutenberg; Open Hardware; One Laptop Per Child; Sugar Labs; licensing; copyleft; hosting; marketing; design; online community; Debian GNU/Linux; GNU General Public License; Creative Commons Attribution-ShareAlike License; TAPR Open Hardware License; collective patronage; women in free software; Creative Commons; OScar; C,mm,n; Free Software Foundation; Open Source Initiative; Freedom Defined; Free Software Definition; Debian Free Software Guidelines; Sourceforge; Google Code; digital rights management; digital restrictions management; technological protection measures; DRM; TPM; linux; gnu; manifesto Publisher: Free Software Magazine Press Year: 2009 Language: English Collection: opensource"
Paul Merrell

Visit the Wrong Website, and the FBI Could End Up in Your Computer | Threat Level | WIRED - 0 views

  • Security experts call it a “drive-by download”: a hacker infiltrates a high-traffic website and then subverts it to deliver malware to every single visitor. It’s one of the most powerful tools in the black hat arsenal, capable of delivering thousands of fresh victims into a hackers’ clutches within minutes. Now the technique is being adopted by a different kind of a hacker—the kind with a badge. For the last two years, the FBI has been quietly experimenting with drive-by hacks as a solution to one of law enforcement’s knottiest Internet problems: how to identify and prosecute users of criminal websites hiding behind the powerful Tor anonymity system. The approach has borne fruit—over a dozen alleged users of Tor-based child porn sites are now headed for trial as a result. But it’s also engendering controversy, with charges that the Justice Department has glossed over the bulk-hacking technique when describing it to judges, while concealing its use from defendants. Critics also worry about mission creep, the weakening of a technology relied on by human rights workers and activists, and the potential for innocent parties to wind up infected with government malware because they visited the wrong website. “This is such a big leap, there should have been congressional hearings about this,” says ACLU technologist Chris Soghoian, an expert on law enforcement’s use of hacking tools. “If Congress decides this is a technique that’s perfectly appropriate, maybe that’s OK. But let’s have an informed debate about it.”
  • The FBI’s use of malware is not new. The bureau calls the method an NIT, for “network investigative technique,” and the FBI has been using it since at least 2002 in cases ranging from computer hacking to bomb threats, child porn to extortion. Depending on the deployment, an NIT can be a bulky full-featured backdoor program that gives the government access to your files, location, web history and webcam for a month at a time, or a slim, fleeting wisp of code that sends the FBI your computer’s name and address, and then evaporates. What’s changed is the way the FBI uses its malware capability, deploying it as a driftnet instead of a fishing line. And the shift is a direct response to Tor, the powerful anonymity system endorsed by Edward Snowden and the State Department alike.
Gonzalo San Gil, PhD.

A Guide to the Dark Web's Lighter Side | WIRED - 2 views

  •  
    "Diego Patiño That underground warren of anonymous sites known as the dark web has a reputation for nightmarish stuff like child porn and hit men for hire. It does indeed contain those horrors-and a lot of perfectly decent things. Fire up your Tor browser and explore the lighter sides of the dark web with your conscience intact."
Paul Merrell

Feds Force Suspect To Unlock An Apple iPhone X With Their Face - 0 views

  • It finally happened. The feds forced an Apple iPhone X owner to unlock their device with their face. A child abuse investigation unearthed by Forbes includes the first known case in which law enforcement used Apple Face ID facial recognition technology to open a suspect's iPhone. That's by any police agency anywhere in the world, not just in America. It happened on August 10, when the FBI searched the house of 28-year-old Grant Michalski, a Columbus, Ohio, resident who would later that month be charged with receiving and possessing child pornography. With a search warrant in hand, a federal investigator told Michalski to put his face in front of the phone, which he duly did. That allowed the agent to pick through the suspect's online chats, photos and whatever else he deemed worthy of investigation. The case marks another significant moment in the ongoing battle between law enforcement and tech providers, with the former trying to break the myriad security protections put in place by the latter. Since the fight between the world's most valuable company and the FBI in San Bernardino over access to an iPhone in 2016, Forbes has been tracking the various ways cops have been trying to break Apple's protections.
Paul Merrell

Civil Rights Coalition files FCC Complaint Against Baltimore Police Department for Ille... - 0 views

  • This week the Center for Media Justice, ColorOfChange.org, and New America’s Open Technology Institute filed a complaint with the Federal Communications Commission alleging the Baltimore police are violating the federal Communications Act by using cell site simulators, also known as Stingrays, that disrupt cellphone calls and interfere with the cellular network—and are doing so in a way that has a disproportionate impact on communities of color. Stingrays operate by mimicking a cell tower and directing all cellphones in a given area to route communications through the Stingray instead of the nearby tower. They are especially pernicious surveillance tools because they collect information on every single phone in a given area—not just the suspect’s phone—this means they allow the police to conduct indiscriminate, dragnet searches. They are also able to locate people inside traditionally-protected private spaces like homes, doctors’ offices, or places of worship. Stingrays can also be configured to capture the content of communications. Because Stingrays operate on the same spectrum as cellular networks but are not actually transmitting communications the way a cell tower would, they interfere with cell phone communications within as much as a 500 meter radius of the device (Baltimore’s devices may be limited to 200 meters). This means that any important phone call placed or text message sent within that radius may not get through. As the complaint notes, “[d]epending on the nature of an emergency, it may be urgently necessary for a caller to reach, for example, a parent or child, doctor, psychiatrist, school, hospital, poison control center, or suicide prevention hotline.” But these and even 911 calls could be blocked.
  • The Baltimore Police Department could be among the most prolific users of cell site simulator technology in the country. A Baltimore detective testified last year that the BPD used Stingrays 4,300 times between 2007 and 2015. Like other law enforcement agencies, Baltimore has used its devices for major and minor crimes—everything from trying to locate a man who had kidnapped two small children to trying to find another man who took his wife’s cellphone during an argument (and later returned it). According to logs obtained by USA Today, the Baltimore PD also used its Stingrays to locate witnesses, to investigate unarmed robberies, and for mysterious “other” purposes. And like other law enforcement agencies, the Baltimore PD has regularly withheld information about Stingrays from defense attorneys, judges, and the public. Moreover, according to the FCC complaint, the Baltimore PD’s use of Stingrays disproportionately impacts African American communities. Coming on the heels of a scathing Department of Justice report finding “BPD engages in a pattern or practice of conduct that violates the Constitution or federal law,” this may not be surprising, but it still should be shocking. The DOJ’s investigation found that BPD not only regularly makes unconstitutional stops and arrests and uses excessive force within African-American communities but also retaliates against people for constitutionally protected expression, and uses enforcement strategies that produce “severe and unjustified disparities in the rates of stops, searches and arrests of African Americans.”
  • Adding Stingrays to this mix means that these same communities are subject to more surveillance that chills speech and are less able to make 911 and other emergency calls than communities where the police aren’t regularly using Stingrays. A map included in the FCC complaint shows exactly how this is impacting Baltimore’s African-American communities. It plots hundreds of addresses where USA Today discovered BPD was using Stingrays over a map of Baltimore’s black population based on 2010 Census data included in the DOJ’s recent report:
  • ...2 more annotations...
  • The Communications Act gives the FCC the authority to regulate radio, television, wire, satellite, and cable communications in all 50 states, the District of Columbia and U.S. territories. This includes being responsible for protecting cellphone networks from disruption and ensuring that emergency calls can be completed under any circumstances. And it requires the FCC to ensure that access to networks is available “to all people of the United States, without discrimination on the basis of race, color, religion, national origin, or sex.” Considering that the spectrum law enforcement is utilizing without permission is public property leased to private companies for the purpose of providing them next generation wireless communications, it goes without saying that the FCC has a duty to act.
  • But we should not assume that the Baltimore Police Department is an outlier—EFF has found that law enforcement has been secretly using stingrays for years and across the country. No community should have to speculate as to whether such a powerful surveillance technology is being used on its residents. Thus, we also ask the FCC to engage in a rule-making proceeding that addresses not only the problem of harmful interference but also the duty of every police department to use Stingrays in a constitutional way, and to publicly disclose—not hide—the facts around acquisition and use of this powerful wireless surveillance technology.  Anyone can support the complaint by tweeting at FCC Commissioners or by signing the petitions hosted by Color of Change or MAG-Net.
  •  
    An important test case on the constitutionality of stingray mobile device surveillance.
Paul Merrell

Google, ACLU call to delay government hacking rule | TheHill - 0 views

  • A coalition of 26 organizations, including the American Civil Liberties Union (ACLU) and Google, signed a letter Monday asking lawmakers to delay a measure that would expand the government’s hacking authority. The letter asks Senate Majority Leader Mitch McConnell (R-Ky.) and Minority Leader Harry Reid (D-Nev.), plus House Speaker Paul Ryan (R-Wis.) and House Minority Leader Nancy Pelosi (D-Calif.), to further review proposed changes to Rule 41 and delay its implementation until July 1, 2017. The Department of Justice’s alterations to the rule would allow law enforcement to use a single warrant to hack multiple devices beyond the jurisdiction in which the warrant was issued. The FBI used such a tactic to apprehend users of the child pornography dark website Playpen. It took control of the dark website for two weeks and, after securing two warrants, installed malware on Playpen users’ computers to acquire their identities. But the signatories of the letter — which include advocacy groups, companies and trade associations — are raising questions about the effects of the change.
  •  
    ".. no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." Fourth Amendment. The changes to Rule 41 ignore the particularity requirement by allowing the government to search computers that are not particularly identified in multiple locations not particularly identifed, in other words, a general warrant that is precisely the reason the particularity requirement was adopted to outlaw.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform: The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
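A minimal sketch of that idea follows. The class name "epigraph" and the sample markup are invented for illustration (the article names no specific classes), and Python's lxml stands in for whatever XML tooling a publisher might actually use.

```python
from lxml import etree

# Hypothetical XHTML in which the publisher has marked an epigraph with a
# class attribute; the markup itself remains plain, valid XHTML.
xhtml = """<html xmlns="http://www.w3.org/1999/xhtml"><body>
  <p class="epigraph">All publishing is a wager on the future.</p>
  <p>Ordinary body text.</p>
</body></html>"""

ns = {"x": "http://www.w3.org/1999/xhtml"}
doc = etree.fromstring(xhtml)

# Any XPath-capable tool can later pull the "semantic" chunks back out.
for p in doc.findall(".//x:p[@class='epigraph']", ns):
    print(p.text)
```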
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
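That anatomy is easy to verify for yourself: since an ePub is an ordinary zip archive, any zip-aware tool can list its parts. A minimal sketch in Python, where "book.epub" is a placeholder filename:

```python
import zipfile

# Listing the contents of an ePub shows the pieces described above: the
# mimetype entry, the META-INF container, the .opf package metadata, and
# the book's content as XHTML files.
with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)
```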
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
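The same clean-up can be scripted. The sketch below is an illustration rather than the authors' actual procedure (they used search-and-replace), and the marker class "list-item" is assumed here since the article does not name the class InDesign's export emits: it finds runs of marked paragraphs and rewraps them as a proper XHTML list.

```python
from lxml import etree

XHTML_NS = "http://www.w3.org/1999/xhtml"

def fix_fake_lists(tree, marker_class="list-item"):
    """Rewrite consecutive <p class="list-item"> elements as <ul><li>...</li></ul>."""
    body = tree.find(f".//{{{XHTML_NS}}}body")
    marked = [p for p in body.iter(f"{{{XHTML_NS}}}p")
              if p.get("class") == marker_class]
    i = 0
    while i < len(marked):
        # Collect a run of adjacent marked paragraphs.
        run = [marked[i]]
        while i + 1 < len(marked) and marked[i + 1] is marked[i].getnext():
            i += 1
            run.append(marked[i])
        # Insert a <ul> where the run began and move the paragraphs into it.
        ul = etree.Element(f"{{{XHTML_NS}}}ul")
        run[0].addprevious(ul)
        for p in run:
            p.tag = f"{{{XHTML_NS}}}li"
            p.attrib.pop("class", None)
            ul.append(p)  # append() relocates the element under the new <ul>
        i += 1
    return tree
```

Here `tree` would be the parsed chapter, for example `etree.parse("chapter1.xhtml")`, with that filename again just a placeholder.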
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
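The same step can also be run without leaving Python, since lxml wraps the libxslt engine that xsltproc exposes on the command line. The stylesheet and file names below are placeholders rather than the project's actual files.

```python
from lxml import etree

# Load the XSLT stylesheet and the clean XHTML source, apply the transform,
# and write the result out for placement into the InDesign template.
transform = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))
result = transform(etree.parse("chapter1.xhtml"))

with open("chapter1.icml", "wb") as out:
    out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))
```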
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
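The authors let eCub do this packaging, but it is worth seeing how little the wrapper actually contains. The sketch below assembles a single-chapter EPUB 2 package by hand; it omits the NCX table of contents (which eCub generates) and uses placeholder names and metadata, so read it as an outline of the format rather than a drop-in replacement for a real packaging tool.

```python
import zipfile

CONTAINER_XML = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

CONTENT_OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Book Publishing 1</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

CHAPTER_XHTML = """<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title></head>
  <body><p>Chapter text goes here.</p></body>
</html>"""

with zipfile.ZipFile("book.epub", "w", zipfile.ZIP_DEFLATED) as z:
    # The mimetype entry must be first in the archive and stored uncompressed.
    z.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml", CONTAINER_XML)
    z.writestr("OEBPS/content.opf", CONTENT_OPF)
    z.writestr("OEBPS/chapter1.xhtml", CHAPTER_XHTML)
```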
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My take away is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive; IDML - InDesign Markup Language. The important point though is that XHTML is a browser specific version of XML, and compatible with the Web Kit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML). The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998, and open sourced the code base. The OpenOffice developer team came out with a XML encoding of their existing document formats in 2000. The application specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with a XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle then trying to
Paul Merrell

Obama administration opts not to force firms to decrypt data - for now - The Washington... - 1 views

  • After months of deliberation, the Obama administration has made a long-awaited decision on the thorny issue of how to deal with encrypted communications: It will not — for now — call for legislation requiring companies to decode messages for law enforcement. Rather, the administration will continue trying to persuade companies that have moved to encrypt their customers’ data to create a way for the government to still peer into people’s data when needed for criminal or terrorism investigations. “The administration has decided not to seek a legislative remedy now, but it makes sense to continue the conversations with industry,” FBI Director James B. Comey said at a Senate hearing Thursday of the Homeland Security and Governmental Affairs Committee.
  • The decision, which essentially maintains the status quo, underscores the bind the administration is in — balancing competing pressures to help law enforcement and protect consumer privacy. The FBI says it is facing an increasing challenge posed by the encryption of communications of criminals, terrorists and spies. A growing number of companies have begun to offer encryption in which the only people who can read a message, for instance, are the person who sent it and the person who received it. Or, in the case of a device, only the device owner has access to the data. In such cases, the companies themselves lack “backdoors” or keys to decrypt the data for government investigators, even when served with search warrants or intercept orders.
  • The decision was made at a Cabinet meeting Oct. 1. “As the president has said, the United States will work to ensure that malicious actors can be held to account — without weakening our commitment to strong encryption,” National Security Council spokesman Mark Stroh said. “As part of those efforts, we are actively engaged with private companies to ensure they understand the public safety and national security risks that result from malicious actors’ use of their encrypted products and services.” But privacy advocates are concerned that the administration’s definition of strong encryption also could include a system in which a company holds a decryption key or can retrieve unencrypted communications from its servers for law enforcement. “The government should not erode the security of our devices or applications, pressure companies to keep and allow government access to our data, mandate implementation of vulnerabilities or backdoors into products, or have disproportionate access to the keys to private data,” said Savecrypto.org, a coalition of industry and privacy groups that has launched a campaign to petition the Obama administration.
  • To Amie Stepanovich, the U.S. policy manager for Access, one of the groups signing the petition, the status quo isn’t good enough. “It’s really crucial that even if the government is not pursuing legislation, it’s also not pursuing policies that will weaken security through other methods,” she said. The FBI and Justice Department have been talking with tech companies for months. On Thursday, Comey said the conversations have been “increasingly productive.” He added: “People have stripped out a lot of the venom.” He said the tech executives “are all people who care about the safety of America and also care about privacy and civil liberties.” Comey said the issue afflicts not just federal law enforcement but also state and local agencies investigating child kidnappings and car crashes — “cops and sheriffs . . . [who are] increasingly encountering devices they can’t open with a search warrant.”
  • One senior administration official said the administration thinks it’s making enough progress with companies that seeking legislation now is unnecessary. “We feel optimistic,” said the official, who spoke on the condition of anonymity to describe internal discussions. “We don’t think it’s a lost cause at this point.” Legislation, said Rep. Adam Schiff (D-Calif.), is not a realistic option given the current political climate. He said he made a recent trip to Silicon Valley to talk to Twitter, Facebook and Google. “They quite uniformly are opposed to any mandate or pressure — and more than that, they don’t want to be asked to come up with a solution,” Schiff said. Law enforcement officials know that legislation is a tough sell now. But, one senior official stressed, “it’s still going to be in the mix.” On the other side of the debate, technology, diplomatic and commerce agencies were pressing for an outright statement by Obama to disavow a legislative mandate on companies. But their position did not prevail.
  • Daniel Castro, vice president of the Information Technology & Innovation Foundation, said absent any new laws, either in the United States or abroad, “companies are in the driver’s seat.” He said that if another country tried to require companies to retain an ability to decrypt communications, “I suspect many tech companies would try to pull out.”
  •  
    # ! upcoming Elections...
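    On the technical point in the story above: with end-to-end, public-key encryption, only the two endpoints ever hold the private keys, so a relay service, or a company served with a warrant, has nothing it can decrypt. A minimal sketch, assuming the third-party PyNaCl library; the library choice and the names are illustrative, not a description of any vendor's actual product.

# Minimal sketch of end-to-end encryption: the service relays only ciphertext
# and never holds a key that could decrypt it. PyNaCl and all names here are
# illustrative assumptions, not any particular product's implementation.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob's public key; this ciphertext is all a server ever sees.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# Only Bob's private key (paired with Alice's public key) can open it.
assert Box(bob_sk, alice_sk.public_key).decrypt(ciphertext) == b"meet at noon"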
Paul Merrell

Editorial - Mr. Obama's Internet Agenda - NYTimes.com - 0 views

  • President-elect Barack Obama recently announced an ambitious plan to build up the nation’s Internet infrastructure as part of his proposed economic stimulus package.
  • The United States has long been the world leader in technology, but when it comes to the Internet, it is fast falling behind. America now ranks 15th in the world in access to high-speed Internet connections. A cornerstone of Mr. Obama’s agenda is promoting universal, affordable high-speed Internet.
  • In a speech this month about his economic stimulus plan, he said that he intends to ensure that every child has a chance to get online and that he would use some of the stimulus money to connect libraries and schools.
  • Mr. Obama has also been a strong supporter of “network neutrality,” the principle that Internet service providers should not be able to discriminate against any of the information that they carry.
  • “This is the Eisenhower Interstate highway moment for the Internet,” argues Ben Scott, policy director of the media reform group Free Press.
  •  
    Whether this is in fact an Eisenhower Interstate Highway moment for the Internet will depend mightily on the long-term spending commitment to infrastructure construction, maintenance, and improvement. The Interstate Highway System was a Cold War initiative under Eisenhower to build a comprehensive national freeway network, justified in large part on national-defense grounds that shaped its design, e.g., highways capable of moving military supplies and troops rapidly across the country in the event of an invasion. In other words, to achieve lasting benefits, Congress will need to be brought on board. The extent to which such funding is spent on temporary "bail-out" rescues of failing companies rather than on fueling economic growth will be another major factor.
Paul Merrell

A Short Guide to the Internet's Biggest Enemies | Electronic Frontier Foundation - 1 views

  • Reporters Without Borders (RSF) released its annual “Enemies of the Internet” index this week—a ranking first launched in 2006 intended to track countries that repress online speech, intimidate and arrest bloggers, and conduct surveillance of their citizens.  Some countries have been mainstays on the annual index, while others have been able to work their way off the list.  Two countries particularly deserving of praise in this area are Tunisia and Myanmar (Burma), both of which have stopped censoring the Internet in recent years and are headed in the right direction toward Internet freedom. In the former category are some of the world’s worst offenders: Cuba, North Korea, China, Iran, Saudi Arabia, Vietnam, Belarus, Bahrain, Turkmenistan, Syria.  Nearly every one of these countries has amped up their online repression in recent years, from implementing sophisticated surveillance (Syria) to utilizing targeted surveillance tools (Vietnam) to increasing crackdowns on online speech (Saudi Arabia).  These are countries where, despite advocacy efforts by local and international groups, no progress has been made. The newcomers  A third, perhaps even more disheartening category, is the list of countries new to this year's index.  A motley crew, these nations have all taken new, harsh approaches to restricting speech or monitoring citizens:
  • United States: This is the first time the US has made it onto RSF’s list.  While the US government doesn’t censor online content, and pours money into promoting Internet freedom worldwide, the National Security Agency’s unapologetic dragnet surveillance and the government’s treatment of whistleblowers have earned it a spot on the index. United Kingdom: The European nation has been dubbed by RSF as the “world champion of surveillance” for its recently-revealed depraved strategies for spying on individuals worldwide.  The UK also joins countries like Ethiopia and Morocco in using terrorism laws to go after journalists.  Not noted by RSF, but also important, is the fact that the UK is also cracking down on legal pornography, forcing Internet users to opt-in with their ISP if they wish to view it and creating a slippery slope toward overblocking.  This is in addition to the government’s use of an opaque, shadowy NGO to identify child sexual abuse images, sometimes resulting instead in censorship of legitimate speech.