Future of the Web: Group items tagged "could"

Gary Edwards

Developer: Dump JavaScript for faster Web loading | CIO - 0 views

  • Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states.
  • The browser "then replaces DOM elements with whatever data that was loaded as needed."
  • The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript." (See the sketch after the excerpt below for how this pattern might look in practice.)
  •  
    "A W3C (World Wide Web Consortium) mailing list post entitled "HTML6 proposal for single-page Web apps without JavaScript" details the proposal, dated March 20. "The overall purpose [of the plan] is to reduce response times when loading Web pages," said Web developer Bobby Mozumder, editor in chief of FutureClaw magazine, in an email. "This is the difference between a 300ms page load vs 10ms. The faster you are, the better people are going to feel about using your Website." The proposal cites a standard design pattern emerging via front-end JavaScript frameworks where content is loaded dynamically via JSON APIs. "This is the single-page app Web design pattern," said Mozumder. "Everyone's into it because the responsiveness is so much better than loading a full page -- 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. Since this is so common now, can we implement this directly in the browsers via HTML so users can dynamically run single-page apps without JavaScript?" Accomplishing the goal of a high-speed, responsive Web experience without loading JavaScript "could probably be done by linking anchor elements to JSON/XML (or a new definition) API endpoints [and] having the browser internally load the data into a new data structure," the proposal states. The browser "then replaces DOM elements with whatever data that was loaded as needed." The initial data and standard error responses could be in header fixtures, which could be replaced later if so desired. "The HTML body thus becomes a templating language with all the content residing in the fixtures that can be dynamically reloaded without JavaScript." JavaScript frameworks and JavaScript are leveraged for loading now, but there are issues with these, Mozumder explained. "Should we force millions of Web developers to learn JavaScript, a framework, and an associated templating language if they want a speedy, responsive Web site out-of-the-box? This is a huge barrier for beginners, and right n
Paul Merrell

Cy Vance's Proposal to Backdoor Encrypted Devices Is Riddled With Vulnerabilities | Jus... - 0 views

  • Less than a week after the attacks in Paris — while the public and policymakers were still reeling, and the investigation had barely gotten off the ground — Cy Vance, Manhattan’s District Attorney, released a policy paper calling for legislation requiring companies to provide the government with backdoor access to their smartphones and other mobile devices. This is the first concrete proposal of this type since September 2014, when FBI Director James Comey reignited the “Crypto Wars” in response to Apple’s and Google’s decisions to use default encryption on their smartphones. Though Comey seized on Apple’s and Google’s decisions to encrypt their devices by default, his concerns are primarily related to end-to-end encryption, which protects communications that are in transit. Vance’s proposal, on the other hand, is only concerned with device encryption, which protects data stored on phones. It is still unclear whether encryption played any role in the Paris attacks, though we do know that the attackers were using unencrypted SMS text messages on the night of the attack, and that some of them were even known to intelligence agencies and had previously been under surveillance. But regardless of whether encryption was used at some point during the planning of the attacks, as I lay out below, prohibiting companies from selling encrypted devices would not prevent criminals or terrorists from being able to access unbreakable encryption. Vance’s primary complaint is that Apple’s and Google’s decisions to provide their customers with more secure devices through encryption interferes with criminal investigations. He claims encryption prevents law enforcement from accessing stored data like iMessages, photos and videos, Internet search histories, and third party app data. He makes several arguments to justify his proposal to build backdoors into encrypted smartphones, but none of them hold water.
  • Before addressing the major privacy, security, and implementation concerns that his proposal raises, it is worth noting that while an increase in use of fully encrypted devices could interfere with some law enforcement investigations, it will help prevent far more crimes — especially smartphone theft, and the consequent potential for identity theft. According to Consumer Reports, in 2014 there were more than two million victims of smartphone theft, and nearly two-thirds of all smartphone users either took no steps to secure their phones or their data or failed to implement passcode access for their phones. Default encryption could reduce instances of theft because perpetrators would no longer be able to break into the phone to steal the data.
  • Vance argues that creating a weakness in encryption to allow law enforcement to access data stored on devices does not raise serious concerns for security and privacy, since in order to exploit the vulnerability one would need access to the actual device. He considers this an acceptable risk, claiming it would not be the same as creating a widespread vulnerability in encryption protecting communications in transit (like emails), and that it would be cheap and easy for companies to implement. But Vance seems to be underestimating the risks involved with his plan. It is increasingly important that smartphones and other devices are protected by the strongest encryption possible. Our devices and the apps on them contain astonishing amounts of personal information, so much that an unprecedented level of harm could be caused if a smartphone or device with an exploitable vulnerability is stolen, not least in the forms of identity fraud and credit card theft. We bank on our phones, and have access to credit card payments with services like Apple Pay. Our contact lists are stored on our phones, including phone numbers, emails, social media accounts, and addresses. Passwords are often stored on people’s phones. And phones and apps are often full of personal details about their lives, from food diaries to logs of favorite places to personal photographs. Symantec conducted a study in which it spread 50 “lost” phones in public to see what people who picked up the phones would do with them. The company found that 95 percent of those people tried to access the phone, and while nearly 90 percent tried to access private information stored on the phone or in other private accounts such as banking services and email, only 50 percent attempted contacting the owner.
  • ...8 more annotations...
  • Vance attempts to downplay this serious risk by asserting that anyone can use the “Find My Phone” or Android Device Manager services that allow owners to delete the data on their phones if stolen. However, this does not stand up to scrutiny. These services are effective only when an owner realizes their phone is missing and can take swift action on another computer or device. This delay ensures some period of vulnerability. Encryption, on the other hand, protects everyone immediately and always. Additionally, Vance argues that it is safer to build backdoors into encrypted devices than it is to do so for encrypted communications in transit. It is true that there is a difference in the threats posed by the two types of encryption backdoors that are being debated. However, some manner of widespread vulnerability will inevitably result from a backdoor to encrypted devices. Indeed, the NSA and GCHQ reportedly hacked into a database to obtain cell phone SIM card encryption keys in order to defeat the security protecting users’ communications and activities and to conduct surveillance. Clearly, the reality is that the threat of such a breach, whether from a hacker or a nation state actor, is very real. Even if companies go the extra mile and create a different means of access for every phone, such as a separate access key for each phone, significant vulnerabilities will be created. It would still be possible for a malicious actor to gain access to the database containing those keys, which would enable them to defeat the encryption on any smartphone they took possession of. Additionally, the cost of implementation and maintenance of such a complex system could be high.
  • Privacy is another concern that Vance dismisses too easily. Despite Vance’s arguments otherwise, building backdoors into device encryption undermines privacy. Our government does not impose a similar requirement in any other context. Police can enter homes with warrants, but there is no requirement that people record their conversations and interactions just in case they someday become useful in an investigation. The conversations we once had through disposable letters and in person now happen over the Internet and on phones. Just because the medium has changed does not mean our right to privacy has.
  • In addition to his weak reasoning for why it would be feasible to create backdoors to encrypted devices without creating undue security risks or harming privacy, Vance makes several flawed policy-based arguments in favor of his proposal. He argues that criminals benefit from devices that are protected by strong encryption. That may be true, but strong encryption is also a critical tool used by billions of average people around the world every day to protect their transactions, communications, and private information. Lawyers, doctors, and journalists rely on encryption to protect their clients, patients, and sources. Government officials, from the President to the directors of the NSA and FBI, and members of Congress, depend on strong encryption for cybersecurity and data security. There are far more innocent Americans who benefit from strong encryption than there are criminals who exploit it. Encryption is also essential to our economy. Device manufacturers could suffer major economic losses if they are prohibited from competing with foreign manufacturers who offer more secure devices. Encryption also protects major companies from corporate and nation-state espionage. As more daily business activities are done on smartphones and other devices, they may now hold highly proprietary or sensitive information. Those devices could be targeted even more than they are now if all that has to be done to access that information is to steal an employee’s smartphone and exploit a vulnerability the manufacturer was required to create.
  • Vance also suggests that the US would be justified in creating such a requirement since other Western nations are contemplating requiring encryption backdoors as well. Regardless of whether other countries are debating similar proposals, we cannot afford a race to the bottom on cybersecurity. Heads of the intelligence community regularly warn that cybersecurity is the top threat to our national security. Strong encryption is our best defense against cyber threats, and following in the footsteps of other countries by weakening that critical tool would do incalculable harm. Furthermore, even if the US or other countries did implement such a proposal, criminals could gain access to devices with strong encryption through the black market. Thus, only innocent people would be negatively affected, and some of those innocent people might even become criminals simply by trying to protect their privacy by securing their data and devices. Finally, Vance argues that David Kaye, UN Special Rapporteur for Freedom of Expression and Opinion, supported the idea that court-ordered decryption doesn’t violate human rights, provided certain criteria are met, in his report on the topic. However, in the context of Vance’s proposal, this seems to conflate the concepts of court-ordered decryption and of government-mandated encryption backdoors. The Kaye report was unequivocal about the importance of encryption for free speech and human rights. The report concluded that:
  • States should promote strong encryption and anonymity. National laws should recognize that individuals are free to protect the privacy of their digital communications by using encryption technology and tools that allow anonymity online. … States should not restrict encryption and anonymity, which facilitate and often enable the rights to freedom of opinion and expression. Blanket prohibitions fail to be necessary and proportionate. States should avoid all measures that weaken the security that individuals may enjoy online, such as backdoors, weak encryption standards and key escrows. Additionally, the group of intelligence experts that was hand-picked by the President to issue a report and recommendations on surveillance and technology concluded that: [R]egarding encryption, the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.
  • The clear consensus among human rights experts and several high-ranking intelligence experts, including the former directors of the NSA, Office of the Director of National Intelligence, and DHS, is that mandating encryption backdoors is dangerous.
  • Unaddressed Concerns: Preventing Encrypted Devices from Entering the US, and the Slippery Slope. In addition to the significant faults in Vance’s arguments in favor of his proposal, he fails to address the question of how such a restriction would be effectively implemented. There is no effective mechanism for preventing code from becoming available for download online, even if it is illegal. One critical issue the Vance proposal fails to address is how the government would prevent, or even identify, encrypted smartphones when individuals bring them into the United States. DHS would have to train customs agents to search the contents of every person’s phone in order to identify whether it is encrypted, and then confiscate the phones that are. Legal and policy considerations aside, this kind of policy is, at the very least, impractical. Preventing strong encryption from entering the US is not like preventing guns or drugs from entering the country — encrypted phones aren’t immediately obvious in the way contraband is. Millions of people use encrypted devices, and tens of millions more devices are shipped to and sold in the US each year.
  • Finally, there is a real concern that if Vance’s proposal were accepted, it would be the first step down a slippery slope. Right now, his proposal only calls for access to smartphones and devices running mobile operating systems. While this policy in and of itself would cover a number of commonplace devices, it may eventually be expanded to cover laptop and desktop computers, as well as communications in transit. The expansion of this kind of policy is even more worrisome when taking into account the speed at which technology evolves and becomes widely adopted. Ten years ago, the iPhone did not even exist. Who is to say what technology will be commonplace in 10 or 20 years that is not even around today? There is a very real question about how far law enforcement will go to gain access to information. Things that once seemed like merely science fiction, such as wearable technology and artificial intelligence that could be implanted in and work with the human nervous system, are now available. If and when there comes a time when our “smart phone” is not really a device at all, but is rather an implant, surely we would not grant law enforcement access to our minds.
  • Policymakers should dismiss Vance’s proposal to prohibit the use of strong encryption to protect our smartphones and devices in order to ensure law enforcement access. Undermining encryption, regardless of whether it is protecting data in transit or at rest, would take us down a dangerous and harmful path. Instead, law enforcement and the intelligence community should be working to alter their skills and tactics in a fast-evolving technological world so that they are not so dependent on information that will increasingly be protected by encryption.
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
  • ...17 more annotations...
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables.
  • GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days and six months. It stores the content of communications for a shorter period of time, varying between three and 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. (A short sketch of this split follows these annotations.) In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of the Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explains in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
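To make the content/metadata split quoted above concrete, here is a short TypeScript sketch of the reported classification rule. The rule itself comes from the documents as described; the code is purely illustrative and is in no way GCHQ's implementation.

```typescript
// Illustration only: the reported rule treats the site you visited as
// metadata, while the full address within the site is treated as content.
function classifyVisit(visited: string): { metadata: string; content: string } {
  const url = new URL(visited);
  return {
    metadata: url.hostname,               // e.g. "www.gchq.gov.uk"
    content: url.hostname + url.pathname, // e.g. "www.gchq.gov.uk/what_we_do"
  };
}

console.log(classifyVisit("https://www.gchq.gov.uk/what_we_do"));
// -> { metadata: "www.gchq.gov.uk", content: "www.gchq.gov.uk/what_we_do" }
```

Under the legal loophole described in the annotations, the metadata field can be searched without the safeguards that apply to content.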
Paul Merrell

Eric Holder: The Justice Department could strike deal with Edward Snowden - 0 views

  • Michael Isikoff, Chief Investigative Correspondent, July 6, 2015: Former Attorney General Eric Holder said today that a “possibility exists” for the Justice Department to cut a deal with former NSA contractor Edward Snowden that would allow him to return to the United States from Moscow. In an interview with Yahoo News, Holder said “we are in a different place as a result of the Snowden disclosures” and that “his actions spurred a necessary debate” that prompted President Obama and Congress to change policies on the bulk collection of phone records of American citizens. Asked if that meant the Justice Department might now be open to a plea bargain that allows Snowden to return from his self-imposed exile in Moscow, Holder replied: “I certainly think there could be a basis for a resolution that everybody could ultimately be satisfied with. I think the possibility exists.”
  • But his remarks to Yahoo News go further than those of any current or former Obama administration official in suggesting that Snowden’s disclosures had a positive impact and that the administration might be open to a negotiated plea that the self-described whistleblower could accept, according to his lawyer Ben Wizner.
  • It’s also not clear whether Holder’s comments signal a shift in Obama administration attitudes that could result in a resolution of the charges against Snowden. Melanie Newman, chief spokeswoman for Attorney General Loretta Lynch, Holder’s successor, immediately shot down the idea that the Justice Department was softening its stance on Snowden. “This is an ongoing case so I am not going to get into specific details but I can say our position regarding bringing Edward Snowden back to the United States to face charges has not changed,” she said in an email.
  • ...1 more annotation...
  • Three sources familiar with informal discussions of Snowden’s case told Yahoo News that one top U.S. intelligence official, Robert Litt, the chief counsel to Director of National Intelligence James Clapper, recently privately floated the idea that the government might be open to a plea bargain in which Snowden returns to the United States, pleads guilty to one felony count and receives a prison sentence of three to five years in exchange for full cooperation with the government.
Paul Merrell

The Latest Rules on How Long NSA Can Keep Americans' Encrypted Data Look Too Familiar |... - 0 views

  • Does the National Security Agency (NSA) have the authority to collect and keep all encrypted Internet traffic for as long as is necessary to decrypt that traffic? That was a question first raised in June 2013, after the minimization procedures governing telephone and Internet records collected under Section 702 of the Foreign Intelligence Surveillance Act were disclosed by Edward Snowden. The issue quickly receded into the background, however, as the world struggled to keep up with the deluge of surveillance disclosures. The Intelligence Authorization Act of 2015, which passed Congress this last December, should bring the question back to the fore. It established retention guidelines for communications collected under Executive Order 12333 and included an exception that allows NSA to keep ‘incidentally’ collected encrypted communications for an indefinite period of time. This creates a massive loophole in the guidelines. NSA’s retention of encrypted communications deserves further consideration today, now that these retention guidelines have been written into law. It has become increasingly clear over the last year that surveillance reform will be driven by technological change—specifically by the growing use of encryption technologies. Therefore, any legislation touching on encryption should receive close scrutiny.
  • Section 309 of the intel authorization bill describes “procedures for the retention of incidentally acquired communications.” It establishes retention guidelines for surveillance programs that are “reasonably anticipated to result in the acquisition of [telephone or electronic communications] to or from a United States person.” Communications to or from a United States person are ‘incidentally’ collected because the U.S. person is not the actual target of the collection. Section 309 states that these incidentally collected communications must be deleted after five years unless they meet a number of exceptions. One of these exceptions is that “the communication is enciphered or reasonably believed to have a secret meaning.” This exception appears to be directly lifted from NSA’s minimization procedures for data collected under Section 702 of FISA, which were declassified in 2013. 
  • While Section 309 specifically applies to collection taking place under E.O. 12333, not FISA, several of the exceptions described in Section 309 closely match exceptions in the FISA minimization procedures. That includes the exception for “enciphered” communications. Those minimization procedures almost certainly served as a model for these retention guidelines and will likely shape how this new language is interpreted by the Executive Branch. Section 309 also asks the heads of each relevant member of the intelligence community to develop procedures to ensure compliance with new retention requirements. I expect those procedures to look a lot like the FISA minimization guidelines.
  • ...6 more annotations...
  • This language is broad, circular, and technically incoherent, so it takes some effort to parse appropriately. When the minimization procedures were disclosed in 2013, this language was interpreted by outside commentators to mean that NSA may keep all encrypted data that has been incidentally collected under Section 702 for at least as long as is necessary to decrypt that data. Is this the correct interpretation? I think so. It is important to realize that the language above isn’t just broad. It seems purposefully broad. The part regarding relevance seems to mirror the rationale NSA has used to justify its bulk phone records collection program. Under that program, all phone records were relevant because some of those records could be valuable to terrorism investigations and (allegedly) it isn’t possible to collect only those valuable records. This is the “to find a needle in a haystack, you first have to have the haystack” argument. The same argument could be applied to encrypted data and might be at play here.
  • This exception doesn’t just apply to encrypted data that might be relevant to a current foreign intelligence investigation. It also applies to cases in which the encrypted data is likely to become relevant to a future intelligence requirement. This is some remarkably generous language. It seems one could justify keeping any type of encrypted data under this exception. Upon close reading, it is difficult to avoid the conclusion that these procedures were written carefully to allow NSA to collect and keep a broad category of encrypted data under the rationale that this data might contain the communications of NSA targets and that it might be decrypted in the future. If NSA isn’t doing this today, then whoever wrote these minimization procedures wanted to at least ensure that NSA has the authority to do this tomorrow.
  • There are a few additional observations that are worth making regarding these nominally new retention guidelines and Section 702 collection. First, the concept of incidental collection as it has typically been used makes very little sense when applied to encrypted data. The way that NSA’s Section 702 upstream “about” collection is understood to work is that technology installed on the network does some sort of pattern match on Internet traffic; say that an NSA target uses example@gmail.com to communicate. NSA would then search the content of emails for references to example@gmail.com. This could notionally result in a lot of incidental collection of U.S. persons’ communications whenever the email that references example@gmail.com is somehow mixed together with emails that have nothing to do with the target. This type of incidental collection isn’t possible when the data is encrypted because it won’t be possible to search and find example@gmail.com in the body of an email. Instead, example@gmail.com will have been turned into some alternative, indecipherable string of bits on the network. Incidental collection shouldn’t occur because the pattern match can’t occur in the first place. This demonstrates that, when communications are encrypted, it will be much harder for NSA to search Internet traffic for a unique ID associated with a specific target. (A toy sketch of this pattern-match failure follows these annotations.)
  • This lends further credence to the conclusion above: rather than doing targeted collection against specific individuals, NSA is collecting, or plans to collect, a broad class of data that is encrypted. For example, NSA might collect all PGP encrypted emails or all Tor traffic. In those cases, NSA could search Internet traffic for patterns associated with specific types of communications, rather than specific individuals’ communications. This would technically meet the definition of incidental collection because such activity would result in the collection of communications of U.S. persons who aren’t the actual targets of surveillance. Collection of all Tor traffic would entail a lot of this “incidental” collection because the communications of NSA targets would be mixed with the communications of a large number of non-target U.S. persons. However, this “incidental” collection is inconsistent with how the term is typically used, which is to refer to over-collection resulting from targeted surveillance programs. If NSA were collecting all Tor traffic, that activity wouldn’t actually be targeted, and so any resulting over-collection wouldn’t actually be incidental. Moreover, greater use of encryption by the general public would result in an ever-growing amount of this type of incidental collection.
  • This type of collection would also be inconsistent with representations of Section 702 upstream collection that have been made to the public and to Congress. Intelligence officials have repeatedly suggested that search terms used as part of this program have a high degree of specificity. They have also argued that the program is an example of targeted rather than bulk collection. ODNI General Counsel Robert Litt, in a March 2014 meeting before the Privacy and Civil Liberties Oversight Board, stated that “there is either a misconception or a mischaracterization commonly repeated that Section 702 is a form of bulk collection. It is not bulk collection. It is targeted collection based on selectors such as telephone numbers or email addresses where there’s reason to believe that the selector is relevant to a foreign intelligence purpose.” The collection of Internet traffic based on patterns associated with types of communications would be bulk collection; more akin to NSA’s collection of phone records en masse than it is to targeted collection focused on specific individuals. Moreover, this type of collection would certainly fall within the definition of bulk collection provided just last week by the National Academy of Sciences: “collection in which a significant portion of the retained data pertains to identifiers that are not targets at the time of collection.”
  • The Section 702 minimization procedures, which will serve as a template for any new retention guidelines established for E.O. 12333 collection, create a large loophole for encrypted communications. With everything from email to Internet browsing to real-time communications moving to encrypted formats, an ever-growing amount of Internet traffic will fall within this loophole.
  •  
    Tucked into a budget authorization act in December without press notice. Section 309 (the Act is linked from the article) appears to be very broad authority for the NSA to intercept any form of telephone or other electronic information in bulk. There are far more exceptions from the five-year retention limitation than the encrypted information exception. When reading this, keep in mind that the U.S. intelligence community plays semantic games to obfuscate what it does. One of its word plays is that communications are not "collected" until an analyst looks at or listens to particular data, even though the data will be searched to find information countless times before it becomes "collected." That searching was the major basis for a decision by the U.S. District Court in Washington, D.C. that bulk collection of telephone communications was unconstitutional: Under the Fourth Amendment, a "search" or "seizure" requiring a judicial warrant occurs no later than when the information is intercepted. That case is on appeal, has been briefed and argued, and a decision could come any time now. Similar cases are pending in two other courts of appeals. Also, an important definition from the new Intelligence Authorization Act: "(a) DEFINITIONS.-In this section: (1) COVERED COMMUNICATION.-The term ''covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage."
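The pattern-matching mechanics described in the annotations above, and the reason encryption frustrates them, can be shown in a toy TypeScript (Node.js) sketch. The selector and message are invented, and real upstream filtering is far more sophisticated; this only demonstrates why a selector cannot be found in ciphertext.

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// Toy model of "about" collection: scan traffic for a target selector.
const selector = "example@gmail.com"; // invented target identifier
const message = "Meeting notes: please cc example@gmail.com on the reply.";

// Plaintext traffic: the pattern match succeeds, and anything bundled with
// the match - including bystanders' messages - would be swept up with it.
console.log(message.includes(selector)); // true

// Encrypted traffic: the same bytes become opaque ciphertext, so scanning
// the wire for the selector can no longer succeed.
const key = randomBytes(32);
const iv = randomBytes(16);
const cipher = createCipheriv("aes-256-ctr", key, iv);
const ciphertext = cipher.update(message, "utf8", "hex") + cipher.final("hex");
console.log(ciphertext.includes(selector)); // false
```

This is the technical root of the shift the annotations describe: once selectors stop matching, collection tends to move from targeting individuals toward sweeping in whole classes of encrypted traffic, such as all PGP or Tor flows.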
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 1 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me. At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division. At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies. (A hypothetical client sketch follows these annotations.)
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers.
  • ...2 more annotations...
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  •  
    But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
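As a rough picture of how an application might consume such a service, here is a hypothetical TypeScript client patterned on the article's description: an image frame is posted to the cloud API, which returns a serial number and emotion scores for each detected face. The endpoint URL, authentication header, and response schema below are assumptions for illustration only, not Microsoft's documented API.

```typescript
// Hypothetical response shape, modeled on the emotions the article lists.
interface FaceResult {
  faceId: string; // opaque per-face serial number, e.g. "b2ff"
  scores: {
    anger: number; contempt: number; disgust: number; fear: number;
    happiness: number; neutral: number; sadness: number; surprise: number;
  };
}

// Send one camera frame for analysis. Endpoint and header are placeholders.
async function analyzeCrowdFrame(frame: Blob, apiKey: string): Promise<FaceResult[]> {
  const response = await fetch("https://example.com/emotion/recognize", {
    method: "POST",
    headers: {
      "Content-Type": "application/octet-stream",
      "Api-Key": apiKey, // placeholder auth header, not a real service's
    },
    body: frame,
  });
  if (!response.ok) throw new Error(`Analysis failed: ${response.status}`);
  return response.json();
}
```

A dashboard like the one at the exhibit could call this every five seconds and track each faceId over time, which is exactly the persistent, pseudonymous profiling that concerns the privacy experts quoted in the annotations above.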
Gonzalo San Gil, PhD.

Ten Years in Jail For UK Internet Pirates: How the New Bill Reads - TorrentFreak - 0 views

  •  
    " By Andy on December 4, 2016 C: 102 News The Digital Economy Bill is currently at the report stage. It hasn't yet become law and could still be amended. However, as things stand those who upload any amount of infringing content to the Internet could face up to 10 years in jail. With the latest bill now published, we take a look at how file-sharers could be affected."
Paul Merrell

'Manhunting Timeline' Further Suggests US Pressured Countries to Prosecute WikiLeaks Ed... - 0 views

  • An entry in something the government calls a “Manhunting Timeline” suggests that the United States pressured officials of countries around the world to prosecute WikiLeaks editor-in-chief, Julian Assange, in 2010. The file—marked unclassified, revealed by National Security Agency whistleblower Edward Snowden and published by The Intercept—is dated August 2010. Under the headline, “United States, Australia, Great Britain, Germany, Iceland” – it states: The United States on 10 August urged other nations with forces in Afghanistan, including Australia, United Kingdom and Germany, to consider filing criminal charges against Julian Assange, founder of the rogue WikiLeaks Internet website and responsible for the unauthorized publication of over 70,000 classified documents covering the war in Afghanistan. The documents may have been provided to WikiLeaks by Army Private First Class Bradley Manning. The appeal exemplifies the start of an international effort to focus the legal element of national power upon non-state actor Assange and the human network that supports WikiLeaks. Another document—a top-secret page from an internal wiki—indicates there has been discussion in the NSA with the Threat Operations Center Oversight and Compliance (NOC) and Office of General Counsel (OGC) on the legality of designating WikiLeaks a “malicious foreign actor” and whether this would make it permissible to conduct surveillance on Americans accessing the website. “Can we treat a foreign server who stores or potentially disseminates leaked or stolen data on its server as a ‘malicious foreign actor’ for the purpose of targeting with no defeats?” (Examples: WikiLeaks, thepiratebay.org.) The NOC/OGC answered, “Let me get back to you.” (The page does not indicate if anyone ever got back to the NSA. And “defeats” essentially means protections.)
  • GCHQ, the NSA’s counterpart in the UK, had a program called “ANTICRISIS GIRL,” which could engage in “targeted website monitoring.” This means data of hundreds of users accessing a website, like WikiLeaks, could be collected. The IP addresses of readers and supporters could be monitored. The agency could even target the publisher if it had a public dropbox or submission system. NSA and GCHQ could also target the foreign “branches” of the hacktivist group, Anonymous. An answer to another question from the wiki entry involves the question, “Is it okay to query against a foreign server known to be malicious even if there is a possibility that US persons could be using it as well? Example: thepiratebay.org.” The NOC/OGC responded, “Okay to go after foreign servers which US people use also (with no defeats). But try to minimize to ‘post’ only for example to filter out non-pertinent information.” WikiLeaks is not an example in this question, however, if it was designated as a “malicious foreign actor,” then the NSA would do queries of American users.
  • Michael Ratner, a lawyer from the Center for Constitutional Rights (CCR) who represents WikiLeaks, said on “Democracy Now!” that this shows Assange has every reason to fear what would happen if he set foot outside of the embassy. The files show some of the extent to which the US and UK have tried to destroy WikiLeaks. CCR added in a statement, “These NSA documents should make people understand why Julian Assange was granted diplomatic asylum, why he must be given safe passage to Ecuador, and why he must keep himself out of the hands of the United States and apparently other countries as well. These revelations only corroborate the expectation that Julian Assange is on a US target list for prosecution under the archaic “Espionage Act,” for what is nothing more than publishing evidence of government misconduct.” “These documents demonstrate that the political persecution of WikiLeaks is very much alive,” Baltasar Garzón, the Spanish former judge who now represents the group, told The Intercept. “The paradox is that Julian Assange and the WikiLeaks organization are being treated as a threat instead of what they are: a journalist and a media organization that are exercising their fundamental right to receive and impart information in its original form, free from omission and censorship, free from partisan interests, free from economic or political pressure.”
Gonzalo San Gil, PhD.

Fix Copyright! | Help us Reform Copyright - 0 views

  •  
    "01 DYSFUNCTIONAL & NOT FIT FOR THE DIGITAL WORLD Copyright reform is needed to adapt to the digital world we live in. Under the current system everything tends to fall under copyright unless it is covered by a specific exception in the law. The trouble is that these exceptions are narrow, specific and technologically outdated: the list was written in 2001! This was well before YouTube and Facebook were created. As a result, everyday habits of online users could be considered illegal today. A blogger linking to copyrighted content, a meme based on a copyrighted image, a video with some footage from an existing movie or a song: all of that could create issues for the user that posted them."
Gonzalo San Gil, PhD.

Will the Italian Presidency of the EU Council Support Net Neutrality? | La Quadrature d... - 0 views

  •  
    "Submitted on 9 May 2014 - 16:11 Kroes Telecoms Package Net neutrality press release Printer-friendly version Send by email Français Paris, 9 May 2014 - The voice of the Italian presidency of the Council of the European Union could mark a real departure from the usual government talk chastising the vote on Net Neutrality adopted by the European Parliament! According to the information portal Euractiv, the Italian presidency could support the text voted by the Members of the European Parliament and be ready to defend it in front of the European governments and telecommunications industry. As the publication of the guidance report of the Council of the European Union about the Net Neutrality (scheduled for 5 or 6 of June) nears, La Quadrature du Net welcomes this encouraging position and asks European citizens to invite their governments to follow this example."
Gonzalo San Gil, PhD.

Outernet Product Test Location - 0 views

  •  
    [https://www.outernet.is/] "Please tell us where you think Outernet should be switched on first. Remember, Outernet plans to eventually make service available everywhere and always for free. In addition to thinking about what might be a preference for your own local Outernet service, also consider the need to make Outernet as effective as possible from the outset. Think about the greatest impact Outernet could have in radical change as well as how many hypotheses about Outernet could be tested and what aspects of information freedom can be altered. The product test will take place over Ku band and come online in late summer 2014."
Gary Edwards

Two Microsofts: Mulling an alternate reality | ZDNet - 1 views

  • Judge Jackson had it right. And the Court of Appeals? Not so much
  • Judge Jackson is an American hero and news of his passing thumped me hard. His ruling against Microsoft and the subsequent overturn of that ruling resulted, IMHO, in two extraordinary directions that changed the world. Sure, the what-if game is interesting, but the reality itself is stunning enough. Of course, Judge Jackson sought to break the monopoly. The US Court of Appeals overturn resulted in the monopoly remaining intact, but the Internet remaining free and open. Judge Jackson's breakup plan had a good shot at achieving both a breakup of the monopoly and a free and open Internet. I admit though that at the time I did not favor the Judge's plan. And I actually did submit a proposal based on Microsoft having to both support the WiNE project and provide a complete port to WiNE to any software provider requesting one. I wanted to break the monopolist's hold on the Windows Productivity Environment, and free the hundreds of millions of investment dollars and the years of development time forever trapped on that platform. For me, it was the productivity platform that had to be broken.
  • I assume the good Judge thought that separating the Windows OS from Microsoft Office / Applications would force the OS to open up the secret APIs even as the OS continued to evolve. Maybe. But a full disclosure of the APIs, coupled with the community-service "port to WiNE" requirement, might have sped up the process. Incredibly, the "Undocumented Windows Secrets" industry continues to thrive, and the legendary Andrew Schulman's number is still at the top of Silicon Valley legal profession speed dials. http://goo.gl/0UGe8 Oh well. The Court of Appeals stopped the breakup, leaving the Windows Productivity Platform intact. Microsoft continues to own the "client" in "Client/Server" computing. Although Microsoft was temporarily stopped from leveraging their desktop monopoly into iron-fisted control and dominance of the Internet, I think what we're watching today with the Cloud is Judge Jackson's worst nightmare. And mine too. A great transition is now underway, as businesses and enterprises begin the move from legacy client/server business systems and processes to a newly emerging Cloud Productivity Platform. In this great transition, Microsoft holds an inside straight. They have all the aces because they own the legacy desktop productivity platform and can control the transition to the Cloud. No doubt this transition is going to happen. And it will severely disrupt and change Microsoft's profit formula. But if the Redmond reprobate can provide a "value added" transition of legacy business systems and processes, and direct these new systems to the Microsoft Cloud, the profits will be immense.
  • Judge Jackson sought to break the ability of Microsoft to "leverage" their existing monopoly into the Internet, and his plan was overturned and replaced by one based on judicial oversight. Microsoft got a slap on the wrist from the Court of Appeals, but was wailed on with lawsuits from the hundreds of parties injured by their rampant criminality. Some put the price of that criminality as high as $14 billion in settlements. Plus, the shareholders forced Chairman Bill to resign. At the end of the day though, Chairman Bill was right. Keeping the monopoly intact was worth whatever penalty Microsoft was forced to pay. He knew that even the judicial oversight would end one day. Which it did. And now his company is ready to go for it all by leveraging and controlling the great productivity transition. No business wants to be hostage to a cold-hearted monopolist. But there is a huge difference between a non-disruptive, cost-effective, process-by-process value-added transition to a Cloud Productivity Platform and the very disruptive and costly "rip-out-and-replace" transition offered by Google, ZOHO, Box, SalesForce and other Cloud Productivity contenders. Microsoft, and only Microsoft, can offer the value-added transition path. If they get the Cloud even halfway right, they will own business productivity far into the future. Rest in peace, Judge Jackson. Your efforts were heroic and will be remembered as such. ~ge~
  •  
    Comments on the latest SVN article mulling the effects of Judge Thomas Penfield Jackson's antitrust ruling and proposed breakup of Microsoft. comment: "Chinese Wall" Ummm, there was a Chinese Wall between the Microsoft OS and the MS Applications layer. At least that's what Chairman Bill promised developers at a 1990 OS/2-Windows Conference I attended. It was a developers' luncheon, hosted by Microsoft, with Chairman Bill speaking to about 40 developers with applications designed to run on the then soon-to-be-released Windows 3.0. In his remarks, the Chairman described his vision of commoditizing the personal computer market through an open hardware-reference platform on the one side of the Windows OS, and provisioning an open application developers' layer on the other, using open and totally transparent APIs. Of course the question came up concerning the obvious advantage Microsoft applications would have. Chairman Bill answered the question by describing the Chinese Wall that existed between Microsoft's OS and Apps development departments. He promised that OS APIs would be developed privately, separate from the Apps department, and publicly disclosed to ALL developers at the same time. Oh yeah. There was lots of anti-IBM "evil empire" stuff too :) Of course we now know this was a line of crap. Microsoft Apps was discovered to have been using undocumented and secret Windows APIs. http://goo.gl/0UGe8. Microsoft Apps had a distinct advantage over the competition, and eventually the entire Windows Productivity Platform became dependent on the MSOffice core. The company I worked for back then, Pyramid Data, had the first contact management application for Windows: PowerLeads. Every Friday night we would release bug fixes and improvements using Wildcat BBS. By Monday morning we would be slammed with calls from users complaining that they had downloaded the Friday night patch, and now some other application would not load or function properly. Eventually we tracked th…
Gary Edwards

Cisco buys PostPath: WebEx to compete with Exchange, Outlook, Office? | Between the Lin... - 0 views

  •  
    Once you add in better email and calendar support, WebEx could become more appealing to the enterprise. PostPath has a Linux-based collaboration system built on an AJAX client that doesn't need a browser. Cisco added that the company's strategy is to develop "an integrated collaboration platform designed for how we work today and into the future." And better yet: PostPath's pitch is that it is an Exchange alternative and a "Linux-based corporate email server." Let's read between the lines: Doesn't this sound a lot like an end-run around Microsoft Office, Outlook and Exchange, just like Google is trying to do with Google Apps? Cisco probably has no desire to compete head-on with Microsoft (or at least admit it), but the company obviously sees something here, and coupling PostPath with WebEx could be a threat to Redmond. In fact, Cisco could be a bigger threat to Microsoft in the enterprise than Google. Why? Cisco already sells enterprises a lot of stuff. Isn't a collaboration suite really just an extension of the network?
Gary Edwards

What Oracle Sees in Sun Microsystems | NewsFactor Network - 0 views

  • Citigroup's Thill estimates Oracle could cut between 40 percent and 70 percent of Sun's roughly 33,000 employees. Excluding restructuring costs, Oracle expects Sun to add $1.5 billion in profit during the first year after the acquisition closes this summer, and another $2 billion the following year. Oracle executives declined to say how many jobs would be eliminated.
  •  
    Good article from Aaron Ricadela. The focus is on Java, Sun's hardware/server business, and Oracle's business objectives. No mention of OpenOffice or ODF though. There is however an interesting quote from IBM regarding the battle between Java and Microsoft .NET. Also, no mention of an OpenOffice-Java Foundation that would truly open source these technologies.

    When we were involved with the Massachusetts Pilot Study and ODF Plug-in proposals, IBM and Oracle led the effort to open source the da Vinci plug-in. They put together a group of vendors known as "the benefactors", with the objective of completing work on da Vinci while forming a patent pool and open source foundation for all OpenOffice and da Vinci source. This idea was based on the Eclipse model.

    One of the more interesting ideas coming out of the IBM-Oracle-led "benefactors" was the idea of breaking OpenOffice into components that could then be re-purposed by the Eclipse community of developers. The da Vinci plug-in was to be the integration bridge between Eclipse and the Microsoft Office productivity environment. Very cool. And no doubt IBM and Oracle were in sync on this in 2006. The problem was that they couldn't convince Sun to go along with the plan.

    Sun of course owned both Java and OpenOffice, and thought they could build a better ODF plug-in for MSOffice (and own that too). A year later, Sun actually did produce an ODF plug-in for MSOffice. It was sent to Massachusetts on July 3rd, 2007, and tested against the same set of 150 critical documents da Vinci had to successfully convert without breaking. The next day, July 4th, Massachusetts announced their decision that they would approve the use of both ODF and OOXML! The much-hoped-for exclusive ODF requirement failed in Massachusetts exactly because Sun insisted on their way or the highway.

    Let's hope Oracle can right the ship and get OpenOffice-ODF-Java back on track.

    "......To gain
Paul Merrell

For sale: Systems that can secretly track where cellphone users go around the globe - T... - 0 views

  • Makers of surveillance systems are offering governments across the world the ability to track the movements of almost anybody who carries a cellphone, whether they are blocks away or on another continent. The technology works by exploiting an essential fact of all cellular networks: They must keep detailed, up-to-the-minute records on the locations of their customers to deliver calls and other services to them. Surveillance systems are secretly collecting these records to map people’s travels over days, weeks or longer, according to company marketing documents and experts in surveillance technology.
  • The world’s most powerful intelligence services, such as the National Security Agency and Britain’s GCHQ, long have used cellphone data to track targets around the globe. But experts say these new systems allow less technically advanced governments to track people in any nation — including the United States — with relative ease and precision.
  • It is unclear which governments have acquired these tracking systems, but one industry official, speaking on the condition of anonymity to share sensitive trade information, said that dozens of countries have bought or leased such technology in recent years. This rapid spread underscores how the burgeoning, multibillion-dollar surveillance industry makes advanced spying technology available worldwide. “Any tin-pot dictator with enough money to buy the system could spy on people anywhere in the world,” said Eric King, deputy director of Privacy International, a London-based activist group that warns about the abuse of surveillance technology. “This is a huge problem.”
  • Security experts say hackers, sophisticated criminal gangs and nations under sanctions also could use this tracking technology, which operates in a legal gray area. It is illegal in many countries to track people without their consent or a court order, but there is no clear international legal standard for secretly tracking people in other countries, nor is there a global entity with the authority to police potential abuses.
  • Tracking systems that access carrier location databases are unusual in their ability to allow virtually any government to track people across borders, with any type of cellular phone, across a wide range of carriers — without the carriers even knowing. These systems also can be used in tandem with other technologies that, when the general location of a person is already known, can intercept calls and Internet traffic, activate microphones, and access contact lists, photos and other documents. Companies that make and sell surveillance technology seek to limit public information about their systems’ capabilities and client lists, typically marketing their technology directly to law enforcement and intelligence services through international conferences that are closed to journalists and other members of the public.
  • Yet marketing documents obtained by The Washington Post show that companies are offering powerful systems that are designed to evade detection while plotting movements of surveillance targets on computerized maps. The documents claim system success rates of more than 70 percent. A 24-page marketing brochure for SkyLock, a cellular tracking system sold by Verint, a maker of analytics systems based in Melville, N.Y., carries the subtitle “Locate. Track. Manipulate.” The document, dated January 2013 and labeled “Commercially Confidential,” says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets.”
  • (Privacy International has collected several marketing brochures on cellular surveillance systems, including one that refers briefly to SkyLock, and posted them on its Web site. The 24-page SkyLock brochure and other material was independently provided to The Post by people concerned that such systems are being abused.)
  • Verint, which also has substantial operations in Israel, declined to comment for this story. It says in the marketing brochure that it does not use SkyLock against U.S. or Israeli phones, which could violate national laws. But several similar systems, marketed in recent years by companies based in Switzerland, Ukraine and elsewhere, likely are free of such limitations.
  • The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data. The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems. (A schematic of what such an SS7 location query carries follows these notes.)
  • Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers — also known by one popular trade name, StingRay — can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (The term “IMSI” refers to a unique identifying code on a cellular phone.)
  • Verint can install SkyLock on the networks of cellular carriers if they are cooperative — something that telecommunications experts say is common in countries where carriers have close relationships with their national governments. Verint also has its own “worldwide SS7 hubs” that “are spread in various locations around the world,” says the brochure. It does not list prices for the services, though it says that Verint charges more for the ability to track targets in many far-flung countries, as opposed to only a few nearby ones. Among the most appealing features of the system, the brochure says, is its ability to sidestep the cellular operators that sometimes protect their users’ personal information by refusing government requests or insisting on formal court orders before releasing information.
  • Another company, Defentek, markets a similar system called Infiltrator Global Real-Time Tracking System on its Web site, claiming to “locate and track any phone number in the world.” The site adds: “It is a strategic solution that infiltrates and is undetected and unknown by the network, carrier, or the target.”
  •  
    The Verint company has very close ties to the Israeli government. Its former parent company, Comverse, was heavily subsidized by Israel, and the bulk of its manufacturing and code development was done in Israel. See https://en.wikipedia.org/wiki/Comverse_Technology "In December 2001, a Fox News report raised the concern that wiretapping equipment provided by Comverse Infosys to the U.S. government for electronic eavesdropping may have been vulnerable, as these systems allegedly had a back door through which the wiretaps could be intercepted by unauthorized parties.[55] Fox News reporter Carl Cameron said there was no reason to believe the Israeli government was implicated, but that "a classified top-secret investigation is underway".[55] A March 2002 story by Le Monde recapped the Fox report and concluded: "Comverse is suspected of having introduced into its systems of the 'catch gates' in order to 'intercept, record and store' these wire-taps. This hardware would render the 'listener' himself 'listened to'."[56] Fox News did not pursue the allegations, and in the years since, there have been no legal or commercial actions of any type taken against Comverse by the FBI or any other branch of the US Government related to data access and security issues. While no real evidence has been presented against Comverse or Verint, the allegations have become a favorite topic of conspiracy theorists.[57] By 2005, the company had $959 million in sales and employed over 5,000 people, of whom about half were located in Israel.[16]" Verint is also the company that got the Dept. of Homeland Security contract to provide and install an electronic and video surveillance system across the entire U.S. border with Mexico. One need not be much of a conspiracy theorist to have concerns about Verint's likely interactions and data sharing with the NSA and its Israeli equivalent, Unit 8200.
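To make the mechanism concrete, the sketch below models the shape of one of these SS7 location queries, patterned on the publicly documented MAP Any-Time-Interrogation (ATI) operation that security researchers have described. Every class and field name is invented for illustration; it builds plain data records and drives no real signaling library:

```python
# Schematic model of an SS7 location query and its answer. All class
# and field names are invented for illustration; no real SS7 stack or
# signaling library is involved.
from dataclasses import dataclass

@dataclass
class AtiRequest:
    """Rough shape of a MAP Any-Time-Interrogation request."""
    msisdn: str                                # target's phone number (E.164)
    requested_info: tuple = ("locationInformation",)
    origin_global_title: str = "882600000001" # made-up sender address; any of
                                               # the thousands of SS7
                                               # participants can originate one

@dataclass
class AtiResponse:
    """What the target's home network can answer with."""
    mcc: int      # mobile country code -> country
    mnc: int      # mobile network code -> carrier
    lac: int      # location area code  -> region or city
    cell_id: int  # serving cell        -> tower, i.e. neighborhood level

def describe(resp: AtiResponse) -> str:
    # Combined with a cell-tower database, these four numbers resolve to
    # a physical position, with no cooperation from the handset.
    return (f"network {resp.mcc}-{resp.mnc}, "
            f"area {resp.lac}, cell {resp.cell_id}")

if __name__ == "__main__":
    query = AtiRequest(msisdn="+15555550123")
    print("query:", query)
    print("answer:", describe(AtiResponse(mcc=310, mnc=260,
                                          lac=4711, cell_id=20033)))
```

The point of the sketch is the trust model the article describes: the query itself is tiny, and nothing in the network checks whether its sender has any business asking.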
Gonzalo San Gil, PhD.

How the rise of open source could improve software security - 0 views

  •  
    "Openness by itself does not yield more secure code, but a new dependence on open source by major software players could ensure more rigorous scrutiny"
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends. (The transform sketch following these notes shows class values of exactly this kind selecting distinct output styles.)
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges. (A miniature version of this transform, with simplified stand-ins for the ICML vocabulary, is sketched after these notes.)
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy. (The second sketch after these notes builds such a wrapper by hand, file by file.)
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is the browser-native XML vocabulary, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to…
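To make the transform step above concrete, here is a minimal sketch of the XHTML-to-ICML conversion. It substitutes Python's lxml for xsltproc (both run ordinary XSLT 1.0), and the Story/ParagraphStyleRange output vocabulary and style names are simplified stand-ins for InDesign's actual ICML rather than an excerpt of the SFU script; the sample chapter content is likewise invented. Note how the class attribute on an XHTML paragraph, the same simple extensibility discussed earlier, is what selects the InDesign paragraph style:

```python
# Minimal sketch of the XHTML -> ICML transform step, using lxml's
# built-in XSLT 1.0 engine in place of xsltproc. Output element and
# style names are simplified stand-ins for InDesign's real ICML.
from lxml import etree

STYLESHEET = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <Story>
      <xsl:apply-templates select="//x:body/x:p"/>
    </Story>
  </xsl:template>
  <!-- Each XHTML paragraph becomes a styled paragraph range; the
       class attribute, where present, selects the house style. -->
  <xsl:template match="x:p">
    <ParagraphStyleRange>
      <xsl:attribute name="AppliedParagraphStyle">
        <xsl:choose>
          <xsl:when test="@class='epigraph'">ParagraphStyle/Epigraph</xsl:when>
          <xsl:when test="@class='list-item'">ParagraphStyle/Bullet</xsl:when>
          <xsl:otherwise>ParagraphStyle/Body</xsl:otherwise>
        </xsl:choose>
      </xsl:attribute>
      <Content><xsl:value-of select="."/></Content>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
""")

CHAPTER = etree.XML(b"""
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p class="epigraph">The Web is the platform.</p>
    <p>Content managed online lives in one central location.</p>
    <p class="list-item">Single-source publishing</p>
  </body>
</html>
""")

to_icml = etree.XSLT(STYLESHEET)
print(to_icml(CHAPTER))  # the .icml payload, ready to be "placed"
```

Each template rule pairs one XHTML structure with one named style in the InDesign template, which is why a complete, working script of this kind can stay under 500 lines.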
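Likewise, since an ePub is just XHTML in a zip wrapper, the wrapper is small enough to build by hand. The sketch below assembles a one-chapter EPUB 2 file with nothing but Python's standard zipfile module; the title, identifier and file names are placeholders, and it omits the NCX table of contents that strict EPUB 2 validators expect, so read it as an anatomy lesson rather than a replacement for a tool like eCub:

```python
# A one-chapter ePub assembled from scratch: an XHTML file in a zip
# wrapper plus two small pieces of packaging metadata.
import zipfile

CHAPTER = """<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Chapter 1</title></head>
<body><p>An ePub book is a Web page in a wrapper.</p></body>
</html>"""

CONTAINER = """<?xml version="1.0"?>
<container version="1.0"
    xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
        media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0"
    unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Demo Book</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">demo-0001</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml"
        media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

with zipfile.ZipFile("demo.epub", "w") as zf:
    # The mimetype entry must come first and must be stored uncompressed.
    zf.writestr("mimetype", "application/epub+zip",
                compress_type=zipfile.ZIP_STORED)
    zf.writestr("META-INF/container.xml", CONTAINER,
                compress_type=zipfile.ZIP_DEFLATED)
    zf.writestr("OEBPS/content.opf", OPF,
                compress_type=zipfile.ZIP_DEFLATED)
    zf.writestr("OEBPS/chapter1.xhtml", CHAPTER,
                compress_type=zipfile.ZIP_DEFLATED)
```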
Paul Merrell

Data Transfer Pact Between U.S. and Europe Is Ruled Invalid - The New York Times - 0 views

  • Europe’s highest court on Tuesday struck down an international agreement that allowed companies to move digital information like people’s web search histories and social media updates between the European Union and the United States. The decision left the international operations of companies like Google and Facebook in a sort of legal limbo even as their services continued working as usual. The ruling, by the European Court of Justice, said the so-called safe harbor agreement was flawed because it allowed American government authorities to gain routine access to Europeans’ online information. The court said leaks from Edward J. Snowden, the former contractor for the National Security Agency, made it clear that American intelligence agencies had almost unfettered access to the data, infringing on Europeans’ rights to privacy. The court said data protection regulators in each of the European Union’s 28 countries should have oversight over how companies collect and use online information of their countries’ citizens. European countries have widely varying stances towards privacy.
  • Data protection advocates hailed the ruling. Industry executives and trade groups, though, said the decision left a huge amount of uncertainty for big companies, many of which rely on the easy flow of data for lucrative businesses like online advertising. They called on the European Commission to complete a new safe harbor agreement with the United States, a deal that has been negotiated for more than two years and could limit the fallout from the court’s decision.
  • Some European officials and many of the big technology companies, including Facebook and Microsoft, tried to play down the impact of the ruling. The companies kept their services running, saying that other agreements with the European Union should provide an adequate legal foundation. But those other agreements are now expected to be examined and questioned by some of Europe’s national privacy watchdogs. The potential inquiries could make it hard for companies to transfer Europeans’ information overseas under the current data arrangements. And the ruling appeared to leave smaller companies with fewer legal resources vulnerable to potential privacy violations.
  • ...3 more annotations...
  • “We can’t assume that anything is now safe,” said Brian Hengesbaugh, a privacy lawyer with Baker & McKenzie in Chicago who helped to negotiate the original safe harbor agreement. “The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.” At issue is the sort of personal data that people create when they post something on Facebook or other social media; when they do web searches on Google; or when they order products or buy movies from Amazon or Apple. Such data is hugely valuable to companies, which use it in a broad range of ways, including tailoring advertisements to individuals and promoting products or services based on users’ online activities. The data-transfer ruling does not apply solely to tech companies. It also affects any organization with international operations, such as when a company has employees in more than one region and needs to transfer payroll information or allow workers to manage their employee benefits online.
  • But it was unclear how bulletproof those treaties would be under the new ruling, which cannot be appealed and went into effect immediately. Europe’s privacy watchdogs, for example, remain divided over how to police American tech companies. France and Germany, where companies like Facebook and Google have huge numbers of users and have already been subject to other privacy rulings, are among the countries that have sought more aggressive protections for their citizens’ personal data. Britain and Ireland, among others, have been supportive of Safe Harbor, and many large American tech companies have set up overseas headquarters in Ireland.
  • “For those who are willing to take on big companies, this ruling will have empowered them to act,” said Ot van Daalen, a Dutch privacy lawyer at Project Moore, who has been a vocal advocate for stricter data protection rules. The safe harbor agreement has been in place since 2000, enabling American tech companies to compile data generated by their European clients in web searches, social media posts and other online activities.
  •  
    Another take on it from EFF: https://www.eff.org/deeplinks/2015/10/europes-court-justice-nsa-surveilance Expected since the Court's Advocate General released an opinion last week, presaging today's opinion. Very big bucks are involved behind the scenes because removing U.S.-based internet companies from the scene in the E.U. would pave the way for growth of E.U.-based companies. The way forward for the U.S. companies is even more dicey because of a case now pending in the U.S. The Second U.S. Circuit Court of Appeals is about to decide a related case in which Microsoft was ordered by the lower court to produce email records stored on a server in Ireland. Should the Second Circuit uphold the order and the Supreme Court deny review, then under the principles announced today by the Court in the E.U., no U.S.-based company could ever be allowed to have "possession, custody, or control" of the data of E.U. citizens. You can bet that the E.U. case will weigh heavily in the Second Circuit's deliberations. The E.U. decision is far and away the largest legal event yet flowing out of the Edward Snowden disclosures, tectonic in scale. Up to now, Congress has succeeded in confining all NSA reforms to apply only to U.S. citizens. But now the large U.S. internet companies (Google, Facebook, Microsoft, Dropbox, etc.) face the loss of all Europe as a market. Congress *will* be forced by their lobbying power to extend privacy protections to "non-U.S. persons." Thank you again, Edward Snowden.
Gonzalo San Gil, PhD.

How the US could block the Comcast/Time Warner Cable merger | Ars Technica - 0 views

  •  
    "Comcast's $45.2 billion acquisition of Time Warner Cable (TWC) is expected to be thoroughly scrutinized by the Department of Justice (DOJ) and Federal Communications Commission (FCC), and it could be blocked if the agencies decide the merger would significantly reduce competition and harm consumers"
Gonzalo San Gil, PhD.

Relaxing "Neutrality" Principles Could Unlock Online Innovation | MIT Technology Review - 1 views

  •  
    "Letting go of an obsession with net neutrality could free technologists to make online services even better. By George Anders " [ # ! The '#Trap' remains... # ! ... as available #bandwidth continue to be as a matter of the # ! #Money one can #pay and, unless #Providers seriously #engage # ! in #price # ! #lowering -and #QoS guaranteeing, the '#DigitalDivide' # ! will #remain #widening... ]