Open Web: Group items tagged "archives"

Paul Merrell

Archiveteam - 0 views

  • HISTORY IS OUR FUTURE. And we've been trashing our history. Archive Team is a loose collective of rogue archivists, programmers, writers and loudmouths dedicated to saving our digital heritage. Since 2009 this variant force of nature has caught wind of shutdowns, shutoffs, mergers, and plain old deletions - and done our best to save the history before it's lost forever. Along the way, we've gotten attention, resistance, press and discussion, but most importantly, we've gotten the message out: IT DOESN'T HAVE TO BE THIS WAY. This website is intended to be an offloading point and information depot for a number of archiving projects, all related to saving websites or data that is in danger of being lost. Besides serving as a hub for team-based pulling down and mirroring of data, this site will provide advice on managing your own data and rescuing it from the brink of destruction. Currently Active Projects (Get Involved Here!). Archive Team recruiting: want to code for Archive Team? Here's a starting point.
  • Who We Are and how you can join our cause! Deathwatch is where we keep track of sites that are sickly, dying or dead. Fire Drill is where we keep track of sites that seem fine but a lot depends on them. Projects is a comprehensive list of AT endeavors. Philosophy describes the ideas underpinning our work. Some Starting Points The Introduction is an overview of basic archiving methods. Why Back Up? Because they don't care about you. Back Up your Facebook Data Learn how to liberate your personal data from Facebook. Software will assist you in regaining control of your data by providing tools for information backup, archiving and distribution. Formats will familiarise you with the various data formats, and how to ensure your files will be readable in the future. Storage Media is about where to get it, what to get, and how to use it. Recommended Reading links to others sites for further information. Frequently Asked Questions is where we answer common questions.
  •  
    The Archive Team Warrior is a virtual archiving appliance. You can run it to help with the ArchiveTeam archiving efforts. It will download sites and upload them to our archive - and it's really easy to do! The warrior is a virtual machine, so there is no risk to your computer. The warrior will only use your bandwidth and some of your disk space. It will get tasks from and report progress to the Tracker. Basic usage: the warrior runs on Windows, OS X and Linux using a virtual machine. You'll need one of VirtualBox (recommended) or VMware Workstation/Player (free/gratis for personal use); see below for alternative virtual machines. Archive Team partners with and contributes lots of archives to the Wayback Machine. Here's how you can help by contributing some bandwidth if you run an always-on box with an internet connection.
Paul Merrell

Most Agencies Falling Short on Mandate for Online Records - 0 views

  • Nearly 20 years after Congress passed the Electronic Freedom of Information Act Amendments (E-FOIA), only 40 percent of agencies have followed the law's instruction for systematic posting of records released through FOIA in their electronic reading rooms, according to a new FOIA Audit released today by the National Security Archive at www.nsarchive.org to mark Sunshine Week. The Archive team audited all federal agencies with Chief FOIA Officers as well as agency components that handle more than 500 FOIA requests a year — 165 federal offices in all — and found only 67 with online libraries populated with significant numbers of released FOIA documents and regularly updated.
  • Congress called on agencies to embrace disclosure and the digital era nearly two decades ago, with the passage of the 1996 "E-FOIA" amendments. The law mandated that agencies post key sets of records online, provide citizens with detailed guidance on making FOIA requests, and use new information technology to post online proactively records of significant public interest, including those already processed in response to FOIA requests and "likely to become the subject of subsequent requests." Congress believed then, and openness advocates know now, that this kind of proactive disclosure, publishing online the results of FOIA requests as well as agency records that might be requested in the future, is the only tenable solution to FOIA backlogs and delays. Thus the National Security Archive chose to focus on the e-reading rooms of agencies in its latest audit. Even though the majority of federal agencies have not yet embraced proactive disclosure of their FOIA releases, the Archive E-FOIA Audit did find that some real "E-Stars" exist within the federal government, serving as examples to lagging agencies that technology can be harnessed to create state-of-the art FOIA platforms. Unfortunately, our audit also found "E-Delinquents" whose abysmal web performance recalls the teletype era.
  • E-Delinquents include the Office of Science and Technology Policy at the White House, which, despite being mandated to advise the President on technology policy, does not embrace 21st century practices by posting any frequently requested records online. Another E-Delinquent, the Drug Enforcement Administration, insults its website's viewers by claiming that it "does not maintain records appropriate for FOIA Library at this time."
  • ...9 more annotations...
  • "The presumption of openness requires the presumption of posting," said Archive director Tom Blanton. "For the new generation, if it's not online, it does not exist." The National Security Archive has conducted fourteen FOIA Audits since 2002. Modeled after the California Sunshine Survey and subsequent state "FOI Audits," the Archive's FOIA Audits use open-government laws to test whether or not agencies are obeying those same laws. Recommendations from previous Archive FOIA Audits have led directly to laws and executive orders which have: set explicit customer service guidelines, mandated FOIA backlog reduction, assigned individualized FOIA tracking numbers, forced agencies to report the average number of days needed to process requests, and revealed the (often embarrassing) ages of the oldest pending FOIA requests. The surveys include:
  • The federal government has made some progress moving into the digital era. The National Security Archive's last E-FOIA Audit in 2007, " File Not Found," reported that only one in five federal agencies had put online all of the specific requirements mentioned in the E-FOIA amendments, such as guidance on making requests, contact information, and processing regulations. The new E-FOIA Audit finds the number of agencies that have checked those boxes is now much higher — 100 out of 165 — though many (66 in 165) have posted just the bare minimum, especially when posting FOIA responses. An additional 33 agencies even now do not post these types of records at all, clearly thwarting the law's intent.
  • The FOIAonline Members (Department of Commerce, Environmental Protection Agency, Federal Labor Relations Authority, Merit Systems Protection Board, National Archives and Records Administration, Pension Benefit Guaranty Corporation, Department of the Navy, General Services Administration, Small Business Administration, U.S. Citizenship and Immigration Services, and Federal Communications Commission) won their "E-Star" by making past requests and releases searchable via FOIAonline. FOIAonline also allows users to submit their FOIA requests digitally.
  • THE E-DELINQUENTS: WORST OVERALL AGENCIES In alphabetical order
  • Key Findings
  • Excuses Agencies Give for Poor E-Performance
  • Justice Department guidance undermines the statute. Currently, the FOIA stipulates that documents "likely to become the subject of subsequent requests" must be posted by agencies somewhere in their electronic reading rooms. The Department of Justice's Office of Information Policy defines these records as "frequently requested records… or those which have been released three or more times to FOIA requesters." Of course, it is time-consuming for agencies to develop a system that keeps track of how often a record has been released, which is in part why agencies rarely do so and are often in breach of the law. Troublingly, both the current House and Senate FOIA bills include language that codifies the instructions from the Department of Justice. The National Security Archive believes the addition of this "three or more times" language actually harms the intent of the Freedom of Information Act as it will give agencies an easy excuse ("not requested three times yet!") not to proactively post documents that agency FOIA offices have already spent time, money, and energy processing. We have formally suggested alternate language requiring that agencies generally post "all records, regardless of form or format that have been released in response to a FOIA request."
  • Disabilities Compliance. Despite the E-FOIA Act, many government agencies do not embrace the idea of posting their FOIA responses online. The most common reason agencies give is that it is difficult to post documents in a format that complies with the Americans with Disabilities Act, also referred to as being "508 compliant," and the 1998 Amendments to the Rehabilitation Act that require federal agencies "to make their electronic and information technology (EIT) accessible to people with disabilities." E-Star agencies, however, have proven that 508 compliance is no barrier when the agency has a will to post. All documents posted on FOIAonline are 508 compliant, as are the documents posted by the Department of Defense and the Department of State. In fact, every document created electronically by the US government after 1998 should already be 508 compliant. Even old paper records that are scanned to be processed through FOIA can be made 508 compliant with just a few clicks in Adobe Acrobat, according to this Department of Homeland Security guide (essentially OCRing the text, and including information about where non-textual fields appear). Even if agencies are insistent it is too difficult to OCR older documents that were scanned from paper, they cannot use that excuse with digital records.
  • Privacy. Another commonly articulated concern about posting FOIA releases online is that doing so could inadvertently disclose private information from "first person" FOIA requests. This is a valid concern, and this subset of FOIA requests should not be posted online. (The Justice Department identified "first party" requester rights in 1989. Essentially agencies cannot use the b(6) privacy exemption to redact information if a person requests it for him or herself. An example of a "first person" FOIA would be a person's request for his own immigration file.) Cost and Waste of Resources. There is also a belief that there is little public interest in the majority of FOIA requests processed, and hence it is a waste of resources to post them. This thinking runs counter to the governing principle of the Freedom of Information Act: that government information belongs to US citizens, not US agencies. As such, the reason that a person requests information is immaterial as the agency processes the request; the "interest factor" of a document should also be immaterial when an agency is required to post it online. Some think that posting FOIA releases online is not cost effective. In fact, the opposite is true. It's not cost effective to spend tens (or hundreds) of person hours to search for, review, and redact FOIA requests only to mail it to the requester and have them slip it into their desk drawer and forget about it. That is a waste of resources. The released document should be posted online for any interested party to utilize. This will only become easier as FOIA processing systems evolve to automatically post the documents they track. The State Department earned its "E-Star" status demonstrating this very principle, and spent no new funds and did not hire contractors to build its Electronic Reading Room, instead it built a self-sustaining platform that will save the agency time and money going forward.
Paul Merrell

This project aims to make '404 not found' pages a thing of the past - 0 views

  • The Internet is always changing. Sites are rising and falling, content is deleted, and bad URLs can lead to '404 Not Found' errors that are as helpful as a brick wall. A new project proposes to do away with dead 404 errors by implementing new HTML code that will help access prior versions of hyperlinked content. With any luck, that means that you’ll never have to run into a dead link again. The “404-No-More” project is backed by a formidable coalition including members from organizations like the Harvard Library Innovation Lab, Los Alamos National Laboratory, Old Dominion University, and the Berkman Center for Internet & Society. Part of the Knight News Challenge, which seeks to strengthen the Internet for free expression and innovation through a variety of initiatives, 404-No-More recently reached the semifinal stage. The project aims to cure so-called link rot, the process by which hyperlinks become useless over time because they point to addresses that are no longer available. If implemented, websites such as Wikipedia and other reference documents would be vastly improved. The new feature would also give Web authors a way to provide links that contain both archived copies of content and specific dates of reference, the sort of information that diligent readers have to hunt down on a website like Archive.org.
  • While it may sound trivial, link rot can actually have real ramifications. Nearly 50 percent of the hyperlinks in Supreme Court decisions no longer work, a 2013 study revealed. Losing footnotes and citations in landmark legal decisions can mean losing crucial information and context about the laws that govern us. The same study found that 70 percent of URLs within the Harvard Law Review and similar journals didn’t link to the originally cited information, considered a serious loss surrounding the discussion of our laws. The project’s proponents have come up with more potential uses as well. Activists fighting censorship will have an easier time combatting government takedowns, for instance. Journalists will be much more capable of researching dynamic Web pages. “If every hyperlink was annotated with a publication date, you could automatically view an archived version of the content as the author intended for you to see it,” the project’s authors explain. The ephemeral nature of the Web could no longer be used as a weapon. Roger Macdonald, a director at the Internet Archive, called the 404-No-More project “an important contribution to preservation of knowledge.”
  • The new feature would come in the form of introducing the mset attribute to the <a> element in HTML, which would allow users of the code to specify multiple dates and copies of content as an external resource. For instance, if both the date of reference and the location of a copy of the targeted content are known by an author, the new code would look something like the sketch below. The 404-No-More project’s goals are numerous, but the ultimate goal is to have mset become a new HTML standard for hyperlinks. “An HTML standard that incorporates archives for hyperlinks will loop in these efforts and make the Web better for everyone,” project leaders wrote, “activists, journalists, and regular ol’ everyday web users.”
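The article's example markup is not reproduced in the excerpt above. Purely as a hypothetical sketch of the idea (the mset attribute name comes from the proposal, but the exact syntax, date format, and archive URL shown here are invented for illustration):

```html
<!-- Hypothetical sketch only; the 404-No-More proposal defines the real syntax.
     The link carries its date of reference and an archived copy alongside the ordinary href. -->
<a href="http://example.com/report"
   mset="2014-06-17 https://web.archive.org/web/20140617000000/http://example.com/report">
  the report as originally cited
</a>
```

An archive-aware browser or crawler could then fall back to the listed copy, at the listed date, whenever the original href no longer resolves.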
Paul Merrell

Web Browser Supports Time Travel (Library of Congress) - 0 views

  • September 24, 2010 -- For those who use the Mozilla Firefox browser, you now have the option to time travel through the web. MementoFox is a free extension that users can add to their browsers. The extension implements the Memento protocol, which the Los Alamos National Laboratory and Old Dominion University are developing to enable capture of and access to older versions of websites.
  • Memento allows a user to link resources, or web pages, with their previous versions automatically. The term Memento refers to an archival record of a resource as well as the technological framework that supports the ability to discover and browse older versions of Web resources. One of the challenges of researching older versions of Internet resources is searching for them in web archives. With the Firefox add-on, users can easily view older versions in the same browser without having to search across archives. After a URL is specified in the browser, the newly released extension allows users to set a target day, using the slider bar (the "add-on"). The protocol will search archives across the web for previous versions of the URL. As long as those archives are available on a server accessible across the web, MementoFox will return the previous targeted version. MementoFox is available for download, and developers interested in working with the protocol can join the development group. (A rough sketch of the underlying exchange appears below.)
  • ...1 more annotation...
  • The Memento project receives support from the Library of Congress National Digital Information Infrastructure and Preservation Program.
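As a rough sketch of the negotiation described in the annotations above (header names follow the Memento protocol as later standardized in RFC 7089; hosts, paths, and dates are illustrative), MementoFox's "time travel" boils down to an HTTP exchange like this:

```http
GET /timegate/http://example.com/ HTTP/1.1
Host: archive.example.org
Accept-Datetime: Thu, 31 May 2007 20:35:00 GMT

HTTP/1.1 302 Found
Vary: accept-datetime
Location: http://archive.example.org/20070531203500/http://example.com/
Link: <http://example.com/>; rel="original"

GET /20070531203500/http://example.com/ HTTP/1.1
Host: archive.example.org

HTTP/1.1 200 OK
Memento-Datetime: Thu, 31 May 2007 20:35:00 GMT
Content-Type: text/html
```

The slider in the extension simply sets the Accept-Datetime value; any archive that speaks the protocol can answer with its closest snapshot.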
Paul Merrell

Canadian Spies Collect Domestic Emails in Secret Security Sweep - The Intercept - 0 views

  • Canada’s electronic surveillance agency is covertly monitoring vast amounts of Canadians’ emails as part of a sweeping domestic cybersecurity operation, according to top-secret documents. The surveillance initiative, revealed Wednesday by CBC News in collaboration with The Intercept, is sifting through millions of emails sent to Canadian government agencies and departments, archiving details about them on a database for months or even years. The data mining operation is carried out by the Communications Security Establishment, or CSE, Canada’s equivalent of the National Security Agency. Its existence is disclosed in documents obtained by The Intercept from NSA whistleblower Edward Snowden. The emails are vacuumed up by the Canadian agency as part of its mandate to defend against hacking attacks and malware targeting government computers. It relies on a system codenamed PONY EXPRESS to analyze the messages in a bid to detect potential cyber threats.
  • Last year, CSE acknowledged it collected some private communications as part of cybersecurity efforts. But it refused to divulge the number of communications being stored or to explain for how long any intercepted messages would be retained. Now, the Snowden documents shine a light for the first time on the huge scope of the operation — exposing the controversial details the government withheld from the public. Under Canada’s criminal code, CSE is not allowed to eavesdrop on Canadians’ communications. But the agency can be granted special ministerial exemptions if its efforts are linked to protecting government infrastructure — a loophole that the Snowden documents show is being used to monitor the emails. The latest revelations will trigger concerns about how Canadians’ private correspondence with government employees are being archived by the spy agency and potentially shared with police or allied surveillance agencies overseas, such as the NSA. Members of the public routinely communicate with government employees when, for instance, filing tax returns, writing a letter to a member of parliament, applying for employment insurance benefits or submitting a passport application.
  • Chris Parsons, an internet security expert with the Toronto-based internet think tank Citizen Lab, told CBC News that “you should be able to communicate with your government without the fear that what you say … could come back to haunt you in unexpected ways.” Parsons said that there are legitimate cybersecurity purposes for the agency to keep tabs on communications with the government, but he added: “When we collect huge volumes, it’s not just used to track bad guys. It goes into data stores for years or months at a time and then it can be used at any point in the future.” In a top-secret CSE document on the security operation, dated from 2010, the agency says it “processes 400,000 emails per day” and admits that it is suffering from “information overload” because it is scooping up “too much data.” The document outlines how CSE built a system to handle a massive 400 terabytes of data from Internet networks each month — including Canadians’ emails — as part of the cyber operation. (A single terabyte of data can hold about a billion pages of text, or about 250,000 average-sized mp3 files.)
  • ...1 more annotation...
  • The agency notes in the document that it is storing large amounts of “passively tapped network traffic” for “days to months,” encompassing the contents of emails, attachments and other online activity. It adds that it stores some kinds of metadata — data showing who has contacted whom and when, but not the content of the message — for “months to years.” The document says that CSE has “excellent access to full take data” as part of its cyber operations and is receiving policy support on “use of intercepted private communications.” The term “full take” is surveillance-agency jargon that refers to the bulk collection of both content and metadata from Internet traffic. Another top-secret document on the surveillance dated from 2010 suggests the agency may be obtaining at least some of the data by covertly mining it directly from Canadian Internet cables. CSE notes in the document that it is “processing emails off the wire.”
  •  
    " CANADIAN SPIES COLLECT DOMESTIC EMAILS IN SECRET SECURITY SWEEP BY RYAN GALLAGHER AND GLENN GREENWALD @rj_gallagher@ggreenwald YESTERDAY AT 2:02 AM SHARE TWITTER FACEBOOK GOOGLE EMAIL PRINT POPULAR EXCLUSIVE: TSA ISSUES SECRET WARNING ON 'CATASTROPHIC' THREAT TO AVIATION CHICAGO'S "BLACK SITE" DETAINEES SPEAK OUT WHY DOES THE FBI HAVE TO MANUFACTURE ITS OWN PLOTS IF TERRORISM AND ISIS ARE SUCH GRAVE THREATS? NET NEUTRALITY IS HERE - THANKS TO AN UNPRECEDENTED GUERRILLA ACTIVISM CAMPAIGN HOW SPIES STOLE THE KEYS TO THE ENCRYPTION CASTLE Canada's electronic surveillance agency is covertly monitoring vast amounts of Canadians' emails as part of a sweeping domestic cybersecurity operation, according to top-secret documents. The surveillance initiative, revealed Wednesday by CBC News in collaboration with The Intercept, is sifting through millions of emails sent to Canadian government agencies and departments, archiving details about them on a database for months or even years. The data mining operation is carried out by the Communications Security Establishment, or CSE, Canada's equivalent of the National Security Agency. Its existence is disclosed in documents obtained by The Intercept from NSA whistleblower Edward Snowden. The emails are vacuumed up by the Canadian agency as part of its mandate to defend against hacking attacks and malware targeting government computers. It relies on a system codenamed PONY EXPRESS to analyze the messages in a bid to detect potential cyber threats. Last year, CSE acknowledged it collected some private communications as part of cybersecurity efforts. But it refused to divulge the number of communications being stored or to explain for how long any intercepted messages would be retained. Now, the Snowden documents shine a light for the first time on the huge scope of the operation - exposing the controversial details the government withheld from the public. Under Canada's criminal code, CSE is no
Paul Merrell

The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters | Motherboard - 0 views

  • Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).
  • So far, internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumptively—China in 2015. On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location. With the advent of the Internet of Things and cyber-physical systems in general, we've given the internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete. Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are pretty literally endless. The Internet of Things will allow for attacks we can’t even imagine.
  •  
    Bruce Schneier on the insecurity of the Internet of Things, and possible consequences.
Paul Merrell

Remaining Snowden docs will be released to avert 'unspecified US war' - Cryp... - 0 views

  • All the remaining Snowden documents will be released next month, according to whistle-blowing site Cryptome, which said in a tweet that the release of the info by unnamed third parties would be necessary to head off an unnamed "war". Cryptome said it would "aid and abet" the release of "57K to 1.7M" new documents that had been "withheld for national security-public debate [sic]". The site clarified that it will not be publishing the documents itself. Transparency activists would welcome such a release, but such a move would be heavily criticised by intelligence agencies and military officials, who argue that Snowden's dump of secret documents has set US and allied (especially British) intelligence efforts back by years.
  • As things stand, the flow of Snowden disclosures is controlled by those who have access to the Snowden archive, which might possibly include Snowden confidants such as Glenn Greenwald and Laura Poitras. In some cases, even when these people release information to mainstream media organisations, it is then suppressed by these organisations after negotiation with the authorities. (In one such case, some key facts were later revealed by the Register.) "July is when war begins unless headed off by Snowden full release of crippling intel. After war begins not a chance of release," Cryptome tweeted on its official feed. "Warmongerers are on a rampage. So, yes, citizens holding Snowden docs will do the right thing," it said.
  • "For more on Snowden docs release in July watch for Ellsberg, special guest and others at HOPE, July 18-20: http://www.hope.net/schedule.html," it added.HOPE (Hackers On Planet Earth) is a well-regarded and long-running hacking conference organised by 2600 magazine. Previous speakers at the event have included Kevin Mitnick, Steve Wozniak and Jello Biafra.In other developments, ‪Cryptome‬ has started a Kickstarter fund to release its entire archive in the form of a USB stick archive. It wants t‪o‬ raise $100,000 to help it achieve its goal. More than $14,000 has already been raised.The funding drive follows a dispute between ‪Cryptome‬ and its host Network Solutions, which is owned by web.com. Access to the site was bl‪o‬cked f‪o‬ll‪o‬wing a malware infection last week. ‪Cryptome‬ f‪o‬under J‪o‬hn Y‪o‬ung criticised the host, claiming it had ‪o‬ver-reacted and had been sl‪o‬w t‪o‬ rest‪o‬re access t‪o‬ the site, which ‪Cryptome‬ criticised as a form of cens‪o‬rship.In resp‪o‬nse, ‪Cryptome‬ plans to more widely distribute its content across multiple sites as well as releasing the planned USB stick archive. ®
  •  
    Can't happen soon enough. 
Gary Edwards

Inbox Unchained: Mailbox just fixed email on the iPhone | The Verge - 0 views

  •  
    Good video demonstration of Mailbox, the new iOS app that will be released in the future for Android and the desktop. Excellent productivity issue discussion, as the founder of Mailbox explains what they are trying to do. An excellent video coupled with a great interview and explanation of mobile productivity. Excerpt: "He asked himself, "What are people trying to do with email? What are the goals?" He started with Apple's Mail app for iPhone, which people were already familiar with, and injected elements of to-do apps he liked, since increasingly people are using their inboxes as to-do lists. The point was to create an experience that was distinctly mobile - an app that would let you take meaningful action while you're in line at Starbucks. Mailbox needed to intelligently display emails so you can parse and deal with them as quickly as possible. Most email apps require two or three taps to archive an email - perhaps the most common action you take on emails while you're mobile - but Mailbox only requires one: a swipe to the side. "Our biggest a-ha moment was when we realized that the primary use case of email on the phone is triage," Underwood says. Mailbox takes the reality of people using their inboxes as to-do lists and builds on what Mail and Sparrow did right (push notifications and nicely threaded messages, respectively). SNOOZING MESSAGES: To conserve space, Mailbox turns email conversations into SMS-like bubbles, which lets you quickly fly through an entire email chain. Once you've read a message, it shrinks in size so skimming threads is a snap. "Email will feel more and more like chat, and we'll continue to iterate towards that," Underwood says. Mailbox introduces a few other gestures, such as a swipe to the left that lets you "snooze" a message to be reminded about later. You can choose between a few snooze options: Later Today, This Eveni
Gary Edwards

Office to finally fully support ODF, Open XML, and PDF formats | ZDNet - 0 views

  •  
    The king of clicks returns! No doubt there was a time when the mere mention of ODF and the now legendary XML "document" format wars with Microsoft could drive click counts into the stratosphere. Sorry to say, though, those times are long gone. It's still a good story, even if the fate of mankind and the future of the Internet no longer hinges on the outcome. There is that question that continues to defy answer: "Did Microsoft win or lose?" So the mere announcement of supported formats in MSOffice XX is guaranteed to rev the clicks somewhat. Veteran ODF clickmeister SVN does make an interesting observation though: "The ironic thing is that, while this was as hotly debated an issue in the mid-2000s as are mobile patents and cloud implementation today, this news was barely noticed. That's a mistake. Updegrove points out, "document interoperability and vendor neutrality matter more now than ever before as paper archives disappear and literally all of human knowledge is entrusted to electronic storage." He concluded, "Only if documents can be easily exchanged and reliably accessed on an ongoing basis will competition in the present be preserved, and the availability of knowledge down through the ages be assured. Without robust, universally adopted document formats, both of those goals will be impossible to attain." Updegrove's right of course. Don't believe me? Go into your office's archives and try to bring up documents you wrote in the 90s in WordPerfect or papers your staff created in the 80s with WordStar. If you don't want to lose your institutional memory, open document standards support is more important than ever." ... Sorry, but Updegrove is wrong. Woefully wrong. The Web is the future. Sure, interoperability matters, but only as far as the Web and the future of Cloud Computing is concerned. Sadly, neither ODF nor Open XML is Web ready. The language of the Web is famously HTML, now HTML5+
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
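A minimal sketch of that kind of class-based extension (the class names are invented for illustration, in the spirit of microformats):

```html
<!-- Plain XHTML, with publisher-defined semantics layered on via class -->
<div class="chapter">
  <p class="epigraph">Tell all the truth but tell it slant.</p>
  <p class="summary">This chapter surveys the production workflow.</p>
  <p>Ordinary body text needs no special class at all.</p>
</div>
```

Downstream tools (an XSLT script, a CSS stylesheet, an indexer) can key off those class values while the content remains ordinary, browser-friendly XHTML.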
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
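Unzipped, a typical EPUB (2.x-era) package looks roughly like this; the file names under OEBPS/ are conventional rather than required:

```text
mimetype                  contains the literal string "application/epub+zip"
META-INF/container.xml    points to the package document
OEBPS/content.opf         metadata, manifest, and reading order
OEBPS/toc.ncx             navigation / table of contents
OEBPS/chapter01.xhtml     the book content itself, as XHTML
OEBPS/styles.css          presentation
```

The wrapper files are small and largely boilerplate; the substance of the book is the XHTML.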
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
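Unpacking an .idml file shows a layout along these lines (names abbreviated and illustrative; Adobe's IDML specification is the authoritative reference):

```text
designmap.xml                     the package "table of contents"
Resources/Styles.xml              paragraph and character style definitions
Resources/Graphic.xml             colours and swatches
MasterSpreads/MasterSpread_*.xml  master pages
Spreads/Spread_*.xml              page geometry and frame placement
Stories/Story_*.xml               the actual text content, as styled ranges
```

For the purposes discussed here, the content lives in the Stories/ files; layout and styling live elsewhere in the package, which is part of what makes the format approachable.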
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
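A small before-and-after sketch of that cleanup (the exported class name is illustrative; InDesign's actual export markup varies by version):

```html
<!-- As exported: presentation-oriented, every block a paragraph -->
<p class="bullet-item">Prefer existing tools over specialized ones.</p>
<p class="bullet-item">Prefer free software where possible.</p>

<!-- After search-and-replace cleanup: structural XHTML -->
<ul>
  <li>Prefer existing tools over specialized ones.</li>
  <li>Prefer free software where possible.</li>
</ul>
```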
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
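The authors' 500-line script is not reproduced here, but a heavily simplified sketch of the approach might look like the following. The ICML element and style names are indicative only (a working script must follow Adobe's ICML documentation), and the output is a story fragment rather than a complete ICML file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: map XHTML paragraphs to ICML-style paragraph ranges.
     Run with, for example:  xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xhtml="http://www.w3.org/1999/xhtml">

  <!-- Assumes the input declares the XHTML namespace. -->
  <xsl:template match="xhtml:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body Text">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
        <Br/>
      </CharacterStyleRange>
    </ParagraphStyleRange>
  </xsl:template>

  <!-- Suppress text that no template above handles. -->
  <xsl:template match="text()"/>
</xsl:stylesheet>
```

Because the AppliedParagraphStyle values name styles defined in the InDesign template, the placed .icml file picks up the house layout automatically.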
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the NoteCase Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub, and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML, the InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Paul Merrell

HTML5: Getting to Last Call - W3C Blog - 0 views

  • We started to work on HTML5 back in 2007 and have been going through issues since then. In November 2009, the HTML Chairs instituted a decision policy, which allowed us to close around 20 issues. We now have around 200 bugs and 25 issues on the document. In order to drive the Group to Last Call, the HTML Chairs, following the advice of the W3C Team, produced a timeline for reaching the initial Last Call for HTML5. The W3C team expresses its strong support for the chairs of the HTML Working Group in their efforts to lead the group toward an initial Last Call according to the published timeline. All new bugs related to the HTML5 specification received after the first of October 2010 will be treated as Last Call comments, with possible exceptions granted by the Chairs. The intention is to get to the initial Last Call with a feature-complete document. The HTML Chairs will keep driving the Group forward after that date in order to resolve all the bugs received by October 1. The expectation is to issue the Last Call document at the end of May 2011. I encourage everyone to send bugs prior to October 1 and to keep track of them in order to escalate them to the Working Group if necessary.
  •  
    Get your HTML 5 bug reports filed *before* October 1.  See http://lists.w3.org/Archives/Public/public-html/2010Sep/0074.html for more details.
Paul Merrell

NSA Will Destroy Archived Metadata When Program Stops - 0 views

  • Four months from now, at the same time that the National Security Agency finally abandons the massive domestic telephone dragnet exposed by whistleblower Edward Snowden, it will also stop perusing the vast archive of data collected by the program. The NSA announced on Monday that it will expunge all the telephone metadata it previously swept up under the authority of Section 215 of the USA PATRIOT Act. The program was ruled illegal by a federal appeals court in May. In June, Congress voted to end the program, but gave the NSA until the end of November to phase it out. The historical metadata — records of American phone calls showing who called whom, when, and for how long — will be put out of the reach of analysts on November 29, although technical personnel will have access for three more months. The program started 14 years ago and operated under rules requiring that data be retained for five years and then destroyed.
  • The only possible hold-up, ironically, would be if any of the civil lawsuits prompted by the program prohibit the destruction of the data. “The telephony metadata” will be “preserved solely because of preservation obligations in pending civil litigation,” the Office of the Director of National Intelligence announced. “As soon as possible, NSA will destroy the Section 215 bulk telephony metadata upon expiration of its litigation preservation obligations.” ACLU staff attorney Alex Abdo told The Intercept his organization is “pleased that the NSA intends to purge the call records it has collected illegally.” But, he added: “Even with today’s pledge, the devil may be in the details.”
Paul Merrell

Cover Pages: XML Daily Newslink: Friday, 12 November 2010 - 0 views

  • HTTP Framework for Time-Based Access to Resource States: Memento. Herbert Van de Sompel, Michael Nelson, Robert Sanderson; IETF I-D. Representatives of Los Alamos National Laboratory and Old Dominion University have published a first IETF Working Draft of HTTP Framework for Time-Based Access to Resource States: Memento. According to the editor's iMinds blog: "While the days of human time travel as described in many a science fiction novel are yet to come, time travel on the Web has recently become a reality thanks to the Memento project. In essence, Memento adds a time dimension to the Web: enter the Web address of a resource in your browser and set a time slider to a desired moment in the Web's past, and see what the resource looked like around that time... Technically, Memento achieves this by: (a) Leveraging systems that host archival Web content, including Web archives, content management systems, and software versioning systems; (b) Extending the Web's most commonly used protocol (HTTP) with the capability to specify a datetime in protocol requests, and by applying an existing HTTP capability (content negotiation) in a new dimension: 'time'. The result is a Web in which navigating the past is as seamless as navigating the present... The Memento concepts have attracted significant international attention since they were first published in November 2009, and compliant tools are already emerging. For example, at the client side there is the MementoFox add-on for Firefox, and a Memento app for Android; at the server side, there is a plug-in for MediaWiki servers, and the Wayback software that is widely used by Web archives, worldwide, was recently enhanced with Memento support..."
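    As a small illustration of the datetime negotiation the draft describes, the sketch below sends an HTTP request with an Accept-Datetime header using Python's requests library. The TimeGate URL follows the Wayback Machine's public pattern and is an assumption here; any Memento-compliant TimeGate should honor the same headers.

```python
import requests

# Ask a Memento TimeGate for http://www.w3.org/ roughly as it existed on 12 Nov 2010.
timegate = "http://web.archive.org/web/http://www.w3.org/"
response = requests.get(
    timegate,
    headers={"Accept-Datetime": "Fri, 12 Nov 2010 00:00:00 GMT"},
    allow_redirects=True,
)

# A memento response reports the snapshot's own datetime and links to the
# original resource and its TimeMap of other captures.
print(response.status_code)
print(response.headers.get("Memento-Datetime"))
print(response.headers.get("Link"))
```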
Paul Merrell

Internet Archive: Scanning Services - 0 views

  •  
    I've been looking for a permanent online home for a couple of historical works I co-authored. My guiding criterion has been the best chance of the works' long-term survival in a publicly accessible form after my death. I think I may have just found my solution.
Gary Edwards

Furious Over End Of Google Reader - Business Insider - 1 views

  •  
    "Gary Edwards on Mar 15, 8:25 PM said: There are only three apps i load at boot-up: gMail, gReader, and gWave. Ooops! Google Wave was cancelled over a year ago. Owning the end-users attention at boot-up proved to be an essential factor to the Microsoft monopoly. They built an iron fisted empire out of owning the point of boot-up. So it's very strange to see Google give up the very thing other cloud platform contenders would no doubt kill for. Very strange. Even stranger though is the perception that Google + will somehow now move to center stage? The only reason i use Google+ is because it's easy to point to an article and post a comment from Google Reader to my + circles. Other than that i have no use for +. Nicolas Carr posted an interesting comment on Google's cancellation of gReader yesterday. He tried to argue that there is a difference between "tools" and "platforms", and Google was more interested in building a platform than maintaining "tools" like gReader. So, Google+ is now essential to the Google Platform? Unfortunately, the otherwise brilliant and cosmic insightful Mr. Carr, fails to make that case. Microsoft became a platform when they succeeded in positioning their OS as the essential factor bridging an explosively innovative and rapidly commoditiz'ing Windows hardware reference platform, and, he equally rapid and innovative Windows software application platform. Both software and hardware were being written and developed to the Windows OS, with features doubling and costs being halved at a rate that even Moore's Law envied. Microsoft fully cemented the emerging hardware - OS - application platform with a business productivity environment that necessitated the use of the MS Office suite of servers and apps. That lock on business productivity has yet to be broken. And even though the mighty Google Apps has made some progress convincing businesses to rip-out-and-replace their legacy business productivity systems and re write to the Google Cloud P
Paul Merrell

EU-US Personal Data Privacy Deal 'Cracked Beyond Repair' - 0 views

  • Privacy Shield is the proposed new deal between the EU and the US that is supposed to safeguard all personal data on EU citizens held on computer systems in the US from being subject to mass surveillance by the US National Security Agency. The data can refer to any transaction — web purchases, cars or clothing — involving an EU citizen whose data is held on US servers. Privacy groups say Privacy Shield — which replaces the Safe Harbor agreement ruled unlawful in October 2015 — does not meet strict EU standards on the use of personal data. Monique Goyens, Director General of the European Consumer Organization (BEUC), told Sputnik: “We consider that the shield is cracked beyond repair and is unlikely to stand scrutiny by the European Court of Justice. A fundamental problem remains that the US side of the shield is made of clay, not iron.”
  • The agreement has been under negotiation for months, ever since the European Court of Justice ruled in October 2015 that the previous EU-US data agreement — Safe Harbor — was invalid. The issue arises from the strict EU laws — enshrined in the Charter of Fundamental Rights of the European Union — guaranteeing EU citizens a right to the privacy of their personal data.
  • The Safe Harbor agreement was a quasi-judicial understanding under which the US agreed to ensure that EU citizens’ data on US servers would be held and protected under the same restrictions as it would be under EU law and directives. The data covers a huge array of information — from Internet and communications usage to sales transactions, imports and exports.
  • The case arose when Maximillian Schrems, a Facebook user, lodged a complaint with the Irish Data Protection Commissioner, arguing that — in the light of the revelations by ex-CIA contractor Edward Snowden of mass surveillance by the US National Security Agency (NSA) — the transfer of data from Facebook’s Irish subsidiary onto the company’s servers in the US does not provide sufficient protection of his personal data. The court ruled that: “the Safe Harbor Decision denies the national supervisory authorities their powers where a person calls into question whether the decision is compatible with the protection of the privacy and of the fundamental rights and freedoms of individuals.”
  •  
    Off we go for another trip to the European Court of Justice.
Paul Merrell

Spooked By Lax U.S. Data Privacy, European Firms Build Their Own Cloud Services - 0 views

  • A few recent legal developments affecting U.S. online privacy have rightfully troubled privacy advocates and civil libertarians on American soil. In addition to the Patriot Act's relaxed regulation of law enforcement's access to private data, recent court rulings have made it clear that U.S. authorities can secretly request data from tech companies without the user ever knowing. If this seems objectionable from the standpoint of U.S. citizens, imagine how it looks to outsiders who are storing their data there. Some European companies that do business with U.S. technology companies are concerned enough to start looking elsewhere for infrastructure.
  •  
    Draconian post-9/11 U.S. government surveillance statutes will cost the U.S. economy as customers shop for cloud services elsewhere.
Paul Merrell

In Cryptography, Advances in Program Obfuscation | Simons Foundation - 0 views

  • “A program obfuscator would be a powerful tool for finding plausible constructions for just about any cryptographic task you could conceive of,” said Yuval Ishai, of the Technion in Haifa, Israel. Precisely because of obfuscation’s power, many computer scientists, including Sahai and his colleagues, thought it was impossible. “We were convinced it was too powerful to exist,” he said. Their earliest research findings seemed to confirm this, showing that the most natural form of obfuscation is indeed impossible to achieve for all programs. Then, on July 20, 2013, Sahai and five co-authors posted a paper on the Cryptology ePrint Archive demonstrating a candidate protocol for a kind of obfuscation known as “indistinguishability obfuscation.” Two days later, Sahai and one of his co-authors, Brent Waters, of the University of Texas, Austin, posted a second paper that suggested, together with the first paper, that this somewhat arcane form of obfuscation may possess much of the power cryptographers have dreamed of. “This is the first serious positive result” when it comes to trying to find a universal obfuscator, said Boaz Barak, of Microsoft Research in Cambridge, Mass. “The cryptography community is very excited.” In the six months since the original paper was posted, more papers have appeared on the ePrint archive with “obfuscation” in the title than in the previous 17 years.
Paul Merrell

NSA Surveillance: Snowden Docs Raise Questions About U.S. Phone Calls - 0 views

  • Third in a series. Part 1 here; Part 2 here. When it comes to the National Security Agency’s recently disclosed use of automated speech recognition technology to search, index and transcribe voice communications, people in the United States may well be asking: But are they transcribing my phone calls? The answer is maybe. A clear-cut answer is elusive because documents in the Snowden archive describe the capability to turn speech into text, but not the extent of its use — and the U.S. intelligence community refuses to answer even the most basic questions on the topic.
  • Thanks to previous explorations of the Snowden archive and some documents released by the Obama administration, we know there are four major methods the NSA uses to get access to phone calls involving Americans — and only one of them technically precludes the use of speech recognition.