
Home / Future of the Web / Group items


Gonzalo San Gil, PhD.

Tagging Tutorial - Green Options Community - 0 views

  •  
    [.... What is tagging? Why is it important? Tags allow information from around the community to be aggregated together. They are labels that can be added to forum posts or Wiki articles to tie pieces of content together to make relevant information easier to find. ...]
Paul Merrell

'Pardon Snowden' Campaign Takes Off As Sanders, Ellsberg, And Others Join - 0 views

  • Prominent activists, lawmakers, artists, academics, and other leading voices in civil society, including Sen. Bernie Sanders (I-Vt.), are joining the campaign to get a pardon for National Security Agency (NSA) whistleblower Edward Snowden. “The information disclosed by Edward Snowden has allowed Congress and the American people to understand the degree to which the NSA has abused its authority and violated our constitutional rights,” Sanders wrote for the Guardian on Wednesday. “Now we must learn from the troubling revelations Mr. Snowden brought to light. Our intelligence and law enforcement agencies must be given the tools they need to protect us, but that can be done in a way that does not sacrifice our rights.” Pentagon Papers whistleblower Daniel Ellsberg, who co-founded the public interest journalism advocacy group Freedom of the Press Foundation, where Snowden is a board member, also wrote, “Ed Snowden should be freed of the legal burden hanging over him. They should remove the indictment, pardon him if that’s the way to do it, so that he is no longer facing prison.” Snowden faces charges under the Espionage Act after he released classified NSA files to media outlets in 2013 exposing the U.S. government’s global mass surveillance operations. He fled to Hong Kong, then Russia, where he has been living under political asylum for the past three years.
  • The Pardon Snowden campaign, supported by the American Civil Liberties Union (ACLU), Amnesty International, and Human Rights Watch (HRW), urges people around the world to write to Obama throughout his last four months in the White House.
  •  
    If you want to take part, the action page is at https://www.pardonsnowden.org/
Paul Merrell

EU Committee Votes to Make All Smartphone Vendors Utilize a Standard Charger - HotHardware - 0 views

  • The EU has been known to make a lot of odd decisions when it comes to tech, such as forcing Microsoft to include a "browser wheel" with its Windows OS, but this latest decision is one I think most people will agree with. One thing that's frustrating about different smartphones is the occasional requirement to use a different charger. More frustrating is actually losing one of these chargers, and being unable to charge your phone even though you might have 8 of another charger readily available.
  • While this decision would cut down on this happening, the focus is to cut down on waste. On Thursday, the EU's internal market and consumer protection committee voted to force smartphone vendors to adopt a standard charger, which common sense would imply means micro USB, given it's already featured on the majority of smartphones out there. The major exception is Apple, which deploys a Lightning connector with its latest iPhones. Apple already offers Lightning to micro USB cables, but again, those are only useful if you happen to own one, making a sudden loss of a charger all the more frustrating. While Lightning might offer some slight benefits, Apple implementing a micro USB connector instead would make situations like those a lot easier to deal with (I am sure a lot of us have multiple micro USB cables lying around). Even though this law was a success in the initial voting, the government group must still bring the proposal to the Council, which will then lead to another vote in the Parliament. If it does end up passing, I have a gut feeling that Apple will modify only its European models to adhere to the law, while its worldwide models will remain with the Lightning connector. Or, Apple might be able to circumvent the law if it offers to include the micro USB cable in the box, essentially shipping the phone with that connector.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths
  • The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
  • In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation is programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform
  • The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web Made of Real XML?
  • At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
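The claim that clean, valid XHTML is a proper XML document type can be checked with any stock XML parser. A minimal sketch in Python, using an invented chapter fragment (the file content here is hypothetical, not from the article):

```python
# Any generic XML parser can consume clean XHTML, which is the point
# of the argument above. The chapter fragment below is invented.
import xml.etree.ElementTree as ET

xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Chapter 1</title></head>
  <body>
    <h1>Chapter 1</h1>
    <p class="epigraph">An opening quotation.</p>
    <p>Body text of the chapter.</p>
  </body>
</html>"""

# fromstring() would raise ParseError on tag-soup HTML; valid XHTML parses.
root = ET.fromstring(xhtml)
ns = {"x": "http://www.w3.org/1999/xhtml"}
paragraphs = root.findall(".//x:p", ns)
print(len(paragraphs))  # 2
```

The same parse step is the gateway to everything that follows: once the content is in a real XML tree, generic tools can query and transform it.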
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
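The anatomy described above can be sketched with nothing more than a zip library. A minimal illustration in Python; the file names follow the ePub container layout, the chapter content is invented, and the OPF package document that a conforming reader would also expect is elided:

```python
# Sketch of the ePub anatomy: a zip archive whose first entry is an
# uncompressed "mimetype" file, plus metadata and XHTML content.
import zipfile

with zipfile.ZipFile("book.epub", "w") as z:
    # The mimetype entry comes first and is stored without compression.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    # Declares where the package metadata lives (content.opf elided here).
    z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    # The book's content: a Web page in a wrapper.
    z.writestr("chapter1.xhtml",
               '<html xmlns="http://www.w3.org/1999/xhtml">'
               "<body><p>Chapter text.</p></body></html>")

print(zipfile.ZipFile("book.epub").namelist()[0])  # mimetype
```

Unzipping any ePub bought from a store shows the same three-part structure, which is what makes the format so approachable from a Web workflow.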
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the Steps
  • To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
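This kind of cleanup can be mechanized rather than done purely by hand. A rough Python sketch of the specific fix described above; the `list-item` class name mirrors the example in the text, but InDesign's actual export classes vary, so treat the names as assumptions:

```python
# Regroup paragraphs that an export tagged as <p class="list-item">
# into a real <ul> of <li> elements. The class name is an assumption.
import xml.etree.ElementTree as ET

def listify(body: ET.Element) -> ET.Element:
    fixed = ET.Element("body")
    current_list = None
    for child in body:
        if child.tag == "p" and child.get("class") == "list-item":
            if current_list is None:
                current_list = ET.SubElement(fixed, "ul")
            li = ET.SubElement(current_list, "li")
            li.text = child.text
        else:
            current_list = None  # a non-list paragraph closes the list
            fixed.append(child)
    return fixed

body = ET.fromstring(
    '<body><p>Intro.</p>'
    '<p class="list-item">First</p>'
    '<p class="list-item">Second</p>'
    '<p>Outro.</p></body>')
print(ET.tostring(listify(body), encoding="unicode"))
# <body><p>Intro.</p><ul><li>First</li><li>Second</li></ul><p>Outro.</p></body>
```

A search-and-replace pass in an editor, as the authors used, achieves the same result; the point is that the transformation is purely structural and therefore scriptable.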
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
  • Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
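The shape of such a transform can be suggested in a few lines of XSLT. The fragment below is an illustrative sketch only: the ICML element names follow Adobe's published format, but the style names and the mapping are assumptions, not the article's actual 500-line script, and a complete stylesheet would also need a root template emitting the ICML document wrapper.

```xml
<!-- Illustrative fragment: map XHTML paragraphs to ICML paragraph
     ranges. Style names here are assumptions. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://www.w3.org/1999/xhtml">
  <xsl:template match="x:p">
    <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
      <CharacterStyleRange>
        <Content><xsl:value-of select="."/></Content>
      </CharacterStyleRange>
      <Br/>
    </ParagraphStyleRange>
  </xsl:template>
</xsl:stylesheet>
```

Run with any XSLT 1.0 processor, e.g. `xsltproc transform.xsl chapter.xhtml > chapter.icml` (file names hypothetical).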
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
  • Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Our system relies on the Web as a content management platform rather than as a public-facing website; of course, a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-oriented application of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also an application of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. That application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of its application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Facebook's Deepface Software Has Gotten Them in Deep Trouble | nsnbc international - 0 views

  • In a Chicago court, several Facebook users filed a class-action lawsuit against the social media giant for allegedly violating its users’ privacy rights to acquire the largest privately held stash of biometric face-recognition data in the world. The court documents reveal claims that “Facebook began violating the Illinois Biometric Information Privacy Act (IBIPA) of 2008 in 2010, in a purported attempt to make the process of tagging friends easier.”
  • This was accomplished through the “tag suggestions” feature provided by Facebook which “scans all pictures uploaded by users and identifies any Facebook friends they may want to tag.” The Facebook users maintain that this feature is a “form of data mining [that] violates user’s privacy”. One plaintiff said this is a “brazen disregard for its users’ privacy rights,” through which Facebook has “secretly amassed the world’s largest privately held database of consumer biometrics data.” This is allegedly because “Facebook actively conceals” its protocol using “faceprint databases” to identify Facebook users in photos, and “doesn’t disclose its wholesale biometrics data collection practices in its privacy policies, nor does it even ask users to acknowledge them.”
  • This would be a violation of the IBIPA which states it is “unlawful to collect biometric data without written notice to the subject stating the purpose and length of the data collection, and without obtaining the subject’s written release.” Because all users are automatically part of the “faceprint” facial recognition program, this is an illegal act in the state of Illinois, according to the complaint. Jay Edelson, attorney for the plaintiffs, asserts the opt-out ability to prevent other Facebook users from tagging them in photos is “insufficient”.
  • ...1 more annotation...
  • Deepface is the name of the new technology researchers at Facebook created in order to identify people in pictures; mimicking the way humans recognize the differences in each other’s faces. Facebook has already implemented facial recognition software (FRS) to suggest names for tagging photos; however Deepface can “identify faces from a side view” as well as when the person is directly facing the camera in the picture. In 2013, Erin Egan, chief privacy officer for Facebook, said that this upgrade “would give users better control over their personal information, by making it easier to identify posted photos in which they appear.” Egan explained: “Our goal is to facilitate tagging so that people know when there are photos of them on our service.” Facebook has stated that they retain information from their users that is syphoned from all across the web. This data is used to increase Facebook’s profits with the information being sold for marketing purposes. This is the impressive feature of Deepface; as previous FRS can only decipher faces in images that are frontal views of people. Shockingly, Deepface displays 97.25% accuracy in identifying faces in photos. That is quite a feat considering humans have a 97.53% accuracy rate. In order to ensure accuracy, Deepface “conducts its analysis based on more than 120 million different parameters.”
Paul Merrell

Zuckerberg set up fraudulent scheme to 'weaponise' data, court case alleges | Technolog... - 1 views

  • Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme. Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.
  • It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.
  • Heather Whitney, a legal scholar who has written about social media companies for the Knight First Amendment Institute at Columbia University, said, in her opinion, this exposed a potential tension for Facebook. “Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.” The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.
Paul Merrell

The best way to read Glenn Greenwald's 'No Place to Hide' - 0 views

  • Journalist Glenn Greenwald just dropped a pile of new secret National Security Agency documents onto the Internet. But this isn’t just some haphazard WikiLeaks-style dump. These documents, leaked to Greenwald last year by former NSA contractor Edward Snowden, are key supplemental reading material for his new book, No Place to Hide, which went on sale Tuesday. Now, you could just go buy the book in hardcover and read it like you would any other nonfiction tome. Thanks to all the additional source material, however, if any work should be read on an e-reader or computer, this is it. Here are all the links and instructions for getting the most out of No Place to Hide.
  • Greenwald has released two versions of the accompanying NSA docs: a compressed version and an uncompressed version. The only difference between these two is the quality of the PDFs. The uncompressed version clocks in at over 91MB, while the compressed version is just under 13MB. For simple reading purposes, just go with the compressed version and save yourself some storage space. Greenwald also released additional “notes” for the book, which are just citations. Unless you’re doing some scholarly research, you can skip this download.
  • No Place to Hide is, of course, available on a wide variety of ebook formats—all of which are a few dollars cheaper than the hardcover version, I might add. Pick your e-poison: Amazon, Nook, Kobo, iBooks. Flipping back and forth: each page of the documents includes a corresponding page number for the book, to allow readers to easily flip between the book text and the supporting documents. If you use the Amazon Kindle version, you also have the option of reading Greenwald’s book directly on your computer using the Kindle for PC app or directly in your browser. Yes, that may be the worst way to read a book. In this case, however, it may be the easiest way to flip back and forth between the book text and the notes and supporting documents. Of course, you can do the same on your e-reader—though it can be a bit of a pain. Those of you who own a tablet are in luck, as they provide the best way to read both ebooks and PDF files. Simply download the book using the e-reader app of your choice, download the PDFs from Greenwald’s website, and dig in. If you own a Kindle, Nook, or other ereader, you may have to convert the PDFs into a format that works well with your device. The Internet is full of tools and how-to guides for how to do this. Here’s one:
  • ...1 more annotation...
  • Kindle users also have the option of using Amazon’s Whispernet service, which converts PDFs into a format that functions best on the company’s e-reader. That will cost you a small fee, however—$0.15 per megabyte, which means the compressed Greenwald docs will cost you a whopping $1.95.
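The Whispernet arithmetic above is easy to check against the article's figures ($0.15 per megabyte, compressed docs just under 13 MB):

```python
# Whispernet conversion fee cited in the article: $0.15 per megabyte.
FEE_PER_MB = 0.15

def whispernet_fee(size_mb: float) -> float:
    """Fee in dollars, rounded to cents (matches the article's $1.95 figure)."""
    return round(size_mb * FEE_PER_MB, 2)

print(whispernet_fee(13))   # compressed docs -> 1.95
print(whispernet_fee(91))   # uncompressed docs would cost 13.65
```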
Paul Merrell

How an FBI informant orchestrated the Stratfor hack - 0 views

  • Sitting inside a medium-security federal prison in Kentucky, Jeremy Hammond looks defiant and frustrated.  “[The FBI] could've stopped me,” he told the Daily Dot last month at the Federal Correctional Institution, Manchester. “They could've. They knew about it. They could’ve stopped dozens of sites I was breaking into.” Hammond is currently serving the remainder of a 10-year prison sentence in part for his role in one of the most high-profile cyberattacks of the early 21st century. His 2011 breach of Strategic Forecasting, Inc. (Stratfor) left tens of thousands of Americans vulnerable to identity theft and irrevocably damaged the Texas-based intelligence firm's global reputation. He was also indicted for his role in the June 2011 hack of an Arizona state law enforcement agency's computer servers.
  • There's no question of his guilt: Hammond, 29, admittedly hacked into Stratfor’s network and exfiltrated an estimated 60,000 credit card numbers and associated data and millions of emails, information that was later shared with the whistleblower organization WikiLeaks and the hacker collective Anonymous.   Sealed court documents obtained by the Daily Dot and Motherboard, however, reveal that the attack was instigated and orchestrated not by Hammond, but by an informant, with the full knowledge of the Federal Bureau of Investigation (FBI).  In addition to directly facilitating the breach, the FBI left Stratfor and its customers—which included defense contractors, police chiefs, and National Security Agency employees—vulnerable to future attacks and fraud, and it requested knowledge of the data theft to be withheld from affected customers. This decision would ultimately allow for millions of dollars in damages.
Paul Merrell

Obama lawyers asked secret court to ignore public court's decision on spying | US news ... - 0 views

  • The Obama administration has asked a secret surveillance court to ignore a federal court that found bulk surveillance illegal and to once again grant the National Security Agency the power to collect the phone records of millions of Americans for six months. The legal request, filed nearly four hours after Barack Obama vowed to sign a new law banning precisely the bulk collection he asks the secret court to approve, also suggests that the administration may not necessarily comply with any potential court order demanding that the collection stop.
  • But Carlin asked the Fisa court to set aside a landmark declaration by the second circuit court of appeals. Decided on 7 May, the appeals court ruled that the government had erroneously interpreted the Patriot Act’s authorization of data collection as “relevant” to an ongoing investigation to permit bulk collection. Carlin, in his filing, wrote that the Patriot Act provision remained “in effect” during the transition period. “This court may certainly consider ACLU v Clapper as part of its evaluation of the government’s application, but second circuit rulings do not constitute controlling precedent for this court,” Carlin wrote in the 2 June application. Instead, the government asked the court to rely on its own body of once-secret precedent stretching back to 2006, which Carlin called “the better interpretation of the statute”.
  • But the Fisa court must first decide whether the new bulk-surveillance request is lawful. On Friday, the conservative group FreedomWorks filed a rare motion before the Fisa court, asking it to reject the government’s surveillance request as a violation of the fourth amendment’s prohibition on unreasonable searches and seizures. Fisa court judge Michael Moseman gave the justice department until this coming Friday to respond – and explicitly barred the government from arguing that FreedomWorks lacks the standing to petition the secret court.
Gary Edwards

The NeuroCommons Project: Open RDF Ontologies for Scientific Reseach - 0 views

  •  
    The NeuroCommons project seeks to make all scientific research materials - research articles, annotations, data, physical materials - as available and as useable as they can be. This is done by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". Semantic Web practices based on RDF will enable knowledge sources to combine meaningfully, and will enable semantically precise queries that span multiple information sources.

    Working with the Creative Commons group that sponsors "Neurocommons", Microsoft has developed and released an open source "ontology" add-on for Microsoft Word. The add-on makes use of the MSOffice XML panel, Open XML formats, and proprietary "Smart Tags". Microsoft is also making the source code for both the Ontology Add-in for Office Word 2007 and the Creative Commons Add-in for Office Word 2007 tool available under the Open Source Initiative (OSI)-approved Microsoft Public License (Ms-PL) at http://ucsdbiolit.codeplex.com and http://ccaddin2007.codeplex.com, respectively.

    No doubt it will take some digging to figure out what is going on here. Microsoft WPF technologies include Smart Tags and LINQ. The Creative Commons "Neurocommons" ontology work is based on W3C RDF and SPARQL. How these opposing technologies interoperate with legacy MSOffice 2003 and 2007 desktops is an interesting question, one that may hold the answer to the larger problem of re-purposing MSOffice for the Open Web.

    We know Microsoft is re-purposing MSOffice for the MS Web. Perhaps this work with Creative Commons will help to open up the Microsoft desktop productivity environment to the Open Web? One can always hope :)

    Dr Dobbs has the Microsoft - Creative Commons announcement; Microsoft Releases Open Tools for Scientific Research ...... Joins Creative Commons in releasing the Ontology Add-in
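The RDF idea at the heart of NeuroCommons - knowledge as uniform subject/predicate/object triples, queried by pattern - can be sketched in a few lines. The subjects and predicates below are invented for illustration, not actual NeuroCommons URIs, and a real deployment would use an RDF store with SPARQL rather than this toy matcher:

```python
# A minimal in-memory triple store with SPARQL-like pattern matching.
# All identifiers are illustrative, not real NeuroCommons vocabulary.
TRIPLES = {
    ("gene:BDNF", "expressed_in", "region:hippocampus"),
    ("gene:BDNF", "associated_with", "disease:depression"),
    ("gene:DRD2", "expressed_in", "region:striatum"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard (like ?var)."""
    return sorted(t for t in TRIPLES
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# "Which regions express BDNF?"
print(match(s="gene:BDNF", p="expressed_in"))
```

The point of the uniform triple shape is that data from multiple sources can be poured into the same set and queried with the same patterns - the "interoperability" the project description talks about.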
Gonzalo San Gil, PhD.

Apps/EasyTAG - GNOME Wiki! - 1 views

  •  
    "EasyTAG is a simple application for viewing and editing tags in audio files. It supports MP3, MP2, MP4/AAC, FLAC, Ogg Opus, Ogg Speex, Ogg Vorbis, MusePack, Monkey's Audio, and WavPack files. And works under Linux or Windows. "
Paul Merrell

Moscow tells Twitter to store Russian users' data in the country - 0 views

  • Moscow has warned Twitter that it must store Russian users' personal data in Russia, under a new law, the national communications watchdog told AFP on Wednesday. Legislation that came into force on September 1 requires both Russian and foreign social media sites, messenger services and search engines to store the data held on Russian users on servers located inside the country. The controversial law was adopted amid Internet users' growing concerns about the storage of their data, but also as Russia has moved to tighten security on social media and online news sites that are crucial outlets for the political opposition. Non-compliance could lead Russia's communications watchdog Roskomnadzor to block the sites and services. Roskomnadzor spokesman Vadim Ampelonsky confirmed to AFP that Russia had changed its initial position on US-based Twitter, which it had previously said did not fall under the law. Twitter must comply because it now asks users to supply their personal data, Ampelonsky said, confirming earlier comments by the head of Roskomnadzor Alexander Zharov to Russian media.
Paul Merrell

Forget Apple vs. the FBI: WhatsApp Just Switched on Encryption for a Billion People | W... - 0 views

  • For most of the past six weeks, the biggest story out of Silicon Valley was Apple’s battle with the FBI over a federal order to unlock the iPhone of a mass shooter. The company’s refusal touched off a searing debate over privacy and security in the digital age. But this morning, at a small office in Mountain View, California, three guys made the scope of that enormous debate look kinda small. Mountain View is home to WhatsApp, an online messaging service now owned by tech giant Facebook, that has grown into one of the world’s most important applications. More than a billion people trade messages, make phone calls, send photos, and swap videos using the service. This means that only Facebook itself runs a larger self-contained communications network. And today, the enigmatic founders of WhatsApp, Brian Acton and Jan Koum, together with a high-minded coder and cryptographer who goes by the pseudonym Moxie Marlinspike, revealed that the company has added end-to-end encryption to every form of communication on its service.
  • This means that if any group of people uses the latest version of WhatsApp—whether that group spans two people or ten—the service will encrypt all messages, phone calls, photos, and videos moving among them. And that’s true on any phone that runs the app, from iPhones to Android phones to Windows phones to old school Nokia flip phones. With end-to-end encryption in place, not even WhatsApp’s employees can read the data that’s sent across its network. In other words, WhatsApp has no way of complying with a court order demanding access to the content of any message, phone call, photo, or video traveling through its service. Like Apple, WhatsApp is, in practice, stonewalling the federal government, but it’s doing so on a larger front—one that spans roughly a billion devices.
  • The FBI and the Justice Department declined to comment for this story. But many inside the government and out are sure to take issue with the company’s move. In late 2014, WhatsApp encrypted a portion of its network. In the months since, its service has apparently been used to facilitate criminal acts, including the terrorist attacks on Paris last year. According to The New York Times, as recently as this month, the Justice Department was considering a court case against the company after a wiretap order (still under seal) ran into WhatsApp’s end-to-end encryption. “The government doesn’t want to stop encryption,” says Joseph DeMarco, a former federal prosecutor who specializes in cybercrime and has represented various law enforcement agencies backing the Justice Department and the FBI in their battle with Apple. “But the question is: what do you do when a company creates an encryption system that makes it impossible for court-authorized search warrants to be executed? What is the reasonable level of assistance you should ask from that company?”
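The end-to-end property described above - the relay sees only ciphertext it cannot read - can be illustrated with a toy stream cipher. This is emphatically not the Signal protocol WhatsApp deployed (there is no key exchange, no authentication, no forward secrecy here); it only shows where the key lives: at the two endpoints, never at the server.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Expand a shared key into n pseudorandom bytes (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt by XOR with the keystream (symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The two endpoints share a key; the relay only ever handles ciphertext.
shared_key = secrets.token_bytes(32)
ciphertext = xor(b"meet at noon", shared_key)   # sender encrypts
# ...server stores and forwards ciphertext; it holds no key...
print(xor(ciphertext, shared_key))              # -> b'meet at noon'
```

Because decryption requires `shared_key`, a court order served on the relay yields nothing readable - which is exactly the situation the article describes WhatsApp creating at scale.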
Paul Merrell

Facebook to pay $5bn fine as regulator settles Cambridge Analytica complaint | Technolo... - 0 views

  • Facebook will pay a record $5bn (£4bn) penalty in the US for “deceiving” users about their ability to keep personal information private, after a year-long investigation into the Cambridge Analytica data breach. The Federal Trade Commission (FTC), the US consumer regulator, also announced a lawsuit against Cambridge Analytica and proposed settlements with the data analysis firm’s former chief executive Alexander Nix and its app developer Aleksandr Kogan. The $5bn fine for Facebook dwarfs the previous record for the largest fine handed down by the FTC for violation of consumers’ privacy, which was a $275m penalty for consumer credit agency Equifax.
Paul Merrell

Ohio's attorney general wants Google to be declared a public utility. - The New York Times - 2 views

  • Ohio’s attorney general, Dave Yost, filed a lawsuit on Tuesday in pursuit of a novel effort to have Google declared a public utility and subject to government regulation. The lawsuit, which was filed in a Delaware County, Ohio court, seeks to use a law that’s over a century old to regulate Google by applying a legal designation historically used for railroads, electricity and the telephone to the search engine. “When you own the railroad or the electric company or the cellphone tower, you have to treat everyone the same and give everybody access,” Mr. Yost, a Republican, said in a statement. He added that Ohio was the first state to bring such a lawsuit against Google. If Google were declared a so-called common carrier like a utility company, it would prevent the company from prioritizing its own products, services and websites in search results. Google said it had none of the attributes of a common carrier that usually provide a standardized service for a fee using public assets, such as rights of way. The “lawsuit would make Google Search results worse and make it harder for small businesses to connect directly with customers,” José Castañeda, a Google spokesman, said in a statement. “Ohioans simply don’t want the government to run Google like a gas or electric company. This lawsuit has no basis in fact or law and we’ll defend ourselves against it in court.” Though the Ohio lawsuit is a stretch, there is a long history of government control of certain kinds of companies, said Andrew Schwartzman, a senior fellow at the nonprofit Benton Institute for Broadband & Society. “Think of ‘The Canterbury Tales.’ Travelers needed a place to stay and eat on long road treks, and innkeepers were not allowed to deny them accommodations or rip them off,” he said.
  • After a series of federal lawsuits filed against Google last year, Ohio’s lawsuit is part of a next wave of state actions aimed at regulating and curtailing the power of Big Tech. Also on Tuesday, Colorado’s legislature passed a data privacy law that would allow consumers to opt out of data collection. On Monday, New York’s Senate passed antitrust legislation that would make it easier for plaintiffs to sue dominant platforms for abuse of power. After years of inaction in Congress with tech legislation, states are beginning to fill the regulatory vacuum. Ohio was also one of 38 states that filed an antitrust lawsuit in December accusing Google of being a monopoly and using its dominant position in internet search to squeeze out smaller rivals.
Gonzalo San Gil, PhD.

Export - Support - WordPress.com (Backup) - 0 views

  •  
    "Export Your Content to Another Blog or Platform It's your content; you can do whatever you like with it. Go to Tools -> Export in your WordPress.com dashboard to download an XML file of your blog's content. This format, which we call WordPress eXtended RSS or WXR, will contain your posts, pages, comments, categories, and tags."
Paul Merrell

PC World - Business Center: Yahoo's Zimbra Reaches for the Cloud - 0 views

  • Yahoo's Zimbra, the provider of a communications and collaboration suite that rivals Microsoft's Office and Outlook/Exchange, will make its cloud computing debut on Tuesday. Zimbra has so far relied entirely on third-party hosting partners to offer the SaaS (software-as-a-service) version of its suite. However, it is now taking advantage of Yahoo's data centers to become a hosting provider for customers in the education market that want to access the suite on demand.
Paul Merrell

NSA Spying Inspires ProtonMail 'End-to-End' Encrypted Email Service | NDTV Gadgets - 0 views

  • One new email service promising "end-to-end" encryption launched on Friday, and others are being developed, while major services such as Google Gmail and Yahoo Mail have stepped up security measures. A major catalyst for email encryption was the revelations about widespread online surveillance in documents leaked by Edward Snowden, the former National Security Agency contractor. "A lot of people were upset with those revelations, and that coalesced into this effort," said Jason Stockman, a co-developer of ProtonMail, a new encrypted email service which launched Friday with the collaboration of scientists from Harvard, the Massachusetts Institute of Technology and the European research lab CERN. Stockman said ProtonMail aims to be as user-friendly as the major commercial services, but with extra security, and with its servers located in Switzerland to make it more difficult for US law enforcement to access.
  • "Our vision is to make encryption and privacy mainstream by making it easy to use," Stockman told AFP. "There's no installation. Everything happens behind the scenes automatically."Even though email encryption using special codes or keys, a system known as PGP, has been around for two decades, "it was so complicated," and did not gain widespread adoption, Stockman said.After testing over the past few months, ProtonMail went public Friday using a "freemium" model a basic account will be free with some added features for a paid account.
  • "As our users from China, Iran, Russia, and other countries around the world have shown us in the past months, ProtonMail is an important tool for freedom of speech and we are happy to finally be able to provide this to the whole world," the company said in a blog post. Google and Yahoo recently announced efforts to encrypt their email communications, but some specialists say the effort falls short. "These big companies don't want to encrypt your stuff because they spy on you, too," said Bruce Schneier, a well-known cryptographer and author who is chief technology officer for CO3 Systems. "Hopefully, the NSA debate is creating incentives for people to build more encryption." Stockman said that with services like Gmail, even if data is encrypted, "they have the key right next to it if you have the key and lock next to each other, so it's pretty much useless."
  • ...3 more annotations...
  • By locating in Switzerland, ProtonMail hopes to avoid the legal woes of services like Lavabit, widely believed to have been used by Snowden, which shut down rather than hand over data to the US government, and which now faces a contempt of court order. Even if a Swiss court ordered data to be turned over, Stockman said, "we would hand over piles of encrypted data. We don't have a key. We never see the password."
  • Lavabit founder Ladar Levison meanwhile hopes to launch a new service with other developers in a coalition known as the "Dark Mail Alliance." Levison told AFP he hopes to have a new encrypted email system in testing within a few months and widely available later this year. "The goal is to make it ubiquitous, so people don't have to turn it on," he said. But he added that the technical hurdles are formidable, because the more user-friendly the system becomes, "the more susceptible it is to a sophisticated attacker with fake or spoofed key information." Levison said he hopes Dark Mail will become a new open standard that can be adopted by other email services.
  • Jon Callas, a cryptographer who developed the PGP standard and later co-founded the secure communications firm Silent Circle, cited challenges in making a system that is both secure and ubiquitous. "If you are a bank you have to have an email system that complies with banking regulations," Callas told AFP, which could allow, for example, certain emails to be subject to regulatory or court review. "Many of the services on the Internet started with zero security. We want to start with a system that is totally secure and let people dial it down." The new email system would complement Silent Circle's existing secure messaging system and encrypted mobile phone, which was launched earlier this year. "If we start competing for customers on the basis of maximum privacy, that's good for everybody," Callas said.
  •  
    They're already so swamped that you have to reserve your user name and wait for an invite. They say they have to add servers. Web site is at https://protonmail.ch/ "ProtonMail works on all devices, including desktops, laptops, tablets, and smartphones. It's as simple as visiting our site and logging in. There are no plugins or apps to install - simply use your favorite web browser."
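"We never see the password" implies the key material is derived on the client, so the server only ever stores ciphertext. A hedged sketch of that idea with PBKDF2: ProtonMail's actual scheme is not documented in this excerpt, and the parameters below are illustrative only.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Stretch the password into a 32-byte key on the client side."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
# Only data encrypted under `key` (plus the public salt) would be uploaded;
# the password itself never leaves the client machine.
print(len(key))  # -> 32
```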
Paul Merrell

Facebook to Pay $550 Million to Settle Facial Recognition Suit - The New York Times - 2 views

  • Facebook said on Wednesday that it had agreed to pay $550 million to settle a class-action lawsuit over its use of facial recognition technology in Illinois, giving privacy groups a major victory that again raised questions about the social network’s data-mining practices. The case stemmed from Facebook’s photo-labeling service, Tag Suggestions, which uses face-matching software to suggest the names of people in users’ photos. The suit said the Silicon Valley company violated an Illinois biometric privacy law by harvesting facial data for Tag Suggestions from the photos of millions of users in the state without their permission and without telling them how long the data would be kept. Facebook has said the allegations have no merit. Under the agreement, Facebook will pay $550 million to eligible Illinois users and for the plaintiffs’ legal fees. The sum dwarfs the $380.5 million that the Equifax credit reporting agency agreed this month to pay to settle a class-action case over a 2017 consumer data breach.
Paul Merrell

ACLU Demands Secret Court Hand Over Crucial Rulings On Surveillance Law - 0 views

  • The American Civil Liberties Union (ACLU) has filed a motion to reveal the secret court opinions with “novel or significant interpretations” of surveillance law, in a renewed push for government transparency. The motion, filed Wednesday by the ACLU and Yale Law School’s Media Freedom and Information Access Clinic, asks the Foreign Intelligence Surveillance Act (FISA) Court, which rules on intelligence gathering activities in secret, to release 23 classified decisions it made between 9/11 and the passage of the USA Freedom Act in June 2015. As ACLU National Security Project staff attorney Patrick Toomey explains, the opinions are part of a “much larger collection of hidden rulings on all sorts of government surveillance activities that affect the privacy rights of Americans.” Among them is the court order that the government used to direct Yahoo to secretly scan its users’ emails for “a specific set of characters.” Toomey writes: These court rulings are essential for the public to understand how federal laws are being construed and implemented. They also show how constitutional protections for personal privacy and expressive activities are being enforced by the courts. In other words, access to these opinions is necessary for the public to properly oversee their government.
  • Although the USA Freedom Act requires the release of novel FISA court opinions on surveillance law, the government maintains that the rule does not apply retroactively—thereby protecting the panel from publishing many of its post-9/11 opinions, which helped create an “unprecedented buildup” of secret surveillance laws. Even after National Security Agency (NSA) whistleblower Edward Snowden revealed the scope of mass surveillance in 2013, sparking widespread outcry, dozens of rulings on spying operations remain hidden from the public eye, which stymies efforts to keep the government accountable, civil liberties advocates say. “These rulings are necessary to inform the public about the scope of the government’s surveillance powers today,” the ACLU’s motion states.
  • Toomey writes that the rulings helped influence a number of novel spying activities, including: The government’s use of malware, which it calls “Network Investigative Techniques” The government’s efforts to compel technology companies to weaken or circumvent their own encryption protocols The government’s efforts to compel technology companies to disclose their source code so that it can identify vulnerabilities The government’s use of “cybersignatures” to search through internet communications for evidence of computer intrusions The government’s use of stingray cell-phone tracking devices under the Foreign Intelligence Surveillance Act (FISA) The government’s warrantless surveillance of Americans under FISA Section 702—a controversial authority scheduled to expire in December 2017 The bulk collection of financial records by the CIA and FBI under Section 215 of the Patriot Act Without these rulings being made public, “it simply isn’t possible to understand the government’s claimed authority to conduct surveillance,” Toomey writes. As he told The Intercept on Wednesday, “The people of this country can’t hold the government accountable for its surveillance activities unless they know what our laws allow. These secret court opinions define the limits of the government’s spying powers. Their disclosure is essential for meaningful public oversight in our democracy.”