
Future of the Web: Group items tagged "method"


Gonzalo San Gil, PhD.

Dice page | Electronic Frontier Foundation - 0 views

  •  
    "Create strong passphrases with EFF's new random number generators! This page includes information about passwords, different wordlists, and EFF's suggested method for passphrase generation. Use the directions below with EFF's random number generator member gift or your own set of dice. And now, a message from internationally renowned security technologist, author, and EFF Board Member Bruce Schneier:"
Gonzalo San Gil, PhD.

"Piracy Monitoring Outfit Uses Flawed Tracking Technology" - TorrentFreak [# ! Note] - 0 views

  •  
    By Ernesto on June 12, 2016: Every day, anti-piracy outfits monitor millions of unauthorized BitTorrent transfers. Among other things, the data collected is used to send stark warnings to alleged pirates. However, according to a torrent site owner, the tracking methods of these companies are not all foolproof.
Paul Merrell

FBI's secret method of unlocking iPhone may never reach Apple | Reuters - 0 views

  • The FBI may be allowed to withhold information about how it broke into an iPhone belonging to a gunman in the December San Bernardino shootings, despite a U.S. government policy of disclosing technology security flaws discovered by federal agencies. Under the U.S. vulnerabilities equities process, the government is supposed to err in favor of disclosing security issues so companies can devise fixes to protect data. The policy has exceptions for law enforcement, and there are no hard rules about when and how it must be applied. Apple Inc has said it would like the government to share how it cracked the iPhone security protections. But the Federal Bureau of Investigation, which has been frustrated by its inability to access data on encrypted phones belonging to criminal suspects, might prefer to keep secret the technique it used to gain access to gunman Syed Farook's phone. The referee is likely to be a White House group formed during the Obama administration to review computer security flaws discovered by federal agencies and decide whether they should be disclosed.
  • Stewart Baker, former general counsel of the NSA and now a lawyer with Steptoe & Johnson, said the review process could be complicated if the cracking method is considered proprietary by the third party that assisted the FBI. Several security researchers have pointed to the Israel-based mobile forensics firm Cellebrite as the likely third party that helped the FBI. That company has repeatedly declined comment.
  •  
    The article is wide of the mark, based on analysis of Executive Branch policy rather than the governing law such as the Freedom of Information Act. And I still find it somewhat ludicrous that a third party with knowledge of the defect could succeed in convincing a court that knowledge of a defect in a company's product is trade-secret proprietary information. "Your honor, my client has discovered a way to break into Mr. Tim Cook's house without a key to his house. That is a valuable trade secret that this Court must keep Mr. Cook from learning." Pow! The Computer Fraud and Abuse Act makes it a crime to access a computer that can connect to the Internet by exploiting a software bug. 
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform: The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
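To make the class-attribute idea concrete, here is a minimal sketch (in Python, using lxml) of how a publisher-defined role such as "epigraph" can be layered onto plain XHTML and then selected programmatically. The fragment content and class name are invented for the example.

```python
from lxml import etree

# Hypothetical XHTML fragment: extra semantics ride on the class attribute,
# in the spirit of microformats, without leaving the XHTML document type.
XHTML = b"""<div xmlns="http://www.w3.org/1999/xhtml">
  <p class="epigraph">Only connect.</p>
  <p>Ordinary body text.</p>
</div>"""

tree = etree.fromstring(XHTML)
ns = {"x": "http://www.w3.org/1999/xhtml"}

# Select paragraphs that carry the publisher-defined "epigraph" role.
query = '//x:p[contains(concat(" ", normalize-space(@class), " "), " epigraph ")]'
for p in tree.xpath(query, namespaces=ns):
    print(p.text)
```
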
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
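The "Web page in a wrapper" claim is easy to verify with nothing but the standard library; the sketch below lists the contents of an ePub file (the filename is a placeholder).

```python
import zipfile

# Any .epub will do; "book.epub" is a placeholder path.
with zipfile.ZipFile("book.epub") as epub:
    for name in epub.namelist():
        print(name)
# Typical entries: mimetype, META-INF/container.xml, a .opf metadata file,
# and the book's content as XHTML (plus CSS and images).
```
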
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
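That kind of clean-up lends itself to scripting as well as to hand search-and-replace. The fragment below is a rough sketch: it folds consecutive paragraphs carrying a list-item class into a proper XHTML unordered list. The class name "list-item" is an assumption; the real class names emitted by the Digital Editions export would need to be checked.

```python
import re

# Match one <p class="list-item">…</p> (class name assumed for illustration).
ITEM = re.compile(r'<p class="list-item">(.*?)</p>\s*', re.S)
# Match a run of one or more consecutive list-item paragraphs.
RUN = re.compile(r'(?:<p class="list-item">.*?</p>\s*)+', re.S)

def listify(xhtml: str) -> str:
    def replace_run(match):
        items = ITEM.findall(match.group(0))
        lis = "\n".join(f"  <li>{text}</li>" for text in items)
        return "<ul>\n" + lis + "\n</ul>\n"
    return RUN.sub(replace_run, xhtml)

sample = ('<p class="list-item">First point</p>\n'
          '<p class="list-item">Second point</p>\n'
          '<p>Not part of the list.</p>')
print(listify(sample))
```
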
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
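As a concrete illustration of that step, the transform can be driven either by calling xsltproc from a script or by using an in-process XSLT engine. The stylesheet and chapter filenames below are placeholders, not the project's actual files.

```python
import subprocess
from lxml import etree

# Route 1: shell out to xsltproc, equivalent to running
#   xsltproc -o chapter.icml xhtml2icml.xsl chapter.xhtml
subprocess.run(
    ["xsltproc", "-o", "chapter.icml", "xhtml2icml.xsl", "chapter.xhtml"],
    check=True,
)

# Route 2: apply the same stylesheet in-process with lxml's XSLT support.
transform = etree.XSLT(etree.parse("xhtml2icml.xsl"))
result = transform(etree.parse("chapter.xhtml"))
with open("chapter.icml", "wb") as out:
    out.write(etree.tostring(result, xml_declaration=True, encoding="UTF-8"))
```
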
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
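The authors' actual script is not reproduced here, but the general shape of such a mapping can be sketched. In the fragment below, XHTML paragraphs are rewritten as ICML-style paragraph and character ranges; the element and style names (ParagraphStyleRange, CharacterStyleRange, Content, "ParagraphStyle/Body") are recollections of the ICML schema and should be treated as assumptions to verify against Adobe's IDML documentation.

```python
from lxml import etree

XHTML_NS = {"x": "http://www.w3.org/1999/xhtml"}

def xhtml_paragraphs_to_icml(xhtml_path: str) -> bytes:
    # Illustrative only: real ICML needs additional wrapper elements and
    # style definitions that the InDesign template supplies.
    doc = etree.parse(xhtml_path)
    story = etree.Element("Story")
    for p in doc.xpath("//x:p", namespaces=XHTML_NS):
        pr = etree.SubElement(story, "ParagraphStyleRange",
                              AppliedParagraphStyle="ParagraphStyle/Body")
        cr = etree.SubElement(pr, "CharacterStyleRange")
        etree.SubElement(cr, "Content").text = "".join(p.itertext())
        etree.SubElement(cr, "Br")  # paragraph break
    return etree.tostring(story, pretty_print=True)

print(xhtml_paragraphs_to_icml("chapter.xhtml").decode())
```
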
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
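To show how thin the wrapper really is, here is a minimal sketch of building an ePub by hand with Python's zipfile module. The metadata values are placeholders, and a production file needs a few more required pieces (notably an NCX table of contents for ePub 2) and should be checked with epubcheck; tools like eCub handle all of this for you.

```python
import zipfile

CONTAINER = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

OPF = """<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/"
         unique-identifier="bookid" version="2.0">
  <metadata>
    <dc:title>Sample Book</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">urn:uuid:00000000-0000-0000-0000-000000000000</dc:identifier>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="ch1"/>
  </spine>
</package>"""

CHAPTER = """<html xmlns="http://www.w3.org/1999/xhtml"><head><title>Chapter 1</title></head>
<body><h1>Chapter 1</h1><p>The content began life as clean XHTML.</p></body></html>"""

with zipfile.ZipFile("sample.epub", "w") as epub:
    # The mimetype entry must come first and be stored without compression.
    epub.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/content.opf", OPF, compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("OEBPS/chapter1.xhtml", CHAPTER, compress_type=zipfile.ZIP_DEFLATED)
```
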
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1999 and open sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Gonzalo San Gil, PhD.

Unblock Website - 0 views

  •  
    "This section show you how to unblock Geo-Restricted websites with ease. Step by step instructions show you how to unblock websites by using various methods like VPN, Proxy and Smart DNS."
Gary Edwards

Less Talk, More Code: The four rules of the web and compound documents - 0 views

  •  
    The four rules of the web and compound documents A real quirk that truly interests me is the difference in aims between the way documents are typically published and the way that the information within them is reused. A published document is normally in a single 'format' - a paginated layout, and this may comprise text, numerical charts, diagrams, tables of data and so on. My assumption is that, to support a given view or argument, a reference to the entirety of an article is not necessary; The full paper gives the context to the information, but it is much more likely that a small part of this paper contains the novel insight being referenced. In the paper-based method, it is difficult to uniquely identify parts of an article as items in their own right. You could reference a page number, give line numbers, or quote a table number, but this doesn't solve this issue that the author hadn't put time to considering that a chart, a table or a section of text would be reused.
Gary Edwards

ptsefton » OpenOffice.org is bad for the planet - 0 views

  •  
    ptsefton continues his rant that OpenOffice does not support the Open Web. He's been on this rant for so long, I'm wondering if he really thinks there's a chance the lords of ODF and the OpenOffice source code are listening? In this post he describes how useless it is to submit his findings and frustrations with OOo in a bug report. Pretty funny stuff even if you do end up joining the Michael Meeks trek along this trail of tears. Maybe there's another way?

    What would happen if pt moved from targeting the not so open OpenOffice, to target governments and enterprises trying to set future information system requirements?

    NY State is next up on this endless list. Most likely they will follow the lessons of exhaustive pilot studies conducted by Massachusetts, California, Belgium, Denmark and England, and end up mandating the use of both open standard "XML" formats, ODF and OOXML.

    The pilots concluded that there was a need for both XML formats; depending on the needs of different departments and workgroups. The pilot studies scream out a general rule of thumb; if your department has day-to-day business processes bound to MSOffice workgroups, then it makes sense to use MSOffice OOXML going forward. If there is no legacy MSOffice bound workgroup or workflow, it makes sense to move to OpenOffice ODF.

    One thing the pilots make clear is that it is prohibitively costly and disruptive to try to replace MSOffice bound workgroups.

    What NY State might consider is that the Web is going to be an important part of their information systems future. What a surprise. Every pilot recognized and indeed, emphasized this fact. Yet, they fell short of the obvious conclusion: mandating that desktop applications provide native support for Open Web formats, protocols and interfaces!

    What's wrong with insisting that desktop applications and office suites support the rapidly advancing HTML+ technologies as well as the applicat
Paul Merrell

Cloud Has Shrinking Effect on StarOffice Price Tag - 0 views

  • Last Friday Sun Microsystems, its fortunes about as low as a snake’s belly, moved its StarOffice franchise into a new Cloud Computing unit with clear instructions to “grow revenues.” StarOffice 9, the latest rev of the Microsoft wannabe, was sent to market Monday priced at $34.95 for a one-off download, half the price of its predecessor, leaving one to assume that it wasn’t selling at 70 bucks – especially since pretty much the same thing can be had for nothing from OpenOffice.org.
  • Volume licenses from Sun start at $25 per user.
  •  
    The author clearly missed that Sun is a full-fledged Microsoft partner and that StarOffice 9, of all OpenOffice.org clones, is the only one that has write support for both ODF v. 1.2 and ODF v. 1.1, the latter of which is the only ODF version being implemented by Microsoft. The other OOo clones write only to ODF 1.2, which is dramatically different from ODF 1.1. So StarOffice will almost certainly have better interop via ODF with MS Office 2007 than will OOo 3.x or Lotus Symphony. For $25 per seat in the enterprise, $34.95 retail. The author simply misses that "pretty much the same thing can [NOT] be had for nothing from OpenOffice.org." There is a method to the claimed Sun madness, methinks. IBM gets left standing at the altar again.
Gonzalo San Gil, PhD.

Artist Revenue Streams | Future of Music Coalition - 0 views

  •  
    [Future of Music Coalition has launched a multi-method research project to assess how musicians' revenue streams are changing in this new music landscape. ...]
Gary Edwards

Mashups turn into an industry as offerings mature | Hinchcliffe Enterprise Web 2.0 | Z... - 0 views

  •  
    Dion has lots to say about the recent Web 2.0 Conference. In this article he covers nine significant announcements from companies specializing in Web based mashups and the related tools for building ad hoc Web applications. This year's Web 2.0 was filled with Web developer oriented services, but my favorite was MindTouch. Perhaps because their focus was that of directly engaging end users in the customization of business processes. Yes, the creation of data objects is clearly in the realm of trained developers. And for sure many tools were announced at Web 2.0 to further the much needed wiring of data objects. But once wired and available, services like MindTouch, I think, will become the way end users interact and create new business productivity methods. Great coverage.

    "...... For awareness and understanding of the fast-growing world of mashups are significant challenges as IT practitioners, business strategists, and software vendors attempt to grapple with what's facing up to be the biggest challenge of all: The habits and expectations of the larger part of a generation of workers who don't yet realize mashups are poised to change many things about the software landscape on the Web and in the workplace. Generational changes can be difficult for businesses to embrace successfully, and while evidence that mashups are remaking the business world are still very much emerging, they certainly hold the promise..."

    ".... while the life of the average Web developer has been greatly improved by the availability of a wide variety of useful open APIs, the average user of the Web hasn't been a direct beneficiary except through the increase in Web apps that are built on the mashup model. And that's because the tools that empower users to weave together existing Web parts and open APIs into the exact solutions they need are just now becoming easy enough and robust enough to readily enable these scenarios. And that doesn't include the variety of
Paul Merrell

NSA contractors use LinkedIn profiles to cash in on national security | Al Jazeera America - 0 views

  • NSA spies need jobs, too. And that is why many covert programs could be hiding in plain sight. Job websites such as LinkedIn and Indeed.com contain hundreds of profiles that reference classified NSA efforts, posted by everyone from career government employees to low-level IT workers who served in Iraq or Afghanistan. They offer a rare glimpse into the intelligence community's projects and how they operate. Now some researchers are using the same kinds of big-data tools employed by the NSA to scrape public LinkedIn profiles for classified programs. But the presence of so much classified information in public view raises serious concerns about security — and about the intelligence industry as a whole. “I’ve spent the past couple of years searching LinkedIn profiles for NSA programs,” said Christopher Soghoian, the principal technologist with the American Civil Liberties Union’s Speech, Privacy and Technology Project.
  • On Aug. 3, The Wall Street Journal published a story about the FBI’s growing use of hacking to monitor suspects, based on information Soghoian provided. The next day, Soghoian spoke at the Defcon hacking conference about how he uncovered the existence of the FBI’s hacking team, known as the Remote Operations Unit (ROU), using the LinkedIn profiles of two employees at James Bimen Associates, with which the FBI contracts for hacking operations. “Had it not been for the sloppy actions of a few contractors updating their LinkedIn profiles, we would have never known about this,” Soghoian said in his Defcon talk. Those two contractors were not the only ones being sloppy.
  • And there are many more. A quick search of Indeed.com using three code names unlikely to return false positives — Dishfire, XKeyscore and Pinwale — turned up 323 résumés. The same search on LinkedIn turned up 48 profiles mentioning Dishfire, 18 mentioning XKeyscore and 74 mentioning Pinwale. Almost all these people appear to work in the intelligence industry. Network-mapping the data Fabio Pietrosanti of the Hermes Center for Transparency and Digital Human Rights noticed all the code names on LinkedIn last December. While sitting with M.C. McGrath at the Chaos Communication Congress in Hamburg, Germany, Pietrosanti began searching the website for classified program names — and getting serious results. McGrath was already developing Transparency Toolkit, a Web application for investigative research, and knew he could improve on Pietrosanti’s off-the-cuff methods.
  • “I was, like, huh, maybe there’s more we can do with this — actually get a list of all these profiles that have these results and use that to analyze the structure of which companies are helping with which programs, which people are helping with which programs, try to figure out in what capacity, and learn more about things that we might not know about,” McGrath said. He set up a computer program called a scraper to search LinkedIn for public profiles that mention known NSA programs, contractors or jargon — such as SIGINT, the agency’s term for “signals intelligence” gleaned from intercepted communications. Once the scraper found the name of an NSA program, it searched nearby for other words in all caps. That allowed McGrath to find the names of unknown programs, too. Once McGrath had the raw data — thousands of profiles in all, with 70 to 80 different program names — he created a network graph that showed the relationships between specific government agencies, contractors and intelligence programs. Of course, the data are limited to what people are posting on their LinkedIn profiles. Still, the network graph gives a sense of which contractors work on several NSA programs, which ones work on just one or two, and even which programs military units in Iraq and Afghanistan are using. And that is just the beginning.
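A toy version of that scraping heuristic fits in a few lines. The profile text and program list below are invented stand-ins for scraped public data, and any real collection would of course have to respect the site's terms of service; the point is only to show the "known code name plus nearby all-caps words" logic.

```python
import re
from collections import defaultdict

# Invented sample data standing in for scraped public profile text.
PROFILES = {
    "analyst_a": "Worked on XKEYSCORE and DISHFIRE reporting; SIGINT analysis for EXAMPLEPROGRAM.",
    "engineer_b": "Systems engineer supporting PINWALE ingest pipelines.",
}
KNOWN_PROGRAMS = {"XKEYSCORE", "DISHFIRE", "PINWALE"}
ALL_CAPS = re.compile(r"\b[A-Z]{2,}[A-Z0-9]*\b")

hits = defaultdict(set)   # code name -> profiles mentioning it
candidates = set()        # unknown all-caps terms seen near known program names

for profile, text in PROFILES.items():
    caps = set(ALL_CAPS.findall(text))
    for term in caps & KNOWN_PROGRAMS:
        hits[term].add(profile)
    if caps & KNOWN_PROGRAMS:
        # Heuristic from the article: unknown all-caps words in the same
        # profile may be unreported program names (or false positives).
        candidates |= caps - KNOWN_PROGRAMS

for term, who in hits.items():
    print(term, "->", sorted(who))
print("possible new code names:", sorted(candidates))
```
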
  • [Interactive network graph: the relationships between specific national security surveillance programs (in red) and government organizations or private contractors (in blue).]
  •  
    What a giggle, public spying on NSA and its contractors using Big Data. The interactive network graph with its sidebar display of relevant data derived from LinkedIn profiles is just too delightful. 
Paul Merrell

ExposeFacts - For Whistleblowers, Journalism and Democracy - 0 views

  • Launched by the Institute for Public Accuracy in June 2014, ExposeFacts.org represents a new approach for encouraging whistleblowers to disclose information that citizens need to make truly informed decisions in a democracy. From the outset, our message is clear: “Whistleblowers Welcome at ExposeFacts.org.” ExposeFacts aims to shed light on concealed activities that are relevant to human rights, corporate malfeasance, the environment, civil liberties and war. At a time when key provisions of the First, Fourth and Fifth Amendments are under assault, we are standing up for a free press, privacy, transparency and due process as we seek to reveal official information—whether governmental or corporate—that the public has a right to know. While no software can provide an ironclad guarantee of confidentiality, ExposeFacts—assisted by the Freedom of the Press Foundation and its “SecureDrop” whistleblower submission system—is utilizing the latest technology on behalf of anonymity for anyone submitting materials via the ExposeFacts.org website. As journalists we are committed to the goal of protecting the identity of every source who wishes to remain anonymous.
  • The seasoned editorial board of ExposeFacts will be assessing all the submitted material and, when deemed appropriate, will arrange for journalistic release of information. In exercising its judgment, the editorial board is able to call on the expertise of the ExposeFacts advisory board, which includes more than 40 journalists, whistleblowers, former U.S. government officials and others with wide-ranging expertise. We are proud that Pentagon Papers whistleblower Daniel Ellsberg was the first person to become a member of the ExposeFacts advisory board. The icon below links to a SecureDrop implementation for ExposeFacts overseen by the Freedom of the Press Foundation and is only accessible using the Tor browser. As the Freedom of the Press Foundation notes, no one can guarantee 100 percent security, but this provides a “significantly more secure environment for sources to get information than exists through normal digital channels, but there are always risks.” ExposeFacts follows all guidelines as recommended by Freedom of the Press Foundation, and whistleblowers should too; the SecureDrop onion URL should only be accessed with the Tor browser — and, for added security, be running the Tails operating system. Whistleblowers should not log-in to SecureDrop from a home or office Internet connection, but rather from public wifi, preferably one you do not frequent. Whistleblowers should keep to a minimum interacting with whistleblowing-related websites unless they are using such secure software.
  •  
    A new resource site for whistle-blowers, somewhat in the tradition of Wikileaks, but designed for encrypted communications between whistleblowers and journalists. This one has an impressive board of advisors that includes several names I know and tend to trust, among them former whistle-blowers Daniel Ellsberg, Ray McGovern, Thomas Drake, William Binney, and Ann Wright. Leaked records can only be dropped from a web browser running the Tor anonymizer software, and the site uses the SecureDrop system originally developed by Aaron Swartz. They strongly recommend using the Tails secure operating system that can be installed to a thumb drive and leaves no tracks on the host machine. https://tails.boum.org/index.en.html Curious, I downloaded Tails and installed it to a virtual machine. It's a heavily customized version of Debian. It has a very nice Gnome desktop and blocks any attempt to connect to an external network by means other than installed software that demands encrypted communications. For example, web sites can only be viewed via the Tor anonymizing proxy network. It does take longer for web pages to load because they are moving over a chain of proxies, but even so it's faster than pages loaded in the dial-up modem days, even for web pages that are loaded with graphics, javascript, and other cruft. E.g., about 2 seconds for New York Times pages. All cookies are treated by default as session cookies so disappear when you close the page or the browser. I love my Linux Mint desktop, but I am thinking hard about switching that box to Tails. I've been looking for methods to send a lot more encrypted stuff down the pipe for NSA to store. Tails looks to make that not only easy, but unavoidable. From what I've gathered so far, if you want to install more software on Tails, it takes about an hour to create a customized version and then update your Tails installation from a new ISO file. Tails has a wonderful odor of having been designed for secure computing. Current
Gonzalo San Gil, PhD.

Project Maelstrom - 0 views

  •  
    "Project Maelstrom aims on resolving this by attempting to create an open network of data sources, authentication methods, and applications. Unlike many other competing services, Maelstrom aims to create a comprehensive network of anything required for an individual web application to integrate with the internet as a whole. Just the connections, nothing more. We'll create the network; it'll be up to you to use it. We don't want to get in your way by attempting to compete. "
Paul Merrell

Inside the NSA's War on Internet Security - SPIEGEL ONLINE - 0 views

  • US and British intelligence agencies undertake every effort imaginable to crack all types of encrypted Internet communication. The cloud, it seems, is full of holes. The good news: New Snowden documents show that some forms of encryption still cause problems for the NSA.
  •  
    A must-read. Identifies which encryption methods the NSA has cracked, which they can't, and which they have difficulty with.
Gonzalo San Gil, PhD.

Apple Patents Technology to Legalize P2P Sharing | TorrentFreak * - 1 views

  •  
    "This means that transferring files between devices is only possible if these support Apple's licensing scheme. That's actually a step backwards from the DRM-free music that's sold in most stores today." [* What 'Apple's licensing scheme' -closed source- can hide?]
  •  
    A business method software patent combining old elements that are all prior art, including DRM. Yech! "... a patent that makes it possible to license P2P sharing" really puts a spin on reality. If the methods were in the public domain, anyone could use them without a license. That's equivalent to saying "a government-granted monopoly with the power but no responsibility to collect money from anyone who wants to invade the monopoly's protected rights" and presenting that fact as some sort of tremendous philanthropic act by Apple. On software patent claims as prior art and obvious, see my legal memo on that topic here. http://goo.gl/5X8Kg9
Paul Merrell

WASHINGTON: CIA admits it broke into Senate computers; senators call for spy chief's ou... - 0 views

  • An internal CIA investigation confirmed allegations that agency personnel improperly intruded into a protected database used by Senate Intelligence Committee staff to compile a scathing report on the agency’s detention and interrogation program, prompting bipartisan outrage and at least two calls for spy chief John Brennan to resign. “This is very, very serious, and I will tell you, as a member of the committee, someone who has great respect for the CIA, I am extremely disappointed in the actions of the agents of the CIA who carried out this breach of the committee’s computers,” said Sen. Saxby Chambliss, R-Ga., the committee’s vice chairman.
  • The rare display of bipartisan fury followed a three-hour private briefing by Inspector General David Buckley. His investigation revealed that five CIA employees, two lawyers and three information technology specialists improperly accessed or “caused access” to a database that only committee staff were permitted to use. Buckley’s inquiry also determined that a CIA crimes report to the Justice Department alleging that the panel staff removed classified documents from a top-secret facility without authorization was based on “inaccurate information,” according to a summary of the findings prepared for the Senate and House intelligence committees and released by the CIA. In other conclusions, Buckley found that CIA security officers conducted keyword searches of the emails of staffers of the committee’s Democratic majority _ and reviewed some of them _ and that the three CIA information technology specialists showed “a lack of candor” in interviews with Buckley’s office.
  • The inspector general’s summary did not say who may have ordered the intrusion or when senior CIA officials learned of it. Following the briefing, some senators struggled to maintain their composure over what they saw as a violation of the constitutional separation of powers between an executive branch agency and its congressional overseers. “We’re the only people watching these organizations, and if we can’t rely on the information that we’re given as being accurate, then it makes a mockery of the entire oversight function,” said Sen. Angus King, an independent from Maine who caucuses with the Democrats. The findings confirmed charges by the committee chairwoman, Sen. Dianne Feinstein, D-Calif., that the CIA intruded into the database that by agreement was to be used by her staffers compiling the report on the harsh interrogation methods used by the agency on suspected terrorists held in secret overseas prisons under the George W. Bush administration. The findings also contradicted Brennan’s denials of Feinstein’s allegations, prompting two panel members, Sens. Mark Udall, D-Colo., and Martin Heinrich, D-N.M., to demand that the spy chief resign.
  • Another committee member, Sen. Ron Wyden, D-Ore., and some civil rights groups called for a fuller investigation. The demands clashed with a desire by President Barack Obama, other lawmakers and the CIA to move beyond the controversy over the “enhanced interrogation program” after Feinstein releases her committee’s report, which could come as soon as next week. Many members demanded that Brennan explain his earlier denial that the CIA had accessed the Senate committee database. “Director Brennan should make a very public explanation and correction of what he said,” said Sen. Carl Levin, D-Mich. He all but accused the Justice Department of a coverup by deciding not to pursue a criminal investigation into the CIA’s intrusion.
  • “I thought there might have been information that was produced after the department reached their conclusion,” he said. “What I understand, they have all of the information which the IG has.” He hinted that the scandal goes further than the individuals cited in Buckley’s report. “I think it’s very clear that CIA people knew exactly what they were doing and either knew or should’ve known,” said Levin, adding that he thought that Buckley’s findings should be referred to the Justice Department. A person with knowledge of the issue insisted that the CIA personnel who improperly accessed the database “acted in good faith,” believing that they were empowered to do so because they believed there had been a security violation. “There was no malicious intent. They acted in good faith believing they had the legal standing to do so,” said the knowledgeable person, who asked not to be further identified because they weren’t authorized to discuss the issue publicly. “But it did not conform with the legal agreement reached with the Senate committee.”
  • Feinstein called Brennan’s apology and his decision to submit Buckley’s findings to the accountability board “positive first steps.” “This IG report corrects the record and it is my understanding that a declassified report will be made available to the public shortly,” she said in a statement. “The investigation confirmed what I said on the Senate floor in March _ CIA personnel inappropriately searched Senate Intelligence Committee computers in violation of an agreement we had reached, and I believe in violation of the constitutional separation of powers,” she said. It was not clear why Feinstein didn’t repeat her charges from March that the agency also may have broken the law and had sought to “thwart” her investigation into the CIA’s use of waterboarding, which simulates drowning, sleep deprivation and other harsh interrogation methods _ tactics denounced by many experts as torture.
  • Buckley’s findings clashed with denials by Brennan that he issued only hours after Feinstein’s blistering Senate speech. “As far as the allegations of, you know, CIA hacking into, you know, Senate computers, nothing could be further from the truth. I mean, we wouldn’t do that. I mean, that’s _ that’s just beyond the _ you know, the scope of reason in terms of what we would do,” he said in an appearance at the Council on Foreign Relations. White House Press Secretary Josh Earnest issued a strong defense of Brennan, crediting him with playing an “instrumental role” in the administration’s fight against terrorism, in launching Buckley’s investigation and in looking for ways to prevent such occurrences in the future. Earnest was asked at a news briefing whether there was a credibility issue for Brennan, given his forceful denial in March. “Not at all,” he replied, adding that Brennan had suggested the inspector general’s investigation in the first place. And, he added, Brennan had taken the further step of appointing the accountability board to review the situation and the conduct of those accused of acting improperly to “ensure that they are properly held accountable for that conduct.”
  • The allegations and the separate CIA charge that the committee staff removed classified documents from the secret CIA facility in Northern Virginia without authorization were referred to the Justice Department for investigation. The department earlier this month announced that it had found insufficient evidence on which to proceed with criminal probes into either matter “at this time.” Thursday, Justice Department officials declined comment.
  • In her speech, Feinstein asserted that her staff found the material _ known as the Panetta review, after former CIA Director Leon Panetta, who ordered it _ in the protected database and that the CIA discovered the staff had it by monitoring its computers in violation of the user agreement. The inspector general’s summary, which was prepared for the Senate and the House intelligence committees, didn’t identify the CIA personnel who had accessed the Senate’s protected database. Furthermore, it said, the CIA crimes report to the Justice Department alleging that panel staffers had removed classified materials without permission was grounded on inaccurate information. The report is believed to have been sent by the CIA’s then acting general counsel, Robert Eatinger, who was a legal adviser to the interrogation program. “The factual basis for the referral was not supported, as the author of the referral had been provided inaccurate information on which the letter was based,” said the summary, noting that the Justice Department decided not to pursue the issue.
  • Christopher Anders, senior legislative counsel with the American Civil Liberties Union, criticized the CIA announcement, saying that “an apology isn’t enough.”“The Justice Department must refer the (CIA) inspector general’s report to a federal prosecutor for a full investigation into any crimes by CIA personnel or contractors,” said Anders.
  •  
    And no one but the lowest ranking staffer knew anything about it, not even the CIA lawyer who made the criminal referral to the Justice Dept., alleging that the Senate Intelligence Committee had accessed classified documents it wasn't authorized to access. So the Justice Dept. announces that there's insufficient evidence to warrant a criminal investigation. As though the CIA lawyer's allegations were not based on the unlawful surveillance of the Senate Intelligence Committee's network.  Can't we just get an official announcement that Attorney General Holder has decided that there shall be a cover-up? 
Paul Merrell

Microsoft Demos Real-Time Translation Over Skype - Slashdot - 1 views

  • "Today at the first annual Code Conference, Microsoft demonstrated its new real-time translation in Skype publicly for the first time. Gurdeep Pall, Microsoft's VP of Skype and Lync, compares the technology to Star Trek's Universal Translator. During the demonstration, Pall converses in English with a coworker in Germany who is speaking German. 'Skype Translator results from decades of work by the industry, years of work by our researchers, and now is being developed jointly by the Skype and Microsoft Translator teams. The demo showed near real-time audio translation from English to German and vice versa, combining Skype voice and IM technologies with Microsoft Translator, and neural network-based speech recognition.'"
  •  
    Haven't yet explored to see what's beneath the marketing hype. And I'm less than excited about the Skype with its NSA tendrils being the vehicle of audio translations of human languages. But given the progress in: [i] automated translations of human texts; [ii] audio screenreaders; and [iii] voice-to-text transcription, this is one we saw coming. Slap the three technologies together and wait until processing power catches up to what's needed to produce a marketable experience. After all, the StarTrek scriptwriters saw this coming too.   Ray Kurzweil, now at Google, should get a lot of the pioneer credit here. His revolutionary optical character recognition algorithms soon found themselves redeployed in text-to-speech synthesis and speech recognition technology. From Wikipedia: "Kurzweil was the principal inventor of the first CCD flatbed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first commercial text-to-speech synthesizer, the first music synthesizer Kurzweil K250 capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition." Not bad for a guy the same age as my younger brother. But Microsoft's announcement here may be more vaporware than hardware in production and lines of executable code. Microsoft has a long history of vaporware announcements to persuade potential customers to hold off on riding with the competition.  And the Softies undoubtedly know that Google's human language text translation capabilities are way out in front and that the voice to text and text to speech API methods have already found a comfortable home in Android and Chromebook. What does Microsoft have that's ready to ship if anything? I'll check it out tomorrow. 
jorge1985

How to share files between computers over network with btsync - Xmodulo - 3 views

  • BitTorrent Sync, also known as btsync for short, is a cross-platform synchronization tool (freeware) developed by BitTorrent, the famous peer-to-peer (P2P) file-sharing protocol. Unlike classic BitTorrent clients, however, btsync encrypts traffic and grants access to shared files via automatically generated keys across different operating systems and device types.
    • jorge1985
       
      important
  •  
    "Last updated on February 13, 2015 Authored by Gabriel Cánepa 2 Comments If you are the type of person who uses several devices to work online, I'm sure you must be using, or at least wishing to use, a method for syncing files and directories among those devices."
Gonzalo San Gil, PhD.

How to record and edit screencasts in Linux | Opensource.com - 1 views

  •  
    [... One of the methods many community leads are attracted to is the creation of online videos that highlight such use cases in clear, easy-to-follow narratives. Recording screencasts like this is actually a pretty straightforward operation. ...]
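For readers who want the nuts and bolts behind that claim, one common recording route on Linux is ffmpeg's x11grab input. The sketch below assumes an X11 session, an ffmpeg build with x11grab and PulseAudio support, and placeholder values for display, resolution and output file; Wayland desktops and the GUI tools the article covers work differently.

```python
import subprocess

# Minimal screencast capture sketch: X11 screen grab plus PulseAudio audio,
# written to a Matroska file. Adjust the display, resolution, and audio
# source for your system before running.
cmd = [
    "ffmpeg",
    "-f", "x11grab", "-video_size", "1920x1080", "-framerate", "30", "-i", ":0.0",
    "-f", "pulse", "-i", "default",
    "-c:v", "libx264", "-preset", "ultrafast", "-c:a", "aac",
    "screencast.mkv",
]
subprocess.run(cmd, check=True)
```
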
Gonzalo San Gil, PhD.

How to set up torrent scheduling on Linux - 0 views

  •  
    "Today we will take a look on the methods that Linux users can follow in order to set up a scheduler for their torrent downloads. "