Open Web: Group items matching "complexity" in title, tags, annotations or url

Gary Edwards

The State of the Internet Operating System - O'Reilly Radar - 0 views

  •  
    ... The Internet Operating System is an Information Operating System ... Search is key to managing and working "information" ... Media Access ... Communications ... Identity and the Social Graph ... Payment ... Advertising ... Location ... Activity Streams - "Attention" ... Time  ... Image and Speech Recognition ... Government Data ... The Browser Where is the "operating system" in all this? Clearly, it is still evolving. Applications use a hodgepodge of services from multiple different providers to get the information they need. But how different is this from PC application development in the early 1980s, when every application provider wrote their own device drivers to support the hodgepodge of disks, ports, keyboards, and screens that comprised the still emerging personal computer ecosystem? Along came Microsoft with an offer that was difficult to refuse: We'll manage the drivers; all application developers have to do is write software that uses the Win32 APIs, and all of the complexity will be abstracted away. This is the crux of my argument about the internet operating system. We are once again approaching the point at which the Faustian bargain will be made: simply use our facilities, and the complexity will go away. And much as happened during the 1980s, there is more than one company making that promise. We're entering a modern version of "the Great Game", the rivalry to control the narrow passes to the promised future of computing.
Gary Edwards

More details on Microsoft's free Office: Crippled Business Processes | Beyond Binary - CNET News - 0 views

  •  
    Microsoft's free "Office Starter" suite will be able to fully open and display complex OOXML (2010 MSOffice) documents, but it will not be able to execute macros or edit embedded logic such as scripts, OLE objects, and ODBC connectors. That's a killer for workgroup-workflow oriented business documents: the category of "compound documents" that includes forms, reports, and workflow logic. As for what users can do with the applications, Capossela said that Word will be capable of opening and displaying even the most complex documents. However, Office Starter users won't be able to use macros, create automated tables of contents, or add comments, though they will see comments added by others. The approach with Excel is similar, with users able to view and edit documents, but not create their own pivot tables and pivot charts, for example.
Gary Edwards

Google Launches Dart Programming Language - Development - Web Development - Informationweek - 1 views

  •  
    Google releases Dart, a Web application programming language positioned as a JavaScript alternative. The release also includes Cloud SQL, a cloud-computing database to write Web apps against, using either JavaScript or Dart. excerpt: Google on Monday introduced a preview version of Dart, its new programming language for Web applications. The introduction was widely expected, not only because the announcement was listed on the GOTO developer conference schedule, but because a Google engineer described the language and its reason for being in a message sent to a developer mailing list late last year. "The goal of the Dash [Dart's former name] effort is ultimately to replace JavaScript as the lingua franca of Web development on the open Web platform," said Google engineer Mark S. Miller in his post last year. Lars Bak, a Google engineer who helped develop Chrome's V8 JavaScript engine and one of the creators of Dart, said in a phone interview that Google works regularly on large Web applications and that the company's engineers feel they need a new programming language to describe large, complex Web applications.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), is capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
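    As a sketch of this kind of class-based extension (the class names here are invented for illustration, not drawn from any particular microformat):

      <p class="epigraph">Tell all the truth but tell it slant.</p>
      <p class="summary">This chapter surveys XML workflows for small publishers.</p>
      <p>Ordinary body text remains a plain paragraph.</p>

    A browser renders all three as ordinary paragraphs, while a stylesheet or transformation script can select on the class values to recover the semantic distinctions.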
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
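    A representative layout after unzipping an ePub file (file and folder names vary by producer; this sketch reflects common ePub 2 practice):

      mimetype                   the literal string "application/epub+zip"
      META-INF/container.xml     points to the package file
      OEBPS/content.opf          package metadata and manifest
      OEBPS/toc.ncx              navigation / table of contents
      OEBPS/chapter01.xhtml      the book's content, as XHTML
      OEBPS/styles.css           presentation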
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
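    A before-and-after sketch of that conversion (the "bullet-list" class name is illustrative; InDesign's actual export names may differ):

      <!-- presentation-oriented export -->
      <p class="bullet-list">First item</p>
      <p class="bullet-list">Second item</p>

      <!-- proper XHTML structure -->
      <ul>
        <li>First item</li>
        <li>Second item</li>
      </ul>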
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
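    A minimal sketch of the idea behind the bare-bones export template (the selectors and measurements are invented for illustration):

      /* export view: suppress navigation, keep a single plain column */
      #navigation { display: none; }
      #content { width: 33em; margin: 0 auto; }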
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
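    The invocation itself is a one-liner; a sketch with hypothetical file names:

      # transform one chapter of clean XHTML into InCopy's ICML format
      xsltproc xhtml-to-icml.xsl chapter01.xhtml > chapter01.icml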
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
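    A fragment suggesting the flavor of such a transformation (a simplified assumption, not the actual script; real ICML carries many more attributes than shown here):

      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xh="http://www.w3.org/1999/xhtml">
        <!-- map each XHTML paragraph to an InCopy paragraph range -->
        <xsl:template match="xh:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
              <Br/>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>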
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePub and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML, the InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Gary Edwards

The Collapse of Complex Business Models « Clay Shirky - 1 views

  •  
    A must read for anyone interested in the future of the Open Web, and the changes traditional broadcast media must take to make the great transition.
Gary Edwards

A founder-friendly term sheet - Sam Altman - 1 views

  •  
    Must read for every entrepreneur!  When your product and service can command terms like these, for sure your company is worth investing in. "A founder-friendly term sheet. When I invest (outside of YC) I make offers with the following term sheet. I've tried to make the terms reflect what I wanted when I was a founder. A few people have asked me if I'd share it, so here it is. I think it's pretty founder-friendly. If you believe the upside risk theory, then it makes sense to offer compelling terms and forgo some downside protection to get the best companies to want to work with you. What's most important is what's not in it:
    * No option pool. Taking the option pool out of the pre-money valuation (ie, diluting only founders and not investors for future hires) is just a way to artificially manipulate valuation. New hires benefit everyone and should dilute everyone.
    * The company doesn't have to pay any of my legal fees. Requiring the company to pay investors' legal fees always struck me as particularly egregious - the company can probably make better use of the money than investors can, so I'll pay my own legal fees for the round (in a simple deal with no back and forth they always end up super low anyway).
    * No expiration. I got burned once by an exploding offer and haven't forgotten it; the founders can take as much time as they want to think about it. In practice, people usually decide pretty quickly.
    * No confidentiality. Founder/investor relationships are long and important. The founders should talk to whomever they want, and if they want to tell people what I offered them, I don't really care. Investors certainly tell each other what they offer companies. (Once we shake hands on a deal, of course, I expect the founders to honor it.)
    * No participating preferred, non-standard liquidation preference, etc. There is a 1x liquidation preference, but I'm willing to forgo even that and buy common shares (and sometimes
Gary Edwards

Fast Database Emerges from MIT Class, GPUs and Student's Invention - 0 views

  •  
    Awesome work!  A world-changing discovery, I think. excerpt: "Mostak built a new parallel database, called MapD, that allows him to crunch complex spatial and GIS data in milliseconds, using off-the-shelf gaming graphical processing units (GPU) like a rack of mini supercomputers. Mostak reports performance gains upwards of 70 times faster than CPU-based systems. Mostak said there is more development work to be done on MapD, but the system works and will be available in the near future. He said he is planning to release the new database system under an open source business model similar to MongoDB and its company 10gen. "I had the realization that this had the potential to be majorly disruptive," Mostak said. "There have been all these little research pieces about this algorithm or that algorithm on the GPU, but I thought, 'Somebody needs to make an end-to-end system.' I was shocked that it really hadn't been done." Mostak's undergraduate work was in economics and anthropology; he realized the need for his interactive database while studying at Harvard's Center for Middle Eastern Studies program. But his hacker-style approach to problem-solving is an example of how attacking a problem from new angles can yield better solutions. Mostak's multidisciplinary background isn't typical for a data scientist or database architect."
Paul Merrell

The People and Tech Behind the Panama Papers - Features - Source: An OpenNews project - 0 views

  • Then we put the data up, but the problem with Solr was it didn’t have a user interface, so we used Project Blacklight, which is open source software normally used by librarians. We used it for the journalists. It’s simple because it allows you to do faceted search—so, for example, you can facet by the folder structure of the leak, by years, by type of file. There were more complex things—it supports queries in regular expressions, so the more advanced users were able to search for documents with a certain pattern of numbers that, for example, passports use. You could also preview and download the documents. ICIJ open-sourced the code of our document processing chain, created by our web developer Matthew Caruana Galizia. We also developed a batch-searching feature. So say you were looking for politicians in your country—you just run it through the system, and you upload your list to Blacklight and you would get a CSV back saying yes, there are matches for these names—not only exact matches, but also matches based on proximity. So you would say “I want Mar Cabra proximity 2” and that would give you “Mar Cabra,” “Mar whatever Cabra,” “Cabra, Mar,”—so that was good, because very quickly journalists were able to see… I have this list of politicians and they are in the data!
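    In Solr's query syntax, which Blacklight exposes, those two kinds of searches look roughly like this (the field name and the passport-style pattern are hypothetical): the first line is a proximity ("phrase slop") query, the second a regular-expression query.

      text:"Mar Cabra"~2
      text:/[A-Z]{2}[0-9]{7}/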
  • Last Sunday, April 3, the first stories emerging from the leaked dataset known as the Panama Papers were published by a global partnership of news organizations working in coordination with the International Consortium of Investigative Journalists, or ICIJ. As we begin the second week of reporting on the leak, Iceland’s Prime Minister has been forced to resign, Germany has announced plans to end anonymous corporate ownership, governments around the world launched investigations into wealthy citizens’ participation in tax havens, the Russian government announced that the investigation was an anti-Putin propaganda operation, and the Chinese government banned mentions of the leak in Chinese media. As the ICIJ-led consortium prepares for its second major wave of reporting on the Panama Papers, we spoke with Mar Cabra, editor of ICIJ’s Data & Research unit and lead coordinator of the data analysis and infrastructure work behind the leak. In our conversation, Cabra reveals ICIJ’s years-long effort to build a series of secure communication and analysis platforms in support of genuinely global investigative reporting collaborations.
  • For communication, we have the Global I-Hub, which is a platform based on open source software called Oxwall. Oxwall is a social network, like Facebook, which has a wall when you log in with the latest in your network—it has forum topics, links, you can share files, and you can chat with people in real time.
  • ...3 more annotations...
  • We had the data in a relational database format in SQL, and thanks to ETL (Extract, Transform, and Load) software Talend, we were able to easily transform the data from SQL to Neo4j (the graph-database format we used). Once the data was transformed, it was just a matter of plugging it into Linkurious, and in a couple of minutes, you have it visualized—in a networked way, so anyone can log in from anywhere in the world. That was another reason we really liked Linkurious and Neo4j—they’re very quick when representing graph data, and the visualizations were easy to understand for everybody. The not-very-tech-savvy reporter could expand the docs like magic, and more technically expert reporters and programmers could use the Neo4j query language, Cypher, to do more complex queries, like show me everybody within two degrees of separation of this person, or show me all the connected dots…
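    A Cypher query of the "two degrees of separation" kind reads roughly like this (the label, property, and name are placeholders):

      MATCH (p:Person {name: "Jane Doe"})-[*1..2]-(connected)
      RETURN DISTINCT connected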
  • We believe in open source technology and try to use it as much as possible. We used Apache Solr for the indexing and Apache Tika for document processing, and it’s great because it processes dozens of different formats and it’s very powerful. Tika interacts with Tesseract, so we did the OCRing on Tesseract. To OCR the images, we created an army of 30–40 temporary servers in Amazon that allowed us to process the documents in parallel and do parallel OCR-ing. If it was very slow, we’d increase the number of servers—if it was going fine, we would decrease because of course those servers have a cost.
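    For a sense of the toolchain, a single document run through Tika's command-line app looks like this (the jar version is illustrative; ICIJ drove the same processing programmatically across its temporary servers). With Tesseract installed, Tika hands embedded images to it for OCR:

      java -jar tika-app-1.13.jar --text document.pdf > document.txt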
  • For the visualization of the Mossack Fonseca internal database, we worked with another tool called Linkurious. It’s not open source, it’s licensed software, but we have an agreement with them, and they allowed us to work with it. It allows you to represent data in graphs. We had a version of Linkurious on our servers, so no one else had the data. It was pretty intuitive—journalists had to click on dots that expanded, basically, and could search the names.
Gary Edwards

How would you fix the Linux desktop? | ITworld - 0 views

  • VB integrates with COM
  • SQL Server has a DCE/RPC interface.
  • MS-Office? All the components (Excel, Word etc.) have a COM and an OLE interface.
  •  
    Comment posted 1 week ago in reply to Zzgomes ..... by Ed Carp.  Finally someone who gets it! OBTW, I replaced Windows 7 with Linux Mint over a year ago and hope to never return. The thing is though, I am not a member of a Windows productivity workgroup, nor do I need to connect to any Windows databases or servers. Essentially I am not using any Windows business processes or systems. It's all Internet!!! 100% Web and Cloud Services systems. And that's why I can dump Windows without a blink! While working for Sursen Corp, it was a very different story. I had to have Windows XP and Windows 7, plus MSOffice 2003-2007, plus Internet Explorer with access to SharePoint, Skydrive/Live.com. It's all about the business processes and systems you're part of, or must join. And that's exactly why the Linux Desktop has failed. Give Cloud Computing the time needed to re-engineer and re-invent those many Windows business processes, and the Linux Desktop might succeed. The trick will be in advancing both the Linux Desktop and Application developer layers to target the same Cloud Computing services mobility targets. ..... Windows will take care of itself. The real fight is in the great transition of business systems and processes moving from the Windows desktop/workgroup productivity model to the Cloud. Linux Communities must fight to win the great transition. And yes, in the end this is all about a massive platform shift. The fourth wave of computing began with the Internet, and will finally close out the desktop client/server computing model as the Web evolves into the Cloud. excerpt: Most posters here have it completely wrong...the *real* reason Linux doesn't have a decent penetration into the desktop market is quite obvious if you look at the most successful desktop in history - Windows.  All this nonsense about binary driver compatibility, distro fragmentation, CORBA, and all the other red herrings that people are talking about are completely irrelevant
Gary Edwards

Analyzing Your Own Style | Writing and Humanistic Studies at MIT - 0 views

  •  
    Copyblogger originally shared: These 4 Exercises Are Guaranteed to Make You a Better Writer. Your writing is good. You know how to position words to make clear sentences. You can string together sentences into meaningful paragraphs. You can take those sentences and arrange them into a persuasive post. But you've plateaued. Your writing is getting predictable, stale, and forgettable. And you're not sure how to break out of that mold. If that's you, then you need to check out these exercises from MIT, designed to help you evaluate your copy. You'll learn things like:
    - Your sentence length pattern
    - Whether you correctly emphasize the important parts in your sentences and paragraphs
    - Whether you lean on simple, complex, or compound sentences
    Analyzing your writing style will highlight your weaknesses, and give you a plan to make your writing better. So, when you've got a few minutes, perform these exercises: http://writing.mit.edu/wcc/resources/writers/analyzingyourownstyle +Demian Farnworth
Gary Edwards

Microsoft Office fends off open source OpenOffice and LibreOffice but cloud tools gain ground | ZDNet - 0 views

  •  
    Interesting stats coming out of the recent Forrester study on office productivity. The study was conducted by Philipp Karcher, and it shows a coming collision of two interesting phenomena that cannot continue to coexist. Something has to give. The two phenomena are the continuing dominance of the client/server desktop productivity anchor, MSOffice, and the continuing push of all business productivity applications to highly mobile cloud-computing platforms. It seems we are stuck in this truly odd dichotomy where the desktop MSOffice compound document model continues to dominate business productivity processes, yet those same users are spending ever more time mobile and in the cloud. Something has got to give. And yes, I am very concerned about the fact that neither of the native XML document formats used by MSOffice (OXML) and by OpenOffice and LibreOffice (ODF) is designed for highly mobile cloud-computing. It's been said before: the Web is the future of computing, and HTML5 is the language of the Web. HTML is also the most prolific compound-document format ever. One of the key problems for cloud-computing is the lack of HTML5-ready office productivity suites that can also manage the complexities of integrating cloud-ready data streams. Sadly, when office productivity formats went down the rat hole of a 1995 client/server compound document model, the productivity suites went right with them. Very sad. But the gaping hole in cloud-computing is going to be filled, one way or the other.
Gary Edwards

MWC 2010: The Year of the Android | Gadget Lab | Wired.com - 1 views

  • Forget about the iPhone. Microsoft is in a death-match with Google and its free OS.
  •  
    BARCELONA - This year at the Mobile World Congress is the year of Android. Google's operating system debuted here two years ago. Last year we expected a slew of handsets, and saw just a trickle. This year, Android is everywhere, on handsets from HTC, Motorola, Sony Ericsson, and even Garmin-Asus. If this were the world of computers, Android would be in a similar position to Windows: pretty much every manufacturer puts it on its machines. This is great news for us, the consumers. Android is stable, powerful and now it even runs Flash (I got a sneak peek of Flash running on a Motorola handset here at the show. It crashed). It's even better for the manufacturers, as - unlike Windows Mobile - Android is free. It's also open, so the phone makers can tweak it and trick it out as much as they like. And they do like. Most of the Android phones here at Mobile World Congress are running custom versions of Android, which differentiates them and, in theory at least, makes them easier to use, hiding the complexities of a proper multitasking OS from the user.
Gary Edwards

5 Ways to Convert Your Video Files - 1 views

  •  
    H.264, Ogg Theora, MP4, Xvid, MKV, FLV: The world of online video can be pretty confusing. Not only are there tons of different formats and acronyms, but various devices and services actually have vastly different requirements. A video you downloaded via BitTorrent most likely won't play on your iPhone, and the software that comes with your Flip camera won't be of much use to prepare an upload for Wikipedia. Tools to convert videos have been out for a while, but many of them used to be fairly complex, asking for detailed settings about bit rates, audio codecs and interlacing. However, there have been a number of new applications released in the last couple of months that make converting and even transferring clips and movies between devices much easier. Here are five great free tools to check out:
    - Miro Video Converter - free, supports Google's WebM (VP8)
    - DivX Plus
    - RealPlayer
    - DoubleTwist - 200 compatible devices
    - Vuze - free BitTorrent client that also converts video files
Gary Edwards

Amazon SDKs Boost Support for Mobile Cloud « Data Center Knowledge - 0 views

  •  
    Amazon Releases Developer SDKs. One interesting and important exception is Amazon's recent release of its Software Development Kits (SDK) for Google's Android and Apple's iOS. With these kits, developers are provided with tools that will simplify development of cloud applications stored on the Amazon Web Services cloud platform, or AWS. Developing apps that can use many of the already popular AWS cloud services offers many new opportunities for the developer community, especially due to its low barrier to entry and affordability, enabling more developers with limited resources to build and provision new mobile cloud services. The new SDK includes libraries that simplify handling of HTTP connections, request retries and error handling, which used to be complex and arduous. Integration of applications with several AWS cloud services, like the Simple Storage Service (S3), SimpleDB database, Simple Notification Service (SNS) and Simple Queue Service (SQS), will be much more accessible than before. For example, it's going to be interesting to see whether developers will build a viable messaging solution atop the AWS SNS service that can actually compete with mobile SMS services - which have been a long-time major cash-cow for many mobile network operators.
Gary Edwards

RuleLab.Net Server: Web system for design, implementation and management of business processes - 0 views

  •  
    RuleLab.Net is a web-based system for designing and implementing the business rules that operate on an application's XML data. Extend your existing applications by adding Rule building and Business Rules Engine (BRE) capabilities. Consolidate your business logic in an easy to read format; build, test, share, and deploy your Rules using the web browser; and integrate them into your system via the BRE. Intuitive GUI, English-like syntax, and centralized repository empower business users with direct access to the Rules. In the RuleLab.Net system, Business Rules are composed and managed over the Internet or Intranet using the web-based Rules Designer. It allows users to associate an application XML data template with Rules, create a vocabulary of natural terms, graphically build complex logical expressions, test the Rules on data samples, and store the Rules in a database. Features include strong data types, reasoning, rule priorities and dependencies, calculation formulas, looping-data-structure support, and a built-in set of computational, aggregate and other data processing functions. Rules and other system objects are stored in XML files that can be downloaded, modified, and uploaded to the online repository. Rule changes made online can be instantly deployed for runtime use by the applications integrated with the BRE. The forward-chaining BRE parses XML application data against the ruleset, updates your data XML document, and returns it back to the application along with comprehensive state information. Written in .NET, the BRE component can be utilized as a managed assembly, a COM object, or through the Web Service.
Gary Edwards

Google Open Sources Heart and Soul of Google Wave Code - 0 views

  •  
    Google programmers open source two components of the Google Wave messaging and collaboration prototype, one of which is the Operational Transform that forms the complex center of the Wave model. Google Wave is an example of the Pushbutton Web, where real-time communications rule the roost. On July 24, Google said it released to open source the OT (Operational Transform) code, the framework that enables multiple people to edit a single document in real time across a wide-area network, as well as a basic client/server prototype that uses the wave protocol. The Google Wave Federation Protocol is an open extension to the XMPP core protocol, geared to allow near real-time communication of wave updates between two wave servers.
Gary Edwards

Needlebase - 2 views

  •  
    Move over FlipBoard and QWiki and meet Needle. The emerging market space for automating the process of collecting Web information to analyse, re-purpose and re-publish is getting crowded. Needle is designed to:
    - acquire data from multiple sources: A simple tagging process quickly imports structured data from complex websites, XML feeds, and spreadsheets into a unified database of your design.
    - merge, deduplicate and cleanse: Needle uses intelligent semantics to help you find and merge variant forms of the same record. Your merges, edits and deletions persist even after the original data is refreshed from its source.
    - build and publish custom data views: Use Needle's visual UI and powerful query language to configure exactly your desired view of the data, whether as a list, table, grid, or map. Then, with one click, publish the data for others to see, or export a feed of the clean data to your own local database.
    Flipboard is famous for its slick republishing / packaging process focused on iOS devices, and allows end users to choose sources. QWiki takes republishing to the extreme, blending voice-over (from Wikipedia text) with a slide show of multimedia information. The end user does not yet have control and selection of information sources with QWiki. The iOS Sports Illustrated app seems to be the starting point for "immersive webzines", with the NY Times close behind. Very very slick packaging of basic Web information. Flipboard followed the iOS re-publishing wave with an end-user facing immersive webzine packaging design. And now we have Needle. Still looking for a business document FlipBoard, where a "project" is packaged in a FlipBoard immersive container. The iPack would be similar to an iPUB book with the added featur
  •  
    Note: On April 12th, 2011 Needle was acquired by Google.
Paul Merrell

The Little-Known Company That Enables Worldwide Mass Surveillance - 0 views

  • It was a powerful piece of technology created for an important customer. The Medusa system, named after the mythical Greek monster with snakes instead of hair, had one main purpose: to vacuum up vast quantities of internet data at an astonishing speed. The technology was designed by Endace, a little-known New Zealand company. And the important customer was the British electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. Dozens of internal documents and emails from Endace, obtained by The Intercept and reported in cooperation with Television New Zealand, reveal the firm’s key role helping governments across the world harvest vast amounts of information on people’s private emails, online chats, social media conversations, and internet browsing histories.
Paul Merrell

World's first programmable quantum photonic chip | ExtremeTech - 0 views

  • A team of engineering geniuses from the University of Bristol, England has developed the world’s first re-programmable, multi-purpose quantum photonic computer chip that relies on quantum entanglement to perform calculations. With multiple waveguide channels (made from standard silicon dioxide) and eight electrodes, the silicon chip is capable of repeatedly entangling photons. Depending on how the electrodes are programmed, different quantum states can be produced. The end result is two qubits that can be used to perform quantum computing — and unlike D-Wave’s 128-qubit processor (well, depending on who you ask) this is real quantum computing.
  • We know that entanglement can be used for very effective encryption, but beyond that it’s mostly guesswork. There’s general agreement that qubits should allow for faster computation of very complex numbers — think biological processes and weather systems — and early work by Google suggests that pattern recognition might also be a strength of qubits.
Paul Merrell

Protocols of the Hackers of Zion? « LobeLog - 0 views

  • When Israeli Prime Minister Benjamin Netanyahu met with Google chairman Eric Schmidt on Tuesday afternoon, he boasted about Israel’s “robust hi-tech and cyber industries.” According to The Jerusalem Post, “Netanyahu also noted that ‘Israel was making great efforts to diversify the markets with which it is trading in the technological field.'” Just how diversified and developed Israeli hi-tech innovation has become was revealed the very next morning, when the Russian cyber-security firm Kaspersky Labs, which claims more than 400 million users internationally, announced that sophisticated spyware with the hallmarks of Israeli origin (although no country was explicitly identified) had targeted three European hotels that had been venues for negotiations over Iran’s nuclear program.
  • Wednesday’s Wall Street Journal, one of the first news sources to break the story, reported that Kaspersky itself had been hacked by malware whose code was remarkably similar to that of a virus attributed to Israel. Code-named “Duqu” because it used the letters DQ in the names of the files it created, the malware had first been detected in 2011. On Thursday, Symantec, another cyber-security firm, announced it too had discovered Duqu 2 on its global network, striking undisclosed telecommunication sites in Europe, North Africa, Hong Kong, and Southeast Asia. It said that Duqu 2 is much more difficult to detect than its predecessor because it lives exclusively in the memory of the computers it infects, rather than writing files to a drive or disk. The original Duqu shared coding with — and was written on the same platform as — Stuxnet, the computer worm that partially disabled enrichment centrifuges in Iranian nuclear power plants, according to a 2012 report in The New York Times. Intelligence and military experts said that Stuxnet was first tested at Dimona, a nuclear-reactor complex in the Negev desert that houses Israel’s own clandestine nuclear weapons program. While Stuxnet is widely believed to have been a joint Israeli-U.S. operation, Israel seems to have developed and implemented Duqu on its own.
  • Coding of the spyware that targeted two Swiss hotels and one in Vienna—both sites where talks were held between the P5+1 and Iran—so closely resembled that of Duqu that Kaspersky has dubbed it “Duqu 2.” A Kaspersky report contends that the new and improved Duqu would have been almost impossible to create without access to the original Duqu code. Duqu 2’s one hundred “modules” enabled the cyber attackers to commandeer infected computers, compress video feeds  (including those from hotel surveillance cameras), monitor and disrupt telephone service and Wi-Fi, and steal electronic files. The hackers’ penetration of computers used by the front desk would have allowed them to determine the room numbers of negotiators and delegation members. Duqu 2 also gave the hackers the ability to operate two-way microphones in the hotels’ elevators and control their alarm systems.