Future of the Web: Group items tagged "open source Technology"

Gonzalo San Gil, PhD.

Future of Open Source Survey 2016 | surveymonkey.com

  •  
    "* 1. Which of the following statements best represents your primary role with regard to open source? Which of the following statements best represents your primary role with regard to open source? Application Developer - I use open source to speed my development of applications Open Source Developer - I work full time contributing to open source projects Architect - I play a key role in the selection of technology, including open source, for my organization Security - I ensure that the applications we build and deploy are secure Development Management - I manage one or more teams of developers that build applications for my company IT Infrastructure and Operations Manager - Responsible for IT infrastructure and operations, identifying and justifying open source technologies and process changes in my company's infrastructure Legal - I am responsible for ensuring open source license compliance within my organization Executive Leader - I lead a company that utilizes open source in the development environment"
Gary Edwards

Readium at the London Book Fair 2014: Open Source for an Open Publishing Ecosystem: Rea...

  •  
    excerpt/intro: Last month marked the one-year anniversary of the formation of the Readium Foundation (Readium.org), an independent nonprofit launched in March 2013 with the objective of developing commercial-grade open source publishing technology software. The overall goal of Readium.org is to accelerate adoption of ePub 3, HTML5, and the Open Web Platform by the digital publishing industry to help realize the full potential of open-standards-based interoperability. More specifically, the aim is to raise the bar for ePub 3 support across the industry so that ePub maintains its position as the standard distribution format for e-books and expands its reach to include other types of digital publications. In its first year, the Readium consortium added 15 organizations to its membership, including Adobe, Google, IBM, Ingram, KERIS (S. Korea Education Ministry), and the New York Public Library. The membership now boasts publishers, retailers, distributors and technology companies from around the world, including organizations based in France, Germany, Norway, the U.S., Canada, China, Korea, and Japan. In addition, in February 2014 the first Readium.org board was elected by the membership, and the first three projects being developed by members and other contributors are all nearing "1.0" status. The first project, Readium SDK, is a rendering "engine" enabling native apps to support ePub 3. Readium SDK is available on four platforms (Android, iOS, OS X, and Windows), and the first product incorporating Readium SDK (by ACCESS Japan) was announced last October. Readium SDK is designed to be DRM-agnostic, and vendors Adobe and Sony have publicized plans to integrate their respective DRM solutions with Readium SDK. A second effort, Readium JS, is a pure JavaScript ePub 3 implementation, with configurations now available for cloud-based deployment of ePub files, as well as Readium for Chrome, the successor to the original Readium Chrome extension developed by IDPF as the
Gonzalo San Gil, PhD.

Open Source Licenses | Open Source Initiative

  •  
    "Open source licenses are licenses that comply with the Open Source Definition - in brief, they allow software to be freely used, modified, and shared. To be approved by the Open Source Initiative (also known as the OSI), a license must go through the Open Source Initiative's license review process."
Gary Edwards

The NeuroCommons Project: Open RDF Ontologies for Scientific Research

  •  
    The NeuroCommons project seeks to make all scientific research materials - research articles, annotations, data, physical materials - as available and as usable as they can be. This is done by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". Semantic Web practices based on RDF will enable knowledge sources to combine meaningfully, supporting semantically precise queries that span multiple information sources (see the query sketch below).

    Working with the Creative Commons group that sponsors "Neurocommons", Microsoft has developed and released an open source "ontology" add-on for Microsoft Word. The add-on makes use of the MSOffice XML panel, Open XML formats, and proprietary "Smart Tags". Microsoft is also making the source code for both the Ontology Add-in for Office Word 2007 and the Creative Commons Add-in for Office Word 2007 tool available under the Open Source Initiative (OSI)-approved Microsoft Public License (Ms-PL) at http://ucsdbiolit.codeplex.com and http://ccaddin2007.codeplex.com, respectively.

    No doubt it will take some digging to figure out what is going on here. Microsoft WPF technologies include Smart Tags and LINQ. The Creative Commons "Neurocommons" ontology work is based on W3C RDF and SPARQL. How these opposing technologies interoperate with legacy MSOffice 2003 and 2007 desktops is an interesting question, one that may hold the answer to the larger problem of re-purposing MSOffice for the Open Web.

    We know Microsoft is re-purposing MSOffice for the MS Web. Perhaps this work with Creative Commons will help to open up the Microsoft desktop productivity environment to the Open Web? One can always hope :)

    Dr. Dobb's has the Microsoft-Creative Commons announcement: "Microsoft Releases Open Tools for Scientific Research ... Joins Creative Commons in releasing the Ontology Add-in".
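
    Below is a minimal sketch of the kind of semantically precise RDF query the project envisions, written in Python with rdflib. The data file and the Dublin Core property used here are hypothetical stand-ins for illustration, not the actual NeuroCommons ontology.

        # Minimal sketch: run a SPARQL query over an RDF graph with rdflib.
        # "neuro.rdf" and the dc:title property are hypothetical stand-ins.
        from rdflib import Graph

        g = Graph()
        g.parse("neuro.rdf")  # load a local RDF/XML file (hypothetical)

        q = """
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?article ?title
        WHERE {
            ?article dc:title ?title .
            FILTER regex(?title, "hippocampus", "i")
        }
        """
        for row in g.query(q):
            print(row.article, row.title)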
Gary Edwards

ptsefton » OpenOffice.org is bad for the planet

  •  
    ptsefton continues his rant that OpenOffice does not support the Open Web. He's been on this rant for so long, I'm wondering if he really thinks there's a chance the lords of ODF and the OpenOffice source code are listening. In this post he describes how useless it is to submit his findings and frustrations with OOo in a bug report. Pretty funny stuff, even if you do end up joining the Michael Meeks trek along this trail of tears. Maybe there's another way?

    What would happen if pt moved from targeting the not-so-open OpenOffice to targeting governments and enterprises trying to set future information-system requirements?

    NY State is next up on this endless list. Most likely they will follow the lessons of exhaustive pilot studies conducted by Massachusetts, California, Belgium, Denmark and England, and end up mandating the use of both open standard "XML" formats, ODF and OOXML.

    The pilots concluded that there was a need for both XML formats, depending on the needs of different departments and workgroups. The pilot studies scream out a general rule of thumb: if your department has day-to-day business processes bound to MSOffice workgroups, then it makes sense to use MSOffice OOXML going forward. If there is no legacy MSOffice-bound workgroup or workflow, it makes sense to move to OpenOffice ODF.

    One thing the pilots make clear is that it is prohibitively costly and disruptive to try to replace MSOffice bound workgroups.

    What NY State might consider is that the Web is going to be an important part of its information systems' future. What a surprise. Every pilot recognized and, indeed, emphasized this fact. Yet they fell short of the obvious conclusion: mandating that desktop applications provide native support for Open Web formats, protocols and interfaces!

    What's wrong with insisting that desktop applications and office suites support the rapidly advancing HTML+ technologies as well as the applicat
Gonzalo San Gil, PhD.

The death of patents and what comes after: Alicia Gibb at TEDxStockholm - YouTube

  •  
    "Published on Dec 18, 2012 Alicia Gibb got her start as a technologist from her combination in backgrounds including: informatics and library science, a belief system of freedom of information, inspiration from art and design, and a passion for hardware hacking. Alicia has worked between the crossroads of art and electronics for the past nine years, and has worked for the open source hardware community for the past three. She currently founded and is running the Open Source Hardware Association, an organization to educate and promote building and using open source hardware of all types. In her spare time, Alicia is starting an open source hardware company specific to education. Previous to becoming an advocate and an entrepreneur, Alicia was a researcher and prototyper at Bug Labs where she ran the academic research program and the Test Kitchen, an open R&D Lab. Her projects centered around developing lightweight additions to the BUG platform, as well as a sensor-based data collection modules. She is a member of NYCResistor, co-chair of the Open Hardware Summit, and a member of the advisory board for Linux Journal. She holds a degree in art education, a M.S. in Art History and a M.L.I.S. in Information Science from Pratt Institute. She is self-taught in electronics. Her electronics work has appeared in Wired magazine, IEEE Spectrum, Hackaday and the New York Times. When Alicia is not researching at the crossroads of open technology and innovation she is prototyping artwork that twitches, blinks, and might even be tasty to eat. In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual
Gonzalo San Gil, PhD.

Achieving Impossible Things with Free Culture and Commons-Based Enterprise : Terry Hanc...

  •  
    "Author: Terry Hancock Keywords: free software; open source; free culture; commons-based peer production; commons-based enterprise; Free Software Magazine; Blender Foundation; Blender Open Movies; Wikipedia; Project Gutenberg; Open Hardware; One Laptop Per Child; Sugar Labs; licensing; copyleft; hosting; marketing; design; online community; Debian GNU/Linux; GNU General Public License; Creative Commons Attribution-ShareAlike License; TAPR Open Hardware License; collective patronage; women in free software; Creative Commons; OScar; C,mm,n; Free Software Foundation; Open Source Initiative; Freedom Defined; Free Software Definition; Debian Free Software Guidelines; Sourceforge; Google Code; digital rights management; digital restrictions management; technological protection measures; DRM; TPM; linux; gnu; manifesto Publisher: Free Software Magazine Press Year: 2009 Language: English Collection: opensource"
Gonzalo San Gil, PhD.

The impact of open cloud technologies on IT | The Inquirer

  •  
    " Column Open source developments are at the heart of new cloud technologies, says Jim Zemlin By Jim Zemlin Fri May 29 2015, 12:00 Jim Zemlin NOWHERE are we seeing more open source and collaborative development than in cloud computing."
Gary Edwards

Why Kindle Should Be An Open Book - Tim O'Reilly at Forbes.com

  •  
    Like someone finding out that the rapture has happened and they've been left behind, Tim O'Reilly shakes his fists and shouts to the heavens that Amazon must support open standards. He argues that in spite of incredible market success, the Amazon Kindle will fail because the document format is not open. He even argues that Apple, with the iPod and iPhone, has figured out how to blend Open Web formats and application development with proprietary hardware initiatives...

    "The Amazon Kindle has sparked huge media interest in e-books and has seemingly jump-started the market. Its instant wireless access to hundreds of thousands of e-books and seamless one-click purchasing process would seem to give it an enormous edge over other dedicated e-book platforms. Yet I have a bold prediction: Unless Amazon embraces open e-book standards like epub, which allow readers to read books on a variety of devices, the Kindle will be gone within two or three years."

    Tim O'Reilly points to ePub as an open format, apparently not realizing the format falls far short of Open Web advances designed to enable a complete publication-typesetter model. The WebKit and Mozilla open source communities are pushing the envelope of Open Web development with an extremely advanced document model based on HTML5, CSS3, SVG/Canvas, and JavaScript. ePub, on the other hand, is stuck in 1998, supporting the aging HTML4 and CSS2.1 specs. Very sad.

Gonzalo San Gil, PhD.

3 Financial Companies Innovating With Open Source | Linux.com

  •  
    "The financial industry is on the verge of an open source breakthrough, say three companies on the cutting edge of the trend. Traditionally very secretive about their technology, banks, hedge funds and other financial services companies have begun in the past few years to talk about how they use open source software in their infrastructure and product development."
Gonzalo San Gil, PhD.

Top 10 Open Source Developments of 2015 | Business | LinuxInsider

  •  
    "Open source is driving an ever-expanding market. The notion of community-driven development is a growing disruption to proprietary software controlled by commercial vendors, and the free open source software concept has become a major disruption in industry and technology."
Paul Merrell

New open-source router firmware opens your Wi-Fi network to strangers | Ars Technica

  • We’ve often heard security folks explain their belief that one of the best ways to protect Web privacy and security on one's home turf is to lock down one's private Wi-Fi network with a strong password. But a coalition of advocacy organizations is calling such conventional wisdom into question. Members of the “Open Wireless Movement,” including the Electronic Frontier Foundation (EFF), Free Press, Mozilla, and Fight for the Future are advocating that we open up our private Wi-Fi networks (or at least a small slice of our available bandwidth) to strangers. They claim that such a random act of kindness can actually make us safer online while simultaneously facilitating a better allocation of finite broadband resources. The OpenWireless.org website explains the group’s initiative. “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access," its mission statement reads. "And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.”
  • One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is open-sourced router firmware called Open Wireless Router. This firmware would enable individuals to share a portion of their Wi-Fi networks with anyone nearby, password-free, as Adi Kamdar, an EFF activist, told Ars on Friday. Home network sharing tools are not new, and the EFF has been touting the benefits of open-sourcing Web connections for years, but Kamdar believes this new tool marks the second phase in the open wireless initiative. Unlike previous tools, he claims, EFF’s software will be free for all, will not require any sort of registration, and will actually make surfing the Web safer and more efficient.
  • Kamdar said that the new firmware utilizes smart technologies that prioritize the network owner's traffic over others', so good Samaritans won't have to wait for Netflix to load because of strangers using their home networks. What's more, he said, "every connection is walled off from all other connections," so as to decrease the risk of unwanted snooping. Additionally, EFF hopes that opening one’s Wi-Fi network will, in the long run, make it more difficult to tie an IP address to an individual. “From a legal perspective, we have been trying to tackle this idea that law enforcement and certain bad plaintiffs have been pushing, that your IP address is tied to your identity. Your identity is not your IP address. You shouldn't be targeted by a copyright troll just because they know your IP address," said Kamdar.
  • While the EFF firmware will initially be compatible with only one specific router, the organization would like to eventually make it compatible with other routers and even, perhaps, develop its own router. “We noticed that router software, in general, is pretty insecure and inefficient," Kamdar said. “There are a few major players in the router space. Even though various flaws have been exposed, there have not been many fixes.”
Gary Edwards

Apple's extensions: Good or bad for the open web? | Fyrdility

  •  
    Fyrdility asks the question; when it comes to the future of the Open Web, is Apple worse than Microsoft? He laments the fact that Apple pushes forward with innovations that have yet to be discussed by the great Web community. Yes, they faithfully submit these extensions and innovations back to the W3C as open standards proposals, but there is no waiting around for discussion or judgement. Apple is on a mission.

    IMHO, what Apple and the WebKit community do is not that much different from the way GPL based open source communities work, except that Apple works without the GPL guarantee. The WebKit innovations and extensions are similar to GPL forks in the shared source code; done in the open, contributed back to the community, with the community responsible for interoperability going forward.

    There are good forks and there are not-so-good forks. But it's not always a technology-engineering discussion that drives interop; sometimes it's market share and user uptake that carry the day. And indeed, this is very much the case with Apple and the WebKit community. The edge of the Web belongs to WebKit and the iPhone. The "forks" to the Open Web source code are going to weigh heavy on concerns for interop with the greater Web.

    One thing Fyrdility fails to recognize is the importance of the Acid3 test to future interop. Discussion is important, but nothing beats the leveling effect of broadly measuring innovation for interop - and doing so without crippling innovation.

    "......Apple is heavily involved in the W3C and WHATWG, where they help define specifications. They are also well-known for implementing many unofficial CSS extensions, which are subsequently submitted for standardization. However, Apple is also known for preventing its representatives from participating in panels such as the annual Browser Wars panels at SXSW, which expresses a much less cooperative position...."
Gary Edwards

XML Production Workflows? Start with the Web and XHTML

  • Challenges: Some Ugly Truths. The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges. In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform. The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
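
    As a minimal sketch of that extensibility in practice, the Python snippet below pulls semantically tagged chunks out of otherwise generic XHTML; the "epigraph" class name is a hypothetical publisher-defined convention, not part of XHTML itself.

        # Minimal sketch: recover semantic chunks from XHTML via the "class"
        # attribute. The "epigraph" class is a hypothetical publisher convention.
        from lxml import html

        doc = html.fromstring("""
        <html><body>
          <p class="epigraph">Quoted opening line.</p>
          <p>Ordinary body paragraph.</p>
        </body></html>""")

        # Match class="epigraph" even when other classes are also present.
        for node in doc.xpath('//p[contains(concat(" ", normalize-space(@class), '
                              '" "), " epigraph ")]'):
            print(node.text_content())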
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
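
    As a rough illustration of that anatomy, the sketch below assembles the skeleton of an ePub-style zip wrapper in Python. A conforming ePub also requires an OPF package manifest, which is omitted here, and the file contents are minimal placeholders.

        # Rough sketch of the ePub anatomy: XHTML content plus declaration
        # files inside a zip wrapper. Not a conforming ebook (no OPF manifest).
        import zipfile

        with zipfile.ZipFile("book.epub", "w",
                             compression=zipfile.ZIP_DEFLATED) as z:
            # The mimetype entry must come first and be stored uncompressed.
            z.writestr("mimetype", "application/epub+zip",
                       compress_type=zipfile.ZIP_STORED)
            z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
        <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
          <rootfiles>
            <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
          </rootfiles>
        </container>""")
            # The book's content: a Web page in a wrapper.
            z.writestr("chapter1.xhtml",
                       "<html><body><p>A Web page in a wrapper.</p></body></html>")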
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps. To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
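
    The snippet below is a rough scripted equivalent of that search-and-replace step; the export class name "bullet" is hypothetical (InDesign's actual export classes vary), and the authors did this conversion interactively rather than in code.

        # Rough sketch: turn paragraphs that merely look like list items back
        # into real XHTML lists. The class name "bullet" is hypothetical.
        import re

        xhtml = '<p class="bullet">first</p>\n<p class="bullet">second</p>'

        # Rewrite each pseudo-list paragraph as a list item...
        items = re.sub(r'<p class="bullet">(.*?)</p>', r'<li>\1</li>', xhtml)
        # ...then wrap the contiguous run of items in a single <ul>.
        print(re.sub(r'(<li>.*</li>)', r'<ul>\n\1\n</ul>', items, flags=re.DOTALL))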
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print. Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
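
    Below is a minimal sketch of that transformation step, using lxml's built-in XSLT engine rather than the xsltproc command line. The output element names only gesture at ICML; the authors' real script (about 500 lines) produces a complete InCopy story structure.

        # Minimal sketch of the XHTML-to-print transformation mechanism.
        # The ICML-flavored output elements are approximations, not the full format.
        from lxml import etree

        stylesheet = etree.XSLT(etree.XML("""
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/">
            <Story><xsl:apply-templates select="//p"/></Story>
          </xsl:template>
          <xsl:template match="p">
            <ParagraphStyleRange><Content><xsl:value-of select="."/></Content></ParagraphStyleRange>
          </xsl:template>
        </xsl:stylesheet>"""))

        doc = etree.XML("<html><body><p>Hello, print.</p></body></html>")
        print(str(stylesheet(doc)))

    From the shell, the same transformation with the article's own tool would run along the lines of: xsltproc style.xsl content.xhtml > out.icml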
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files. Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point, though, is that XHTML is a browser-oriented XML document type, compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to
Paul Merrell

Sun on open source: What doesn't kill you... | The Open Road - CNET News

  • Open source is the very thing that has crippled Sun, yet Sun is looking to open source to hobble its competitors and revive its future. We often talk in the technology industry about the need to cannibalize your own business before someone else does it to you. Sun may be a little late off the starting blocks, but it's fascinating to watch its race against time.
  • Having open-sourced its own Solaris operating system, Sun has now tried to corner the market in open source databases with its $1bn purchase of MySQL, the database management system. It now also has its eyes set on the storage market, with a plan to inflict the same pain on incumbents there that it has itself felt from the rise of Linux. It's a hugely gutsy move. It remains to be seen whether it will work, but with Sun's OpenStorage business growing dramatically faster than the rest of the storage industry, it just might work.
Paul Merrell

The People and Tech Behind the Panama Papers - Features - Source: An OpenNews project

  • Then we put the data up, but the problem with Solr was it didn’t have a user interface, so we used Project Blacklight, which is open source software normally used by librarians. We used it for the journalists. It’s simple because it allows you to do faceted search—so, for example, you can facet by the folder structure of the leak, by years, by type of file. There were more complex things—it supports queries in regular expressions, so the more advanced users were able to search for documents with a certain pattern of numbers that, for example, passports use. You could also preview and download the documents. ICIJ open-sourced the code of our document processing chain, created by our web developer Matthew Caruana Galizia. We also developed a batch-searching feature. So say you were looking for politicians in your country—you just run it through the system, and you upload your list to Blacklight and you would get a CSV back saying yes, there are matches for these names—not only exact matches, but also matches based on proximity. So you would say “I want Mar Cabra proximity 2” and that would give you “Mar Cabra,” “Mar whatever Cabra,” “Cabra, Mar,”—so that was good, because very quickly journalists were able to see… I have this list of politicians and they are in the data!
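
    A hedged sketch of what such a query looks like when sent straight to Solr rather than through the Blacklight interface; the core name, the facet field, and the indexed schema are assumptions, not ICIJ's actual setup.

        # Hedged sketch: a proximity search plus a facet, sent directly to Solr.
        # Core name "leak" and field "year" are hypothetical.
        import requests

        params = {
            "q": '"Mar Cabra"~2',   # phrase with up to 2 intervening words
            "facet": "true",
            "facet.field": "year",  # mirror Blacklight's faceted navigation
            "wt": "json",
        }
        resp = requests.get("http://localhost:8983/solr/leak/select", params=params)
        print(resp.json()["response"]["numFound"], "matching documents")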
  • Last Sunday, April 3, the first stories emerging from the leaked dataset known as the Panama Papers were published by a global partnership of news organizations working in coordination with the International Consortium of Investigative Journalists, or ICIJ. As we begin the second week of reporting on the leak, Iceland’s Prime Minister has been forced to resign, Germany has announced plans to end anonymous corporate ownership, governments around the world launched investigations into wealthy citizens’ participation in tax havens, the Russian government announced that the investigation was an anti-Putin propaganda operation, and the Chinese government banned mentions of the leak in Chinese media. As the ICIJ-led consortium prepares for its second major wave of reporting on the Panama Papers, we spoke with Mar Cabra, editor of ICIJ’s Data & Research unit and lead coordinator of the data analysis and infrastructure work behind the leak. In our conversation, Cabra reveals ICIJ’s years-long effort to build a series of secure communication and analysis platforms in support of genuinely global investigative reporting collaborations.
  • For communication, we have the Global I-Hub, which is a platform based on open source software called Oxwall. Oxwall is a social network, like Facebook, which has a wall when you log in with the latest in your network—it has forum topics, links, you can share files, and you can chat with people in real time.
  • We had the data in a relational database format in SQL, and thanks to ETL (Extract, Transform, and Load) software Talend, we were able to easily transform the data from SQL to Neo4j (the graph-database format we used). Once the data was transformed, it was just a matter of plugging it into Linkurious, and in a couple of minutes, you have it visualized—in a networked way, so anyone can log in from anywhere in the world. That was another reason we really liked Linkurious and Neo4j—they’re very quick when representing graph data, and the visualizations were easy to understand for everybody. The not-very-tech-savvy reporter could expand the docs like magic, and more technically expert reporters and programmers could use the Neo4j query language, Cypher, to do more complex queries, like show me everybody within two degrees of separation of this person, or show me all the connected dots…
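
    Below is a hedged sketch of the "two degrees of separation" idea in Cypher, using the official Neo4j Python driver. The node label, property name, and credentials are hypothetical; the actual Mossack Fonseca schema is not described here.

        # Hedged sketch: a variable-length Cypher query via the Neo4j driver.
        # Label "Person", property "name", and credentials are hypothetical.
        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))

        query = """
        MATCH (p:Person {name: $name})-[*1..2]-(connected)
        RETURN DISTINCT connected
        LIMIT 25
        """

        with driver.session() as session:
            for record in session.run(query, name="Mar Cabra"):
                print(record["connected"])
        driver.close()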
  • We believe in open source technology and try to use it as much as possible. We used Apache Solr for the indexing and Apache Tika for document processing, and it’s great because it processes dozens of different formats and it’s very powerful. Tika interacts with Tesseract, so we did the OCRing on Tesseract. To OCR the images, we created an army of 30–40 temporary servers in Amazon that allowed us to process the documents in parallel and do parallel OCR-ing. If it was very slow, we’d increase the number of servers—if it was going fine, we would decrease because of course those servers have a cost.
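
    Below is a rough sketch of that parallel OCR stage, using a local process pool in place of ICIJ's fleet of temporary Amazon servers. It assumes the Tesseract binary and the pytesseract binding are installed; the page file names are placeholders.

        # Rough sketch: OCR scanned pages in parallel with Tesseract workers.
        # Standing in for ICIJ's 30-40 Amazon servers with local processes.
        from multiprocessing import Pool

        import pytesseract
        from PIL import Image

        def ocr(path):
            # Each worker OCRs one scanned page independently.
            return path, pytesseract.image_to_string(Image.open(path))

        if __name__ == "__main__":
            pages = ["scan_001.png", "scan_002.png", "scan_003.png"]  # placeholders
            with Pool(processes=4) as pool:
                for path, text in pool.imap_unordered(ocr, pages):
                    print(path, len(text), "characters")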
  • For the visualization of the Mossack Fonseca internal database, we worked with another tool called Linkurious. It’s not open source, it’s licensed software, but we have an agreement with them, and they allowed us to work with it. It allows you to represent data in graphs. We had a version of Linkurious on our servers, so no one else had the data. It was pretty intuitive—journalists had to click on dots that expanded, basically, and could search the names.
Gonzalo San Gil, PhD.

Fedora 22 Advances Linux for Cloud, Workstations, Servers

  •  
    "The open-source Fedora 22 Linux distribution, which became generally available May 26, provides desktop, cloud and server users with an updated array of technologies and capabilities. Fedora is Red Hat's community Linux distribution and often serves as an incubator and a proving ground for the latest and greatest open-source technologies. "
Gary Edwards

What Oracle Sees in Sun Microsystems | NewsFactor Network

  • Citigroup's Thill estimates Oracle could cut between 40 percent and 70 percent of Sun's roughly 33,000 employees. Excluding restructuring costs, Oracle expects Sun to add $1.5 billion in profit during the first year after the acquisition closes this summer, and another $2 billion the following year. Oracle executives declined to say how many jobs would be eliminated.
  •  
    Good article from Aaron Ricadela. The focus is on Java, Sun's hardware-server business, and Oracle's business objectives. No mention of OpenOffice or ODF, though. There is, however, an interesting quote from IBM regarding the battle between Java and Microsoft .NET. Also, no mention of an OpenOffice-Java Foundation that would truly open source these technologies.

    When we were involved with the Massachusetts Pilot Study and ODF Plug-in proposals, IBM and Oracle led the effort to open source the da Vinci plug-in. They put together a group of vendors known as "the benefactors", with the objective of completing work on da Vinci while forming a patent pool and open source foundation for all OpenOffice and da Vinci source. This idea was based on the Eclipse model.

    One of the more interesting ideas coming out of the IBM-Oracle led "benefactors", was the idea of breaking OpenOffice into components that could then be re-purposed by the Eclipse community of developers. The da Vinci plug-in was to be the integration bridge between Eclipse and the Microsoft Office productivity environment. Very cool. And no doubt IBM and Oracle were in synch on this in 2006. The problem was that they couldn't convince Sun to go along with the plan.

    Sun of course owned both Java and OpenOffice, and thought it could build a better ODF plug-in for OpenOffice (and own that too). A year later, Sun actually did produce an ODF plug-in for MSOffice. It was sent to Massachusetts on July 3rd, 2007, and tested against the same set of 150 critical documents da Vinci had to successfully convert without breaking. The next day, July 4th, Massachusetts announced its decision to approve the use of both ODF and OOXML! The much-hoped-for exclusive ODF requirement failed in Massachusetts exactly because Sun insisted on its way or the highway.

    Let's hope Oracle can right the ship and get OpenOffice-ODF-Java back on track.

    "......To gain
Gonzalo San Gil, PhD.

Why Linux Is Future | Rare Input

  •  
    "7th June 2013 / Blog, Linux, Open Source, Operating System, Tutorial / 3 comments Linux is the name of revolution in the field of open source technology. Ever since its advent it has grown leaps and bounds and continues to conquer the open source world. What make Linux so desirable are the features which comes with it that too for no cost. Linux was developed by "Linus Torvalds". His aim was to develop an operating system which was free from the hassles of proprietorship and should be free to use, so he developed Linux and the rest is history!"
Gonzalo San Gil, PhD.

Why open source has been a tremendous accelerator for Monsanto | The Enterprisers Proje...

    • Gonzalo San Gil, PhD.
       
      # ! #Identification of #OpenSource with #Monsanto # ! is a #wrong #move.. # ! more yet, coming from a '#RedHat sponsored' publication # ! ... despite it could represent a good 'investor support' issue... # ! :(
  •  
    "Our IT organization is continuing to evolve as we engage more in open source. Whether it be what we use for distributed processing, for databases, or to accelerate our compute power or data visualization, we continue to expand the number of open technologies we explore."
  •  
    "Our IT organization is continuing to evolve as we engage more in open source. Whether it be what we use for distributed processing, for databases, or to accelerate our compute power or data visualization, we continue to expand the number of open technologies we explore."