
Home/ Open Web/ Group items tagged working


Gary Edwards

Drew Houston's Commencement address - MIT News Office - 0 views

  • They say that you're the average of the 5 people you spend the most time with
  • If you have a dream, you can spend a lifetime studying and planning and getting ready for it. What you should be doing is getting started.
  • Your biggest risk isn't failing, it's getting too comfortable.
  • Bill Gates's first company made software for traffic lights.
  • Steve Jobs's first company made plastic whistles that let you make free phone calls
  • Both failed,
  • From now on, failure doesn't matter: you only have to be right once.
  • There are 30,000 days in your life.
  • So that’s how 30,000 ended up on the cheat sheet. That night, I realized there are no warmups, no practice rounds, no reset buttons. Every day we're writing a few more words of a story.
  • So from then on, I stopped trying to make my life perfect, and instead tried to make it interesting.
  • I wanted my story to be an adventure — and that's made all the difference.
  • Instead of trying to make your life perfect, give yourself the freedom to make it an adventure, and go ever upward.
  • Excelsior
  •  
    Excellent and well worth the time to read! Founder of DropBox tells his story and it's full of insight, wisdom and naked truth. excerpt: "I was going to say work on what you love, but that's not really it. It's so easy to convince yourself that you love what you're doing - who wants to admit that they don't? When I think about it, the happiest and most successful people I know don't just love what they do, they're obsessed with solving an important problem, something that matters to them. They remind me of a dog chasing a tennis ball: their eyes go a little crazy, the leash snaps and they go bounding off, plowing through whatever gets in the way. I have some other friends who also work hard and get paid well in their jobs, but they complain as if they were shackled to a desk. The problem is a lot of people don't find their tennis ball right away. Don't get me wrong - I love a good standardized test as much as the next guy, but being king of SAT prep wasn't going to be mine. What scares me is that both the poker bot and Dropbox started out as distractions. That little voice in my head was telling me where to go, and the whole time I was telling it to shut up so I could get back to work. Sometimes that little voice knows best. It took me a while to get it, but the hardest-working people don't work hard because they're disciplined. They work hard because working on an exciting problem is fun. So after today, it's not about pushing yourself; it's about finding your tennis ball, the thing that pulls you. It might take a while, but until you find it, keep listening for that little voice. "
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
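  • The annotation above can be made concrete with a small sketch. Because valid XHTML is real XML, any XML toolchain can query class-based semantics directly; the class name "epigraph" below is a hypothetical publisher-defined vocabulary, not part of any standard:

```python
# Sketch: extending XHTML through the "class" attribute, microformat-style.
# The class name "epigraph" is an invented example of a publisher-defined
# semantic label layered onto plain XHTML.
import xml.etree.ElementTree as ET

XHTML_NS = "http://www.w3.org/1999/xhtml"

doc = ET.fromstring(
    '<html xmlns="http://www.w3.org/1999/xhtml"><body>'
    '<p class="epigraph">Not all paragraphs are equal.</p>'
    '<p>An ordinary paragraph.</p>'
    '</body></html>'
)

# Since valid XHTML is real XML, a stock XML parser can pick out the
# publisher's semantic layer with a simple attribute test.
epigraphs = [
    p.text for p in doc.iter(f"{{{XHTML_NS}}}p")
    if p.get("class") == "epigraph"
]
print(epigraphs)  # -> ['Not all paragraphs are equal.']
```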
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
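  • That anatomy ("a Web page in a wrapper") can be sketched with nothing but the standard library; the file names below follow the OCF convention, but the metadata is a bare-bones illustration rather than a complete, validating ePub:

```python
# Sketch of ePub anatomy: a zip archive holding a mimetype declaration,
# a container pointer, and the book's content as XHTML. Minimal
# illustration only; a real ePub also needs an OPF package file.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # The mimetype must be the first entry, stored uncompressed.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml",
               '<?xml version="1.0"?>'
               '<container version="1.0" '
               'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
               '<rootfiles><rootfile full-path="OEBPS/content.opf" '
               'media-type="application/oebps-package+xml"/></rootfiles>'
               '</container>')
    z.writestr("OEBPS/chapter1.xhtml",
               '<html xmlns="http://www.w3.org/1999/xhtml">'
               '<body><p>A Web page in a wrapper.</p></body></html>')

with zipfile.ZipFile(buf) as z:
    names = z.namelist()

print(names)
# -> ['mimetype', 'META-INF/container.xml', 'OEBPS/chapter1.xhtml']
```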
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
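  • The cleanup step described above can be sketched as a simple search-and-replace pass. The class name "list-item" here is an assumption about the export's naming, not a documented InDesign value:

```python
# Sketch of the cleanup pass: the export tags list items as paragraphs
# with a class attribute; a search-and-replace converts a run of them
# into a proper XHTML list. "list-item" is an assumed class name.
import re

exported = (
    '<p class="list-item">First point</p>'
    '<p class="list-item">Second point</p>'
)

# Rewrite each tagged paragraph as a list item...
items = re.sub(r'<p class="list-item">(.*?)</p>', r'<li>\1</li>', exported)
# ...then wrap the run in a list element.
clean = f"<ul>{items}</ul>"
print(clean)
# -> <ul><li>First point</li><li>Second point</li></ul>
```

A real pass would also need to handle nested lists and runs interrupted by other elements; the point is only that the conversion is mechanical.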
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
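  • The XSLT script itself is not reproduced in the excerpt, but the kind of mapping it performs can be sketched. Python's standard library has no XSLT engine, so this ElementTree version only mirrors the element-by-element transformation; the target element and style names are simplified stand-ins for the real ICML vocabulary, not verbatim InDesign markup:

```python
# Sketch of an XHTML-to-ICML-style mapping. The "Story" /
# "ParagraphStyleRange" / "Content" structure and the style names are
# simplified assumptions about the InDesign vocabulary, for illustration.
import xml.etree.ElementTree as ET

# Hypothetical mapping from XHTML tags to InDesign paragraph styles.
STYLE_MAP = {"h1": "Heading1", "p": "BodyText"}

xhtml = ET.fromstring(
    "<body><h1>Chapter One</h1><p>It begins.</p></body>"
)

icml = ET.Element("Story")
for node in xhtml:
    rng = ET.SubElement(icml, "ParagraphStyleRange",
                        AppliedParagraphStyle=STYLE_MAP[node.tag])
    content = ET.SubElement(rng, "Content")
    content.text = node.text

print(ET.tostring(icml, encoding="unicode"))
```

In the workflow described above, this mapping lives in an XSLT stylesheet run by `xsltproc`, so it integrates with any standard XML toolchain rather than depending on custom code.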
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article.  The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML.  From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article.  Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML.  Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been, "Working with Web as the Center of Everything".
Gary Edwards

Republic Wireless - Combining WiFi with Cellular to reduce Smartphone Costs - 0 views

  • Do I need to buy minutes from Sprint or anyone else? No. We're the first-ever wireless provider to bundle WiFi calling with access to cellular whenever you need it. Depending on the plan you choose, your Republic Wireless phone will have unlimited* access to data, talk and text when using the Sprint cellular network. Note that the $5 plan offered by Republic is WiFi only and the $10 plan includes cellular talk and text (no data). All Republic plans include unlimited data, talk and text on WiFi. 
  • Can I switch between plans? Yes! When you purchase a new Moto X phone, you’ll be able to choose whatever plan you like—and you can also switch plans up to twice per month as your needs change. For example, if you know you’ll be taking a vacation and might require more cell data one week, you can switch to a cell data plan right from your phone and then switch back to a WiFi “friendlier” plan once you return home.
  •  
    Republic Wireless provides a new kind of smartphone cellular service based on a technology that handles the roll over from WiFi to 3G or 4G cellular in the middle of a call. Very cool, but currently it only works with specially outfitted (custom ROM) Android Moto X phones. (They are working on how to port this custom ROM technology to all Android phones :) The concept is based on the fact that WiFi is cheap, very open and near universally available; while 3G and 4G Cellular is expensive, contractual and proprietary. The idea is to leverage free WiFi wherever they can, and roll over to the Sprint 3G - 4G network when needed. Very cool and the business model seems to have it right. ......................................................................... "Which Moto X plan is right for me? Here's the lowdown on our four new plan options. Depending on your needs and how you want to use your phone, you can choose the plan that's best for you. $5 WiFi only plan This is the most powerful tool in your arsenal of options. Why? You can drop your smartphone bill-at will-to $5. If you're interested in getting serious about cutting costs, you can use this tool to best leverage the WiFi in your life to reduce your phone bill. It's also the ultimate plan for home base stickers and kids who don't need a cellular plan. It's fully unlimited data, talk and text-on WiFi only. $10 WiFi + Cell Talk & Text One of our members, 10thdoctor said :  "I use WiFi for everything, except when I'm traveling and for voice at my school." Yep, this is the perfect plan for that. Our members are around WiFi about 90% of the time. During that 10% of the time where you're away from WiFi, this plan gives you cellular backup for communicating when you need to. This plan both cuts costs and accommodates what's quickly becoming the norm: a day filled with WiFi. $25 WiFi + Cell (3G) Talk, Text & Data Lots of people are on 3G plans today and are paying upwards of $100 a month on
Gary Edwards

WhiteHat Aviator - The most secure browser online - 1 views

  •  
    "FREQUENTLY ASKED QUESTIONS What is WhiteHat Aviator? WhiteHat Aviator is the most secure, most private Web browser available anywhere. By default, it provides an easy way to bank, shop, and use social networks while stopping viruses from infecting computers, preventing accounts from being hacked, and blocking advertisers from invisibly spying on every click. Why do I need a secure Web browser? According to CA Technologies, 84 percent of hacker attacks in 2009 took advantage of vulnerabilities in Web browsers. Similarly, Symantec found that four of the top five vulnerabilities being exploited were client-side vulnerabilities that were frequently targeted by Web-based attacks. The fact is that when you visit any website you run the risk of having your surfing history, passwords, real name, workplace, home address, phone number, email, gender, political affiliation, sexual preferences, income bracket, education level, and medical history stolen - and your computer infected with viruses. Sadly, this happens on millions of websites every day. Before you have any chance at protecting yourself, other browsers force you to follow complicated how-to guides, modify settings that only serve advertising empires and install obscure third-party software. What makes WhiteHat Aviator so secure? WhiteHat Aviator is built on Chromium, the same open-source foundation used by Google Chrome. Chromium has several unique, powerful security features. One is a "sandbox" that prevents websites from stealing files off your computer or infecting it with viruses. As good as Chromium is, we went much further to create the safest online experience possible. WhiteHat Aviator comes ready-to-go with hardened security and privacy settings, giving hackers less to work with. And our browser downloads to you - without any hidden user-tracking functionality. Our default search engine is DuckDuckGo - not Google, which logs your activity. For good measure, Aviator integrates Disconnect
Gary Edwards

How Sir Tim Berners-Lee cut the Gordian Knot of HTML5 | Technology | guardian.co.uk - 0 views

  •  
    Good article with excellent URL references.  Bottom line is that the W3C will support the advance of HTML5 and controversial components such as "canvas", HTML + RDFa, and HTML microdata. excerpt: The key question is: who's going to get their way with HTML5? The companies who want to keep the kitchen sink in? Or those which want it to be a more flexible format which might also be able to displace some rather comfortable organisations that are doing fine with things as they are? Adobe, it turned out, seemed to be trying to slow things down a little. It was accused of trying to put HTML5 "on hold". It strongly denied it. Others said it was using "procedural bullshit". Then Berners-Lee weighed in with a post on the W3 mailing list. First he noted the history: "Some in the community have raised questions recently about whether some work products of the HTML Working Group are within the scope of the Group's charter. Specifically in question were the HTML Canvas 2D API, and the HTML Microdata and HTML+RDFa Working Drafts." (Translation: Adobe seems to have been trying to slow things down on at least one of these points.) And then he pushes: "I agree with the WG [working group] chairs that these items -- data and canvas - are reasonable areas of work for the group. It is appropriate for the group to publish documents in this area." Chop! And that's it. There goes the Gordian Knot. With that simple message, Berners-Lee has probably created a fresh set of headaches for Adobe - but it means that we can also look forward to a web with open standards, rather than proprietary ones, and where commercial interests don't get to push it around.
Gary Edwards

Is Oracle Quietly Killing OpenOffice? | Revelations From An Unwashed Brain - 1 views

  •  
    Bingo!  Took five years, but finally someone gets it: excerpt:  Great question. After 10 years, OpenOffice hasn't had much traction in the enterprise - supported by under 10% of firms, and today it's facing more competition from online apps from Google and Zoho. I'm not counting OpenOffice completely out yet, however, since IBM has been making good progress on features with Symphony and Oracle is positioning OpenOffice for the web, desktop and mobile - a first. But barriers to OpenOffice and Web-based tools persist, and not just on a feature/function basis. Common barriers include: Third-party integration requirements. Some applications only work with Office. For example, one financial services firm I spoke with was forced to retain Office because its employees needed to work with Fiserv, a proprietary data center that is very Microsoft centric. "What was working pretty well was karate chopped." Another firm rolled out OpenOffice.org to 7,000 users and had to revert back 5,000 of them when they discovered one of the main apps they work with only supported Microsoft. User acceptance. Many firms say that they can overcome pretty much all of the technical issues but face challenges around user acceptance. One firm I spoke with went so far as to "customize" their OpenOffice solution with a Microsoft logo and told employees it was a version of Office. The implementation went smoothly. Others have said that they have met resistance from business users who didn't want Office taken off their desktop. Other strategies include providing OpenOffice to only new employees and to transition through attrition. But this can cause compatibility issues. Lack of seamless interoperability with Office. Just like third-party apps may only work with Office, many collaborative activities force use of particular versions of Office. Today's Web-based and OpenOffice solutions do not provide seamless round tripping between Office and their applications. Corel, with its
Gary Edwards

Cloud Computing White Papers by the Open Group - 0 views

  •  
    Cloud Computing White Papers   The Open Group Cloud Work Group exists to create a common understanding among buyers and suppliers of how enterprises of all sizes and scales of operation can include Cloud Computing technology in a safe and secure way in their architectures to realize its significant cost, scalability, and agility benefits. It includes some of the industry's leading cloud providers and end-user organizations, collaborating on standard models and frameworks aimed at eliminating vendor lock-in for enterprises looking to benefit from Cloud products and services. The White Papers on this website form the current output of the Work Group. They are also available in PDF form from The Open Group bookstore for download and printing. Further papers will be added as the Work Group progresses. The initial focus of the Work Group is on business drivers for Cloud Computing, and this is reflected in the first items to appear: The Business Scenario Workshop Report White Paper: Building Return on Investment from Cloud Computing White Paper: Strengthening your Business Case for Using Cloud White Paper: Cloud Buyers' Decision Tree White Paper: Cloud Buyers' Requirements Questionnaire Further White Papers will address other key Work Group topics, including Architecture, Infrastructure, and Security.
Paul Merrell

First working draft of W3C HTML5 - 0 views

  • HTML5 A vocabulary and associated APIs for HTML and XHTML
  • This specification defines the 5th major revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features are introduced to help Web application authors, new elements are introduced based on research into prevailing authoring practices, and special attention has been given to defining clear conformance criteria for user agents in an effort to improve interoperability.
  • The W3C HTML Working Group is the W3C working group responsible for this specification's progress along the W3C Recommendation track. This specification is the 24 June 2010 Working Draft snapshot. Work on this specification is also done at the WHATWG. The W3C HTML working group actively pursues convergence with the WHATWG, as required by the W3C HTML working group charter.
Gary Edwards

Fast Database Emerges from MIT Class, GPUs and Student's Invention - 0 views

  •  
    Awesome work!  A world-changing discovery, i think. excerpt: "Mostak built a new parallel database, called MapD, that allows him to crunch complex spatial and GIS data in milliseconds, using off-the-shelf gaming graphical processing units (GPU) like a rack of mini supercomputers. Mostak reports performance gains upwards of 70 times faster than CPU-based systems. Mostak said there is more development work to be done on MapD, but the system works and will be available in the near future. He said he is planning to release the new database system under an open source business model similar to MongoDB and its company 10gen. "I had the realization that this had the potential to be majorly disruptive," Mostak said. "There have been all these little research pieces about this algorithm or that algorithm on the GPU, but I thought, 'Somebody needs to make an end-to-end system.' I was shocked that it really hadn't been done." Mostak's undergraduate work was in economics and anthropology; he realized the need for his interactive database while studying at Harvard's Center for Middle Eastern Studies program. But his hacker-style approach to problem-solving is an example of how attacking a problem from new angles can yield better solutions. Mostak's multidisciplinary background isn't typical for a data scientist or database architect."
Gary Edwards

Microsoft Office to get a dose of OpenDocument - CNET News - 0 views

  •  
    While trying to help a friend understand the issues involved with exchanging MSOffice documents between the many different versions of MSOffice, I stumbled on this oldie but goodie ......... "A group of software developers have created a program to make Microsoft Office work with files in the OpenDocument format, a move that would bridge currently incompatible desktop applications. Gary Edwards, an engineer involved in the open-source OpenOffice.org project and founder of the OpenDocument Foundation, on Thursday discussed the software plug-in on the Web site Groklaw. The new program, which has been under development for about a year and finished initial testing last week, is designed to let Microsoft Office manipulate OpenDocument format (ODF) files, Edwards said. "The ODF Plugin installs on the file menu as a natural and transparent part of the 'open,' 'save,' and 'save as' sequences. As far as end users and other application add-ons are concerned, ODF Plugin renders ODF documents as if (they) were native to MS Office," according to Edwards. If the software, which is not yet available, works as described, it will be a significant twist to an ongoing contest between Microsoft and the backers of OpenDocument, a document format gaining more interest lately, particularly among governments. Microsoft will not natively support OpenDocument in Office 2007, which will come out later this year. Company executives have said that there is not sufficient demand and OpenDocument is less functional than its own Office formats. Having a third-party product to save OpenDocument files from Office could give OpenDocument-based products a bump in the marketplace, said Stephen O'Grady, a RedMonk analyst. OpenDocument is the native format for the OpenOffice open-source desktop productivity suite and is supported in others, including KOffice, Sun Microsystems' StarOffice and IBM's Workplace. 
"To the extent that you get people authoring documents in a format that is natively compatible with
Gary Edwards

The GPL Does Not Depend on the Copyrightability of APIs | Public Knowledge - 0 views

  •  
    Excellent legal piece explaining the options and methods of how software programs use licensed and copyrighted third party libraries through an API. Finally, some clear thinking about Google Android and the Oracle Java Law Suit.
    excerpt: Another option for a developer is to do what Google did when it created Android, and create replacement code libraries that are compatible with the existing code libraries, but which are new copyrighted works. Being "compatible" in this context means that the new libraries are called in the same way that the old libraries are--that is, using the same APIs. But the actual copyrighted code that is being called is a new work. As long as the new developer didn't actually copy code from the original libraries, the new libraries are not infringing. It does not infringe on the copyright of a piece of software to create a new piece of software that works the same way; copyright protects the actual expression (lines of code) but not the functionality of a program. The functionality of a program is protected by patent, or not at all.
    In the Oracle/Google case, no one is arguing that code libraries themselves are not copyrightable. Of course they are and this is why the Google/Oracle dispute has no bearing on the enforceability of the GPL. Instead, the argument is about whether the method of using a code library, the APIs, is subject to a copyright that is independent of the copyright of the code itself. If the argument that APIs are not copyrightable prevails, programs that are created by statically-linking GPL'd code libraries will still be considered derivative works of the code libraries and will still have to be released under the GPL.
    Though irrelevant to the enforceability of the GPL, the Oracle/Google dispute is still interesting. Oracle is claiming that Google, by creating compatible, replacement code libraries that are "called" in the same way as Oracle's code libraries (that is, using the same APIs), infringed
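The distinction drawn above (same API, new copyrighted expression) can be sketched in a few lines of JavaScript. This is a toy illustration only; the names are invented and stand in for no real library:

```javascript
// "Original" library: a stand-in for an existing vendor implementation.
const originalLib = {
  // Sum an array of numbers via an explicit loop.
  sum(values) {
    let total = 0;
    for (let i = 0; i < values.length; i++) total += values[i];
    return total;
  }
};

// Clean-room replacement: the same API surface (name, signature, behavior),
// but entirely new expression. No code was copied from originalLib.
const replacementLib = {
  sum(values) {
    return values.reduce((acc, v) => acc + v, 0);
  }
};

// A caller depends only on the API, not on either implementation,
// so it works unchanged with both libraries.
function average(lib, values) {
  return lib.sum(values) / values.length;
}
```

Because `average` is written against the API rather than the implementation, swapping one library for the other is invisible to the caller; the legal question in the dispute is whether that shared API surface is itself copyrightable.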
Gary Edwards

Government Market Drags Microsoft Deeper into the Cloud - 0 views

  •  
    Nice article from Scott M. Fulton describing Microsoft's iron-fisted lock on government desktop productivity systems and the great transition to a Cloud Productivity Platform.  Keep in mind that in 2005, Massachusetts tried to do the same thing with their SOA effort.  Then-Governor Romney put over $1M into a beta test that produced the now infamous 300-page report written by Sam Hiser.  The details of this test resulted in the even more infamous da Vinci ODF plug-in for Microsoft Office desktops.   The lessons of Massachusetts are simple enough; it's not the formats or office suite applications.  It's the business process!  Conversion of documents not only breaks the document.  It also breaks the embedded "business process". The mystery here is that Microsoft owns the client side of client/server computing.  Compound documents, loaded with intertwined OLE, ODBC, ActiveX, and other embedded protocols and interface dependencies connecting data sources with work flow, are the fuel of these client/server business productivity systems.  Break a compound document and you break the business process.   Even though Massachusetts workers were wonderfully enthusiastic and supportive of an SOA based infrastructure that would include Linux servers and desktops as well as OSS productivity applications, at the end of the day it's all about getting the work done.  Breaking the business process turned out to be a show stopper. Cloud Computing changes all that.  The reason is that the Cloud is rapidly replacing client/server as the target architecture for new productivity developments; including data centers and transaction processing systems.  There are many reasons for the great transition, but IMHO the most important is that the Web combines communications with content, data, and collaborative computing.   Anyone who ever worked with the Microsoft desktop productivity environment knows that the desktop sucks as a communication device.  There was
Paul Merrell

People That Think Social Media Helps Their Work Are Probably Wrong | NeoAcademic - 0 views

  • In an upcoming special issue of Social Science Computer Review, Landers and Callan[1] set out to understand how people actually use social media while at work and how it affects their job performance.  By polling workers across a wide variety of jobs (across at least 17 industries), they identified 8 broad ways that people use social media that they believe help their work, and 9 broad ways that people use social media that they believe harm their work.  Although the harmful social media behaviors were related to decreased job performance, the beneficial social media behaviors were unrelated to job performance.  In short, wasting time on social media hurts you, but trying to use social media to improve your work probably doesn’t actually help.
  • It was in Study 3 that the relationship between the social media behaviors and job performance was determined.  Consistently, negative social media behaviors (e.g., plagiarism, multitasking, time theft) were correlated with lower job performance (across task, contextual, counterproductive, and adaptive dimensions).  But in contrast, positive social media behaviors (e.g., crowdsourcing a problem, identifying new customers) were not generally correlated with job performance at all. The researchers then make the following practical recommendation: These findings suggested that simply granting employee access to social media is unlikely to improve job performance unless a specific plan is in place to take advantage of the capabilities it provides. In fact, permitting employee access to social media broadly may be generally harmful to job performance and cannot be recommended based upon these results.
Paul Merrell

2nd Cir. Affirms That Creation of Full-Text Searchable Database of Works Is Fair Use | ... - 0 views

  • The fair use doctrine permits the unauthorized digitization of copyrighted works in order to create a full-text searchable database, the U.S. Court of Appeals for the Second Circuit ruled June 10. Affirming summary judgment in favor of a consortium of university libraries, the court also ruled that the fair use doctrine permits the unauthorized conversion of those works into accessible formats for use by persons with disabilities, such as the blind.
  • The dispute is connected to the long-running conflict between Google Inc. and various authors of books that Google included in a mass digitization program. In 2004, Google began soliciting the participation of publishers in its Google Print for Publishers service, part of what was then called the Google Print project, aimed at making information available for free over the Internet. Subsequently, Google announced a new project, Google Print for Libraries. In 2005, Google Print was renamed Google Book Search and it is now known simply as Google Books. Under this program, Google made arrangements with several of the world's largest libraries to digitize the entire contents of their collections to create an online full-text searchable database. The announcement of this program triggered a copyright infringement action by the Authors Guild that continues to this day.
  • Turning to the fair use question, the court first concluded that the full-text search function of the Hathitrust Digital Library was a “quintessentially transformative use,” and thus constituted fair use. The court said: the result of a word search is different in purpose, character, expression, meaning, and message from the page (and the book) from which it is drawn. Indeed, we can discern little or no resemblance between the original text and the results of the HDL full-text search. There is no evidence that the Authors write with the purpose of enabling text searches of their books. Consequently, the full-text search function does not “supersede[ ] the objects [or purposes] of the original creation.” Turning to the fourth fair use factor—whether the use functions as a substitute for the original work—the court rejected the argument that such use represents lost sales to the extent that it prevents the future development of a market for licensing copies of works to be used in full-text searches. However, the court emphasized that the search function “does not serve as a substitute for the books that are being searched.”
  • Part of the deal between Google and the libraries included an offer by Google to hand over to the libraries their own copies of the digitized versions of their collections. In 2011, a group of those libraries announced the establishment of a new service, called the HathiTrust digital library, to which the libraries would contribute their digitized collections. This database of copies is to be made available for full-text searching and preservation activities. Additionally, it is intended to offer free access to works to individuals who have “print disabilities.” For works under copyright protection, the search function would return only a list of page numbers that a search term appeared on and the frequency of such appearance.
  • The court also rejected the argument that the database represented a threat of a security breach that could result in the full text of all the books becoming available for anyone to access. The court concluded that Hathitrust's assertions of its security measures were unrebutted. Thus, the full-text search function was found to be protected as fair use.
  • The court also concluded that allowing those with print disabilities access to the full texts of the works collected in the Hathitrust database was protected as fair use. Support for this conclusion came from the legislative history of the Copyright Act's fair use provision, 17 U.S.C. §107.
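The search behavior the court describes, returning only page numbers and term frequency rather than any of the underlying text, can be sketched as a toy inverted index. This is illustrative only and bears no relation to HathiTrust's actual system:

```javascript
// Toy full-text index: maps each term to the pages it appears on and a count.
// Note the index itself never returns the original expression, only metadata.
function buildIndex(pages) {
  const index = new Map();
  pages.forEach((text, pageNo) => {
    for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(word)) index.set(word, { pages: new Set(), count: 0 });
      const entry = index.get(word);
      entry.pages.add(pageNo + 1); // 1-based page numbers
      entry.count += 1;
    }
  });
  return index;
}

// A query result contains page numbers and frequency, not the text.
function search(index, term) {
  const entry = index.get(term.toLowerCase());
  if (!entry) return { pages: [], frequency: 0 };
  return { pages: [...entry.pages].sort((a, b) => a - b), frequency: entry.count };
}
```

The point the court leaned on is visible in the data structure itself: nothing recoverable as the author's expression survives in the search output, which is why the search function "does not supersede the objects of the original creation."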
Gary Edwards

Google Chrome OS: Web Platform To Rule Them All -- InformationWeek - 0 views

  •  
    Some good commentary on chrome OS from InformationWeek's Thomas Claburn. Excerpt: With Chrome OS, Google aims to make the Web the primary platform for software development....... The fact that Chrome OS applications will be written using open Web standards like JavaScript, HTML, and CSS might seem like a liability because Web applications still aren't as capable as applications written for specific devices and operating systems. But Google is betting that will change and is working to effect the change on which its bet depends. Within a year or two, Web browsers will gain access to peripherals, through an infrastructure layer above the level of device drivers. Google's work with standards bodies is making that happen..... ..... According to Matt Womer, the "ubiquitous Web activity lead" for W3C, the Web standards consortium, Web protocol groups are working to codify ways to access peripherals like digital cameras, the messaging stack, calendar data, and contact data. There's now a JavaScript API that Web developers can use to get GPS information from mobile phones using the phone's browser, he points out. What that means is that device drivers for Chrome OS will emerge as HTML 5 and related standards mature. Without these, consumers would never use Chrome OS because devices like digital cameras wouldn't be able to transfer data. Womer said the standardization work could move quite quickly, but won't be done until there's an actual implementation. That would be Chrome OS...... ..... Chrome OS will sell itself to developers because, as Google puts it, writing applications for the Web gives "developers the largest user base of any platform."
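The GPS capability Womer points to is the W3C Geolocation API, exposed to page scripts as navigator.geolocation. A minimal sketch follows; the stub (with invented coordinates) only stands in where no browser geolocation is available, so the calling code can still be exercised:

```javascript
// Use the browser's Geolocation API when present; otherwise fall back to a
// stub so the same calling code runs outside a browser. The stub's
// coordinates are invented for illustration.
const geo = (typeof navigator !== "undefined" && navigator.geolocation)
  ? navigator.geolocation
  : {
      getCurrentPosition(onSuccess) {
        onSuccess({ coords: { latitude: 37.422, longitude: -122.084 } });
      }
    };

// Report the current position as a short string via a callback,
// mirroring the API's asynchronous success/error callback shape.
function reportPosition(callback) {
  geo.getCurrentPosition(
    pos => callback(`lat ${pos.coords.latitude}, lon ${pos.coords.longitude}`),
    err => callback(`error: ${err.message}`)
  );
}
```

In a real browser, `getCurrentPosition` is asynchronous and prompts the user for permission; the callback shape above is the standard API surface, which is exactly the kind of peripheral access the W3C work described here was standardizing.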
Gary Edwards

Nebula Builds a Cloud Computer for the Masses - Businessweek - 0 views

  •  
    Fascinating story about Chris Kemp of OpenStack fame, and his recent effort to commoditize Cloud Computing hardware/software systems - Nebula. excerpt: "Though it doesn't look like much (about the size of a four-inch-tall pizza box), Nebula One is the product of dozens of engineers working for two years in secrecy in Mountain View, Calif. It has attracted the attention of some of Silicon Valley's top investors. The three billionaires who made the first investment in Google - Andy Bechtolsheim, David Cheriton, and Ram Shriram - joined forces again to back Nebula One, betting that its technology will invite a dramatic shift in corporate computing that outflanks the titans of the industry. "This is an example of where traditional technology companies have failed the market," says Bechtolsheim, a co-founder of Sun Microsystems (ORCL) and famed hardware engineer. Kleiner Perkins Caufield & Byers, Comcast Ventures, and Highland Capital Partners have also backed Kemp's startup, itself called Nebula, which has raised more than $30 million. The origins of Nebula One go back to Kemp's days at NASA, which he joined in 2006 as director of strategic business development. In 2007, he became a chief information officer, making him, at 29, the youngest senior executive in the U.S. government. In 2010, he became NASA's chief technology officer. Kemp spent much of his time at NASA developing more efficient data centers for the agency's various computing efforts. He and a team of engineers built the early parts of what is now known as OpenStack, software that makes it possible to control an entire data center as one computer. To see if other companies could take the idea further, Kemp made the software open source. Big players such as AT&T (T), Hewlett-Packard, IBM, and Rackspace Hosting (RAX) have since incorporated OpenStack into the cloud computing services they sell customers. Kemp had an additional idea: He wanted to use OpenStack as a way to give every company its
Gary Edwards

How would you fix the Linux desktop? | ITworld - 0 views

  • VB integrates with COM
  • SQL Server has a DCE/RPC interface.
  • MS-Office?  all the components (Excel, Word etc.) have a COM and an OLE interface.
  •  
    Comment posted 1 week ago in reply to Zzgomes .....  by Ed Carp.  Finally someone who gets it! OBTW, i replaced Windows 7 with Linux Mint over a year ago and hope to never return.  The thing is though, i am not a member of a Windows productivity workgroup, nor do i need to connect to any Windows databases or servers.  Essentially i am not using any Windows business process or systems.  It's all Internet!!! 100% Web and Cloud Services systems.  And that's why i can dump Windows without a blink! While working for Sursen Corp, it was a very different story.  I had to have Windows XP and Windows 7, plus MSOffice 2003-2007, plus Internet Explorer with access to SharePoint, Skydrive/Live.com.  It's all about the business processes and systems you're part of, or must join.   And that's exactly why the Linux Desktop has failed.  Give Cloud Computing the time needed to re-engineer and re-invent those many Windows business processes, and the Linux Desktop might succeed.  The trick will be in advancing both the Linux Desktop and Application developer layers to target the same Cloud Computing services mobility targets.  ..... Windows will take care of itself.   The real fight is in the great transition of business systems and processes moving from the Windows desktop/workgroup productivity model to the Cloud.  Linux Communities must fight to win the great transition. And yes, in the end this is all about a massive platform shift.  The fourth wave of computing began with the Internet, and will finally close out the desktop client/server computing model as the Web evolves into the Cloud. excerpt: Most posters here have it completely wrong...the *real* reason Linux doesn't have a decent penetration into the desktop market is quite obvious if you look at the most successful desktop in history - Windows.  All this nonsense about binary driver compatibility, distro fragmentation, CORBA, and all the other red herrings that people are talking about are completely irrelevant
Gary Edwards

Roger Black : "We save trees" - The Story of TreeSaver - 0 views

  •  
    Roger Black is a publication designer, and Filipe Fortes was the project manager for Microsoft's Windows Presentation Foundation - Silverlight project.  Today he specializes in dynamic layout algorithms written in Open Web JavaScript.  Roger is a renowned publication designer, and describes here the genesis behind TreeSaver.  A fascinating story certain to become a key explanation of how digital media ran away with the print publishing industry.   I'm wondering what kind of authoring tools will evolve that can publish directly into the TS JavaScript templates?   My first inclination would be to adapt OOo Impress.  It has an outline view, a notes editing capability, and provides a decent visual canvas.  The problem is that it's locked into "slides".  Can Impress be unlocked and flowing?  That might work. excerpt:  when Microsoft put together a dynamic page layout for The New York Times Reader, did they know that it was the future? It certainly wasn't the immediate present, since they couldn't pry the WPF visual layer off of Windows, leaving it a single-OS solution. (The Times' Reader later was taken up by Adobe, which at least got it to work on both Mac and PC.) Filipe Fortes, PM on the MS news client project, knew. I'd met him when the group invited me out to Redmond to help design the first templates for the Times. Later I saw him at the 2007 Mix conference in Vegas, and I asked him how to make the dynamic page size idea work multi-platform. He said, "We could do it in HTML."
Gary Edwards

Adobe's Web Typography design work lands in WebKit browser | Deep Tech - CNET News - 0 views

  •  
    Adobe has contributed the first "CSS Regions" patch to the open-source WebKit project.  CSS Regions is at the core of Adobe's flowing Web Typography work, and has been submitted to the W3C CSS standardization effort.   No mention yet as to what kind of CSS3-HTML5 authoring and publication tools Adobe has in the works, but the inclusion in WebKit will no doubt shake things up in the world of visually-immersive packaging (FlipBoard, OnSwipe, TreeSaver, Needle, etc.) excerpt: Today, the first bit of Adobe-written code landed in the WebKit browser engine project, an early step to try to bring magazine-style layouts to Web pages using an extension to today's CSS (Cascading Style Sheets) technology. Adobe calls the technology CSS Regions. The move begins fulfilling a plan Adobe announced in May to build the technology into WebKit and--if the company can persuade others to embrace it--furthers Adobe's ambition to standardize the advanced CSS layout mechanism. WebKit
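The mechanism behind CSS Regions is to pull content out of normal flow into a named flow, then pour it through a chain of region boxes, magazine-style. A minimal sketch of the draft syntax as proposed to the W3C (the property names were draft-stage and subject to change; element ids here are invented):

```css
/* Pull the article's content out of normal flow into a named flow */
#article { flow-into: story; }

/* Boxes that consume the named flow, in document order; text that
   overflows region-1 continues automatically in region-2 */
#region-1, #region-2 { flow-from: story; }
```

Because the regions can be positioned and sized independently of the content, the same source text can reflow through radically different page layouts, which is exactly the print-style typography Adobe was after.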
Gary Edwards

J.C. Herz Ringling College Commencement Speech: Harvesting the Creative Life - 0 views

  •  
    Wow!  Excellent advice, top to bottom.  A very well-thought-out flow of wisdom. excerpt: The important work that you build your reputation on - you can't just Google it. You don't cut and paste it from Wikipedia. You roll up your sleeves, and bring all your creativity and meaningful skills to bear on the problem of building something.   This is what the world requires - this is what the world rewards. Not just calling yourself creative, but understanding how to exercise your creative powers to some end, to bring your vision and skills together in a meaningful way. This is a powerful thing to be able to do. It gives you tremendous value in a society where attention is currency - being able to capture people's imaginations is the scarcest kind of power in a fractured culture. Creating work that transports and transcends is one of the few ways to create sustainable value in a disposable society. What you do, if you do it well, is never going to be a commodity. Vision, magic, delight. Heart-rocking spectacle. Pulse-pounding action. These things don't get outsourced to some cubicle drone in the developing world.   You are an influential group of people, and today is an important moment, as you set forth to become the chief stewards of your gifts. Because, this is what it means to be a creative professional: figuring out how to be the best steward of your gifts, so that your power to create grows and deepens meaningfully over time. So that your edges stay sharp, and your light stays bright. The life you've chosen is not one that simply requires clocking in and clocking out. You've got to bring your soul to it every day. You've got to be on your game.   That takes discipline. And it takes awareness - of how you're spending your time, and of how what you're doing affects your capability and your capacity. You are going to have to ask yourself, at every turn: is this project making me smarter, or making me stupider? Is this job stoking my fire,