Future of the Web / Group items tagged ideals

Gonzalo San Gil, PhD.

Copyleft: Pragmatic Idealism - GNU Project - Free Software Foundation - 0 views

  •  
    "by Richard Stallman Every decision a person makes stems from the person's values and goals. People can have many different goals and values; fame, profit, love, survival, fun, and freedom, are just some of the goals that a good person might have. When the goal is a matter of principle, we call that idealism. My work on free software is motivated by an idealistic goal: spreading freedom and cooperation. I want to encourage free software to spread, replacing proprietary software that forbids cooperation, and thus make our society bette"
Gonzalo San Gil, PhD.

Anti-Piracy Law Boosted Music Sales , Plunged Internet Traffic | TorrentFreak - 0 views

  •  
    " y Ernesto on May 9, 2014 C: 54 News A new study on the effects of the IPRED anti-piracy law in Sweden shows that the legislation increased music sales by 36 percent. At the same time, Internet traffic in the country dropped significantly. The results suggest that the law initially had the desired effect, but the researchers also note this didn't last long." [... The question remains, however, whether bankrupting people or throwing them in jail is the ideal strategy in the long run… ||]
Gonzalo San Gil, PhD.

Where is My European Union? - 0 views

    • Gonzalo San Gil, PhD.
       
      [# ! Via Cristina Draghis -> FB's Wake up: The World Needs You...]
  •  
    [As Greece prepares for a monumental decision, there is only one certainty: the European Ideal has been irrevocably damaged...]
Gonzalo San Gil, PhD.

Open Source and Its Lost Ideals - Datamation - 0 views

  •  
    "Mention the year of the Linux desktop, and you are guaranteed to get a laugh. The six words have become a catchphrase, with the implication that it will never happen. But, even more importantly, the laughter indicates a change in attitude."
karen Martinez

A Perfect Guide To Buy The Ideal Kindle For e-Readers - 0 views

  •  
    Digital book reading on the Amazon Kindle has been a remarkable accomplishment in the field of digital electronics. If you are looking for Amazon Kindle support, check www.technicalbulls.com.
Gary Edwards

Joyeur: Cloud Nine: Specification for a Cloud Computer. A Call to Action. - 0 views

  •  
    What is the cloud? What sort of cloud computer(s) should we be building or expecting from vendors? Are there issues of lock-in that should concern customers of either SaaS clouds or PaaS clouds? I've been thinking about this problem as the CEO of a PaaS cloud computing company for some time. Clouds should be open. They shouldn't be proprietary. More broadly, I believe no vendor currently does everything that's required to serve customers well. What's required for such a cloud? I think an ideal PaaS cloud would have the following nine features:
Paul Merrell

Cloud Computing: The Nine Features of an Ideal PaaS Cloud - 0 views

  • What sort of cloud computer(s) should we be building or expecting from vendors? Are there issues of lock-in that should concern customers of either SaaS clouds or PaaS clouds? I’ve been thinking about this problem as the CEO of a PaaS cloud computing company for some time. Clouds should be open. They shouldn’t be proprietary. More broadly, I believe no vendor currently does everything that’s required to serve customers well. What’s required for such a cloud? I think an ideal PaaS cloud would have the following nine features:
  • 1. Virtualization Layer Network Stability
  • 2. API for Creation, Deletion, Cloning of Instances
  • ...7 more annotations...
  • 3. Application Layer Interoperability
  • 4. State Layer Interoperability
  • 5. Application Services (e.g. email infrastructure, payments infrastructure)
  • 6. Automatic Scale (deploy and forget about it)
  • 7. Hardware Load Balancing
  • 8. Storage as a Service
  • 9. “Root”, If Required
Gary Edwards

Ephox EditLive! - Online html editor and web content management software - 0 views

  •  
    Web Content Editing Made Simple Only EditLive! offers true ease of use with enterprise capabilities. It is the ideal solution for editing rich HTML documents in CMS, wikis, blogs, email and more. Give your content authors an editing solution that they will actually use. The Word-like interface makes content creation easy for business users who know nothing about HTML and want to keep it that way.
Gonzalo San Gil, PhD.

Why You Should (or Shouldn't) Switch to Each Leading Linux Desktop - Datamation [# ! + ... - 0 views

    • Gonzalo San Gil, PhD.
       
      # ! the 'weirdness' of comparatives... [... Xfce lacks the ability to drag and drop icons..????? ]
  •  
    [The perfect desktop is undoubtedly the one you would design yourself. However, lacking the necessary time and expertise, many users hop instead from desktop to desktop with the same enthusiasm as others hop between distros, hoping to find the ideal distribution. ...]
Paul Merrell

Public transit in Beverly Hills may soon be driverless, program unanimously approved - ... - 0 views

  • An uncontested vote by the Beverly Hills City Council could guarantee a chauffeur for all residents in the near future. However, instead of a driver, the newly adopted program foresees municipally-owned driverless cars ready to order via a smartphone app. Also known as autonomous vehicles, or AV, driverless cars would appear to be the next big thing not only for people, but local governments as well – if the Beverly Hills City Council can get its AV development program past a few more hurdles, that is. The technology itself has some challenges ahead as well.
  • In the meantime, the conceptual shuttle service, which was unanimously approved at an April 5 city council meeting, is being celebrated.
  • Naming Google and Tesla in its press release, Beverly Hills must first develop a partnership with a manufacturer that can build it a fleet of unmanned cars. There will also be a need to bring in policy experts. All of these outside parties will have a chance to explore the program’s potential together at an upcoming community event. The Wallis Annenberg Center for the Performing Arts will host a summit this fall that will include expert lectures, discussions, and test drives. Er, test rides. Already in the works for Beverly Hills is a fiber optics cable network that will, in addition to providing high-speed internet access to all residents and businesses, one day be an integral part of a public transit system that runs on its users’ spontaneous desires. Obviously, Beverly Hills has some money on hand for the project, and it is also an ideal testing space as the city takes up an area of less than six square miles. Another positive factor is the quality of the city’s roads, which exceeds that of most in the greater Los Angeles area, not to mention California and the whole United States. “It can’t find the lane markings!” Volvo’s North American CEO, Lex Kerssemakers, complained to Los Angeles Mayor Eric Garcetti last month, according to Reuters. “You need to paint the bloody roads here!” Whether lanes are marked or signs are clear has made a big difference in how successfully the new technology works. Unfortunately, the US Department of Transportation considers 65 percent of US roads to be in poor condition, so AV cars may not be in the works for many Americans living outside of Beverly Hills quite as soon.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 0 views

  • Challenges: Some Ugly Truths The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper. A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5] But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks. The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly. Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • ...44 more annotations...
  • Using the Web as a Production Platform The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
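    To make that concrete, here is a minimal sketch of how class-based extensibility can be recovered downstream. The class names ("epigraph", "summary") and the use of Python's lxml library are illustrative assumptions, not anything prescribed by the article:

        # A minimal sketch, not from the article: recovering publisher-defined
        # semantics from XHTML "class" attributes, microformat-style. The class
        # names "epigraph" and "summary" are invented for illustration.
        from lxml import html

        doc = html.fromstring("""
        <html><body>
          <p class="epigraph">All the world's a stage.</p>
          <p>An ordinary narrative paragraph.</p>
          <p class="summary">Chapter recap goes here.</p>
        </body></html>""")

        # XPath over class attributes gives cheap, fine-grained retrieval
        # without leaving the XHTML document type.
        for p in doc.xpath('//p[@class="epigraph" or @class="summary"]'):
            print(p.get("class"), "->", p.text_content())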
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
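    The claim is easy to verify: any ePub file opens with an ordinary zip tool. A minimal sketch in Python, assuming a file named book.epub on hand (the member names shown in comments are typical, not guaranteed):

        # Open an .epub as the zip archive it is and list its parts.
        import zipfile

        with zipfile.ZipFile("book.epub") as epub:
            for name in epub.namelist():
                print(name)  # e.g. mimetype, META-INF/container.xml,
                             #      OEBPS/content.opf, OEBPS/chapter01.xhtml
            # The mimetype entry is what identifies the archive as an ePub.
            print(epub.read("mimetype").decode())  # application/epub+zip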
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow, which results in a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps: To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
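    A scripted equivalent of that cleanup might look like the following sketch, which folds consecutive paragraphs carrying a list-item class into proper XHTML list markup. The class name, file name, and use of lxml are assumptions for illustration; the authors did this by hand with search-and-replace:

        # Sketch: fold consecutive <p class="list-item"> paragraphs (a
        # presentation-oriented export pattern like the one described above)
        # into a real <ul>/<li> structure. The class name is hypothetical.
        from lxml import etree, html

        doc = html.fromstring(open("chapter.xhtml").read())
        for p in doc.xpath('//p[@class="list-item"]'):
            prev = p.getprevious()
            if prev is not None and prev.tag == "ul":
                ul = prev                    # continue the list we just opened
            else:
                ul = etree.Element("ul")     # first item: open a new list
                p.addprevious(ul)
            li = etree.SubElement(ul, "li")
            li.text = p.text_content()
            p.getparent().remove(p)          # drop the presentation-only <p>

        print(etree.tostring(doc, pretty_print=True).decode())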
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print: Adobe's IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
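    The transformation step itself is small. A hedged sketch of the pipeline, using lxml's XSLT engine in place of xsltproc (either works); the stylesheet file name is illustrative, and the authors' actual script is not reproduced here:

        # Sketch of the XHTML-to-ICML step, using lxml's XSLT engine; the
        # stylesheet name is illustrative, not the authors' actual script.
        from lxml import etree

        transform = etree.XSLT(etree.parse("xhtml-to-icml.xsl"))
        icml = transform(etree.parse("chapter.xhtml"))

        # Write the ICML file, ready to be "placed" into an InDesign template.
        with open("chapter.icml", "wb") as out:
            out.write(etree.tostring(icml, xml_declaration=True, encoding="UTF-8"))

        # Command-line equivalent with the tool named above:
        #   xsltproc xhtml-to-icml.xsl chapter.xhtml > chapter.icml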
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files: Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
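    The "wrapping" that eCub automates is simple enough to sketch by hand. A minimal Python version with illustrative file names, omitting the content.opf manifest and table of contents a complete ePub also needs; note the OCF container requirement that the mimetype entry come first and be stored uncompressed:

        # Sketch of the ePub wrapping step; file names are illustrative.
        import zipfile

        with zipfile.ZipFile("book.epub", "w") as epub:
            # The OCF container spec requires 'mimetype' to be the first
            # entry and to be stored without compression.
            epub.writestr("mimetype", "application/epub+zip",
                          compress_type=zipfile.ZIP_STORED)
            epub.write("container.xml", "META-INF/container.xml",
                       compress_type=zipfile.ZIP_DEFLATED)
            for chapter in ("chapter01.xhtml", "chapter02.xhtml"):
                epub.write(chapter, f"OEBPS/{chapter}",
                           compress_type=zipfile.ZIP_DEFLATED)
            epub.write("book.css", "OEBPS/book.css")  # the custom stylesheet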
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Using the Web as a Production Platform
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My take-away is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive, IDML - InDesign Markup Language. The important point though is that XHTML is a browser-specific version of XML, and compatible with the WebKit layout engine Miro wants to move NCP to. The concept of encoding an existing application-specific format in XML has been around since 1998, when XML was first introduced as a W3C standard, a "structured" subset of SGML. (HTML is also a subset of SGML.) The multiplatform StarOffice productivity suite became "OpenOffice" when Sun purchased the company in 1998 and open-sourced the code base. The OpenOffice developer team came out with an XML encoding of their existing document formats in 2000. The application-specific encoding became an OASIS document format standard proposal in 2002 - also known as ODF. Microsoft followed OpenOffice with an XML encoding of their application-specific binary document formats, known as OOXML. Encoding the existing NCP format in XML, specifically targeting XHTML as a "universal pivot point", would put the NCP Outliner in the Web editor category, without breaking backwards compatibility. The trick is in the XSLT conversion process. But I think that is something much easier to handle than trying to…
Gary Edwards

Box extends its enterprise playbook, but users are still at the center | CITEworld - 0 views

  • The 47,000 developers making almost two billion API calls to the Box platform per month are a good start, Levie says, but Box needs to go further and do more to customize its platform to help push this user-centric, everything-everywhere-always model at larger and larger enterprises. 
  • Box for Industries is comprised of three parts: A Box-tailored core service offering, a selection of partner apps, and the implementation services to combine the two of those into something that ideally can be used by any enterprise in any vertical. 
  • Box is announcing solutions for three specific industries: Retail, healthcare, and media/entertainment. For retail, that includes vendor collaboration (helping vendors work with manufacturers and distributors), digital asset management, and retail store enablement.
  • ...6 more annotations...
  • Ted Blosser, senior vice president of Box Platform, also took the stage to show off how managing digital assets benefit from a just-announced metadata template capability that lets you pre-define custom fields so a store's back-office can flag, say, a new jacket as "blue" or "red." Those metadata tags can be pushed to a custom app running on a retail associate's iPad, so you can sort by color, line, or inventory level. Metadata plus Box Workflows equals a powerful content platform for retail that keeps people in sync with their content across geographies and devices, or so the company is hoping. 
  • It's the same collaboration model that cloud storage vendors have been pushing, but customized for very specific verticals, which is exactly the sales pitch that Box wants you to come away with. And developers must be cheering -- Box is going to help them sell their apps to previously inaccessible markets. 
  • More on the standard enterprise side, the so-named Box + Office 365 (previewed a few months back) currently only supports the Windows desktop versions of the productivity suite, but Levie promises web and Mac integrations are on the way. It's pretty basic, but potentially handy for the enterprises that Box supports.
  • The crux of the Office 365 announcement is that people expect that their data will follow them from device to device and from app to app. If people want their Box files and storage in Jive, Box needs to support Jive. And if enterprises are using Microsoft Office 365 to work with their documents -- and they are -- then Box needs to support that too. It's easier than it used to be, Levie says, thanks to Satya Nadella's push for a more open Microsoft. 
  • "We are quite confident that this is the kind of future they're building towards," Levie says -- but just in case, he urged BoxWorks attendees to tweet at Nadella and encourage him to help Box speed development along. 
  • Box SVP of Enterprise Annie Pearl came on stage to discuss how Box Workflow can be used to improve the ways people work with their content in the real world of business. It's worth noting that Box had a workflow tool previously, but it was relatively primitive and seems to have only existed to tick the box -- it didn't really go beyond assigning tasks and soliciting approvals.
  •  
    This will be very interesting. Looks like Box is betting their future on the success of integrating Microsoft Office 365 into the Box Productivity Cloud Service. Which competes directly with the Microsoft Office 365 - OneDrive Cloud Productivity Platform. Honestly, I don't see how this can ever work out for Box. Microsoft has them ripe for the plucking. And they have pulled it off on the eve of Box's expected IPO. "Box CEO Aaron Levie may not be able to talk about the cloud storage and collaboration company's forthcoming IPO, but he still took the stage at the company's biggest BoxWorks conference yet, with 5,000 attendees. Levie discussed the future of the business and made some announcements -- including the beta of a Box integration with the Windows version of Microsoft Office 365; the introduction of Box Workflow, a tool coming in 2015 for creating repeatable workflows on the platform; and the unveiling of Box for Industries, an initiative to tailor Box solutions for specific industry use-cases. And if that wasn't enough, Box also announced a partnership with service firm Accenture to push the platform in large enterprises. The unifying factor for the announcements made at BoxWorks, Levie said, is that users expect their data to follow them everywhere, at home and at work. That means that Box has to think about enterprise from the user outwards, putting them at the center of the appified universe -- in effect, building an ecosystem of tools that support the things employees already use."
Paul Merrell

Lawmakers Say TPP Meetings Classified To Keep Americans in the Dark | Global Research - 0 views

  • US Trade Representative Michael Froman is drawing fire from Congressional Democrats for the Obama administration’s continued imposition of secrecy surrounding the Trans-Pacific Partnership. Democratic lawmakers say tightly-controlled briefings on the Trans-Pacific Partnership deal are aimed at keeping US constituents ignorant about what’s at stake. Lawmakers in Congress who remain wary of the Trans-Pacific Partnership (TPP) trade agreement are raising further objections this week to the degree of secrecy surrounding briefings on the deal, with some arguing that the main reason at least one meeting has been registered “classified” is to help keep the American public ignorant about giveaways to corporate interests and its long-term implications.
  • Among its other critics, Sen. Elizabeth Warren has slammed the idea of ISDS provisions as a surrender of democratic ideals to corporate interests. According to Warren, ISDS would simply “tilt the playing field in the United States further in favor of big multinational corporations.” By having unchallenged input on secretive TPP talks, Warren argued last month, these large companies and financial interests “are increasingly realizing this is an opportunity to gut U.S. regulations they don’t like.” According to Grayson, putting Wednesday’s ISDS briefing in a classified setting “is part of a multi-year campaign of deception and destruction. Why do we classify information? It’s to keep sensitive information out of the hands of foreign governments. In this case, foreign governments already have this information. They’re the people the administration is negotiating with. The only purpose of classifying this information is to keep it from the American people.”
  • “I’m not happy about it,” Rep. Alan Grayson (D-Fla.) told the Huffington Post, referring to the briefing with Froman and Labor Secretary Thomas Perez on Wednesday. The meeting—focused on the section of the TPP that deals with the controversial ‘Investor-State Dispute Settlement’ (ISDS) mechanism—has been labeled “classified,” so that lawmakers and any of their staff who attend will be barred, under threat of punishment, of revealing what they learn with constituents or outside experts. According to the Huffington Post: ISDS has been part of U.S. free trade agreements since NAFTA was signed into law in 1993, and has become a particularly popular tool for multinational firms over the past few years. But while the topic remains controversial, particularly with Democrats, many critics of the administration emphasize that applying national security-style restrictions on such information is an abuse of the classified information system. An additional meeting earlier on Wednesday on currency manipulation with Froman and Treasury Secretary Jack Lew is not classified.
  • ...1 more annotation...
  • As The Hill reports: Members will be allowed to attend the briefing on the proposed trade pact with 12 Latin American and Asian countries with one staff member who possesses an “active Secret-level or high clearance” compliant with House security rules. Rep. Rosa DeLauro (D-Conn.) told The Hill that the administration is being “needlessly secretive.” “Even now, when they are finally beginning to share details of the proposed deal with members of Congress, they are denying us the ability to consult with our staff or discuss details of the agreement with experts,” DeLauro told The Hill. Rep. Lloyd Doggett (D-Texas) condemned the classified briefing. “Making it classified further ensures that, even if we accidentally learn something, we cannot share it. What is [Froman] working so hard to hide? What is the specific legal basis for all this senseless secrecy?” Doggett said to The Hill. “Open trade should begin with open access,” Doggett said. “Members expected to vote on trade deals should be able to read the unredacted negotiating text.”
Paul Merrell

Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments - 0 views

  • In September of last year, we noted that Facebook representatives were meeting with the Israeli government to determine which Facebook accounts of Palestinians should be deleted on the ground that they constituted “incitement.” The meetings — called for and presided over by one of the most extremist and authoritarian Israeli officials, pro-settlement Justice Minister Ayelet Shaked — came after Israel threatened Facebook that its failure to voluntarily comply with Israeli deletion orders would result in the enactment of laws requiring Facebook to do so, upon pain of being severely fined or even blocked in the country. The predictable results of those meetings are now clear and well-documented. Ever since, Facebook has been on a censorship rampage against Palestinian activists who protest the decades-long, illegal Israeli occupation, all directed and determined by Israeli officials. Indeed, Israeli officials have been publicly boasting about how obedient Facebook is when it comes to Israeli censorship orders
  • Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government.
  • What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced? As the ACLU’s Jennifer Granick told the Times: It’s not a law that appears to be written or designed to deal with the special situations where it’s lawful or appropriate to repress speech. … This sanctions law is being used to suppress speech with little consideration of the free expression values and the special risks of blocking speech, as opposed to blocking commerce or funds as the sanctions was designed to do. That’s really problematic.
  • ...3 more annotations...
  • As is always true of censorship, there is one, and only one, principle driving all of this: power. Facebook will submit to and obey the censorship demands of governments and officials who actually wield power over it, while ignoring those who do not. That’s why declared enemies of the U.S. and Israeli governments are vulnerable to censorship measures by Facebook, whereas U.S and Israeli officials (and their most tyrannical and repressive allies) are not
  • All of this illustrates that the same severe dangers from state censorship are raised at least as much by the pleas for Silicon Valley giants to more actively censor “bad speech.” Calls for state censorship may often be well-intentioned — a desire to protect marginalized groups from damaging “hate speech” — yet, predictably, they are far more often used against marginalized groups: to censor them rather than protect them. One need merely look at how hate speech laws are used in Europe, or on U.S. college campuses, to see that the censorship victims are often critics of European wars, or activists against Israeli occupation, or advocates for minority rights.
  • It’s hard to believe that anyone’s ideal view of the internet entails vesting power in the U.S. government, the Israeli government, and other world powers to decide who may be heard on it and who must be suppressed. But increasingly, in the name of pleading with internet companies to protect us, that’s exactly what is happening.
Paul Merrell

Nearly Everyone In The U.S. And Canada Just Had Their Private Cell Phone Location Data ... - 0 views

  • A company by the name of LocationSmart isn't having a particularly good month. The company recently received all the wrong kind of attention when it was caught up in a privacy scandal involving the nation's wireless carriers and our biggest prison phone monopoly. Like countless other companies and governments, LocationSmart buys your wireless location data from cell carriers. It then sells access to that data via a portal that can provide real-time access to a user's location via a tailored graphical interface using just the target's phone number.
  • Theoretically, this functionality is sold under the pretense that the tool can be used to track things like drug offenders who have skipped out of rehab. And ideally, all the companies involved were supposed to ensure that data lookup requests were accompanied by something vaguely resembling official documentation. But a recent deep dive by the New York Times noted how the system was open to routine abuse by law enforcement, after a Missouri sheriff used the system to routinely spy on judges and fellow law enforcement officers without much legitimate justification (or pesky warrants): "The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show. Between 2014 and 2017, the sheriff, Cory Hutcheson, used the service at least 11 times, prosecutors said. His alleged targets included a judge and members of the State Highway Patrol. Mr. Hutcheson, who was dismissed last year in an unrelated matter, has pleaded not guilty in the surveillance cases." It was yet another example of the way nonexistent-to-lax consumer privacy laws in the States (especially for wireless carriers) routinely come back to bite us. But then things got worse.
  • Driven by curiosity in the wake of the Times report, a PhD student at Carnegie Mellon University by the name of Robert Xiao discovered that the "try before you buy" system used by LocationSmart to advertise the cell location tracking system contained a bug. A bug so bad that it exposed the data of roughly 200 million wireless subscribers across the United States and Canada (read: nearly everybody). As we see all too often, the researcher highlighted how the security standards in place to safeguard this data were virtually nonexistent: "Due to a very elementary bug in the website, you can just skip that consent part and go straight to the location," said Robert Xiao, a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University, in a phone call. "The implication of this is that LocationSmart never required consent in the first place," he said. "There seems to be no security oversight here."
  • ...1 more annotation...
  • Meanwhile, none of the four major wireless carriers have been willing to confirm any business relationship with LocationSmart, but all claim to be investigating the problem after the week of bad press. That this actually results in substantive changes to the nation's cavalier treatment of private user data is a wager few would be likely to make.