
Open Web: Group items tagged "writing"


Gary Edwards

Overview of apps for Office 2013 - 0 views

  •  
    MSOffice is now "Web ready". The Office apps can host HTML5/JavaScript apps based on a simple Web-page model. Think of the Office apps as being fitted with a browser, and of developers writing extensions that run in that browser using HTML5 and JavaScript. Microsoft provides an Office.js library and a Web-based developer toolset called "Napa" Office 365 Development Tools, with lots of project templates. The key MSOffice apps are Word, Excel, PowerPoint and Outlook; you can develop for Office or SharePoint, and apps can be hosted on any Web server.
    excerpt: Microsoft Office 2013 Developer Environment with HTML5, XML and JavaScript. Office.js library. "This documentation is preliminary and is subject to change. Published: July 16, 2012. Learn how to use apps for Office to extend your Office 2013 Preview applications. This new Office solution type, apps for Office, is built on web technologies like HTML, CSS, JavaScript, REST, OData, and OAuth. It provides new experiences within Office applications by surfacing web technologies and cloud services right within Office documents, email messages, meeting requests, and appointments. Applies to: Excel Web App Preview | Exchange 2013 Preview | Outlook 2013 Preview | Outlook Web App Preview | Project Professional 2013 Preview | Word 2013 Preview | Excel 2013 Preview. In this article: What is an app for Office? | Anatomy of an app for Office | Types of apps for Office | What can an app for Office do? | Understanding the runtime | Development basics | Create your first app for Office | Publishing basics | Scenarios | Components of an app for Office solution | Software requirements"
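    To make the model concrete, here is a minimal sketch of what such an extension's script might look like, assuming a task pane page that loads the hosted Office.js library; the calls follow the 2013-era apps-for-Office API:

      // The page loads the library first, e.g.:
      //   <script src="https://appsforoffice.microsoft.com/lib/1.0/hosted/office.js"></script>

      // Office.initialize fires once the host app (Word, Excel, ...) is ready.
      Office.initialize = function (reason) {
        // Read the user's current selection as plain text.
        Office.context.document.getSelectedDataAsync(
          Office.CoercionType.Text,
          function (asyncResult) {
            if (asyncResult.status === Office.AsyncResultStatus.Succeeded) {
              console.log("Selected text: " + asyncResult.value);
            } else {
              console.log("Error: " + asyncResult.error.message);
            }
          }
        );
      };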
Paul Merrell

Wikimedia and Twitter Bots Are Breaking the News | Motherboard - 0 views

  • We already knew that bots were writing news content, automating narrative stories from data-rich topics like sports scores and financial markets. Now, robo-reporters are starting to get scoops. They're not just writing stories; they're breaking them. Thomas Steiner, a Google engineer in Germany, designed an algorithm that covers the news as it's breaking by monitoring activity on Wikipedia (old school journalists everywhere are wincing) and watching for spikes in editing activity. The idea is that if something big is happening—especially if it’s a global event—multiple editors around the world will be updating Wikipedia and Wikidata pages at once, in different languages. That spike in activity tips off the bot to the story. According to Steiner, his news bot spotted major stories like the Boston Marathon bombing and the disappearance of Malaysia Airlines MH370.
  • The bare-bones site tracking real-time editing is called Wikipedia Live Monitor. It was first created last year, and now Steiner has extended his robo-news operation to Twitter. The bot mines the social media site for a particular search term triggered by the Wikipedia activity and pulls out all relevant photos to illustrate the story.
  • You can check out the visual news events on the Twitter bot account @mediagalleries. The earliest are from a case study Steiner did to test out the program during the Olympics in Sochi. More recently, there are galleries illustrating major sports events, and the latest updates to flight MH370 and the conflict in Crimea.
  • As you can see, it's still a rudimentary process, hardly about to put the staff of the New York Times out of business. But it says a lot about the direction automating the news is heading in.
  • Still, the Fourth Estate is one of the more disconcerting industries being taken over by robots, and not just because it’s my own livelihood. And it’s more common than you think; Kristian Hammond, cofounder of Narrative Science, a company that's been automating content for several years now, predicted that 90 percent of the news could be written by computers by 2030.
Paul Merrell

Study: Surveillance will cost US tech sector more than $35B by 2016 | TheHill - 0 views

  • A new study says that the U.S. tech industry is likely to lose more than $35 billion from foreign customers by 2016 because of concerns over government surveillance. “In short, foreign customers are shunning U.S. companies,” the authors of a new study from the Information Technology and Innovation Foundation write. “The U.S. government’s failure to reform many of the NSA’s surveillance programs has damaged the competitiveness of the U.S. tech sector and cost it a portion of the global market share,” they said. The think tank’s report found that the cost to the tech sector associated with ongoing concerns over surveillance programs run out of the U.S. was likely to “far exceed” $35 billion by 2016, an earlier estimate set by the group.
  • The group said that lawmakers must enact additional reforms to surveillance policy if they wish to help the tech sector regain the trust of foreign customers. That includes opposing “backdoors,” which allow law enforcement to access otherwise encrypted data, and signing off on trade agreements, including the controversial Trans-Pacific Partnership, that “ban digital protectionism.” The study’s authors found that the revelations about broad U.S. surveillance programs acted as a justification for foreign policymakers to enact protectionist policies aimed at aiding their own domestic technology sectors. Foreign companies have also used the information about U.S. surveillance programs to their advantage. “Some European companies have begun to highlight where their digital services are hosted as an alternative to U.S. companies,” the authors write.
  • American companies, they found, have lost contracts to foreign competitors over fears about mass surveillance. Earlier this month, President Obama signed the USA Freedom Act, a bill that reformed the three Patriot Act provisions that authorized the bulk, warrantless collection of Americans’ phone records. The bill was widely supported by technology companies, including giants like Apple and Google.
Gary Edwards

XML Production Workflows? Start with the Web and XHTML - 1 views

  • Challenges: Some Ugly Truths
    The challenges of building—and living with—an XML workflow are clear enough. The return on investment is a long-term proposition. Regardless of the benefits XML may provide, the starting reality is that it represents a very different way of doing things than the one we are familiar with. The Word Processing and Desktop Publishing paradigm, based on the promise of onscreen, WYSIWYG layout, is so dominant as to be practically inescapable. It has proven really hard to get from here to there, no matter how attractive XML might be on paper.
    A considerable amount of organizational effort and labour must be expended up front in order to realize the benefits. This is why XML is often referred to as an “investment”: you sink a bunch of time and money up front, and realize the benefits—greater flexibility, multiple output options, searching and indexing, and general futureproofing—later, over the long haul. It is not a short-term return proposition. And, of course, the returns you are able to realize from your XML investment are commensurate with what you put in up front: fine-grained, semantically rich tagging is going to give you more potential for searchability and recombination than a looser, more general-purpose approach, but it sure costs more. For instance, the Text Encoding Initiative (TEI) is the grand example of pouring enormous amounts of energy into the up-front tagging, with a very open-ended set of possibilities down the line. TEI helpfully defines a level to which most of us do not have to aspire.[5]
    But understanding this on a theoretical level is only part of the challenge. There are many practical issues that must be addressed. Software and labour are two of the most critical. How do you get the content into XML in the first place? Unfortunately, despite two decades of people doing SGML and XML, this remains an ugly question.
  • Practical Challenges
    In 2009, there is still no truly likeable—let alone standard—editing and authoring software for XML. For many (myself included), the high-water mark here was Adobe’s FrameMaker, substantially developed by the late 1990s. With no substantial market for it, it is relegated today mostly to the tech writing industry, unavailable for the Mac, and just far enough afield from the kinds of tools we use today that its adoption represents a significant hurdle. And FrameMaker was the best of the breed; most of the other software in decent circulation are programmers’ tools—the sort of things that, as Michael Tamblyn pointed out, encourage editors to drink at their desks.
    The labour question represents a stumbling block as well. The skill-sets and mind-sets that effective XML editors need have limited overlap with those needed by literary and more traditional production editors. The need to think of documents as machine-readable databases is not something that comes naturally to folks steeped in literary culture. In combination with the sheer time and effort that rich tagging requires, many publishers simply outsource the tagging to India, drawing a division of labour that spans oceans, to put it mildly.
    Once you have XML content, then what do you do with it? How do you produce books from it? Presumably, you need to be able to produce print output as well as digital formats. But while the latter are new enough to be generally XML-friendly (e-book formats being largely XML based, for instance), there aren’t any straightforward, standard ways of moving XML content into the kind of print production environments we are used to seeing. This isn’t to say that there aren’t ways of getting print—even very high-quality print—output from XML, just that most of them involve replacing your prepress staff with Java programmers.
  • Why does this have to be so hard? It’s not that XML is new, or immature, or untested. Remember that the basics have been around, and in production, since the early 1980s at least. But we have to take account of a substantial and long-running cultural disconnect between traditional editorial and production processes (the ones most of us know intimately) and the ways computing people have approached things. Interestingly, this cultural divide looked rather different in the 1970s, when publishers were looking at how to move to digital typesetting. Back then, printers and software developers could speak the same language. But that was before the ascendancy of the Desktop Publishing paradigm, which computerized the publishing industry while at the same time isolating it culturally. Those of us who learned how to do things the Quark way or the Adobe way had little in common with people who programmed databases or document-management systems. Desktop publishing technology isolated us in a smooth, self-contained universe of toolbars, grid lines, and laser proofs. So, now that the reasons to get with this program, XML, loom large, how can we bridge this long-standing divide?
  • Using the Web as a Production Platform
    The answer, I think, is right in front of you. The bridge is the Web, a technology and platform that is fundamentally based on XML, and which many publishers are by now comfortably familiar with. Perhaps not entirely comfortably, but at least most publishers are already working with the Web; they already either know or have on staff people who understand it and can work with it. The foundation of our argument is this: rather than looking at jumping to XML in its full, industrial complexity, which seems to be what the O'Reilly-backed StartWithXML initiative[6] is suggesting, publishers instead leverage existing tools and technologies—starting with the Web—as a means of getting XML workflows in place. This means making small investments and working with known tools rather than spending tens of thousands of dollars on XML software and rarefied consultants. It means re-thinking how the existing pieces of the production toolchain fit together; re-thinking the existing roles of software components already in use. It means, fundamentally, taking the Web seriously as a content platform, rather than thinking of it as something you need to get content out to, somehow. If nothing else, the Web represents an opportunity to think about editorial and production from outside the shrink-wrapped Desktop Publishing paradigm.
  • Is the Web made of Real XML? At this point some predictable objections can be heard: wait a moment, the Web isn’t really made out of XML; the HTML that makes up most of the Web is at best the bastard child of SGML, and it is far too flaky/unstructured/underpowered to be taken seriously. We counter by arguing that although HTML on the Web exists in a staggering array of different incarnations, and that the majority of it is indeed an unstructured mess, this does not undermine the general principle that basic, ubiquitous Web technologies can make a solid platform for content management, editorial process, and production workflow.
  • With the advent of a published XML standard in the late 1990s came the W3C’s adoption of XHTML: the realization of the Web’s native content markup as a proper XML document type. Today, its acceptance is almost ubiquitous, even while the majority of actual content out there may not be strictly conforming. The more important point is that most contemporary Web software, from browsers to authoring tools to content management systems (from blogs to enterprise systems), are capable of working with clean, valid XHTML. Or, to put the argument the other way around, clean, valid XHTML content plays absolutely seamlessly with everything else on the Web.[7]
  • The objection which follows, then, will be that even if we grant that XHTML is a real XML document type, that it is underpowered for “serious” content because it is almost entirely presentation (formatting) oriented; it lacks any semantic depth. In XHTML, a paragraph is a paragraph is a paragraph, as opposed to a section or an epigraph or a summary.
  • In contrast, more “serious” XML document types like DocBook[8] or DITA-derived schemas[9] are capable of making semantic distinctions about content chunks at a fine level of granularity and with a high degree of specificity.
  • So there is an argument for recalling the 80:20 rule here. If XHTML can provide 80% of the value with just 20% of the investment, then what exactly is the business case for spending the other 80% to achieve that last 20% of value? We suspect the ratio is actually quite a bit steeper than 80:20 for most publishers.
  • Furthermore, just to get technical for a moment, XHTML is extensible in a fairly straightforward way, through the common “class” attribute on each element. Web developers have long leveraged this kind of extensibility in the elaboration of “microformats” for semantic-web applications.[10] There is no reason why publishers shouldn’t think to use XHTML’s simple extensibility in a similar way for their own ends.
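    As a sketch of what that extensibility looks like in practice (the class vocabulary below is invented for illustration; any in-house convention would do):

      <!-- Ordinary XHTML, with publisher-defined semantics layered on via class -->
      <div class="chapter">
        <p class="epigraph">All the world's a stage...</p>
        <p class="summary">This chapter surveys the XML production landscape.</p>
        <p>Ordinary body paragraphs need no class at all.</p>
      </div>

    A downstream transform can then treat class="epigraph" exactly as a richer schema would treat an epigraph element.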
  • XHTML, on the other hand, is supported by a vast array of quotidian software, starting with the ubiquitous Web browser. For this very reason, XHTML is in fact employed as a component part of several more specialized document types (ONIX and ePub among them).
  • Why re-invent a general-purpose prose representation when XHTML already does the job?
  • It is worth pausing for a moment to consider the role of XHTML in the ePub standard for ebook content. An ePub file is, anatomically, a simply disguised zip archive. Inside the zip archive are a few standard component parts: there are specialized files that declare metadata about the book, and about the format of the book. And then there is the book’s content, represented in XHTML. An ePub book is a Web page in a wrapper.
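    Unzipping a typical ePub makes that anatomy visible. Only the mimetype file and META-INF/container.xml are fixed by the spec; the OEBPS folder and file names below are common convention:

      book.epub  (a zip archive)
      ├── mimetype                 # "application/epub+zip"
      ├── META-INF/
      │   └── container.xml        # points at the package file below
      └── OEBPS/
          ├── content.opf          # metadata, manifest, and reading order
          ├── toc.ncx              # table of contents for navigation
          ├── styles.css           # styling
          └── chapter-01.xhtml     # the book's content: plain XHTML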
  • To sum up the general argument: the Web as it already exists presents incredible value to publishers, as a platform for doing XML content management with existing (and often free) tools, and without having to go blindly into the unknown. At this point, we can offer a few design guidelines: prefer existing and/or ubiquitous tools over specialized ones wherever possible; prefer free software over proprietary systems where possible; prefer simple tools controlled and coordinated by human beings over fully automated (and therefore complex) systems; play to our strengths: use Web software for storing and managing content, use layout software for layout, and keep editors and production people in charge of their own domains.
  • Putting the Pieces Together: A Prototype
  • At the SFU Master of Publishing Program, we have been chipping away at this general line of thinking for a few years. Over that time, Web content management systems have been getting more and more sophisticated, all the while getting more streamlined and easier to use. (NB: if you have a blog, you have a Web content management system.) The Web is beginning to be recognized as a writing and editing environment used by millions of people. And the ways in which content is represented, stored, and exchanged online have become increasingly robust and standardized.
  • The missing piece of the puzzle has been print production: how can we move content from its malleable, fluid form on line into the kind of high-quality print production environments we’ve come to expect after two decades of Desktop Publishing?
  • Anyone who has tried to print Web content knows that the existing methods leave much to be desired (hyphenation and justification, for starters). In the absence of decent tools for this, most publishers quite naturally think of producing the print content first, and then think about how to get material onto the Web for various purposes. So we tend to export from Word, or from Adobe, as something of an afterthought.
  • While this sort of works, it isn’t elegant, and it completely ignores the considerable advantages of Web-based content management.
  • Content managed online is stored in one central location, accessible simultaneously to everyone in your firm, available anywhere you have an Internet connection, and usually exists in a much more fluid format than Word files. If only we could manage the editorial flow online, and then go to print formats at the end, instead of the other way around. At SFU, we made several attempts to make this work by way of the supposed “XML import” capabilities of various Desktop Publishing tools, without much success.[12]
  • In the winter of 2009, Adobe solved this part of the problem for us with the introduction of its Creative Suite 4. What CS4 offers is the option of a complete XML representation of an InDesign document: what Adobe calls IDML (InDesign Markup Language).
  • The IDML file format is—like ePub—a simply disguised zip archive that, when unpacked, reveals a cluster of XML files that represent all the different facets of an InDesign document: layout spreads, master pages, defined styles, colours, and of course, the content.
  • IDML is a well thought-out XML standard that achieves two very different goals simultaneously: it preserves all of the information that InDesign needs to do what it does; and it is broken up in a way that makes it possible for mere mortals (or at least our Master of Publishing students) to work with it.
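    Unpacking an .idml file shows that same division of labour; the layout below is representative of what the archive typically contains:

      book.idml  (a zip archive)
      ├── designmap.xml            # ties the component parts together
      ├── MasterSpreads/           # master page definitions
      ├── Spreads/                 # layout geometry for each spread
      ├── Stories/                 # the text content, one XML file per story
      ├── Resources/               # Styles.xml, Fonts.xml, Graphic.xml, ...
      └── META-INF/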
  • What this represented to us in concrete terms was the ability to take Web-based content and move it into InDesign in a straightforward way, thus bridging Web and print production environments using existing tools and skillsets, with a little added help from free software.
  • We would take clean XHTML content, transform it to IDML-marked content, and merge that with nicely designed templates in InDesign.
  • The result is an almost push-button publication workflow that yields a nice, familiar InDesign document that fits straight into the way publishers actually do production.
  • Tracing the steps
    To begin with, we worked backwards, moving the book content back to clean XHTML.
  • The simplest method for this conversion—and if you want to create Web content, this is an excellent route—was to use Adobe’s “Export to Digital Editions” option, which creates an ePub file.
  • Recall that ePub is just XHTML in a wrapper, so within the ePub file was a relatively clean XHTML document. It was somewhat cleaner (that is, the XHTML tagging was simpler and less cluttered) than InDesign’s other Web-oriented exports, possibly because Digital Editions is a well understood target, compared with somebody’s website.
  • In order to achieve our target of clean XHTML, we needed to do some editing; the XHTML produced by InDesign’s “Digital Editions” export was presentation-oriented. For instance, bulleted list items were tagged as paragraphs, with a class attribute identifying them as list items. Using the search-and-replace function, we converted such structures to proper XHTML list and list-item elements. Our guiding principle was to make the XHTML as straightforward as possible, not dependent on any particular software to interpret it.
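    The cleanup was of this flavour (the class name is invented; InDesign's actual export classes vary):

      <!-- Before: presentation-oriented export -->
      <p class="bullet-item">First point</p>
      <p class="bullet-item">Second point</p>

      <!-- After: structural XHTML -->
      <ul>
        <li>First point</li>
        <li>Second point</li>
      </ul>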
  • We broke the book’s content into individual chapter files; each chapter could then carry its own basic metadata, and the pages conveniently fit our Web content management system (which is actually just a wiki). We assembled a dynamically generated table of contents for the 12 chapters, and created a cover page. Essentially, the book was entirely Web-based at this point.
  • When the book chapters are viewed online, they are formatted via a CSS2 stylesheet that defines a main column for content as well as dedicating screen real estate for navigational elements. We then created a second template to render the content for exporting; this was essentially a bare-bones version of the book with no navigation and minimal styling. Pages (or even the entire book) can be exported (via the “Save As...” function in a Web browser) for use in either print production or ebook conversion. At this point, we required no skills beyond those of any decent Web designer.
  • Integrating with CS4 for Print
    Adobe’s IDML language defines elements specific to InDesign; there is nothing in the language that looks remotely like XHTML. So a mechanical transformation step is needed to convert the XHTML content into something InDesign can use. This is not as hard as it might seem.
  • Both XHTML and IDML are composed of straightforward, well-documented structures, and so transformation from one to the other is, as they say, “trivial.” We chose to use XSLT (Extensible Stylesheet Language Transforms) to do the work. XSLT is part of the overall XML specification, and thus is very well supported in a wide variety of tools. Our prototype used a scripting engine called xsltproc, a nearly ubiquitous piece of software that we found already installed as part of Mac OS X (contemporary Linux distributions also have this as a standard tool), though any XSLT processor would work.
  • In other words, we don’t need to buy InCopy, because we just replaced it with the Web. Our wiki is now plugged directly into our InDesign layout. It even automatically updates the InDesign document when the content changes. Credit is due at this point to Adobe: this integration is possible because of the open file format in the Creative Suite 4.
  • We wrote an XSLT transformation script[18] that converted the XHTML content from the Web into an InCopy ICML file. The script itself is less than 500 lines long, and was written and debugged over a period of about a week by amateurs (again, the people named at the start of this article). The script runs in a couple of seconds, and the resulting .icml file can then be “placed” directly into an InDesign template. The ICML file references an InDesign stylesheet, so the template file can be set up with a house-styled layout, master pages, and stylesheet definitions for paragraphs and character ranges.
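    To give a feel for the approach - this is not the authors' actual script, just a sketch in the same spirit, and the "Body" style name is an assumption - a single XSLT template suffices to map XHTML paragraphs onto ICML paragraph ranges:

      <?xml version="1.0" encoding="UTF-8"?>
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:xhtml="http://www.w3.org/1999/xhtml">

        <!-- Map each XHTML paragraph onto an ICML paragraph range -->
        <xsl:template match="xhtml:p">
          <ParagraphStyleRange AppliedParagraphStyle="ParagraphStyle/Body">
            <CharacterStyleRange>
              <Content><xsl:value-of select="."/></Content>
            </CharacterStyleRange>
          </ParagraphStyleRange>
        </xsl:template>
      </xsl:stylesheet>

    Run with xsltproc, the conversion is a one-liner: xsltproc xhtml-to-icml.xsl chapter-01.xhtml > chapter-01.icml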
  • Rather than a public-facing website, our system relies on the Web as a content management platform—of course a public face could easily be added.
  • It should be noted that the Book Publishing 1 proof-of-concept was artificially complex; we began with a book laid out in InDesign and ended up with a look-alike book laid out in InDesign. But next time—for instance, when we publish Book Publishing 2—we can begin the process with the content on the Web, and keep it there throughout the editorial process. The book’s content could potentially be written and edited entirely online, as Web content, and then automatically poured into an InDesign template at proof time. “Just in time,” as they say. This represents an entirely new way of thinking of book production. With a Web-first orientation, it makes little sense to think of the book as “in print” or “out of print”—the book is simply available, in the first place online; in the second place in derivative digital formats; and third, but really not much more difficult, in print-ready format, via the usual InDesign CS print production system publishers are already familiar with.
  • Creating Ebook Files
    Creating electronic versions from XHTML source is vastly simpler than trying to generate these out of the existing print process. The ePub version is extremely easy to generate; so is online marketing copy or excerpts for the Web, since the content begins life Web-native.
  • Since an ePub file is essentially XHTML content in a special wrapper, all that is required is that we properly “wrap” our XHTML content. Ideally, the content in an ePub file is broken into chapters (as ours was) and a table of contents file is generated in order to allow easy navigation within an ebook reader. We used Julian Smart’s free tool eCub[19] to simply and automatically generate the ePub wrapper and the table of contents. The only custom development we did was to create a CSS stylesheet for the ebook so that headings and paragraph indents looked the way we wanted. Starting with XHTML content, creating ePub is almost too easy.
  • Today, we are able to put the process together using nothing but standard, relatively ubiquitous Web tools: the Web itself as an editing and content management environment, standard Web scripting tools for the conversion process, and the well-documented IDML file format to integrate the layout tool.
  • Our project demonstrates that Web technologies are indeed good enough to use in an XML-oriented workflow; more specialized and expensive options are not necessarily required. For massive-scale enterprise publishing, this approach may not offer enough flexibility, and the challenge of adding and extracting extra semantic richness may prove more trouble than it's worth.
  • But for smaller firms who are looking at the straightforward benefits of XML-based processes—single source publishing, online content and workflow management, open and accessible archive formats, greater online discoverability—here is a way forward.
  • The result is very simple and easy to use. Our demonstration requires that a production editor run the XSLT transformation script manually, but there is no reason why this couldn’t be built directly into the Web content management system so that exporting the content to print ran the transformation automatically. The resulting file would then be “placed” in InDesign and proofed.
  • The final piece of our puzzle, the ability to integrate print production, was made possible by Adobe's release of InDesign with an open XML file format. Since the Web's XHTML is also XML, it can be easily and confidently transformed to the InDesign format.
  • Such a workflow—beginning with the Web and exporting to print—is surely more in line with the way we will do business in the 21st century, where the Web is the default platform for reaching audiences, developing content, and putting the pieces together. It is time, we suggest, for publishers to re-orient their operations and start with the Web.
  •  
    I was looking for an answer to a problem Marbux had presented, and found this interesting article. The issue was that of the upcoming conversion of the Note Case Pro (NCP) layout engine to the WebKit layout engine, and what to do about the NCP document format. My initial reaction was to encode the legacy NCP document format in XML, and run an XSLT to a universal pivot format like TEI-XML. From there, the TEI-XML community would provide all the XSLT transformation routines for conversion to ODF, OOXML, XHTML, ePUB and HTML/CSS. Researching the problems one might encounter with this approach, I found this article. Fascinating stuff. My takeaway is that TEI-XML would not be as effective a "universal pivot point" as XHTML. Or perhaps, if NCP really wants to get aggressive: IDML - InDesign Markup Language. As an afterthought, I was thinking that an alternative title to this article might have been "Working with the Web as the Center of Everything".
Paul Merrell

WikiLeaks' Julian Assange warns: Google is not what it seems - 0 views

  • Back in 2011, Julian Assange met up with Eric Schmidt for an interview that he considers the best he’s ever given. That doesn’t change, however, the opinion he now has about Schmidt and the company he represents, Google. In fact, the WikiLeaks leader doesn’t believe in the famous “Don’t Be Evil” mantra that Google has been preaching for years. Assange thinks both Schmidt and Google are at the exact opposite end of the spectrum. “Nobody wants to acknowledge that Google has grown big and bad. But it has. Schmidt’s tenure as CEO saw Google integrate with the shadiest of US power structures as it expanded into a geographically invasive megacorporation. But Google has always been comfortable with this proximity,” Assange writes in an opinion piece for Newsweek.
  • “Long before company founders Larry Page and Sergey Brin hired Schmidt in 2001, their initial research upon which Google was based had been partly funded by the Defense Advanced Research Projects Agency (DARPA). And even as Schmidt’s Google developed an image as the overly friendly giant of global tech, it was building a close relationship with the intelligence community,” Assange continues. Throughout the lengthy article, Assange goes on to explain how the 2011 meeting came to be and talks about the people the Google executive chairman brought along - Lisa Shields, then vice president of the Council on Foreign Relations, Jared Cohen, who would later become the director of Google Ideas, and Scott Malcomson, the book’s editor, who would later become the speechwriter and principal advisor to Susan Rice. “At this point, the delegation was one part Google, three parts US foreign-policy establishment, but I was still none the wiser.” Assange goes on to explain the work Cohen was doing for the government prior to his appointment at Google and just how Schmidt himself plays a bigger role than previously thought. In fact, he says that his original image of Schmidt, as a politically unambitious Silicon Valley engineer, “a relic of the good old days of computer science graduate culture on the West Coast,” was wrong.
  • However, Assange concedes that such a person is not the sort who attends Bilderberg conferences, who regularly visits the White House, and who delivers speeches at the Davos Economic Forum. He claims that Schmidt’s emergence as Google’s “foreign minister” did not come out of nowhere, but it was “presaged by years of assimilation within US establishment networks of reputation and influence.” Assange makes further accusations that, well before Prism had even been dreamed of, the NSA was already systematically violating the Foreign Intelligence Surveillance Act under its director at the time, Michael Hayden. He states, however, that during the same period, namely around 2003, Google was accepting NSA money to provide the agency with search tools for its rapidly-growing database of information. Assange continues by saying that in 2008, Google helped launch the NGA spy satellite, the GeoEye-1, into space and that the search giant shares the photographs from the satellite with the US military and intelligence communities. Later on, in 2010, after the Chinese government was accused of hacking Google, the company entered into a “formal information-sharing” relationship with the NSA, which would allow the NSA’s experts to evaluate the vulnerabilities in Google’s hardware and software.
  • “Around the same time, Google was becoming involved in a program known as the ‘Enduring Security Framework’ (ESF), which entailed the sharing of information between Silicon Valley tech companies and Pentagon-affiliated agencies at network speed. Emails obtained in 2014 under Freedom of Information requests show Schmidt and his fellow Googler Sergey Brin corresponding on first-name terms with NSA chief General Keith Alexander about ESF,” Assange writes. Assange seems to have a lot of backing for his statements, providing links left and right, which people can go check on their own.
  •  
    The "opinion piece for Newsweek" is an excerpt from Assange's new book, When Google met Wikileaks.  The chapter is well worth the read. http://www.newsweek.com/assange-google-not-what-it-seems-279447
Paul Merrell

Join The Internet Vote - 0 views

  • Congress is about to introduce a bill to fast track a secret deal that could lead to global censorship. It’s called the Trans-Pacific Partnership (TPP). We think Internet users everywhere should have a say in decisions that affect the Internet — but if “Fast Track” legislation passes, there is no chance that the public will see the text before the deal is approved. Join the Internet Vote on April 23rd and let’s make it clear to DC how we’re voting: against Fast Track and against Internet censorship. (Learn More)
  •  
    Sign up (email address) for updates on a monumental lobbying effort coming up in the next few days when Congress comes back into session and the legislation to "Fast Track" the TPP *and all future trade agreements* is introduced. From leaked draft portions, we know that the TPP brings us internet censorship and a mass of copyright law changes that have the giant intellectual-property corporate folk drooling at the mouth, because they helped write it while the public was excluded. This is your chance to help end secret trade agreements that the public doesn't even get to see until they have already been made into law.
Paul Merrell

Joint - Dear Colleague Letter: Electronic Book Readers - 1 views

  • U.S. Department of Justice, Civil Rights Division / U.S. Department of Education, Office for Civil Rights
  •  
    June 29, 2010 Dear College or University President: We write to express concern on the part of the Department of Justice and the Department of Education that colleges and universities are using electronic book readers that are not accessible to students who are blind or have low vision and to seek your help in ensuring that this emerging technology is used in classroom settings in a manner that is permissible under federal law. A serious problem with some of these devices is that they lack an accessible text-to-speech function. Requiring use of an emerging technology in a classroom environment when the technology is inaccessible to an entire population of individuals with disabilities - individuals with visual disabilities - is discrimination prohibited by the Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973 (Section 504) unless those individuals are provided accommodations or modifications that permit them to receive all the educational benefits provided by the technology in an equally effective and equally integrated manner. ... The Department of Justice recently entered into settlement agreements with colleges and universities that used the Kindle DX, an inaccessible, electronic book reader, in the classroom as part of a pilot study with Amazon.com, Inc. In summary, the universities agreed not to purchase, require, or recommend use of the Kindle DX, or any other dedicated electronic book reader, unless or until the device is fully accessible to individuals who are blind or have low vision, or the universities provide reasonable accommodation or modification so that a student can acquire the same information, engage in the same interactions, and enjoy the same services as sighted students with substantially equivalent ease of use. The texts of these agreements may be viewed on the Department of Justice's ADA Web site, www.ada.gov. (To find these settlemen
Gary Edwards

Cloudy Battle in Los Angeles: Microturf vs. Googzilla -- Redmond Developer News - 0 views

  •  
    Talk about a game changer: Excerpt:  An epic battle is brewing out West with much more than a lucrative technology contract at stake: Microsoft Office or Google's cloud? As the Los Angeles Times reported yesterday, Microsoft and Google are bidding for a $7.25 million contract to replace the city of Los Angeles' outdated email system. Los Angeles put out a call for bids in 2008. "Google Apps got the nod because city administrators believed it would be cheaper and less labor-intensive," writes LA Times reporter David Sarno. We all knew this day of reckoning was coming. For Microsoft, the fight to hold on to its Office base is on. Google Apps, the Web-based office suite that includes the viral Gmail, promises less overhead and potentially big savings to fiscally strapped cities, corporations and college campuses. In addition to dispatching teams of lobbyists, both Steve Ballmer and Eric Schmidt have offered to put in appearances at city hall, if city officials think it will help, according to a city councilman quoted in the article.
Gary Edwards

Diary Of An x264 Developer » Flash, Google, VP8, and the future of internet v... - 0 views

  •  
    In-depth technical discussion about Flash, HTML5, H.264, and Google's VP8. Excellent. Read the comments. Bottom line - Google has the juice to put Flash and H.264 in the dirt. The YouTube acquisition turns out to be very strategic. excerpt: The internet has been filled for quite some time with an enormous number of blog posts complaining about how Flash sucks - so much that it's sounding as if the entire internet is crying wolf. But, of course, despite the incessant complaining, they're right: Flash has terrible performance on anything other than Windows x86 and Adobe doesn't seem to care at all. But rather than repeat this ad nauseam, let's be a bit more intellectual and try to figure out what happened. Flash became popular because of its power and flexibility. At the time it was the only option for animated vector graphics and interactive content (stuff like VRML hardly counts). Furthermore, before Flash, the primary video options were Windows Media, Real, and Quicktime: all of which were proprietary, had no free software encoders or decoders, and (except for Windows Media) required the user to install a clunky external application, not merely a plugin. Given all this, it's clear why Flash won: it supported open multimedia formats like H.263 and MP3, used an ultra-simple container format that anyone could write (FLV), and worked far more easily and reliably than any alternative. Thus, Adobe (actually, at the time, Macromedia) got their 98% install base. And with that, they began to become complacent. Any suggestion of a competitor was immediately shrugged off; how could anyone possibly compete with Adobe, given their install base? It'd be insane, nobody would be able to do it. They committed the cardinal sin of software development: believing that a competitor being better is excusable. At x264, if we find a competitor that does something better, we immediately look into trying to put ourselves back on top. This is why
Gary Edwards

Google acquisitions may signal big push against Microsoft Office | VentureBeat - 0 views

  •  
    Google has been making a number of acquisitions that are clearly Docs-related. Over the weekend, TechCrunch reported that the search giant is in the final stages of talks to acquire DocVerse, a startup that lets users collaborate around Office documents, for $25 million. The deal would also bring Google some key hires, since the startup's co-founders were managers on SharePoint, Microsoft's popular collaboration service. This follows the November acquisition of AppJet, a company founded by former Googlers that created a collaborative word processor. (It's worth noting that Google Docs itself was the offspring of several acquisitions, including Google's purchase of Writely.) Meanwhile, Google has been talking up the splash it wants Google Docs to make in 2010. Don Dodge, who just made the move from Microsoft to Google, recently told me, "2010 is going to be the year of Gmail and Google Docs and Google Apps." Even more concretely, Enterprise President Dave Girouard said last month that Docs will see 30 to 50 improvements over the next year, at which point big companies will be able to "get rid of Office if they choose to." Presumably features from AppJet and DocVerse will be among those improvements. I'd certainly be thrilled to see the battle between Office and Docs become a real competition, rather than upstart Google slowly chipping away at Microsoft's Office behemoth.
Gary Edwards

Productivity on Cloud - 0 views

  •  
    Office suites are now taking the cloud route and offering advanced services, luring partners with smart gain. By Varun Aggarwal. While all applications are moving to the cloud, there is no reason why the ubiquitous office productivity suites like MS Office or OpenOffice should stick to the desktop. Providing customers with a key set of capabilities, and a browser to aid easy access, makes complete sense. Take for instance a student working on a class paper. Writing in a Web browser might aid in sharing and incorporating constructive changes, but it is a cumbersome experience as compared to using Office on his PC. But by using a productivity suite online, he gets the best of both worlds.
Gary Edwards

Where is there an end of it? | Thomas Jefferson on Patents | Marbux on Document Format ... - 1 views

  •  
    Whether a patent constitutes "property" in the U.S. is an issue on which the Supreme Court has apparently never ruled. However, there is no question that the nation's founders viewed it only as a government-granted privilege, not a "property" right. The U.S. Supreme Court quoted Thomas Jefferson on the topic: Stable ownership is the gift of social law, and is given late in the progress of society. It would be curious then, if an idea, the fugitive fermentation of an individual brain, could, of natural right, be claimed in exclusive and stable property. If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property. Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society, without claim or complaint from any body. VI Writings of Thomas Jefferson, at 18
Gary Edwards

The State of the Internet Operating System - O'Reilly Radar - 0 views

  •  
    ... The Internet Operating System is an Information Operating System ... Search is key to managing and working "information" ... Media Access ... Communications ... Identity and the Social Graph ... Payment ... Advertising ... Location ... Activity Streams - "Attention" ... Time  ... Image and Speech Recognition ... Government Data ... The Browser Where is the "operating system" in all this? Clearly, it is still evolving. Applications use a hodgepodge of services from multiple different providers to get the information they need. But how different is this from PC application development in the early 1980s, when every application provider wrote their own device drivers to support the hodgepodge of disks, ports, keyboards, and screens that comprised the still emerging personal computer ecosystem? Along came Microsoft with an offer that was difficult to refuse: We'll manage the drivers; all application developers have to do is write software that uses the Win32 APIs, and all of the complexity will be abstracted away. This is the crux of my argument about the internet operating system. We are once again approaching the point at which the Faustian bargain will be made: simply use our facilities, and the complexity will go away. And much as happened during the 1980s, there is more than one company making that promise. We're entering a modern version of "the Great Game", the rivalry to control the narrow passes to the promised future of computing.
Gary Edwards

The real reason why Steve Jobs hates Flash - Charlie's Diary - 1 views

  • "Flash is a cross platform development tool. It is not Adobe's goal to help developers write the best iPhone, iPod and iPad apps. It is their goal to help developers write cross platform apps."
  • he really does not want cross-platform apps that might divert attention and energy away from his application ecosystem
  • This is why there's a stench of panic hanging over Silicon Valley. This is why Apple have turned into paranoid security Nazis, why HP have just ditched Microsoft from a forthcoming major platform and splurged a billion-plus on buying up a near-failure; it's why everyone is terrified of Google: The PC revolution is almost coming to an end, and everyone's trying to work out a strategy for surviving the aftermath.
  •  
    Excellent must read!  Best explanation of what is currently driving Silicon Valley.  Charlie puts all the pieces in context, provides expert perspective, and then pushes everything forward to describe a highly probable future.  MUST READ stuff! excerpts:  I've got a theory, and it's this: Steve Jobs believes he's gambling Apple's future - the future of a corporation with a market cap well over US $200Bn - on an all-or-nothing push into a new market. HP have woken up and smelled the forest fire, two or three years late; Microsoft are mired in a tar pit, unable to grasp that the inferno heading towards them is going to burn down the entire ecosystem in which they exist. There is the smell of panic in the air, and here's why ... We have known since the mid-1990s that the internet was the future of computing.  With increasing bandwidth, data doesn't need to be trapped in the hard drives of our desktop computers: data and interaction can follow us out into the world we live in. .....Wifi and 4G protocols will shortly be delivering 50-150mbps to whatever gizmo is in your pocket, over the air. ......  It's easier to lay a single fat fibre to a radio transciever station than it is to lay lots of thin fibres to everybody's front door.... Anyway, here's Steve Jobs' strategic dilemma in a nutshell: the PC industry as we have known it for a third of a century is beginning to die. PCs are becoming commodity items. The price of PCs and laptops is falling by about 50% per decade in real terms, despite performance simultaneously rising in real terms. The profit margin on a typical netbook or desktop PC is under 10%.  At the same time, wireless broadband is coming. As it does so, organizations and users will increasingly move their data out into the cloud (read: onto hordes of servers racked up high in anonymous data warehouses, owned and maintained by some large corporation like Google). Software will be delivered as a service to users wherever they are, via whatev
Gary Edwards

Google coding tool advances cloud computing | Deep Tech - CNET News - 0 views

  •  
    Google has released a programming tool to help move its Native Client project--and more broadly, its cloud-computing ambitions--from abstract idea to practical reality. The new Native Client software developer kit, though only a developer preview version, is designed to make it easier for programmers to use the Net giant's browser-boosting Native Client technology. "The Native Client SDK preview...includes just the basics you need to get started writing an app in minutes," Google programmer David Springer said Wednesday in a blog post announcing the SDK, a week before the developer-oriented Google I/O conference. "We'll be updating the SDK rapidly in the next few months."
Gary Edwards

Google's Ultra-Real-Time Messaging Tool Lives On - Technology Review - 0 views

  •  
    The company halted its work on Wave, but aspects of its radical approach to communication have been reincarnated for business collaboration. When Google Wave launched in 2009, the company suggested the program was a "new category" of communication because it combined the virtues of e-mail, instant messaging, and methods for sharing pictures, links, and other documents. Among its other features, Wave went a step beyond IM by letting people see what their message partners were writing as they typed it. That meant that the people on the receiving end of your messages would see characters appear onscreen even before you had finished formulating a sentence. It was a radical approach. I tried Wave myself and found it very distracting to watch people type, delete, retype, and misspell their thoughts. People I had persuaded to try it with me never signed in again, unsure as to how it was useful. We weren't alone in our confusion: last year, Google announced it would stop developing Wave. And yet, Google Wave lives on - in business software.
Gary Edwards

Why a JavaScript hater thinks everyone needs to learn JavaScript in the next year - O'R... - 1 views

  • some extremely important game-changers: jQuery, JSON, Node.js, and HTML5.
  • Node.js has the potential to revolutionize web development. It is a framework for building high performance web applications: applications that can respond very quickly and efficiently to a high volume of incoming requests.
  • Google has started a revolution in JavaScript performance.
  • the number of JavaScript developers is huge.
  • HTML5 is about JavaScript
  • The power of HTML5 lies in what these tags allow you to create in JavaScript.
  • HTML5, then, isn't really a major advance in angle-bracket-based tagging; it's about enabling JavaScript to do more powerful things
  • JavaScript has long been the workhorse for implementing dynamic features in HTML. But there have always been two problems: browser incompatibilities, and the awkwardness of working directly with the DOM. The JQuery library has elegantly solved both problems, and is the basis for modern client-side browser development.
  • The use of JavaScript has also exploded in databases.
  • document databases
  • for all three databases, a "document" means a JSON document, not a Word or Excel file.
  • JSON is really just a format for serializing JavaScript objects.
  • Web servers, rich web client libraries, HTML5, databases, even JavaScript-based languages: I see JavaScript everywhere.
  •  
    OK, this article gets my vote as the most important read of the year. We all know that the Web is the future of both computing and communications/connectivity. But what is the future of the Web? Uber coder Mike Loukides says it's JavaScript, and what a compelling case he builds. This is a must read. Key concepts are diigo highlighted :) excerpt: JavaScript has "grown up." I'm sure there are many JavaScript developers who would take issue with that judgement, and argue that JavaScript has been a capable, mature, and under-appreciated language all along. They may be right, though you can write any program in any complete programming language, including awful things like BASIC. What makes a language useful is some combination of the language's expressiveness and the libraries and tools available. JavaScript clearly passed the expressiveness barrier a long time ago, even if the ceremony required for creating objects is distasteful. But recently, we've seen some extremely important game-changers: jQuery, JSON, Node.js, and HTML5. JavaScript may have been a perfectly adequate language in the past, but these changes (and a few others that I'll point out) have made JavaScript a language that is essential for every developer to know. If there's one language you need to learn in the next year, it's JavaScript. Insightful comment: HTML5 is a JavaScript API, introducing new elements but significantly redefining ALL elements as objects or classes. Elements can be expressed with tags. Or, you can use DOM JavaScripting to create elements.
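    A small illustration of the JSON point (the object is invented for the example):

      // JSON is native to JavaScript: serializing an object and reviving it
      // is one call in each direction.
      var book = { title: "Book Publishing 1", chapters: 12, inPrint: true };

      var json = JSON.stringify(book);  // '{"title":"Book Publishing 1","chapters":12,"inPrint":true}'
      var copy = JSON.parse(json);      // back to a live object

      console.log(copy.title);          // "Book Publishing 1"

    A document database such as CouchDB or MongoDB stores and queries records of exactly this shape, which is why "document" in that context means a JSON document rather than a Word or Excel file.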
Gary Edwards

jorno - 0 views

  •  
    Jorno is the folding Bluetooth keyboard that fits in your pocket and allows you to type with ease anytime, anywhere. Write email in a cafe. Take notes in a meeting. Blog wherever you are.
Gary Edwards

HTML5 Can Get the Job, But Can HTML5 Do the Job? - 2 views

  •  
    Great chart and HTML5 app development advice from pinch/zoom developer Brian Fling! excerpt: In a post on pinch/zoom's blog Swipe, Fling discusses the "Anatomy of a HTML5 Mobile App" and what developers will need to get started, what the pitfalls are and why HTML5 is so difficult. HTML5 is a lot like HTML, just more advanced. Fling says that "if you know HTML, then chances are you'll understand what's new in HTML5 in under an hour." Yet, he also says that HTML5 is almost nothing without Javascript and CSS. Device detection, offline data, Javascript tools, testing, debugging and themes are issues that need to be resolved with the tools at hand. One of the big challenges that developers face is the need to fully comprehend Javascript. That starts from the most basic of codes on up. Fling says that many developers cannot write Javascript without the aid of frameworks like Prototype, MooTools, jQuery or Scriptaculous. That would not be so much of a problem if all an application consisted of was functionality and theme, but the data and multiple device requirements of apps and working with the HTML5 code means that troubleshooting a Web application can be extremely difficult if a developer does not know what to look for in Javascript. Fling breaks down the three parts of the Javascript stack that is required in building HTML5 apps - hybrid, core and device scripts. Then there is CSS. Fling likens CSS to the make, model, interior and attention to detail of a car. "Javascript definitely influences our experience as well, but they are the machinations out of view," Fling wrote. "We absolutely need it to be there, but as any Top Gear fan can tell you - power under the hood doesn't always equal a powerful experience." So, HTML5 can get the job. But can it do the job? Fling says yes, but with these caveats:
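    The framework dependence Fling describes is easy to see in miniature (the selector and class are invented for the example):

      // With jQuery: one line, with browser differences papered over.
      $('#menu .item').addClass('active');

      // The raw-DOM equivalent a developer also needs to be able to write
      // when debugging what the framework is actually doing.
      var items = document.querySelectorAll('#menu .item');
      for (var i = 0; i < items.length; i++) {
        items[i].className += ' active';
      }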
Gary Edwards

Google Launches Dart Programming Language - Development - Web Development - Information... - 1 views

  •  
    Google releases Dart, a JavaScript-alternative Web application programming language. The release includes Cloud SQL, a cloud computing database to write Web apps against - using either JavaScript or Dart. excerpt: Google on Monday introduced a preview version of Dart, its new programming language for Web applications. The introduction was widely expected, not only because the announcement was listed on the GOTO developer conference schedule, but because a Google engineer described the language and its reason for being in a message sent to a developer mailing list late last year. "The goal of the Dash [Dart's former name] effort is ultimately to replace JavaScript as the lingua franca of Web development on the open Web platform," said Google engineer Mark S. Miller in his post last year. Lars Bak, a Google engineer who helped develop Chrome's V8 JavaScript engine and one of the creators of Dart, said in a phone interview that Google works regularly on large Web applications and that the company's engineers feel they need a new programming language to describe large, complex Web applications.