Open Web: Group items tagged Data-API


Gary Edwards

How to Ensure Privacy in the Age of HTML5 - CIO.com - 0 views

  • New APIs in the forthcoming HTML5 make it much easier for Web applications to access software and hardware, especially on mobile devices. The W3C is taking privacy seriously as it puts the finishing touches on HTML5, but there are still some important things to consider.
  • "HTML5, the latest version of the language of the Web, was designed with Web applications in mind. It contains a slew of new application programming interfaces (APIs) designed to allow the Web developer to access device hardware and software using JavaScript. Some of the more exciting HTML5 specifications include the following:
    - Geolocation API: lets the browser know where you are
    - Media Capture API: lets the browser access your camera and microphone
    - File API: lets the browser access your file system
    - Web Storage API: lets Web applications store large amounts of data on your computer
    - DeviceOrientation Event Specification: lets Web apps know when your device changes from portrait to landscape
    - Messaging API: gives the browser access to a mobile device's messaging systems
    - Contacts Manager API: allows access to the contacts stored in a user's contacts database"
Gary Edwards

Google Launches Cloud SQL API To Allow Developers To Manage Their Databases Programmati... - 0 views

  • "Google's Cloud Platform has long featured Cloud SQL, a zero-maintenance MySQL database that's hosted on Google's cloud platform. What it didn't offer was an API to easily manage these databases without having to use Google's admin interface. Today, however, Google is launching the Cloud SQL API. This new REST API will allow developers to programmatically manage their database instances and open a number of new use cases for Cloud SQL. The API, which Google still deems to be experimental, will allow developers to create their own workflows to easily create and delete instances, restart them and restore them from backup. They will also be able to use it to import and export their databases to and from Google Cloud Storage. For developers, this means using Google's cloud database is now quite a bit easier, especially if they need to regularly manage multiple databases for their customers. Google's launch partner for this API is OrangeScape, which uses it to power parts of KiSSFLOW, its Google Apps workflow SaaS service."
Gary Edwards

Matt On Stuff: Hadoop For The Rest Of Us - 0 views

  • Excellent Hadoop/Hive explanation. Hat tip to Matt Asay for the link. I left a comment on Matt's blog questioning the consequences of the Oracle vs. Google Android lawsuit, and the possible enforcement of the Java API copyright claim against Hadoop/Hive. Based on this explanation of Hadoop/Hive, I'm wondering if Oracle is making a move to claim the entire era of Big Data Cloud Computing? To understand why, it's first necessary to read Matt the Hadoople's explanation.

    Kill-shot excerpt: "You've built your Hadoop job, and have successfully processed the data. You've generated some structured output, and that resides on HDFS. Naturally you want to run some reports, so you load your data into a MySQL or an Oracle database. Problem is, the data is large. In fact it's so large that when you try to run a query against the table you've just created, your database begins to cry. If you listen to its sobs, you'll probably hear 'I was built to process Megabytes, maybe Gigabytes of data. Not Terabytes. Not Petabytes. That's not my job. I was built in the 80's and 90's, back when floppy drives were used. Just leave me alone.'

    "This is where Hive comes to the rescue. Hive lets you run an SQL statement against structured data stored on HDFS. When you issue an SQL query, it parses it, and translates it into a Java Map/Reduce job, which is then executed on your data. Although Hive does some optimizations, in general it just goes record by record against all your data. This means that it's relatively slow - a typical Hive query takes 5 or 10 minutes to complete, depending on how much data you have. However, that's what makes it effective. Unlike a relational database, you don't waste time on query optimization, adding indexes, etc. Instead, what keeps the processing time down is the fact that the query is run on all machines in your Hadoop cluster, and the scalability is taken care of for you."

    "Hive is extremely useful in data-warehousing kind of scenarios. You would
Paul Merrell

Dr Dobbs - HTML5 Web Storage - 0 views

  • HTML5 Web Storage is an API that makes it easy to persist data across web requests. Before the Web Storage API, remote web servers had to store any data that persisted by sending it back and forth from client to server. With the advent of the Web Storage API, however, developers can now store data directly in a browser for repeated access across requests, or retrieve it long after the browser has been completely closed, thus greatly reducing network traffic. One more reason to use Web Storage is that this is one of the few HTML5 APIs that is already supported in all browsers, including Internet Explorer 8.
  • In many cases, the same results can be achieved without involving a network or remote server. This is where the HTML5 Web Storage API comes in. By using this simple API, developers can store values in easily retrievable JavaScript objects, which persist across page loads. By using either sessionStorage or localStorage, developers can choose to let values survive either across page loads in a single window or tab, or across browser restarts, respectively. Stored data is not transmitted across the network, and is easily accessed on return visits to a page. Furthermore, larger values -- as high as a few megabytes -- can be persisted using the HTML5 Web Storage API. This makes Web Storage suitable for document and file data that would quickly blow out the size limit of a cookie.
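
A minimal sketch of the two storage objects described above; the key names and values are made up for illustration:

    // localStorage persists across browser restarts;
    // sessionStorage lasts only for the current tab or window.
    localStorage.setItem("draftDocument", "Hello, world");
    sessionStorage.setItem("wizardStep", "3");

    console.log(localStorage.getItem("draftDocument")); // survives a restart
    console.log(sessionStorage.getItem("wizardStep"));  // gone when the tab closes

    localStorage.removeItem("draftDocument"); // explicit cleanup
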
Paul Merrell

RDFa API - 0 views

  • RDFa API: An API for extracting structured data from Web documents. W3C Working Draft, 08 June 2010.
  • RDFa [RDFA-CORE] enables authors to publish structured information that is both human- and machine-readable. Concepts that have traditionally been difficult for machines to detect, like people, places, events, music, movies, and recipes, are now easily marked up in Web documents. While publishing this data is vital to the growth of Linked Data, using the information to improve the collective utility of the Web for humankind is the true goal. To accomplish this goal, it must be simple for Web developers to extract and utilize structured information from a Web document. This document details such a mechanism: an RDFa Document Object Model Application Programming Interface (RDFa DOM API) that allows simple extraction and usage of structured information from a Web document.
  • This document is a detailed specification for an RDFa DOM API. The document is primarily intended for the following audiences: User Agent developers that are providing a mechanism to programmatically extract RDF triples from RDFa in a host language such as XHTML+RDFa [XHTML-RDFA], HTML+RDFa [HTML-RDFA] or SVG Tiny 1.2 [SVGTINY12]; DOM tool developers that want to provide a mechanism for extracting RDFa content via programming languages such as JavaScript, Python, Ruby, or Perl; and developers that want to understand the inner workings and design criteria for the RDFa DOM API.
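
The draft RDFa DOM API itself is not reproduced here; as a rough illustration of the task it standardizes, the following sketch collects property/value pairs from RDFa attributes using only standard DOM methods, ignoring prefix resolution and most of RDFa's processing rules:

    // Naive RDFa extraction sketch using only standard DOM methods.
    // This is NOT the draft RDFa DOM API; real RDFa processing involves
    // prefix mappings and inheritance rules that are skipped here.
    var triples = [];
    var nodes = document.querySelectorAll("[property]");
    for (var i = 0; i < nodes.length; i++) {
      var node = nodes[i];
      triples.push({
        subject: node.getAttribute("about") || document.location.href,
        predicate: node.getAttribute("property"),
        object: node.getAttribute("content") || node.textContent
      });
    }
    console.log(triples);
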
Gary Edwards

Cloud file-sharing for enterprise users - 1 views

  • Quick review of different sync-share-store services, starting with Dropbox and ending with three Open Source services. Very interesting. Things have progressed since I last worked on the SurDocs project for Sursen. No mention in this review of file formats, conversion or viewing issues. I do know that CrocoDoc is used by nearly every sync-share-store service to convert documents to either PDF or HTML formats for viewing. No service, however, has been able to hit the "native document" sweet spot. Not even SurDocs - which was the whole purpose behind the project!!!

    "Native Documents" means that the document is in its native/original application format. That format is needed for the round tripping and reloading of the document. Although most sync-share-store services work with MSOffice OXML formatted documents, only Microsoft provides a true "native" format viewer (Office 365). Office 365 enables direct edit, view and collaboration on native documents, which is an enormous advantage given that conversion of any sort is guaranteed to "break" a native document and disrupt any related business processes or round tripping needs. It was here that SurDoc was to provide a break-through technology. Sadly, we're still waiting :(

    excerpt: The availability of cheap, easy-to-use and accessible cloud file-sharing services means users have more freedom and choice than ever before. Dropbox pioneered simplicity and ease of use, and so quickly picked up users inside the enterprise. Similar services have followed Dropbox's lead and now there are dozens, including well-known ones such as Google Drive, SkyDrive and Ubuntu One.

    Valdis Filks, research director at analyst firm Gartner, explained the appeal of cloud file-sharing services. Filks said: "Enterprise employees use Dropbox and Google because they are consumer products that are simple to use, can be purchased without officially requesting new infrastructure or budget expenditure, and can be installed qu
  • Odd that the reporter mentions the importance of security near the top of the article but gives that topic such short shrift in his evaluation of the services. For example, "secured by 256-bit AES encryption" is meaningless without discussing other factors such as: [i] who creates the encryption keys and on which side of the server/client divide; and [ii] the service's ability to decrypt the customer's content. Encryption and decryption must be done on the client side using unique keys that are unknown to the service, else security is broken; and if the service does business in the U.S. or any of its territories or possessions, it is subject to gag orders to turn over the decrypted customer information.

    My wisdom so far is to avoid file sync services to the extent you can, boycott U.S. services until the spy agencies are encaged, and reward services that provide good security from nations with more respect for digital privacy, to give U.S.-based services an incentive to lobby *effectively* on behalf of their customers' privacy in Congress. The proof that they are not doing so is the complete absence of bills in Congress that would deal effectively with the abuse by U.S. spy agencies. From that standpoint, the Switzerland-based http://wuala.com/ file sync service is looking pretty good so far. I'm using it.
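
As an aside, the client-side model this annotation calls for is easy to sketch with the browser's Web Crypto API: the key is generated and held on the client, and only ciphertext ever leaves the machine. A minimal sketch, assuming a modern browser; function and variable names are illustrative:

    // Sketch: client-side AES-256-GCM encryption so the sync service only
    // ever sees ciphertext. The key is generated and kept on the client.
    async function encryptForUpload(plaintextBytes) {
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        true, // extractable, so the user can back up the key themselves
        ["encrypt", "decrypt"]
      );
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per file
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv: iv },
        key,
        plaintextBytes
      );
      return { key: key, iv: iv, ciphertext: ciphertext }; // upload ciphertext only
    }
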
Gary Edwards

Stephen Peront: Custom Document Format Interoperability - bound business processes - 0 views

  • Custom Document Format Interoperability

    You may have heard that Office 2007 SP2 will now support editing files in the OpenDocument 1.1 (ODF) format. This document format was added to Office's long list of supported document formats to give customers more choices for the format they use to save their documents. In addition to allowing you to edit the ODF 1.1 format within Office 2007, SP2 also supports a new External File Format API that can be used to edit other document formats as well. With this API, users can choose to save their documents in any format they want. In this post we will explore how to use the API to enable Office 2007 to edit our own custom document format. We will then use Office 2007 to save our custom format as DOCX, ODT and HTML.

    Our Custom Document Format: For the purpose of this article, we have a company that needs to manage their sales pipeline information. The data is available as XML, but they do not want to spend the money to build a custom editor. They just want to let their users edit the pipeline data in Word, as a table. They give these files an extension of SPLX (i.e. Sales PipeLine Xml). The sales pipeline information is made up of a series of SalesItem tags, each with a unique id that represents the index of the item. They track the name of the customer (CustomerName), how much the deal represents (DealValue) and a percent that represents how confident they are that the sales opportunity will close (ConfidencePercent).
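
Based on the field names in that description, a hypothetical SPLX file might look like the following; the exact markup is an assumption, since the excerpt defines only the elements and their meanings:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical SPLX sales-pipeline file; structure inferred from the
         field names described above, not copied from the original post. -->
    <SalesPipeline>
      <SalesItem id="1">
        <CustomerName>Contoso</CustomerName>
        <DealValue>250000</DealValue>
        <ConfidencePercent>75</ConfidencePercent>
      </SalesItem>
      <SalesItem id="2">
        <CustomerName>Fabrikam</CustomerName>
        <DealValue>80000</DealValue>
        <ConfidencePercent>40</ConfidencePercent>
      </SalesItem>
    </SalesPipeline>
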
Gary Edwards

Reinventing Copy and Paste - Anil Dash - 0 views

  • We can all learn a lot of lessons from the history of DDE/OLE/OLE2/COM/ActiveX/DCOM/COM+ (you can start reading up on Wikipedia to get some background) and how we went from everyone using best-of-breed standalone apps to one integrated, nearly monolithic Office. It basically all started with copy and paste. People who never spent a lot of time in singletasking, character-mode operating environments like the DOS command line don't recall that simply copying-and-pasting information between apps was difficult at the time. And part of the revelation of Windows for mainstream users (or Mac, for leading-edge tech fans) was being able to easily share data in that way. This was different from what Unix users were used to with the command-line pipe, or from what most applications do with feeds today, in allowing structured information flows between applications.

    There's a desire to combine data from different sources in an arbitrary way, and to have the user interface display the appropriate tools for whatever context you're in. The dominant model here, probably because of the influence of the early PARC demos, is to have toolbars or UI widgets change depending on what kind of content you're manipulating. Microsoft was really into this in the early 90s with OLE2, where your Word toolbars would morph into Excel toolbars if you double-clicked on an embedded spreadsheet. It was ungainly and ugly and slow, especially if you had less than an exorbitant 8MB of RAM, but the idea was pretty cool. And it still is.

    People are so focused on data formats and feeds that they're ignoring consensus around UI interoperability. The Atom API and Metaweblog API give me a good-enough interface if I want to treat a discrete chunk of information (like a blog post) as an undifferentiated blob. But all the erstwhile spec work around microformats and structured blogging (I forget which one is for XML and which one's for XHTML) doesn't seem to have addressed user experience or editing behavior.
Gary Edwards

Eucalyptus open-sources the cloud (Q&A) | The Open Road - CNET News - 0 views

  • The ideal customer is one with an IT organization that is tasked with supporting a heterogeneous set of user groups (each with its own technology needs, business logic, policies, etc.) using infrastructure that it must maintain across different phases of the technology lifecycle. There are two prevalent usage models that we observe regularly. The first is as a development and testing platform for applications that, ultimately, will be deployed in a public cloud. It is often easier, faster, and cheaper to use locally sited resources to develop and debug an application (particularly one that is designed to operate at scale) prior to its operational deployment in an externally hosted environment. The virtualization of machines makes cross-platform configuration easier to achieve and Eucalyptus' API compatibility makes the transition between on-premise resources and the public clouds simple. The second model is as an operational hybrid. It is possible to run the same image simultaneously both on-premise using Eucalyptus and in a public cloud, thereby providing a way to augment local resources with those rented from a provider without modification to the application.

    For whom is this relevant technology today? Who are your customers? Wolski: We are seeing tremendous interest in several verticals. Banking/finance, big pharma, manufacturing, gaming, and the service provider market have been the early adopters to deploy and experiment with the Eucalyptus technology.
  • Eucalyptus is designed to be able to compose multiple technology platforms into a single "universal" cloud platform that exposes a common API, but that can at the same time support separate APIs for the individual technologies. Moreover, it is possible to export some of the specific and unique features of each technology through the common API as "quality-of-service" attributes. (A sketch of what that API compatibility means for client code appears after this entry.)
  • Eucalyptus, an open-source platform that implements "infrastructure as a service" (IaaS) style cloud computing, aims to take open source front and center in the cloud-computing craze. The project, founded by academics at the University of California at Santa Barbara, is now a Benchmark-funded company with an ambitious goal: become the universal cloud platform that everyone from Amazon to Microsoft to Red Hat to VMware ties into. [Eucalyptus] is architected to be compatible with such a wide variety of commonly installed data center technologies, [and hence] provides an easy and low-risk way of building private (i.e. on-premise or internal) clouds... Thus data center operators choosing Eucalyptus are assured of compatibility with the emerging application development and operational cloud ecosystem while attaining the security and IT investment amortization levels they desire without the "fear" of being locked into a single public cloud platform.
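
Because Eucalyptus exposes an EC2-compatible API, the same client code can target AWS or a private Eucalyptus cloud by swapping the endpoint. A hedged sketch using the AWS SDK for JavaScript; the endpoint URL, credentials, and region name are placeholders, not values from the article:

    // Hedged sketch: point a stock EC2 client at a private Eucalyptus
    // endpoint instead of AWS. Endpoint and credentials are placeholders.
    var AWS = require("aws-sdk");
    var ec2 = new AWS.EC2({
      endpoint: "http://cloud.example.internal:8773/services/Eucalyptus",
      accessKeyId: "YOUR_ACCESS_KEY",
      secretAccessKey: "YOUR_SECRET_KEY",
      region: "eucalyptus" // placeholder region name
    });
    ec2.describeInstances(function (err, data) {
      if (err) { console.error(err); return; }
      console.log(data.Reservations); // same response shape as on AWS
    });
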
Paul Merrell

How a "location API" allows cops to figure out where we all are in real time | Ars Tech... - 0 views

  • The digital privacy world was rocked late Thursday evening when The New York Times reported on Securus, a prison telecom company that has a service enabling law enforcement officers to locate most American cell phones within seconds. The company does this via a basic Web interface leveraging a location API—creating a way to effectively access a massive real-time database of cell-site records. Securus’ location ability relies on other data brokers and location aggregators that obtain that information directly from mobile providers, usually for the purposes of providing some commercial service like an opt-in product discount triggered by being near a certain location. ("You’re near a Carl’s Jr.! Stop in now for a free order of fries with purchase!") The Texas-based Securus reportedly gets its data from 3CInteractive, which in turn buys data from LocationSmart. Ars reached 3CInteractive's general counsel, Scott Elk, who referred us to a spokesperson. The spokesperson did not immediately respond to our query. But currently, anyone can get a sense of the power of a location API by trying out a demo from LocationSmart itself.

    The Supreme Court is set to rule on the case of Carpenter v. United States, which asks whether police can obtain more than 120 days' worth of cell-site location information of a criminal suspect without a warrant. In that case, as is common in many investigations, law enforcement presented a cell provider with a court order to obtain such historical data. But the ability to obtain real-time location data that Securus reportedly offers skips that entire process, and it's potentially far more invasive.

    Securus’ location service as used by law enforcement is also currently being scrutinized. The service is at the heart of an ongoing federal prosecution of a former Missouri sheriff’s deputy who allegedly used it at least 11 times against a judge and other law enforcement officers. On Friday, Sen. Ron Wyden (D-Ore.) publicly released his formal letters to AT&T and also to the Federal Communications Commission demanding detailed answers regarding these Securus revelations.
Gary Edwards

FeedHenry Secures $9M Funding Led By Intel Capital To Feed Boom in Mobile Enterprise | ... - 0 views

  • FeedHenry provides a cloud Mobile Application Platform that simplifies the development, integration, deployment and management of secure mobile apps for business. This mobile platform-as-a-service (PaaS) allows apps to be developed in HTML5, JavaScript, and CSS and deployed to multiple mobile devices from a single code base. The node.js backend service offers a complete range of APIs designed to simplify and secure the connectivity of mobile apps to backend and third party systems. The platform can be deployed to private, public or hybrid clouds. FeedHenry's PaaS offers developers speed of development, instant scalability, device and cloud independence, and the ability to easily integrate to backend information.

    If, say, a company uses both SharePoint and Salesforce inside a mobile app, to get that data into one app they need multiple levels of API integration. Because of the enormous boom in mobile and tablet apps, so-called 'back-end as a service' (BaaS) platforms like FeedHenry - which solve these problems - are hugely expanding. Thus, today FeedHenry has secured $9M (€7M) in a funding round led by Intel Capital, alongside a "seven figure" investment from existing investor Kernel Capital. Other existing investors VMware Inc., Enterprise Ireland and private investors also participated and were joined by new investment from ACT Venture Capital. The funds will be used on an international roll out. FeedHenry's mobile application platform - built between Ireland and the U.S. - helps businesses build mobile apps that integrate securely to their business through the cloud. This is a competitive market that includes StackMob, Usergrid, Appcelerator, Sencha.io, Applicasa, Parse, CloudMine, CloudyRec, iKnode, yorAPI, Buddy and ScottyApp.
Paul Merrell

Google Sites API opens Microsoft SharePoint - Techworld.com - 1 views

  • Signaling an intent to compete with giants in the collaboration software space, Google unveiled an API to extend the Google Sites collaborative content development tool, featuring a capability to migrate files from workspace applications such as Microsoft SharePoint and Lotus Notes to Sites. One application already built using the Google Sites API is SharePoint Move for Google Apps, developed by LTech for migrating data and content from SharePoint to Sites. Google Sites is a free application for building and sharing websites; it is described by Google as a collaborative content creation tool for uploading file attachments, information from other Google applications such as Google Docs, and free-form content.
Gary Edwards

Google Chrome OS: Web Platform To Rule Them All -- InformationWeek - 0 views

  • Some good commentary on Chrome OS from InformationWeek's Thomas Claburn.

    Excerpt: With Chrome OS, Google aims to make the Web the primary platform for software development. The fact that Chrome OS applications will be written using open Web standards like JavaScript, HTML, and CSS might seem like a liability because Web applications still aren't as capable as applications written for specific devices and operating systems. But Google is betting that will change and is working to effect the change on which its bet depends. Within a year or two, Web browsers will gain access to peripherals, through an infrastructure layer above the level of device drivers. Google's work with standards bodies is making that happen.

    According to Matt Womer, the "ubiquitous Web activity lead" for W3C, the Web standards consortium, Web protocol groups are working to codify ways to access peripherals like digital cameras, the messaging stack, calendar data, and contact data. There's now a JavaScript API that Web developers can use to get GPS information from mobile phones using the phone's browser, he points out. What that means is that device drivers for Chrome OS will emerge as HTML 5 and related standards mature. Without these, consumers would never use Chrome OS because devices like digital cameras wouldn't be able to transfer data. Womer said the standardization work could move quite quickly, but won't be done until there's an actual implementation. That would be Chrome OS.

    Chrome OS will sell itself to developers because, as Google puts it, writing applications for the Web gives "developers the largest user base of any platform."
Paul Merrell

Transparency Toolkit - 0 views

  • About Transparency Toolkit We need information about governments, companies, and other institutions to uncover corruption, human rights abuses, and civil liberties violations. Unfortunately, the information provided by most transparency initiatives today is difficult to understand and incomplete. Transparency Toolkit is an open source web application where journalists, activists, or anyone can chain together tools to rapidly collect, combine, visualize, and analyze documents and data. For example, Transparency Toolkit can be used to get data on all of a legislator’s actions in congress (votes, bills sponsored, etc.), get data on the fundraising parties a legislator attends, combine that data, and show it on a timeline to find correlations between actions in congress and parties attended. It could also be used to extract all locations from a document and plot them on a map where each point is linked to where the location was mentioned in the document.
  • Analysis Platform: On the analysis platform, users can add steps to the analysis process. These steps chain together the tools, so someone could scrape data, upload a document, cross-reference that with the scraped data, and then visualize the result all in less than a minute with little technical knowledge. Some of the tools allow users to specify input, but when this is not the case the output of the last step is the input of the next. Tools: Existing and planned Transparency Toolkit tools include scrapers and APIs for accessing data, format converters, extraction tools (for dates, names, locations, numbers), tools for cross-referencing and merging data, visualizations (maps, timelines, network graphs), and pattern and trend detecting tools. These tools are designed to work in many cases rather than a single specific situation. The tools can be linked together on Transparency Toolkit, but they are also available individually. Where possible, we build our tools off of existing open source software. Road Map: You can see the plans for future development of Transparency Toolkit here.
  • If you think this isn't a tool for some very serious research, check the short descriptions of the modules here: https://github.com/transparencytoolkit I'll be installing this and doing some test-driving soon. From the source files, the glue for the tools seems to be Ruby on Rails. The development roadmap linked from the last word on this About page is also highly instructive. It ranks among the most detailed dev roadmaps I have ever seen. Notice that it is classified by milestones with scheduled work periods, giving specific date ranges for achievement. Even given the inevitable need to alter the schedule for unforeseen problems, this is a very aggressive (not quite the word I want) development plan and schedule. And the planned changes look to be super-useful, including a lot of "make it easier for the user" changes.
Maluvia Haseltine

Apatar - Open Source Data Integration & ETL - 0 views

  • Join your on-premises data sources with the web without coding. Feed data from/to APIs, mashups, and mashup building tools.
Paul Merrell

Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies - 0 views

  • On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me.

    At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division.

    At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
  • “Realtime Crowd Insights” is an Application Programming Interface (API), or a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — applications send an image to Microsoft’s servers. Microsoft’s servers then analyze the faces and return emotional profiles for each one. In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.” Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies. (A hedged sketch of what such an API call looks like appears after this entry.)
  • Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the 100,000 attendees to a database of the faces of known criminals. The technology is controversial enough that in 2013, Google tried to ban the use of facial recognition apps in its Google Glass system. But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face that it could continuously identify it with the same serial number, even hours later. The display demonstrated that capability by distinguishing between the number of total faces it had seen, and the number of unique serial numbers.
  • Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them. But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
  • Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia. “It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?” Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question. Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
  • But nobody is saying that the output of this technology can't be combined with the output of facial recognition technology to let them monitor you individually AND track your emotions. Fortunately, others are fighting back with knowledge and tech to block facial recognition. http://goo.gl/JMQM2W
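
For context, a rough sketch of how an application might call an emotion-analysis API of this kind. The endpoint, header, and response fields are modeled on Microsoft's publicly documented Cognitive Services conventions of the period; treat the specifics as assumptions rather than a verified reference:

    // Hedged sketch: send an image URL to an emotion-analysis endpoint and
    // log per-face results. Endpoint, header, and field names are modeled
    // on Microsoft Cognitive Services conventions; verify against the docs.
    fetch("https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY"
      },
      body: JSON.stringify({ url: "https://example.com/crowd-photo.jpg" })
    })
      .then(function (response) { return response.json(); })
      .then(function (faces) {
        faces.forEach(function (face) {
          // One entry per detected face: bounding box plus scores for
          // anger, contempt, fear, disgust, happiness, sadness, surprise.
          console.log(face.faceRectangle, face.scores);
        });
      });
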
Gary Edwards

Gray Matter : Open XML and the SharePoint Conference - 0 views

  • excerpt: The trend in Office development is the migration of solutions away from in-application scripted processing toward more data-centric development. Of course this is a primary purpose of Open XML, and it is great to see the amount of activity in this area. We've seen customers scripting Word in a server environment to batch process / print documents or for other automation tasks. In reality Word isn't built to do that on a large scale; it is better to work directly against the document rather than via the application whenever possible. The Open XML SDK unlocks a "whole nuther" environment for document processing, and gets you out of the business of scripting client apps on servers to do the work of a true server application (not to mention the licensing problems created by installing Office on a server).

    comment: Gray makes a very important point here. The dominance of the desktop-based MSOffice Productivity Environment was largely based on the embedded logic driving "in-process" documents that was application and platform (Win32 API) specific. Tear open any of these workgroup-workflow oriented compound documents and you find application specific scripts, macros, OLE, data bindings, security settings and other application specific settings. These internal components are certain to break whenever these highly interactive and "live" compound documents are converted to another format, or application use. This is how MSOffice documents and the business processes they represent become "bound" to the MSOffice Productivity Environment.

    What Gray is pointing to here is that Microsoft is moving the legacy Productivity Environment to an MSWeb-based center where OpenXML, Silverlight, CAML, XAML and a number of other .NET-WPF technologies become the workgroup drivers. The key applications for the MS WebStack are Exchange/SharePoint/SQL Server. To make this move, documents had to be separated from the legacy desktop Productivity Environment settings. Note th
Gary Edwards

Zoho's Next Big Thing | ge TalkBack on ZDNet - 0 views

  • Moving the Point of Assembly

    Kudos to Zoho. Their efforts remind me of the early days of the Microsoft Productivity Environment, where core MSOffice editors expanded their reach through DDE, OLE, rich copy/paste, data binding, merged content and data, VBA scripting and the infamous recorder, and a developer API that meshed platform and productivity apps so deeply into end-user information that the binding of business processes to the MOPE is proving nearly impossible to break, even years after the fact. A business ecosystem for client/server was born back in the early 90's, with Microsoft continuing on to own entirely the client side of the equation.
Gary Edwards

How Sir Tim Berners-Lee cut the Gordian Knot of HTML5 | Technology | guardian.co.uk - 0 views

  • Good article with excellent URL references. Bottom line: the W3C will support the advance of HTML5 and controversial components such as "canvas", HTML+RDFa, and HTML microdata.

    excerpt: The key question is: who's going to get their way with HTML5? The companies who want to keep the kitchen sink in? Or those which want it to be a more flexible format which might also be able to displace some rather comfortable organisations that are doing fine with things as they are?

    Adobe, it turned out, seemed to be trying to slow things down a little. It was accused of trying to put HTML5 "on hold". It strongly denied it. Others said it was using "procedural bullshit". Then Berners-Lee weighed in with a post on the W3 mailing list. First he noted the history: "Some in the community have raised questions recently about whether some work products of the HTML Working Group are within the scope of the Group's charter. Specifically in question were the HTML Canvas 2D API, and the HTML Microdata and HTML+RDFa Working Drafts." (Translation: Adobe seems to have been trying to slow things down on at least one of these points.)

    And then he pushes: "I agree with the WG [working group] chairs that these items -- data and canvas - are reasonable areas of work for the group. It is appropriate for the group to publish documents in this area." Chop! And that's it. There goes the Gordian Knot. With that simple message, Berners-Lee has probably created a fresh set of headaches for Adobe - but it means that we can also look forward to a web with open standards, rather than proprietary ones, and where commercial interests don't get to push it around.
Gary Edwards

The State of the Internet Operating System - O'Reilly Radar - 0 views

  • Topic headings from the essay: The Internet Operating System is an Information Operating System; Search (key to managing and working "information"); Media Access; Communications; Identity and the Social Graph; Payment; Advertising; Location; Activity Streams ("Attention"); Time; Image and Speech Recognition; Government Data; The Browser.

    Where is the "operating system" in all this? Clearly, it is still evolving. Applications use a hodgepodge of services from multiple different providers to get the information they need. But how different is this from PC application development in the early 1980s, when every application provider wrote their own device drivers to support the hodgepodge of disks, ports, keyboards, and screens that comprised the still emerging personal computer ecosystem? Along came Microsoft with an offer that was difficult to refuse: We'll manage the drivers; all application developers have to do is write software that uses the Win32 APIs, and all of the complexity will be abstracted away.

    This is the crux of my argument about the internet operating system. We are once again approaching the point at which the Faustian bargain will be made: simply use our facilities, and the complexity will go away. And much as happened during the 1980s, there is more than one company making that promise. We're entering a modern version of "the Great Game", the rivalry to control the narrow passes to the promised future of computing.