Future of the Web / Group items tagged "www Web"

Paul Merrell

The Self-Describing Web - 0 views

  • Abstract: The Web is designed to support flexible exploration of information by human users and by automated agents. For such exploration to be productive, information published by many different sources and for a variety of purposes must be comprehensible to a wide range of Web client software, and to users of that software. HTTP and other Web technologies can be used to deploy resource representations that are in an important sense self-describing: information about the encodings used for each representation is provided explicitly within the representation. Starting with a URI, there is a standard algorithm that a user agent can apply to retrieve and interpret such representations. Furthermore, representations can be grounded in the Web, by ensuring that specifications required to interpret them are determined unambiguously based on the URI, and that explicit references connect the pertinent specifications to each other. Web-grounding reduces ambiguity as to what has been published in the Web, and by whom. When such self-describing, Web-grounded resources are linked together, the Web as a whole can support reliable, ad hoc discovery of information. This finding describes how document formats, markup conventions, attribute values, and other data formats can be designed to facilitate the deployment of self-describing, Web-grounded Web content.
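As a minimal sketch of the retrieval step the abstract describes (standard-library Python; the URI below is a placeholder, not one named in the finding), a user agent dereferences a URI and lets the representation describe itself through its media type:

```python
# Dereference a URI and let the representation describe itself: the
# Content-Type header names the media type whose registered specification
# grounds the interpretation of the bytes that follow.
from urllib.request import urlopen

with urlopen("https://www.w3.org/") as resp:            # placeholder URI
    media_type = resp.headers.get_content_type()        # e.g. 'text/html'
    charset = resp.headers.get_content_charset() or "utf-8"
    body = resp.read().decode(charset, errors="replace")

print(media_type, len(body))
```

The media type returned in Content-Type is registered with IANA, which is what allows the specification grounding those bytes to be determined unambiguously.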
Gary Edwards

Developing a Universal Markup Solution For Web Content - 0 views

  •  
    KODAXIL To Replace XML?

    File this one under the Universal Interoperability label. Very interesting, especially since XML document formats have proven to fall short on the two primary expectations of users: interoperability and Web readiness. Like HTML+ :) Maybe KODAXIL will work?

    The recent Web 2.0 Conference was filled with new web services, portals, and wiki efforts trying their best to mash data into document objects. iCloud, MindTouch, AppLogic, 3Tera, Caspio and Gazoodle all deserve attention, although each took a rather different approach to solving the problem. MindTouch in particular was excellent.

    "A Montreal-based software and research development company has developed a markup solution and language-neutral asset-descriptor that when fully developed, could result in a universal computer language for representing information in databases, web and document contents and business objects."

    "While still at a seminal stage of development, the company Gnoesis, aims to address the problem of data fragmentation caused by semantic differences between developers and users from different linguistic backgrounds."

    Gnoesis, the company that developed KODAXIL (Knowledge, Object, Data, Action, and eXtensible Interoperable Language), a data- and information-representation language, says the new language will replace XML's role of consolidating semantically identical data streams from different languages by providing a single common language for the task.

    The extensible semantic markup associated with this language will be understood worldwide and is three times shorter than XML.
Gonzalo San Gil, PhD.

No, Department of Justice, 80 Percent of Tor Traffic Is Not Child Porn | WIRED [# ! Via... - 0 views

  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography.
  • At the State of the Net conference in Washington on Tuesday, US assistant attorney general Leslie Caldwell discussed what she described as the dangers of encryption and cryptographic anonymity tools like Tor, and how those tools can hamper law enforcement. Her statements are the latest in a growing drumbeat of federal criticism of tech companies and software projects that provide privacy and anonymity at the expense of surveillance. And as an example of the grave risks presented by that privacy, she cited a study she said claimed an overwhelming majority of Tor’s anonymous traffic relates to pedophilia. “Tor obviously was created with good intentions, but it’s a huge problem for law enforcement,” Caldwell said in comments reported by Motherboard and confirmed to me by others who attended the conference. “We understand 80 percent of traffic on the Tor network involves child pornography.” That statistic is horrifying. It’s also baloney.
  • In a series of tweets that followed Caldwell’s statement, a Department of Justice flack said Caldwell was citing a University of Portsmouth study WIRED covered in December. He included a link to our story. But I made clear at the time that the study claimed 80 percent of traffic to Tor hidden services related to child pornography, not 80 percent of all Tor traffic. That is a huge, and important, distinction. The vast majority of Tor’s users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what’s often referred to as the “dark web,” use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software’s creators at the non-profit Tor Project. The University of Portsmouth study dealt exclusively with visits to hidden services. In contrast to Caldwell’s 80 percent claim, the Tor Project’s director Roger Dingledine pointed out last month that the study’s pedophilia findings refer to something closer to a single percent of Tor’s overall traffic. (The arithmetic: 80 percent of the 1.5 percent of Tor traffic that goes to hidden services works out to roughly 1.2 percent of all Tor traffic.)
  • So to whoever at the Department of Justice is preparing these talking points for public consumption: Thanks for citing my story. Next time, please try reading it.
  •  
    [# Via Paul Merrell's Diigo...] "That is a huge, and important, distinction. The vast majority of Tor's users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what's often referred to as the "dark web," use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software's creators at the non-profit Tor Project."
Gonzalo San Gil, PhD.

Download a complete website with wget even if there are restrictions - 0 views

  •  
    "GNU Wget is a free-software tool that makes it simple to download content from web servers. Its name derives from World Wide Web (w) and from «get» (in English), that is: to get from the WWW." [# ! The #Web # ! … in #your #hands… # ! … with #GNU #Wget]
Gary Edwards

The NeuroCommons Project: Open RDF Ontologies for Scientific Research - 0 views

  •  
    The NeuroCommons project seeks to make all scientific research materials - research articles, annotations, data, physical materials - as available and as usable as they can be. This is done by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". Semantic Web practices based on RDF will enable knowledge sources to combine meaningfully and will support semantically precise queries that span multiple information sources (a toy sketch of such a query appears at the end of this entry).

    Working with the Creative Commons group that sponsors "Neurocommons", Microsoft has developed and released an open-source "ontology" add-in for Microsoft Word. The add-in makes use of the MSOffice XML panel, Open XML formats, and proprietary "Smart Tags". Microsoft is also making the source code for both the Ontology Add-in for Office Word 2007 and the Creative Commons Add-in for Office Word 2007 available under the Open Source Initiative (OSI)-approved Microsoft Public License (Ms-PL) at http://ucsdbiolit.codeplex.com and http://ccaddin2007.codeplex.com, respectively.

    No doubt it will take some digging to figure out what is going on here. Microsoft WPF technologies include Smart Tags and LINQ. The Creative Commons "Neurocommons" ontology work is based on W3C RDF and SPARQL. How these opposing technologies interoperate with legacy MSOffice 2003 and 2007 desktops is an interesting question. One that may hold the answer to the larger problem of re-purposing MSOffice for the Open Web?

    We know Microsoft is re-purposing MSOffice for the MS Web. Perhaps this work with Creative Commons will help to open up the Microsoft desktop productivity environment to the Open Web? One can always hope :)

    Dr. Dobb's has the Microsoft-Creative Commons announcement: "Microsoft Releases Open Tools for Scientific Research ... Joins Creative Commons in Releasing the Ontology Add-in".
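As referenced in the annotation above, here is a toy sketch of the kind of semantically precise query RDF enables, written with the third-party Python rdflib library (an assumption for illustration; NeuroCommons publishes its own ontologies, and the namespace below is a placeholder):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/neuro#")  # placeholder namespace

# Build a tiny graph; in practice, triples would be merged from many sources.
g = Graph()
g.add((EX.hippocampus, RDF.type, EX.BrainRegion))
g.add((EX.hippocampus, RDFS.label, Literal("hippocampus")))

# A SPARQL query that spans everything merged into the graph.
results = g.query("""
    PREFIX ex: <http://example.org/neuro#>
    SELECT ?region ?label WHERE {
        ?region a ex:BrainRegion ;
                rdfs:label ?label .
    }
""")
for region, label in results:
    print(region, label)
```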
Paul Merrell

The Strongest Link: Libraries and Linked Data - 2 views

  • Abstract: Since 1999 the W3C has been working on a set of Semantic Web standards that have the potential to revolutionize web search. Also known as Linked Data, the Machine-Readable Web, the Web of Data, or Web 3.0, the Semantic Web relies on highly structured metadata that allow computers to understand the relationships between objects. Semantic web standards are complex and difficult to conceptualize, but they offer solutions to many of the issues that plague libraries, including precise web search, authority control, classification, data portability, and disambiguation. This article will outline some of the benefits that linked data could have for libraries, will discuss some of the non-technical obstacles that we face in moving forward, and will finally offer suggestions for practical ways in which libraries can participate in the development of the semantic web.
  •  
    See also Wikipedia on Linked Data: http://en.wikipedia.org/wiki/Linked_Data
Gary Edwards

Nokia and Google: Too much emphasis on the mobile OS? | ge TalkBack on ZDNet - 0 views

  • Although it appears that the mobile hardware providers are competing through the development of incompatible platforms, I think there's reason to be hopeful. There seems to be movement towards a universal web application model able to join the legacy Web with an Open-Web future where devices, desktops, web-stacks, and clouds connect, access, exchange and collaborate with all kinds of information systems. Above the metal, at the web application layer, there is a war between competing runtime engines. The recent Web 2.0 Conference was a showcase for Sun JavaFX, Adobe RiA, and Microsoft .NET Silverlight. The exhibitors' floor featured a large and prominent Microsoft Silverlight-Mesh island surrounded by Flex RiA providers, with currents of IT and developers asking the same question: Can Adobe run with Microsoft?
  •  
    Interesting discussion about a universal web application layer able to work across devices, browsers and web service systems. I responded with a very lengthy post about WebKit.
Gary Edwards

The Future of the Desktop - ReadWriteWeb by Nova Spivak - 0 views

  •  
    Excellent commentary from Nova Spivak; about as well-thought-out a discussion as I've ever seen concerning the future of the desktop. Nova sees the emergence of a WebOS, most likely based on JavaScript. This article set off a firestorm of controversy and discussion, but was quickly lost in the dark days of late August/September 2008, when news of the collapse of the world financial system and the fear-filled USA elections dominated everything. Too bad; this is great stuff. ..... "Everything is moving to the cloud. As we enter the third decade of the Web we are seeing an increasing shift from native desktop applications towards Web-hosted clones that run in browsers. For example, a range of products such as Microsoft Office Live, Google Docs, Zoho, ThinkFree, DabbleDB, Basecamp, and many others now provide Web-based alternatives to the full range of familiar desktop office productivity apps. The same is true for an increasing range of enterprise applications, led by companies such as Salesforce.com, and this process seems to be accelerating. In addition, hosted remote storage for individuals and enterprises of all sizes is now widely available and inexpensive. As these trends continue, what will happen to the desktop and where will it live?" .... Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today? ..... The desktop of the future is going to be a hosted web service ..... The Browser is Going to Swallow Up the Desktop ...... The focus of the desktop will shift from information to attention ...... Users are going to shift from acting as librarians to acting as daytraders. ...... The Webtop will be more social and will leverage and integrate collective intelligence ....... The desktop of the future is going to have powerful semantic search and social search capabilities built-in ....... Interactive shared spaces will replace folders ....... The Portable Desktop ........ The Sma
Gary Edwards

The story behind Google Chrome - 0 views

  •  
    Google released its second web browser yesterday afternoon, adding additional headroom for web applications stretching the limits of what it's possible to accomplish within a web browser. The Google Chrome team assembled domain experts in various fields over the past six years, both through direct hires and acquisitions, to create a new browser and its critical components from scratch. GMail and Google Maps pushed the Web to its limits, taking advantage of browser technologies invented in Redmond but left dormant for far too long. Contributing to Firefox's core, writing browser extensions, and championing HTML could only take the $150 billion company so far: they needed to own the full browser to push their Web efforts forward at full speed.
Gonzalo San Gil, PhD.

How to Access Linux Server Terminal in Web Browser Using 'Wetty (Web + tty)' Tool - 0 views

  •  
    "Wouldn't it be fantastic if there was a way to access a remote Linux server directly from the web browser? Luckily for us all, there is a tool called Wetty (Web + tty) that allows us to do just that - without the need to switch programs and all from the same web browser window."
Paul Merrell

W3C Public Newsletter, 2008-11-03 from W3C Newsletter on 2008-11-03 (w3c-announce@w3.or... - 0 views

  • The Web Content Accessibility Guidelines (WCAG) Working Group has published the "Web Content Accessibility Guidelines 2.0" as a Proposed Recommendation, and published updated Working Drafts of "Understanding WCAG 2.0," "Techniques for WCAG 2.0," and "How to Meet WCAG 2.0." WCAG defines how to make Web sites, Web applications, and other Web content accessible to people with disabilities. Comments are welcome through 2 December 2008. Read the announcement, Overview of WCAG 2.0 Documents, and about the Web Accessibility Initiative.
    http://www.w3.org/WAI/GL/
    http://www.w3.org/TR/2008/PR-WCAG20-20081103/
    http://www.w3.org/TR/2008/CR-UNDERSTANDING-WCAG20-20081103/
    http://www.w3.org/TR/2008/CR-WCAG20-TECHS-20081103/
    http://www.w3.org/WAI/WCAG20/quickref/
    http://lists.w3.org/Archives/Public/w3c-wai-ig/2008OctDec/0091
    http://www.w3.org/WAI/intro/wcag20.php
    http://www.w3.org/WAI/
Paul Merrell

W3C Standards Make Mobile Web Experience More Inviting - 0 views

  • W3C today announced new standards that will make it easier for people to browse the Web on mobile devices. Mobile Web Best Practices 1.0, published as a W3C Recommendation, condenses the experience of many mobile Web stakeholders into practical advice on creating mobile-friendly content.
  • Until today, content developers faced an additional challenge: a variety of mobile markup languages to choose from. With the publication of the XHTML Basic 1.1 Recommendation today, the preferred format specification of the Best Practices, there is now a full convergence in mobile markup languages, including those developed by the Open Mobile Alliance (OMA). The W3C mobileOK checker (beta), when used with the familiar W3C validator, helps developers test mobile-friendly Web content.
  • W3C is also developing resources to help authors understand how to create content that is both mobile-friendly and accessible to people with disabilities. A draft of Relationship between Mobile Web Best Practices (MWBP) and Web Content Accessibility Guidelines (WCAG) is jointly published by the Mobile Web Best Practices Working Group and WAI's Education & Outreach Working Group (EOWG).
Gonzalo San Gil, PhD.

10 Search Engines to Explore the Invisible Web - 6 views

  •  
    [The Invisible Web refers to the part of the WWW that's not indexed by the search engines. Most of us think that search powerhouses like Google and Bing are like the Great Oracle… they see everything. Unfortunately, they can't, because they aren't divine at all; they are just web spiders who index pages by following one hyperlink after the other.]
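A toy sketch of such a spider, in standard-library Python; the seed URL is a placeholder, and a real crawler would add politeness delays, robots.txt handling, and deduplication at scale. Anything behind a form, a login, or a search query (the invisible Web the excerpt describes) never enters the frontier, which is exactly why these engines miss it:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=10):
    """Breadth-first walk of the link graph, visiting up to `limit` pages."""
    seen, frontier = set(), [seed]
    while frontier and len(seen) < limit:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue                      # unreachable page: skip it
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith(("http://", "https://")):
                frontier.append(absolute)
    return seen

print(crawl("https://example.com/"))     # placeholder seed URL
```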
Paul Merrell

How to Encrypt the Entire Web for Free - The Intercept - 0 views

  • If we’ve learned one thing from the Snowden revelations, it’s that what can be spied on will be spied on. Since the advent of what used to be known as the World Wide Web, it has been a relatively simple matter for network attackers—whether it’s the NSA, Chinese intelligence, your employer, your university, abusive partners, or teenage hackers on the same public WiFi as you—to spy on almost everything you do online. HTTPS, the technology that encrypts traffic between browsers and websites, fixes this problem—anyone listening in on that stream of data between you and, say, your Gmail window or bank’s web site would get nothing but useless random characters—but is woefully under-used. The ambitious new non-profit Let’s Encrypt aims to make the process of deploying HTTPS not only fast, simple, and free, but completely automatic. If it succeeds, the project will render vast regions of the internet invisible to prying eyes.
  • Encryption also prevents attackers from tampering with or impersonating legitimate websites. For example, the Chinese government censors specific pages on Wikipedia, the FBI impersonated The Seattle Times to get a suspect to click on a malicious link, and Verizon and AT&T injected tracking tokens into mobile traffic without user consent. HTTPS goes a long way in preventing these sorts of attacks. And of course there’s the NSA, which relies on the limited adoption of HTTPS to continue to spy on the entire internet with impunity. If companies want to do one thing to meaningfully protect their customers from surveillance, it should be enabling encryption on their websites by default.
  • Let’s Encrypt, which was announced this week but won’t be ready to use until the second quarter of 2015, describes itself as “a free, automated, and open certificate authority (CA), run for the public’s benefit.” It’s the product of years of work from engineers at Mozilla, Cisco, Akamai, Electronic Frontier Foundation, IdenTrust, and researchers at the University of Michigan. (Disclosure: I used to work for the Electronic Frontier Foundation, and I was aware of Let’s Encrypt while it was being developed.) If Let’s Encrypt works as advertised, deploying HTTPS correctly and using all of the best practices will be one of the simplest parts of running a website. All it will take is running a command. Currently, HTTPS requires jumping through a variety of complicated hoops that certificate authorities insist on in order to prove ownership of domain names. Let’s Encrypt automates this task in seconds, without requiring any human intervention, and at no cost.
  • The benefits of using HTTPS are obvious when you think about protecting secret information you send over the internet, like passwords and credit card numbers. It also helps protect information like what you search for in Google, what articles you read, what prescription medicine you take, and messages you send to colleagues, friends, and family from being monitored by hackers or authorities. But there are less obvious benefits as well. Websites that don’t use HTTPS are vulnerable to “session hijacking,” where attackers can take over your account even if they don’t know your password. When you download software without encryption, sophisticated attackers can secretly replace the download with malware that hacks your computer as soon as you try installing it.
  • The transition to a fully encrypted web won’t be immediate. After Let’s Encrypt is available to the public in 2015, each website will have to actually use it to switch over. And major web hosting companies also need to hop on board for their customers to be able to take advantage of it. If hosting companies start work now to integrate Let’s Encrypt into their services, they could offer HTTPS hosting by default at no extra cost to all their customers by the time it launches.
  •  
    Don't miss the video. And if you have a web site, urge your host service to begin preparing for Let's Encrypt. (See video on why it's good for them.) A toy sketch of the TLS connection that HTTPS rests on follows this entry.
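To ground the terminology, a minimal sketch (standard-library Python; the hostname is a placeholder) of the TLS connection underneath HTTPS, showing the certificate that a CA such as Let's Encrypt signs:

```python
import socket
import ssl

hostname = "example.com"  # placeholder

# Open a TLS connection the way a browser would before any HTTP is sent.
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())        # negotiated protocol, e.g. 'TLSv1.3'
        cert = tls.getpeercert()
        print(cert["subject"])      # who the certificate identifies
        print(cert["issuer"])       # the certificate authority that signed it
```

Everything sent after this handshake is encrypted, which is why a passive eavesdropper on the same network sees only, as the article puts it, useless random characters.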
Paul Merrell

Google's web app plans collide with Apple's iPhone, Safari rules - CNET - 0 views

  • Google and Apple, which already battle over mobile operating systems, are opening a new front in their fight. How that plays out may determine the future of the web. Google was born on the web, and its business reflects its origin. The company depends on the web for search and advertising revenue. So it isn't a surprise that Google sees the web as key to the future of software. Front and center are web apps, interactive websites with the same power as conventional apps that run natively on operating systems like Windows, Android, MacOS and iOS.  Apple has a different vision of the future, one that plays to its strengths. The company revolutionized mobile computing with its iPhone line. Its profits depend on those products and the millions of apps that run on them. Apple, unsurprisingly, appears less excited about developments, like web apps, that could cut into its earnings.
Paul Merrell

Common Crawl Founder Gil Elbaz Speaks About New Relationship With Amazon, Semantic Web ... - 0 views

  • The Common Crawl Foundation’s repository of openly and freely accessible web crawl data is about to go live as a Public Data Set on Amazon Web Services.
  • Elbaz’ goal in developing the repository: “You can’t access, let alone download, the Google or the Bing crawl data. So certainly we’re differentiated in being very open and transparent about what we’re crawling and actually making it available to developers,” he says. “You might ask why is it going to be revolutionary to allow many more engineers and researchers and developers and students access to this data, whereas historically you have to work for one of the big search engines…. The question is, the world has the largest-ever corpus of knowledge out there on the web, and is there more that one can do with it than Google and Microsoft and a handful of other search engines are already doing? And the answer is unquestionably yes. ”
  • Common Crawl’s data already is stored on Amazon’s S3 service, but now Amazon will be providing the storage space for free through the Public Data Set program. Not only does that remove from Common Crawl the storage burden and costs for hosting its crawl of 5 billion web pages – some 50 or 60 terabytes large – but it should make it easier for users to access the data, and remove the bandwidth-related costs they might incur for downloads. Users won’t have to deal with setting up accounts, being responsible for bandwidth bills incurred, and more complex authentication processes.
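A hedged sketch of touching the public data set with the third-party boto3 library; the bucket name and key prefix below are assumptions for illustration, so consult Common Crawl's own documentation for the current layout:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# The crawl is public, so anonymous (unsigned) requests are enough.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects; bucket and prefix are assumed, not authoritative.
resp = s3.list_objects_v2(Bucket="commoncrawl", Prefix="crawl-data/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because the data set is public, the unsigned configuration avoids needing any AWS credentials, which is the point of the Public Data Set arrangement described above.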
Gary Edwards

How the Web was almost won ... Tim O'Reilly 1998 | Salon - 0 views

  •  
    The Justice Department's antitrust suit and Judge Jackson's finding of fact have focused on how Microsoft used its operating system dominance to wrest control of the Web browser market from Netscape. Perhaps even more significant is the untold story of Microsoft's attempts to corner the Web server market. As someone whose company competes directly with Microsoft, (we sell a Web server called WebSite that runs on Windows NT, and we are active in promoting Perl, Linux and other open-source technologies), I've been privy to some of the not-so-small details that have guided the course of this recent history. And, it seems to me that if it weren't for the work of a small group of independent open-source software developers, the Justice Department intervention might have come too late not just for Netscape but the Web as a whole.
Gary Edwards

Wolfram Alpha is Coming -- and It Could be as Important as Google | Twine - 0 views

  • The first question was whether Wolfram Alpha could (or even should) be built using the Semantic Web in some manner, rather than (or as well as) the Mathematica engine it is currently built on. Is anything missed by not building it with the Semantic Web's languages (RDF, OWL, SPARQL, etc.)? The answer is that there is no reason one MUST use the Semantic Web stack to build something like Wolfram Alpha. In fact, in my opinion it would be far too difficult to try to explicitly represent everything Wolfram Alpha knows and can compute using OWL ontologies. It spans too wide a range of human knowledge, and giant OWL ontologies are just too difficult to build and curate.
  • However, for the internal knowledge representation and reasoning that takes place in the system, it appears Wolfram has found a pragmatic and efficient representation of his own, and I don't think he needs the Semantic Web at that level. It seems to be doing just fine without it. Wolfram Alpha is built on hand-curated knowledge and expertise. Wolfram and his team have somehow figured out a way to make that practical where all others who have tried this have failed to achieve their goals. The task is gargantuan -- there is just so much diverse knowledge in the world. Representing even a small segment of it formally turns out to be extremely difficult and time-consuming.
  • It has generally not been considered feasible for any one group to hand-curate all knowledge about every subject. This is why the Semantic Web was invented -- by enabling everyone to curate their own knowledge about their own documents and topics in parallel, in principle at least, more knowledge could be represented and shared in less time by more people -- in an interoperable manner. At least that is the vision of the Semantic Web.
  • Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for ANSWERING questions about what we as a civilization collectively know. It's the next step in the distribution of knowledge and intelligence around the world -- a new leap in the intelligence of our collective "Global Brain." And like any big next-step, Wolfram Alpha works in a new way -- it computes answers instead of just looking them up.
  •  
    A Computational Knowledge Engine for the Web: In a nutshell, Wolfram and his team have built what he calls a "computational knowledge engine" for the Web. OK, so what does that really mean? Basically it means that you can ask it factual questions and it computes answers for you. It doesn't simply return documents that (might) contain the answers, like Google does, and it isn't just a giant database of knowledge, like the Wikipedia. It doesn't simply parse natural language and then use that to retrieve documents, like Powerset, for example. Instead, Wolfram Alpha actually computes the answers to a wide range of questions -- like questions that have factual answers such as "What country is Timbuktu in?" or "How many protons are in a hydrogen atom?" or "What is the average rainfall in Seattle this month?," "What is the 300th digit of Pi?," "Where is the ISS?" or "When was GOOG worth more than $300?" Think about that for a minute. It computes the answers. Wolfram Alpha doesn't simply contain huge amounts of manually entered pairs of questions and answers, nor does it search for answers in a database of facts. Instead, it understands and then computes answers to certain kinds of questions.
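One of the sample questions above makes a handy toy test of computing an answer instead of looking it up. A sketch using the third-party mpmath Python library (an illustration only; Wolfram Alpha itself is built on Mathematica, not Python):

```python
from mpmath import mp

# "What is the 300th digit of Pi?" -- computed, not retrieved.
mp.dps = 320              # keep more significant digits than we need
digits = str(mp.pi)       # '3.1415926535...'
print(digits[2 + 299])    # the 300th digit after the decimal point
```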
Paul Merrell

It's the business processes that are bound to MSOffice - Windows' dominance stifles dem... - 0 views

  • 15 years of workgroup oriented business process automation based on the MSOffice productivity environment has had an impact. Microsoft pretty much owns the "client" in "client/server" because so many of these day-to-day business processes are bound to the MSOffice productivity environment in some way.
  • The good news is that there is a great transition underway. The world is slowly but inexorably moving from "client/server" systems to an emerging architecture one might describe as "client / WebStack-Cloud-RiA / server". The reason for the great transition is simple: the productivity advantages of putting the Web at the center of information systems and workflows are extraordinary.
  • Now the bad news. Microsoft fully understands this and has spent years preparing for a very controlled transition. They are ready. The pieces are finally falling into place for a controlled transition connecting legacy MSOffice bound business processes to the Microsoft WebStack-Cloud-RiA model (Exchange-SharePoint-SQL Server-Mesh-Silverlight).
  • ...2 more annotations...
  • Anyone with a pulse knows that the Web is the future. Yet look at how much time and effort has been spent on formats, protocols and interfaces that at best would "break" the Web and at worst are determined to refight the 1995 office desktop wars. In Massachusetts, while the war between ODF and OOXML raged, Exchange and SharePoint servers were showing up everywhere. It was as if the outcome of the desktop office format decision didn't matter to the Web future.
  • And if we don't successfully re-purpose MSOffice to the Open Web? (And for that matter, OpenOffice). The Web will break. The great transition will be directed to the MS WebStack-Cloud-RiA model. Web enhanced business processes will be entangled with proprietary formats, protocols and interfaces. The barriers to this emerging desktop-Web-device platform of business processes and systems will prove even more impenetrable than the 1995 desktop productivity environment. Linux will not penetrate the business desktop arena. And we will all wonder what it was that we were doing as this unfolded before our eyes.
Gary Edwards

How HTML 5 Is Already Changing the Web - Webmonkey - 0 views

  •  
    HTML 5 represents the biggest leap forward in web standards in almost a decade. Unlike the specifications that came before it, HTML 5 is not merely intended to present content to a web browser. Its goal is to bring the web into maturity as a full-fledged application platform - a level playing field where video, sound, images, animations, and full interactivity with your computer are all standardized. And it may be a long way off still, but elements of HTML 5 are already reshaping the way we use the web.