Future of the Web: Group items matching "architecture" in title, tags, annotations or URL

Gary Edwards

The real reason Google is making Chrome | Computerworld Blogs - 0 views

  •  
    Good analysis by Stephen Vaughan-Nichols. He gets it right. Sort of. Stephen believes that Chrome is designed to kill MSOffice. Maybe, but I think it's way too late for that. IMHO, Chrome is designed to keep Google and the Open Web in the game. A game that Microsoft is likely to run away with. Microsoft has built an easy-to-use transition bridge from the MSOffice desktop-centric "client/server" computing model to a Web-centric but proprietary RIA-WebStack-Cloud model. In short, there is an ongoing great transition of traditional client/server apps to an emerging model we might call the client/WebStack-Cloud-RIA/server computing model. As the world shifts from a Web document model to one driven by Web applications, there is, I believe, a complementary shift toward the advantage Microsoft holds via the desktop "client/server" monopoly. For Microsoft, this is just a transition. Painful from a monopolist's profitability viewpoint - but unavoidably necessary. The transition is no doubt helped by the OOXML <> XAML "fixed/flow" Silverlight-ready conversion component. MS also has a WebStack-Cloud (Mesh) story that has become an unstoppable juggernaut (Exchange/SharePoint/SQL Server as the WebStack). WebKit-based RIA challengers like Adobe Apollo, Google Chrome, and Apple SproutCore-Cocoa have to figure out how to crack into the great transition. MS has succeeded in protecting their MSOffice monopoly until such time as they had all the transition pieces in place. They have a decided advantage here. It's also painfully obvious that while the WebKit guys have incredible innovation on their side, they are still years behind the complete desktop to WebStack-RIA-Cloud to device to legacy servers application story Microsoft is now selling into the marketplace. They are also seriously lacking in developer tools. Still, the future of the Open Web hangs in the balance. Rather than trying to kill MSOffice, I would think a better approach would be that of trying to
  •  
    There are five reasons why Google is doing this, and, if you read the comic book closely - yes, I'm serious - and you know technology you can see the reasons for yourself. These, in turn, lead to what I think is Google's real goal for Chrome.
  •  
    I'm still keeping the door open on a suspicion that Microsoft may have planned to end the life of MS Office after the new fortress on the server side is ready. The code base is simply too brittle to have a competitive future in the feature wars. I can't get past my belief that if Microsoft saw any future in the traditional client-side office suite, it would have been building a new one a decade ago. Too many serious bugs are buried too deeply in spaghetti code to fix; it's far easier to rebuild from the ground up. Word dates to 1984, Excel to 1985, PowerPoint to 1987. All were developed for the Mac and ported years later to Windows. At least Word is still running a deeply flawed 16-bit page layout engine. E.g., page breaks across subdocuments have been broken since Word 1.0. It is technology designed to replace, yet still largely defined by, its predecessor: the IBM Correcting Selectric electro-mechanical typewriter. Mid-80s stand-alone, non-networked computer technology in the World Wide Web era? Where's the future in software architecture developed two decades ago, before the Connected World? I suspect Office's end is near. Microsoft's problem is migrating their locked-in customers to the new fortress on the server side. The bridge is OOXML. In other words, Google doesn't have to kill Office; Microsoft will do that itself. Giving the old cash cow a face lift and a fresh coat of lipstick? That's the surest sign that the old cow's owner is keeping a close eye on prices in the commodity hamburger market while squeezing out the last few buckets of milk.
Paul Merrell

IDABC - Revision of the EIF and AG - 0 views

  • In 2006, the European Commission started the revision of the European Interoperability Framework (EIF) and the Architecture Guidelines (AG).
  • The European Commission has started drafting the EIF v2.0 in close cooperation with the concerned Commission services and with the Member States, as well as with the Candidate Countries and EEA Countries as observers.
  • A draft document from which the final EIF v2.0 will be elaborated was available for external comments until 22 September. The proposal for the new EIF v2.0 that has been subject to consultation is available for download (3,508 KB).
  •  
    This planning document forms the basis for the forthcoming work to develop European Interoperability Framework v. 2.0. It is the overview of things to come, so to speak. Well worth the read to see how SOA concepts are evolving at the bleeding edge. But also noteworthy for the faceted expansion in the definition of "interoperability," which now includes: [i] political context; [ii] legal interop; [iii] organizational interop; [iv] semantic interop; and [v] technical interop. A lot of people talk the interop talk; this is a document from people who are walking the interop walk, striving to bring order out of the chaos of incompatible ICT systems across the E.U.
  •  
    Full disclosure: I submitted detailed comments on the draft of the subject document on behalf of the Universal Interoperability Council. One theme of my comments was embraced in this document: the document recognizes human-machine interactions as a facet of interoperability, moving accessibility and usability from sideshow treatment in the draft to part of the technical interop dimension of the plan.
Matteo Spreafico

Google vNext - 0 views

  • For the last several months, a large team of Googlers has been working on a secret project: a next-generation architecture for Google's web search.
  • The new infrastructure sits "under the hood" of Google's search engine, which means that most users won't notice a difference in search results. But web developers and power searchers might notice a few differences, so we're opening up a web developer preview to collect feedback.
  • We invite you to visit the web developer preview of Google's new infrastructure at http://www2.sandbox.google.com/ and try searches there.
Paul Merrell

LEAKED: Secret Negotiations to Let Big Brother Go Global | Wolf Street - 0 views

  • Much has been written, at least in the alternative media, about the Trans-Pacific Partnership (TPP) and the Transatlantic Trade and Investment Partnership (TTIP), two multilateral trade treaties being negotiated between the representatives of dozens of national governments and armies of corporate lawyers and lobbyists (on which you can read more here, here and here). However, much less is known about the decidedly more secretive Trade in Services Agreement (TiSA), which involves more countries than either of the other two. At least until now, that is. Thanks to a leaked document jointly published by the Associated Whistleblowing Press and Filtrala, the potential ramifications of the treaty being hashed out behind hermetically sealed doors in Geneva are finally seeping out into the public arena.
  • If signed, the treaty would affect all services, ranging from electronic transactions and data flows to veterinary and architecture services. It would almost certainly open the floodgates to the final wave of privatization of public services, including the provision of healthcare, education and water. Meanwhile, already privatized companies would be barred from re-transfer to the public sector by a so-called “ratchet clause” – even if the privatization failed. More worrisome still, the proposal stipulates that no participating state can stop the use, storage and exchange of personal data relating to their territorial base. Here’s more from Rosa Pavanelli, general secretary of Public Services International (PSI):
  • “The leaked documents confirm our worst fears that TiSA is being used to further the interests of some of the largest corporations on earth (…) Negotiation of unrestricted data movement, internet neutrality and how electronic signatures can be used strike at the heart of individuals’ rights. Governments must come clean about what they are negotiating in these secret trade deals.” Fat chance of that, especially in light of the fact that the text is designed to be almost impossible to repeal, and is to be “considered confidential” for five years after being signed. What that effectively means is that the U.S. approach to data protection (read: virtually non-existent) could very soon become the norm across 50 countries spanning the breadth and depth of the industrial world.
  • The main players in the top-secret negotiations are the United States and all 28 members of the European Union. However, the broad scope of the treaty also includes Australia, Canada, Chile, Colombia, Costa Rica, Hong Kong, Iceland, Israel, Japan, Liechtenstein, Mexico, New Zealand, Norway, Pakistan, Panama, Paraguay, Peru, South Korea, Switzerland, Taiwan and Turkey. Combined they represent almost 70 percent of all trade in services worldwide. An explicit goal of the TiSA negotiations is to overcome the exceptions in GATS that protect certain non-tariff trade barriers, such as data protection. For example, the draft Financial Services Annex of TiSA, published by Wikileaks in June 2014, would allow financial institutions, such as banks, the free transfer of data, including personal data, from one country to another. As Ralf Bendrath, a senior policy advisor to the MEP Jan Philipp Albrecht, writes in State Watch, this would constitute a radical carve-out from current European data protection rules:
Paul Merrell

The Digital Hunt for Duqu, a Dangerous and Cunning U.S.-Israeli Spy Virus - The Intercept - 1 views

  • “Is this related to what we talked about before?” Bencsáth said, referring to a previous discussion they’d had about testing new services the company planned to offer customers. “No, something else,” Bartos said. “Can you come now? It’s important. But don’t tell anyone where you’re going.” Bencsáth wolfed down the rest of his lunch and told his colleagues in the lab that he had a “red alert” and had to go. “Don’t ask,” he said as he ran out the door. A while later, he was at Bartos’ office, where a triage team had been assembled to address the problem they wanted to discuss. “We think we’ve been hacked,” Bartos said.
  • They found a suspicious file on a developer’s machine that had been created late at night when no one was working. The file was encrypted and compressed so they had no idea what was inside, but they suspected it was data the attackers had copied from the machine and planned to retrieve later. A search of the company’s network found a few more machines that had been infected as well. The triage team felt confident they had contained the attack but wanted Bencsáth’s help determining how the intruders had broken in and what they were after. The company had all the right protections in place—firewalls, antivirus, intrusion-detection and -prevention systems—and still the attackers got in.
  • Bencsáth was a teacher, not a malware hunter, and had never done such forensic work before. At the CrySyS Lab, where he was one of four advisers working with a handful of grad students, he did academic research for the European Union and occasional hands-on consulting work for other clients, but the latter was mostly run-of-the-mill cleanup work—mopping up and restoring systems after random virus infections. He’d never investigated a targeted hack before, let alone one that was still live, and was thrilled to have the chance. The only catch was, he couldn’t tell anyone what he was doing. Bartos’ company depended on the trust of customers, and if word got out that the company had been hacked, they could lose clients. The triage team had taken mirror images of the infected hard drives, so they and Bencsáth spent the rest of the afternoon poring over the copies in search of anything suspicious. By the end of the day, they’d found what they were looking for—an “infostealer” string of code that was designed to record passwords and other keystrokes on infected machines, as well as steal documents and take screenshots. It also catalogued any devices or systems that were connected to the machines so the attackers could build a blueprint of the company’s network architecture. The malware didn’t immediately siphon the stolen data from infected machines but instead stored it in a temporary file. The file grew fatter each time the infostealer sucked up data, until at some point the attackers would reach out to the machine, from a server in India that served as a command-and-control node for the malware, to retrieve it.
  • Bencsáth took the mirror images and the company’s system logs with him, after they had been scrubbed of any sensitive customer data, and over the next few days scoured them for more malicious files, all the while being coy to his colleagues back at the lab about what he was doing. The triage team worked in parallel, and after several more days they had uncovered three additional suspicious files. When Bencsáth examined one of them—a kernel-mode driver, a program that helps the computer communicate with devices such as printers—his heart quickened. It was signed with a valid digital certificate from a company in Taiwan (digital certificates are documents ensuring that a piece of software is legitimate). Wait a minute, he thought. Stuxnet—the cyberweapon that was unleashed on Iran’s uranium-enrichment program—also used a driver that was signed with a certificate from a company in Taiwan. That one came from RealTek Semiconductor, but this certificate belonged to a different company, C-Media Electronics. The driver had been signed with the certificate in August 2009, around the same time Stuxnet had been unleashed on machines in Iran.
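
An aside on the signed-driver detail above: an embedded Authenticode signature lives in a PE file's "security" data directory as a PKCS#7 blob that names the signing certificate. The Python sketch below shows one way an analyst might pull that blob out of a suspect driver and list the signer's subject. It assumes the third-party pefile and cryptography packages, uses a hypothetical file name (suspect_driver.sys), and illustrates the general technique only; it is not the CrySyS Lab's actual tooling, and it lists certificates without verifying the signature chain.

```python
# Minimal sketch: extract the embedded Authenticode blob from a PE file
# (e.g., a Windows kernel-mode driver) and print the certificates it carries.
# Assumes third-party packages `pefile` and `cryptography`.
# "suspect_driver.sys" is a placeholder file name, not from the article.
import pefile
from cryptography.hazmat.primitives.serialization import pkcs7


def list_signing_certs(path: str) -> None:
    pe = pefile.PE(path)
    # The "security" data directory points at the WIN_CERTIFICATE structure.
    # Note: its VirtualAddress is a raw file offset, not an RVA.
    sec_dir = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
        pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]
    ]
    if sec_dir.VirtualAddress == 0 or sec_dir.Size == 0:
        print("No embedded Authenticode signature found.")
        return
    blob = pe.__data__[sec_dir.VirtualAddress : sec_dir.VirtualAddress + sec_dir.Size]
    # Skip the 8-byte WIN_CERTIFICATE header (dwLength, wRevision,
    # wCertificateType) to reach the DER-encoded PKCS#7 SignedData.
    pkcs7_der = bytes(blob[8:])
    for cert in pkcs7.load_der_pkcs7_certificates(pkcs7_der):
        print(cert.subject.rfc4514_string())


if __name__ == "__main__":
    list_signing_certs("suspect_driver.sys")
```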