Open Web / Group items tagged: architecture

Gary Edwards

CPU Wars - Intel to Play Fab for an ARM Chipmaker: Understanding What the Altera Deal M...

  • Intel wants x86 to conquer all computing spaces -- including mobile -- and is trying to leverage its process lead to make that happen.  However, it's been slowed by a lack of on-die 4G cellular modems and difficulties adapting to the mobile market's low component prices.  ARM, meanwhile, wants a piece of the PC and server markets, but has received a lukewarm response from consumers due to software compatibility concerns. The disappointing sales of x86 tablets running Microsoft Corp.'s (MSFT) Windows 8 and the flop of Windows RT (ARM) products somewhat unexpectedly served to maintain the status quo, allowing neither company to gain much ground.  For Intel, its partnership with Microsoft (the historic "Wintel" combo) has damaged its mobile efforts, as Windows 8 flopped in the tablet market.  Likewise, ARM's efforts to score PC market share were stifled by the flop of Windows RT, which led OEMs to kill off ARM-based laptops and convertibles.
  • Both companies seem to have learned their lesson and are migrating away from Windows towards other platforms -- in ARM's case Chromebooks, and in Intel's case Android tablets/smartphones. But suffice it to say, ARM Holdings and Intel are still very much bitter enemies from a sales perspective.
  • III. Profit vs. Risk -- Understanding the Modern CPU Food Chain
  • Whether it's tablets or PCs, the processor is still one of the most expensive components onboard.  Aside from the discrete GPU -- if a device has one -- the CPU has the greatest earning potential for a large company like Intel because the CPU is the most complex component. Other components like the power supply or memory tend to either be lower margin or have more competitors.  The display, memory, and storage components are all sensitive to process, but see profit split between different parties (e.g. the company that makes the DRAM chips and the company that sells the stick of DRAM) and are primarily dependent on process technology. CPUs and GPUs remain the toughest products to make: it's not enough to simply have the best process; you must also have the best architecture and the best optimization of that architecture for the space you're competing in. There are essentially five points of potential profit on the processor food chain: [CPU] fabrication, [CPU] architecture design, [CPU] optimization, OEM, and OS platform. Of these, the fabrication and OS points are the most profitable (but are dependent on the number of OEM adopters).  The second most profitable niche is optimization (which again is dependent on OEM adopter market share), followed by OEM markups.  In terms of expense, fabrication and operating system design require the greatest capital investment and carry the highest risk.
  • In terms of difficulty/risk, the fabrication and operating system are the most difficult/risky points.  Hence in terms of combined risk, cost, and profitability, the ranking of which points are "best" is arguably: optimization, architecture design, OS platform, OEM, and fabrication -- with the fabrication point last largely because it's so high risk. In other words, the last thing Intel wants is to settle into a niche of playing fab for everybody else's products, as that's an unsound approach.  If you can't keep up in terms of chip design, you typically spin off your fabs and opt for a different architecture direction -- just look at Advanced Micro Devices, Inc.'s (AMD) spinoff of GlobalFoundries and its upcoming ARM product to see that.
  • IV. Top Firms' Role on That Food Chain
  • Apple has seen unbelievable profits due to this fundamental premise.  It controls the two most desirable points on the food chain -- OS and optimization -- while sharing some profit with its architecture designer (ARM Holdings) and a bit with the fabricator (Samsung Electronics Comp., Ltd. (KSC:005930)).  By choosing to play operating system maker, too, it adds to its profits, but also its risk.  Note that nearly every other first-party exclusive smartphone platform has failed or is about to fail (e.g. BlackBerry, Ltd. (TSE:BB) and the now-dead Palm).
  • Intel currently controls points 1, 2, and 5 on the food chain.  Compared to Apple, Intel's points of control offer less risk, but also slightly less profitability. Its architecture control may be at risk, but even so, it's currently the top player in its most risky/expensive point of control (fabrication), whereas Apple is much less of a clear leader in its own most risky/expensive point of control (OS development), as Android has surpassed Apple in market share.  Hence Apple might be a better short-term investment, but Intel certainly appears a better long-term investment.
  • Samsung is another top company in terms of market dominance and profit.  It occupies points 1, 3, 4, and 5 -- sometimes.  Sometimes Samsung's devices use third-party optimization firms like Qualcomm Inc. (QCOM) and NVIDIA Corp. (NVDA), which hurts profitability by removing one of the most profitable roles.  But Samsung makes up for this by being one of the largest and most successful third-party manufacturers.
  • Microsoft enjoys a lot of profit due to its OS dominance, as does Google Inc. (GOOG); but both companies are limited in controlling only one point which they monetize in different ways (Microsoft by direct sales; Google by giving away OS product for free in return for web services market share and by proxy search advertising revenue).
  • Qualcomm and NVIDIA are also quite profitable operating solely as optimizers, as is ARM Holdings, which serves as architecture maker to Qualcomm, NVIDIA, Apple, and Samsung.
  • V. Four Scenarios in the x86 vs. ARM Competition
  • Scenario one is that x86 proves dominant in the mobile space, assuming a comparable process.
  • A second scenario is that x86 and ARM are roughly tied, assuming a comparable process.
  • A third scenario is that x86 is inferior to ARM at a comparable process, but comparable or superior to ARM when the x86 chip is built using a superior process.  From the benchmarks I've seen to date, I personally believe this is most likely.
  • A fourth scenario is that x86 is so drastically inferior to ARM architecturally that a process lead by Intel can't make up for it.
  • This is perhaps the most interesting scenario to think through in terms of how Intel would react, even if it is not especially likely.  If Intel were faced with this scenario, I believe Intel would simply bite the bullet and start making ARM chips, leveraging its process lead to become the dominant ARM chipmaker.  To make up for the revenue lost to licensing fees paid to ARM Holdings, it could focus its efforts on the OS space (its Tizen Linux OS project with Samsung hints at that).  Or it could look to make up for lost revenue by expanding its production of other basic process-sensitive components (e.g. DRAM).  I think this would be Intel's best and most likely option in this scenario.
  • VI. Why Intel is Unlikely to Play Fab For ARM Chipmakers (Even if ARM is Better)
  • From Intel's point of view, there is an entrenched but declining market for x86 chips because of Windows, and Intel will continue to support Atom chips (which will be required to run Windows 8 tablets), but growth on desktops will come from 64-bit desktop/server-class non-Windows ARM devices -- Chromebooks, Android laptops, and possibly Apple's desktop products as well, given that Apple is going 64-bit ARM for its future iPhones. Even Windows has been trying (unsuccessfully) to transition to ARM. Again, the Windows server market is tied to x86, but Linux and FreeBSD servers run on ARM as well, and as a result ARM will take a chunk out of the server market when a decent 64-bit ARM server chip is available.
  •  
    Excellent article explaining the CPU war for the future of computing as Intel and ARM square off.  Intel's x86 architecture dominated the era of client/server computing, with the famed WinTel alliance monopolizing desktop, notebook, and server implementations.  But Microsoft was a no-show in the emerging mobile computing market, and now ARM is in position to transition from its mobile dominance to challenge the desktop, notebook, and server markets.  WinTel lost its shot at the mobile computing market, and now its legacy platforms are in play.  Good article, well worth the read time.
Gary Edwards

Cloud Computing White Papers by the Open Group

  •  
    Cloud Computing White Papers: The Open Group Cloud Work Group exists to create a common understanding among buyers and suppliers of how enterprises of all sizes and scales of operation can include Cloud Computing technology in a safe and secure way in their architectures to realize its significant cost, scalability, and agility benefits. It includes some of the industry's leading cloud providers and end-user organizations, collaborating on standard models and frameworks aimed at eliminating vendor lock-in for enterprises looking to benefit from Cloud products and services. The White Papers on this website form the current output of the Work Group. They are also available in PDF form from The Open Group bookstore for download and printing. Further papers will be added as the Work Group progresses. The initial focus of the Work Group is on business drivers for Cloud Computing, and this is reflected in the first items to appear: the Business Scenario Workshop Report and the White Papers "Building Return on Investment from Cloud Computing," "Strengthening your Business Case for Using Cloud," "Cloud Buyers' Decision Tree," and "Cloud Buyers' Requirements Questionnaire." Further White Papers will address other key Work Group topics, including Architecture, Infrastructure, and Security.
Gary Edwards

Will Intel let Jen-Hsun Huang spread graphics beyond PCs? » VentureBeat

  •  
    Nvidia chief executive Jen-Hsun Huang is on a mission to get graphics chips into everything from handheld computers to smart phones. He expects, for instance, that low-cost Netbooks will become the norm and that gadgets will need battery life lasting for days. Holding up an Ion platform, which couples an Intel low-cost Atom processor with an Nvidia integrated graphics chip set, he said his company is looking to determine "what is the soul of the new PC." With Ion, Huang said he is prepared for the future of the computer industry. But first, he has to deal with Intel. Good interview -- see also his interview with Charlie Rose! The Dance of the Sugarplum Documents is about the evolution of the Web document model from a text-typographical/calculation model to one that is visually rich, with graphical media streams meshing into traditional text/calc. The thing is, this visual document model is being defined on the edge. The challenge to the traditional desktop document model is coming from the edge, primarily from the WebKit - Chrome - iPhone community. Jen-Hsun argues on Charlie Rose that desktop computers featured processing power and applications designed to automate typewriter (word processing) and calculator (spreadsheet) functions. The x86 CPU design reflects this orientation. He argues that we are now entering the age of visual computing. A GPU is capable of dramatic increases in processing power because its architecture is geared to the volumes of graphical information being processed. Let the CPU do the traditional stuff, and let the GPU race into the future with the visual processing. That a GPU architecture can scale in parallel is an enormous advantage. But Jen-Hsun does not see the need to try to replicate CPU tasks in a GPU. The best way forward, in his opinion, is to combine the two!
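To make the parallel-scaling point concrete, here is a minimal Python/NumPy sketch -- purely an illustration, since NumPy runs on the CPU and merely stands in for the thousands-of-cores data parallelism a real GPU stack such as CUDA provides. It contrasts pixel-at-a-time processing with a single data-parallel operation over a whole frame, which is the shape of work GPUs are built for:

```python
# Illustrative only: contrasts scalar (CPU-style) and data-parallel
# (GPU-style) processing of an image brightness adjustment.
import numpy as np

image = np.random.rand(1080, 1920)  # a grayscale frame

# Scalar approach: visit each pixel in sequence.
brightened_loop = np.empty_like(image)
for y in range(image.shape[0]):
    for x in range(image.shape[1]):
        brightened_loop[y, x] = min(image[y, x] * 1.2, 1.0)

# Data-parallel approach: one operation over the whole array --
# the kind of work a GPU spreads across thousands of cores at once.
brightened_vec = np.minimum(image * 1.2, 1.0)

assert np.allclose(brightened_loop, brightened_vec)
```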
Paul Merrell

Testosterone Pit - Home - The Other Reason Why IBM Throws A Billion At Linux ...

  • IBM announced today that it would throw another billion at Linux, the open-source operating system, to run its Power System servers. The first time it threw a billion at Linux was in 2001, when Linux was a crazy, untested, even ludicrous proposition for the corporate world. So the moolah back then didn’t go to Linux itself, which was free, but to related technologies across hardware, software, and service, including things like sales and advertising – and into IBM’s partnership with Red Hat, which was developing its enterprise operating system, Red Hat Enterprise Linux. “It helped start a flurry of innovation that has never slowed,” said Jim Zemlin, executive director of the Linux Foundation. IBM claims that the investment would “help clients capitalize on big data and cloud computing with modern systems built to handle the new wave of applications coming to the data center in the post-PC era.” Some of the moolah will be plowed into the Power Systems Linux Center in Montpellier, France, which opened today. IBM’s first Power Systems Linux Center opened in Beijing in May. IBM may be trying to make hay of the ongoing revelations that have shown that the NSA and other intelligence organizations in the US and elsewhere have roped in American tech companies of all stripes with huge contracts to perfect a seamless spy network. They even include physical aspects of surveillance, such as license plate scanners and cameras, which are everywhere [read.... Surveillance Society: If You Drive, You Get Tracked].
  • Then another boon for IBM. Experts at the German Federal Office for Information Security (BSI) determined that Windows 8 is dangerous for data security. It allows Microsoft to control the computer remotely through a “special surveillance chip,” the wonderfully named Trusted Platform Module (TPM), and a backdoor in the software – with keys likely accessible to the NSA and possibly other third parties, such as the Chinese. Risks: “Loss of control over the operating system and the hardware” [read.... LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA].
  • It would be an enormous competitive advantage for an IBM salesperson to walk into a government or corporate IT department and sell Big Data servers that don’t run on Windows, but on Linux. With the Windows 8 debacle now in public view, IBM salespeople don’t even have to mention it. In the hope of stemming the pernicious revenue decline their employer has been suffering from, they can politely and professionally hype the security benefits of IBM’s systems and mention in passing the comforting fact that some of it would be developed in the Power Systems Linux Centers in Montpellier and Beijing. Alas, Linux too is tarnished. The backdoors are there, though the code can be inspected, unlike Windows code. And then there is Security-Enhanced Linux (SELinux), which was integrated into the Linux kernel in 2003. It provides a mechanism for supporting “access control” (a backdoor) and “security policies.” Who developed SELinux? Um, the NSA – which helpfully discloses some details on its own website (emphasis mine): The results of several previous research projects in this area have yielded a strong, flexible mandatory access control architecture called Flask. A reference implementation of this architecture was first integrated into a security-enhanced Linux® prototype system in order to demonstrate the value of flexible mandatory access controls and how such controls could be added to an operating system. The architecture has been subsequently mainstreamed into Linux and ported to several other systems, including the Solaris™ operating system, the FreeBSD® operating system, and the Darwin kernel, spawning a wide range of related work.
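For readers unfamiliar with the term, here is a conceptual Python sketch of mandatory access control, the idea behind SELinux's Flask architecture. This is not SELinux's actual policy language or enforcement engine, and the label names are illustrative; the point is that every subject and object carries a security label, and a central policy -- not the file's owner -- decides each access, with denial as the default:

```python
# Conceptual MAC sketch (illustrative labels, not a real SELinux policy).
POLICY = {
    # (subject_label, object_label): allowed operations
    ("httpd_t", "web_content_t"): {"read"},
    ("httpd_t", "httpd_log_t"): {"read", "write"},
}

def allowed(subject_label, object_label, op):
    # Central policy lookup; anything not explicitly granted is denied.
    return op in POLICY.get((subject_label, object_label), set())

print(allowed("httpd_t", "web_content_t", "read"))  # True
print(allowed("httpd_t", "shadow_t", "read"))       # False: default deny
```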
  • Among a slew of American companies who contributed to the NSA’s “mainstreaming” efforts: Red Hat. And IBM? Like just about all of our American tech heroes, it looks at the NSA and other agencies in the Intelligence Community as “the Customer” with deep pockets, ever increasing budgets, and a thirst for technology and data. Which brings us back to Windows 8 and TPM. A decade ago, a group was established to develop and promote Trusted Computing that governs how operating systems and the “special surveillance chip” TPM work together. And it too has been cooperating with the NSA. The founding members of this Trusted Computing Group, as it’s called facetiously: AMD, Cisco, Hewlett-Packard, Intel, Microsoft, and Wave Systems. Oh, I almost forgot ... and IBM. And so IBM might not escape, despite its protestations and slick sales presentations, the suspicion by foreign companies and governments alike that its Linux servers too have been compromised – like the cloud products of other American tech companies. And now, they’re going to pay a steep price for their cooperation with the NSA. Read...  NSA Pricked The “Cloud” Bubble For US Tech Companies
Paul Merrell

Hewlett-Packard Traded WebOS for This: The Autonomy Gamble

  • Content management systems today continue to be based on the types of structured database systems about one or two steps more evolved than dBASE. We've known they would be insufficient for the task, but we've put off the problem of composing a new architecture. It's already too late for major IT companies to start that new architecture from square one; if a company has any hope of addressing this colossal, underappreciated problem, it will need to acquire the architectural project in progress. This is what Hewlett-Packard announced yesterday that it intends to do: acquire a software firm whose core product aims to supplant everything we know about databases, both the SQL kind and the Google kind. In its place would come a clustered approach whose goal is no less than to be the central repository for meaning in the world.
  • As CEO Apotheker told analysts yesterday, HP intends to exploit the prospects for using Autonomy's technology as a foundation for a content management system. For now, that CMS would be a project for what, on the surface, seems an unlikely department: the Imaging and Printing Group (IPG). Autonomy describes this technology - which it calls Intelligent Data Operating Layer (IDOL) - as nothing less than a replacement for, a complete substitute for, a revolutionary disruption of, Google.
  • Elsewhere in Autonomy's literature is a monkey wrench it hurls directly at Google, with hopes of messing up its gears. Here, the company attacks the value of Google's page ranking technology in the enterprise: "in many cases, the most popular information is also the most relevant. The importance or popularity of a Web page is approximated by counting the number of other pages that are linked to it, and by how frequently those pages are viewed by other users. This works quite well on the Internet but in the enterprise it is doomed to failure. Firstly, there are no native links between information in the enterprise. Secondly, if a user happens to be an expert, perhaps in the field of gallium arsenide laser diodes, there may be no one else interested in the subject, but it is still imperative that they find relevant information." This is what HP is buying: an opportunity to disrupt Google. If IDOL is every bit the next stage of database evolution that Autonomy makes it out to be, then HP (at least in its executives' own minds) is not surrendering to Google at all, as some consumer publications this morning are suggesting. As HP perceives it, rather than cutting off Google's left arm, it's targeting the gut.
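Autonomy's critique targets link-based ranking of the kind Google popularized. For readers who haven't seen it, here is a minimal PageRank power-iteration sketch in Python (the toy graph and damping factor are illustrative, not Autonomy's or Google's actual code); it makes plain why the approach depends on inbound links -- exactly what enterprise documents lack:

```python
# Minimal PageRank power iteration over a toy three-page link graph.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until ranks stabilize
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share  # each page passes rank to pages it links to
    rank = new

print(rank)  # pages with more inbound links score higher
```

With no links between documents, every page's score collapses toward the uniform baseline, which is the failure mode Autonomy describes for the enterprise.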
Gary Edwards

Government Market Drags Microsoft Deeper into the Cloud

  •  
    Nice article from Scott M. Fulton describing Microsoft's iron-fisted lock on government desktop productivity systems and the great transition to a Cloud Productivity Platform.  Keep in mind that in 2005, Massachusetts tried to do the same thing with its SOA effort.  Then-Governor Romney put over $1M into a beta test that produced the now-infamous 300-page report written by Sam Hiser.  The details of this test resulted in the even more infamous da Vinci ODF plug-in for Microsoft Office desktops.  The lessons of Massachusetts are simple enough: it's not the formats or office suite applications, it's the business process!  Conversion of documents not only breaks the document; it also breaks the embedded "business process." The mystery here is that Microsoft owns the client side of client/server computing.  Compound documents, loaded with intertwined OLE, ODBC, ActiveX, and other embedded protocols and interface dependencies connecting data sources with work flow, are the fuel of these client/server business productivity systems.  Break a compound document and you break the business process.  Even though Massachusetts workers were wonderfully enthusiastic and supportive of an SOA-based infrastructure that would include Linux servers and desktops as well as OSS productivity applications, at the end of the day it's all about getting the work done.  Breaking the business process turned out to be a show stopper. Cloud Computing changes all that.  The reason is that the Cloud is rapidly replacing client/server as the target architecture for new productivity developments, including data centers and transaction processing systems.  There are many reasons for the great transition, but IMHO the most important is that the Web combines communications with content, data, and collaborative computing.  Anyone who ever worked with the Microsoft desktop productivity environment knows that the desktop sucks as a communication device.  There was
Gary Edwards

Office to finally fully support ODF, Open XML, and PDF formats | ZDNet

  •  
    The king of clicks returns!  No doubt there was a time when the mere mention of ODF and the now-legendary XML "document" format wars with Microsoft could drive click counts into the stratosphere.  Sorry to say, though, those times are long gone. It's still a good story, though, even if the fate of mankind and the future of the Internet no longer hinge on the outcome.  There is that question that continues to defy answer: "Did Microsoft win or lose?"  So the mere announcement of supported formats in MSOffice XX is guaranteed to rev the clicks somewhat. Veteran ODF clickmeister SVN does make an interesting observation though: "The ironic thing is that, while this was as hotly debated an issue in the mid-2000s as mobile patents and cloud implementation are today, this news was barely noticed. That's a mistake. Updegrove points out, "document interoperability and vendor neutrality matter more now than ever before as paper archives disappear and literally all of human knowledge is entrusted to electronic storage." He concluded, "Only if documents can be easily exchanged and reliably accessed on an ongoing basis will competition in the present be preserved, and the availability of knowledge down through the ages be assured. Without robust, universally adopted document formats, both of those goals will be impossible to attain." Updegrove's right of course. Don't believe me? Go into your office's archives and try to bring up documents you wrote in the 90s in WordPerfect or papers your staff created in the 80s with WordStar. If you don't want to lose your institutional memory, open document standards support is more important than ever." Sorry, but Updegrove is wrong.  Woefully wrong. The Web is the future.  Sure, interoperability matters, but only as far as the Web and the future of Cloud Computing are concerned.  Sadly, neither ODF nor Open XML is Web ready.  The language of the Web is famously HTML, now HTML5+
Gary Edwards

NoSQL Pioneers Are Driving the Web's Manifest Destiny

  •  
    Good chart comparing four types of data stores: key-value, tabular/columnar, document store, and relational. Excerpt: The bottleneck is no longer around performance or the cost of computing - it's about quickly getting the information to thousands, or hundreds of thousands, of nodes trying to act as one computer delivering a service. Google and IBM both have written about the data center as a computer, and Facebook says it thinks of adding hardware at the rack level rather than at the server level. But the current means of storing and accessing data have not made this leap from a single server to a rack - let alone an entire data center. As programmers attempt this leap, they face several difficulties, which include working with existing software and programming languages and figuring out what problems and bottlenecks the new services built on these monolithic computer platforms will encounter. Plus, the IT world doesn't all move at once, which means plenty of jobs and workloads will continue with the old way of doing things - that is, relational databases such as Oracle's offerings and the open source MySQL, which Oracle now has a stake in thanks to its purchase of Sun. The result is not a steady movement to non-relational databases or other methods of storing data, but a back-and-forth as programmers and businesses figure out what kind of architecture they need and what problems they want to solve. For a closer look at the issue and a bunch of charts detailing how the landscape is currently laid out, analyst Matt Sarrel has penned a report over at GigaOM Pro (sub. req'd.) on the NoSQL movement called "NoSQL Databases - Providing Extreme Scale and Flexibility."
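As a quick orientation to the four models the chart compares, here is a minimal Python sketch using toy in-memory structures. It is purely illustrative -- real systems in each category (e.g. Redis, Cassandra/HBase, MongoDB/CouchDB, MySQL/Oracle) add distribution, indexing, and durability on top of these basic shapes:

```python
# The same user record under each of the four storage models.

# 1. Key-value: opaque value looked up by key; fast, no query on contents.
kv_store = {"user:42": '{"name": "Ada", "city": "London"}'}

# 2. Tabular/columnar: values addressed by (row key, column); sparse rows.
columnar = {"user:42": {"profile:name": "Ada", "profile:city": "London"}}

# 3. Document store: nested, self-describing documents, queryable by field.
documents = [{"_id": 42, "name": "Ada", "address": {"city": "London"}}]

# 4. Relational: fixed schema, rows related across tables by keys.
users = [(42, "Ada")]         # (id, name)
addresses = [(42, "London")]  # (user_id, city)

# The trade-off: the first three shard naturally across many nodes,
# while the relational join below assumes the tables live together.
joined = [(uid, name, city)
          for uid, name in users
          for auid, city in addresses if auid == uid]
print(joined)  # [(42, 'Ada', 'London')]
```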
Gary Edwards

Key Google Docs changes promise faster service | Relevant Results - CNET News

  •  
    Jonathan Rochelle and Dave Girouard: Google's long-term vision of computing is based around the notion that the Web and the browser become the primary vehicles for applications, and Google Docs is an important part of realizing that vision. The main improvement was to create a common infrastructure across the Google Docs products, all of which came into Google from separate acquisitions, Rochelle said. This has paved the way for Google to offer users a chance to do character-by-character real-time editing of a document or spreadsheet, almost the same way Google Wave lets collaborators see each other's keystrokes in a Wave. Those changes have also allowed Google to take more control of the way documents are rendered and formatted in Google Docs, instead of passing the buck to the browser to make those decisions. This allows Google to ensure that documents will look the same on the desktop or in the cloud, an important consideration for designing marketing materials or reviewing architectural blueprints, for example.
Gary Edwards

Ansca Mobile's advanced mobile app development tool

  •  
    SDK - OpenGL developer tool for cross-platform development of iOS and Android apps.  Much better performance than Flash.  Check out the KWiK add-ons for Adobe Photoshop for writing visually immersive books and magazines.  Corona is a must-watch technology. Excerpts: High-performance graphics. Corona was built from the ground up for blazing-fast performance. Built on top of OpenGL, OpenAL, and Lua, Corona uses the same industry-standard architecture as top-selling mobile games from Tapulous, Electronic Arts, and ngmoco. Develop across platforms. Corona has the only complete solution for developing across platforms, OS versions, and screen sizes. You can write once and build to iOS or Android at the touch of a button, and Corona will automatically scale your content from phones to tablets.
Gary Edwards

Is Salesforce Switching AppExchange To Google Wave? | BNET Technology Blog | BNET

  •  
    Nice catch and cast from Michael Hickens. He walks us through some strange goings-on at Salesforce.com. It seems Commander Benioff has ordered the good ship Salesforce to turn on a dime, drop everything, and set a course for Wave. Good stuff: "...... Could Salesforce be reengineering its AppExchange platform to run standards-based code like HTML 5? The reason I ask is that none other than Salesforce CEO Marc Benioff listed his status on Facebook this weekend as: "working on salesforce.com's new architecture." There would have to be a very good reason, or a transformational event like Google's introduction of its Wave, for the company to change a key element of its strategy.
Paul Merrell

InfoQ: Google Wave's Architecture

  • Operational Transformation This is the crucial part of Wave’s technology. Google Wave makes extensive use of Operational Transformations (OT), which are executed on the server. When a user edits a collaborative document opened by several users, the client program provides an optimistic UI by immediately displaying what he/she types, but it also sends the editing operation to the server to be ratified, hoping that it will be accepted by the server. The client waits for the server to evaluate the operation and will cache any other operations until the server replies. After the server replies, all cached operations are sent from client to server in bulk. The server, considering operations received from other clients, will transform the operation accordingly and will inform all clients about the transformation, and the clients will update their UI accordingly. Operations are sent to the server and propagated to each client on a character-by-character basis, unless it is a bulk operation. The server is the keeper of the document and its version is considered the “correct” version. In the end, each client will be updated with the final version received from the server, which is the result of possibly many operational transformations. There are recovery means provided for communication failure or server/client crash. All XML documents exchanged between the client and the server carry a checksum for rapid identification of miscommunications.
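To show the core idea, here is a minimal Python sketch of transforming one concurrent text insert against another. The helper names are hypothetical, and this is a drastic simplification of Wave's actual OT, which also handles deletes, annotations, and XML document structure -- but it demonstrates how transformation makes two sites converge on the same result:

```python
# Each operation is (position, text). transform_insert rewrites an
# operation so it still means the same thing after another concurrent
# insert has already been applied to the document.

def transform_insert(op, applied):
    """Shift op's position if the already-applied insert landed before it."""
    pos, text = op
    applied_pos, applied_text = applied
    if applied_pos <= pos:  # '<=' doubles as a consistent tiebreak
        return (pos + len(applied_text), text)
    return op

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

# Two users edit "wave" concurrently:
doc = "wave"
client_op = (0, "Google ")  # client prepends
server_op = (4, "!")        # server appends

# Server applies its own op first, then the transformed client op...
via_server = apply_insert(apply_insert(doc, server_op),
                          transform_insert(client_op, server_op))
# ...while the client applies its op first, then the transformed server op.
via_client = apply_insert(apply_insert(doc, client_op),
                          transform_insert(server_op, client_op))

assert via_server == via_client == "Google wave!"
```

In Wave's design the server-side copy is authoritative, so in practice only the server's transformed history is broadcast, but the convergence property shown by the assertion is the same.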
Gary Edwards

Google Drops A Nuclear Bomb On Microsoft. And It's Made of Chrome.

  •  
    Introducing the Chrome OS alternative to Windows: excerpt: What Google is doing is not recreating a new kind of OS; they're creating the best way to not need one at all. So why release this new OS instead of using Android? After all, Android has already been successfully ported to netbooks. Google admits that there is some overlap there. But a key difference they don't mention is the ability to run on the x86 architecture. Android cannot do that (though there are ports); Chrome OS can and will. But more, Google wants to emphasize that Chrome OS is all about the web, whereas Android is about a lot of different things, including apps that are not standard browser-based web apps. But Chrome OS will be all about the web apps. And no doubt HTML 5 is going to be a huge part of all of this. A lot of people are still wary of running web apps when their computer isn't connected to the web. But HTML 5 has the potential to change that, as you'll be able to work in the browser even when not connected, and upload when you are again.
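The offline-then-sync idea is simple enough to sketch. Here is a generic Python illustration with hypothetical class and method names -- a real HTML5 app would queue edits in localStorage or IndexedDB and flush them over the network when connectivity returns:

```python
# Generic sketch of the offline-first pattern HTML5 enables in browsers.
import json

class OfflineClient:
    def __init__(self):
        self.connected = False
        self.pending = []  # edits made while offline

    def edit(self, change):
        if self.connected:
            self.upload([change])
        else:
            self.pending.append(change)  # keep working locally

    def go_online(self):
        self.connected = True
        if self.pending:
            self.upload(self.pending)    # sync everything on reconnect
            self.pending = []

    def upload(self, changes):
        print("uploading:", json.dumps(changes))

client = OfflineClient()
client.edit({"doc": "notes", "append": "written on the plane"})
client.go_online()  # uploading: [{"doc": "notes", "append": "..."}]
```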
Gary Edwards

The Advantage of Cloud Infrastructure: Servers are Software - ReadWriteCloud

  •  
    Excellent discussion and capture of the importance of Cloud computing!  Guest author Joe Masters Emison, VP of research and development at BuildFax, writes for ReadWriteWeb: excerpt: More and more companies are moving from traditional servers to virtual servers in the cloud, and many new service-based deployments are starting in the cloud. However, despite the overwhelming popularity of the cloud here, deployments in the cloud look a lot like deployments on traditional servers. Companies are not changing their systems architecture to take advantage of some of the unique aspects of being in the cloud. The key difference between remotely-hosted, virtualized, on-demand-by-API servers (the definition of the "cloud" for this post) and any other hardware-based deployment (e.g., dedicated, co-located, or not-on-demand-by-API virtualized servers) is that servers are software on the cloud. Software applications traditionally differ from server environments in several key ways: traditional servers require humans and hours -- if not days -- to launch, while software launches automatically and on demand in seconds or minutes; traditional servers are physically limited -- companies have a finite number available to them -- while software, as a virtual/information resource, has no such physical limitation; traditional servers are designed to serve many functions (often because of the above-mentioned physical limitations), while software is generally designed to serve a single function; traditional servers are not designed to be discarded, while software is built around the idea that it runs ephemerally and can be terminated at any moment. On the cloud, these differences can disappear.
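The "servers are software" point is easiest to see in code. Here is a minimal sketch using Python and boto3, AWS's SDK, shown purely as an illustration of the on-demand-by-API idea; the AMI ID is a placeholder and the call assumes configured AWS credentials:

```python
# Illustrative sketch: a server as an on-demand software object.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Launching a server" is one API call that returns in seconds...
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ...and the server is ephemeral: discard it when the job is done.
ec2.terminate_instances(InstanceIds=[instance_id])
```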
Paul Merrell

Why the Sony hack is unlikely to be the work of North Korea. | Marc's Security Ramblings

  • Everyone seems to be eager to pin the blame for the Sony hack on North Korea. However, I think it’s unlikely. Here’s why: 1. The broken English looks deliberately bad and doesn’t exhibit any of the classic comprehension mistakes you actually expect to see in “Konglish”, i.e. it reads to me like an English speaker pretending to be bad at writing English. 2. The fact that the code was written on a PC with Korean locale & language actually makes it less likely to be North Korea. Not least because they don’t speak traditional “Korean” in North Korea, they speak their own dialect and traditional Korean is forbidden. This is one of the key things that has made communication with North Korean refugees difficult. I would find the presence of Chinese far more plausible.
  • 3. It’s clear from the hard-coded paths and passwords in the malware that whoever wrote it had extensive knowledge of Sony’s internal architecture and access to key passwords. While it’s plausible that an attacker could have built up this knowledge over time and then used it to make the malware, Occam’s razor suggests the simpler explanation of an insider. It also fits with the pure revenge tack this started out as. 4. Whoever did this is in it for revenge. The info and access they had could have easily been used to cash out, yet, instead, they are making every effort to burn Sony down. Just think what they could have done with passwords to all of Sony’s financial accounts? With the competitive intelligence in their business documents? From simple theft, to the sale of intellectual property, or even extortion – the attackers had many ways to become rich. Yet, instead, they chose to dump the data, rendering it useless. Likewise, I find it hard to believe that a “Nation State” which lives by propaganda would be so willing to just throw away such an unprecedented level of access to the beating heart of Hollywood itself.
  • 5. The attackers only latched onto “The Interview” after the media did – the film was never mentioned by GOP right at the start of their campaign. It was only after a few people started speculating in the media that this and the communication from DPRK “might be linked” that suddenly it became linked. I think the attackers both saw this as an opportunity for “lulz” and as a way to misdirect everyone into thinking it was a nation state. After all, if everyone believes it’s a nation state, then the criminal investigation will likely die.
  • 6. Whoever is doing this is VERY net and social media savvy. That, and the sophistication of the operation, do not match the profile of DPRK up until now. Grugq did an excellent analysis of this aspect; his findings are here – http://0paste.com/6875#md 7. Finally, blaming North Korea is the easy way out for a number of folks, including the security vendors and Sony management who are under the microscope for this. Let’s face it – most of today’s so-called “cutting edge” security defenses are either so specific, or so brittle, that they really don’t offer much meaningful protection against a sophisticated attacker or group of attackers.
  • 8. It probably also suits a number of political agendas to have something that justifies sabre-rattling at North Korea, which is why I’m not that surprised to see politicians starting to point their fingers at the DPRK also. 9. It’s clear from the leaked data that Sony has a culture which doesn’t take security very seriously. From plaintext password files, to using “password” as the password in business-critical certificates, through to just the sheer volume of aging unclassified yet highly sensitive data left out in the open. This isn’t a simple slip-up or a “weak link in the chain” – this is a serious organization-wide failure to implement anything like a reasonable security architecture.
  • The reality is, as things stand, Sony has little choice but to burn everything down and start again. Every password, every key, every certificate is tainted now and that’s a terrifying place for an organization to find itself. This hack should be used as the definitive lesson in why security matters and just how bad things can get if you don’t take it seriously. 10. Who do I think is behind this? My money is on a disgruntled (possibly ex) employee of Sony.
  • EDIT: This appears (at least in part) to be substantiated by a conversation the Verge had with one of the alleged hackers – http://www.theverge.com/2014/11/25/7281097/sony-pictures-hackers-say-they-want-equality-worked-with-staff-to-break-in Finally for an EXCELLENT blow by blow analysis of the breach and the events that followed, read the following post by my friends from Risk Based Security – https://www.riskbasedsecurity.com/2014/12/a-breakdown-and-analysis-of-the-december-2014-sony-hack EDIT: Also make sure you read my good friend Krypt3ia’s post on the hack – http://krypt3ia.wordpress.com/2014/12/18/sony-hack-winners-and-losers/
  •  
    Seems that the FBI overlooked a few clues before it told Obama to go ahead and declare war against North Korea. 
Paul Merrell

iTWire - Huawei claims 30Gbps wireless "beyond LTE"

  • Huawei says it has "recently introduced...Beyond LTE technology, which significantly increases peak rates to 30Gbps - over 20 times faster than existing commercial LTE networks." It claims to have achieved this with "key breakthroughs in antenna structure, radio frequency architecture, IF (intermediate frequency) algorithms, and multi-user MIMO (multi-input multi-output)."
Paul Merrell

The US is Losing Control of the Internet…Oh, Really? | Global Research

  • All of the major internet organisations have pledged, at a summit in Uruguay, to free themselves of the influence of the US government. The directors of ICANN, the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society and all five of the regional Internet address registries have vowed to break their associations with the US government. In a statement, the group called for “accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing”. That’s a distinct change from the current situation, where the US department of commerce has oversight of ICANN. In another part of the statement, the group “expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance”. Meanwhile, it was announced that the next Internet Governance Summit would be held in Brazil, whose president has been extremely critical of the US over web surveillance. In a statement announcing the location of the summit, Brazilian president Dilma Rousseff said: “The United States and its allies must urgently end their spying activities once and for all.”
Paul Merrell

Spies and internet giants are in the same business: surveillance. But we can stop them ...

  • On Tuesday, the European court of justice, Europe’s supreme court, lobbed a grenade into the cosy, quasi-monopolistic world of the giant American internet companies. It did so by declaring invalid a decision made by the European commission in 2000 that US companies complying with its “safe harbour privacy principles” would be allowed to transfer personal data from the EU to the US. This judgment may not strike you as a big deal. You may also think that it has nothing to do with you. Wrong on both counts, but to see why, some background might be useful. The key thing to understand is that European and American views about the protection of personal data are radically different. We Europeans are very hot on it, whereas our American friends are – how shall I put it? – more relaxed.
  • Given that personal data constitutes the fuel on which internet companies such as Google and Facebook run, this meant that their exponential growth in the US market was greatly facilitated by that country’s tolerant data-protection laws. Once these companies embarked on global expansion, however, things got stickier. It was clear that the exploitation of personal data that is the core business of these outfits would be more difficult in Europe, especially given that their cloud-computing architectures involved constantly shuttling their users’ data between server farms in different parts of the world. Since Europe is a big market and millions of its citizens wished to use Facebook et al, the European commission obligingly came up with the “safe harbour” idea, which allowed companies complying with its seven principles to process the personal data of European citizens. The circle having been thus neatly squared, Facebook and friends continued merrily on their progress towards world domination. But then in the summer of 2013, Edward Snowden broke cover and revealed what really goes on in the mysterious world of cloud computing. At which point, an Austrian Facebook user, one Maximilian Schrems, realising that some or all of the data he had entrusted to Facebook was being transferred from its Irish subsidiary to servers in the United States, lodged a complaint with the Irish data protection commissioner. Schrems argued that, in the light of the Snowden revelations, the law and practice of the United States did not offer sufficient protection against surveillance of the data transferred to that country by the government.
  • The Irish data commissioner rejected the complaint on the grounds that the European commission’s safe harbour decision meant that the US ensured an adequate level of protection of Schrems’s personal data. Schrems disagreed, the case went to the Irish high court and thence to the European court of justice. On Tuesday, the court decided that the safe harbour agreement was invalid. At which point the balloon went up. “This is,” writes Professor Lorna Woods, an expert on these matters, “a judgment with very far-reaching implications, not just for governments but for companies the business model of which is based on data flows. It reiterates the significance of data protection as a human right and underlines that protection must be at a high level.”
  • ...2 more annotations...
  • This is classic lawyerly understatement. My hunch is that if you were to visit the legal departments of many internet companies today you would find people changing their underpants at regular intervals. For the big names of the search and social media worlds this is a nightmare scenario. For those of us who take a more detached view of their activities, however, it is an encouraging development. For one thing, it provides yet another confirmation of the sterling service that Snowden has rendered to civil society. His revelations have prompted a wide-ranging reassessment of where our dependence on networking technology has taken us and stimulated some long-overdue thinking about how we might reassert some measure of democratic control over that technology. Snowden has forced us into having conversations that we needed to have. Although his revelations are primarily about government surveillance, they also indirectly highlight the symbiotic relationship between the US National Security Agency and Britain’s GCHQ on the one hand and the giant internet companies on the other. For, in the end, both the intelligence agencies and the tech companies are in the same business, namely surveillance.
  • And both groups, oddly enough, provide the same kind of justification for what they do: that their surveillance is both necessary (for national security in the case of governments, for economic viability in the case of the companies) and conducted within the law. We need to test both justifications and the great thing about the European court of justice judgment is that it starts us off on that conversation.
Paul Merrell

Red Hat's CEO: Clouds can become the mother of all lock-ins | Cloud Computing - InfoWorld

  • Cloud architecture has to be defined in a way that allows applications to move around, or clouds can become the mother of all lock-ins, warned Red Hat's CEO James Whitehurst. Once users get stuck in something, it's hard for them to move, Whitehurst said in an interview. The industry has to get in front of the cloud computing wave and make sure this next generation infrastructure is defined in a way that's friendly to customers, rather than to IT vendors, according to Whitehurst.
  • The cloud certification program was announced last year, and Amazon Web Services was the first cloud provider to get certified. Since then, NTT and IBM have been added to the list of certified partners and more are on the way, according to Whitehurst.
  • To be able to move a workload from a data center to a cloud or between two clouds, a connecting API (application programming interface) is needed, and there are a plethora of different ones being developed. Fewer would be better, according to Whitehurst. However, the real challenge isn't the API, but ensuring that the application will run with the same performance when it has been moved.
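Whitehurst's portability concern is essentially an interface-design problem. Here is a minimal Python sketch of the kind of thin connecting layer he is describing -- the provider names and methods are hypothetical, not any real cloud SDK -- in which the application only ever talks to a neutral interface, so moving a workload means swapping the provider object, not rewriting the application. As the article notes, the API is the easy half; matching performance after the move is the hard part this sketch deliberately leaves out:

```python
# Hypothetical sketch of a provider-neutral layer against cloud lock-in.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def launch(self, image: str) -> str:
        """Start a workload from an image; return an instance handle."""

class AcmeCloud(CloudProvider):      # placeholder provider A
    def launch(self, image: str) -> str:
        return f"acme-instance-running-{image}"

class ExampleCloud(CloudProvider):   # placeholder provider B
    def launch(self, image: str) -> str:
        return f"example-instance-running-{image}"

def deploy(provider: CloudProvider, image: str) -> str:
    # Application code is identical whichever cloud is underneath.
    return provider.launch(image)

print(deploy(AcmeCloud(), "billing-app"))
print(deploy(ExampleCloud(), "billing-app"))
```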
Paul Merrell

Here it comes: 'Super WiFi'

  • Microsoft, Google and other tech companies won a key victory in Washington, D.C., today as the Federal Communications Commission moved to open up vacant spectrum between television channels for unlicensed use by wireless devices -- a development expected to lead to a powerful new form of wireless Internet access.
  • White spaces Internet is often called “wifi on steroids” -- working in much the same way as wifi but with a potential range of multiple miles, requiring fewer access points and offering the ability to better penetrate obstructions such as walls.