
Home/ Open Web/ Group items tagged Designing-Services


Global Web Solution

Website Designing Service: We have over 8 years of experience in website designing and ... - 0 views

  •  
    Global Web Solution offers the best website designing services because we have an expert team of website designers, web developers, and Internet marketing specialists, all ready to follow your website from conception to completion.
Gary Edwards

Does It Matter Who Wins the Browser Wars? Only if you care about the Future of the Open... - 1 views

  •  
    The Future of the Open Web. You're right that the browser wars do not matter, except for this point of demarcation: browsers that support HTML+ and browsers that support 1998 HTML. Extensive comment by ~ge~.

    Not all Web services and applications support HTML+, the rapidly advancing set of technologies that includes HTML5, CSS3, SVG/Canvas, and JavaScript (including the libraries and JSON). Microsoft has chosen to draw the Open Web line at what amounts to a 1998-2001 level of HTML/CSS. Above that line, it provisions a rich-client / rich-server Web model bound to the .NET-WPF platform, where C#, Silverlight, and XAML are very prominent. Noticeably, Open Web standards are for the most part replaced at this richer MS-Web level by proprietary technologies. Through limited support for HTML/CSS, IE8 itself acts to dumb down the Open Web.

    The effect of this is that business systems and day-to-day workflow processes bound to the ubiquitous and very "rich" MSOffice productivity environment have little choice, when it comes to transitioning to the Web, but to stay on the Microsoft 2010 treadmill. Sure, at some point legacy business processes and systems will be rewritten to the Web. The question is: will it be the Open Web or the MS-Web? The Open Web standards are the dividing line between owning your information and content, or having that content bound to a Web platform comprised of proprietary Microsoft services, systems, and applications.

    Web designers and developers are still caught up in the browser wars. They worry incessantly about how to dumb down Web content and services to meet the limited functionality of IE. This sucks. So everyone continues to watch "the browser wars" stats. What they are really watching for, though, is that magic moment where "combined" HTML+ browser uptake in market share signals that they can start to implement highly graphical and collaboratively interactive HTML+-specific content. Meanwhile, the greater Web is a
pranetorweb

UX Designing Services in Hyderabad - 0 views


started by pranetorweb on 08 Jul 16; no follow-up yet
Paul Merrell

Reset The Net - Privacy Pack - 0 views

  • This June 5th, I pledge to take strong steps to protect my freedom from government mass surveillance. I expect the services I use to do the same.
  • Fight for the Future and Center for Rights will contact you about future campaigns. Privacy Policy
  •  
    I wound up joining this campaign at the urging of the ACLU after checking the Privacy Policy. The Reset the Net campaign seems to be endorsed by a lot of change-oriented groups, from the ACLU to Greenpeace to the Pirate Party. A fair number of groups with a Progressive agenda, but certainly not limited to them. The right answer to that situation is to urge other groups to endorse, not to avoid the campaign. Single-issue coalition-building is all about focusing on an area of agreement rather than worrying about who you are rubbing elbows with.

    I have been looking for a bipartisan group that's tackling government surveillance issues via mass actions but has no corporate sponsors. This might be the one. The reason: corporate types like Google have no incentive to really butt heads with the government voyeurs. They are themselves engaged in massive surveillance of their users and certainly will not carry the battle for digital privacy over to the private sector. But this *is* a battle over digital privacy, and legally defining user privacy rights in the private sector is just as important as cutting back on government surveillance. As we have learned through the Snowden disclosures, what the private internet companies have, the NSA can and does get.

    The big internet services successfully pushed in the U.S. for authorization to publish more numbers about how many times they pass private data to the government, but went no farther. They wanted to be able to say they did something, but there's a revolving door of staffers between the NSA and the big internet companies, and the internet service companies' data is an open book to the NSA. The big internet services are not champions of their users' privacy. If they were, they would be featuring end-to-end encryption with encryption keys unique to each user and unknown to the companies, like some startups in Europe are doing. E.g., the Wuala.com file-sync service in Switzerland (first 5 GB of storage free). Compare tha
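The annotation's point about end-to-end encryption, that the key should be unique to each user and unknown to the provider, can be sketched in a few lines. This is a toy illustration only, NOT real cryptography (a real service would use a vetted library such as libsodium/NaCl): a key is derived on the user's device from a passphrase, and the server only ever receives ciphertext.

```python
import hashlib
import os

# Toy illustration only -- NOT secure crypto. Shows the *shape* of
# end-to-end encryption: the key never leaves the user's device.

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Derived client-side; the service never sees the passphrase or key.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct  # only (nonce, ciphertext) is uploaded to the server

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
nonce, ct = encrypt(key, b"my private file")
assert decrypt(key, nonce, ct) == b"my private file"
```

A provider holding only `(nonce, ct)` cannot read the file, which is exactly the property the annotation says the big services decline to offer.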
Gary Edwards

The enterprise implications of Google Wave | Enterprise Web 2.0 | ZDNet.com - 0 views

  •  
    Dion Hinchcliffe has an excellent article casting Google Wave as an enterprise game-changer. He walks through Wave first, and then through some important enterprise features: "...to fully understand Google Wave, one should appreciate the separation of concerns between the product Google is offering and the protocols and technologies behind it, which are open to the Web community. Google Wave has three layers: the product, the platform, and the protocol.

    The Google Wave product (available as a developer preview) is the web application people will use to access and edit waves. It's an HTML5 app, built on Google Web Toolkit. It includes a rich text editor and other functions like desktop drag-and-drop (which, for example, lets you drag a set of photos right into a wave).

    Google Wave can also be considered a platform with a rich set of open APIs that allow developers to embed waves in other web services, and to build new extensions that work inside waves.

    The Google Wave protocol is the underlying format for storing waves and the means of sharing them, and includes the "live" concurrency control, which allows edits to be reflected instantly across users and services. The protocol is designed for open federation, such that anyone's Wave services can interoperate with each other and with the Google Wave service. To encourage adoption of the protocol, we intend to open source the code behind Google Wave."
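The "live" concurrency control described above is based on operational transformation: each site applies its own edit immediately, then transforms the remote edit against it so both sites converge. A minimal sketch of the idea for two concurrent insertions (an illustration of the concept, not actual Wave protocol code):

```python
# Minimal operational-transform sketch for concurrent text inserts.
# Real systems also need a tie-break (e.g. by site ID) when two
# inserts land at the same position; no tie occurs in this example.

def apply_op(doc: str, op: tuple) -> str:
    pos, text = op
    return doc[:pos] + text + doc[pos:]

def transform(op_a: tuple, op_b: tuple) -> tuple:
    """Rewrite op_a so it can be applied after op_b has been applied."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_a >= pos_b:           # b's insert shifted a's position right
        pos_a += len(text_b)
    return (pos_a, text_a)

doc = "hello world"
a = (5, ",")    # user A inserts "," after "hello"
b = (11, "!")   # user B concurrently appends "!"

# Site A applies a then the transformed b; site B does the reverse.
site_a = apply_op(apply_op(doc, a), transform(b, a))
site_b = apply_op(apply_op(doc, b), transform(a, b))
assert site_a == site_b == "hello, world!"
```

Both replicas end up identical without locking, which is what lets edits "be reflected instantly across users and services."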
Paul Merrell

The FCC is about to kill the free Internet | PandoDaily - 0 views

  • The Federal Communications Commission is poised to ruin the free Internet on a technicality. The commission is expected to introduce new net neutrality laws that would allow companies to pay for better access to consumers through deals similar to the one struck by Netflix and Comcast earlier this year. The argument is that those deals don’t technically fall under the net neutrality umbrella, so these new rules won’t apply to them even though they directly affect the Internet. At least the commission is being upfront about its indifference to protecting the free Internet.
  • The Verge notes that the proposed rules will offer some protections to consumers: The Federal Communications Commission’s proposal for new net neutrality rules will allow internet service providers to charge companies for preferential treatment, effectively undermining the concept of net neutrality, according to The Wall Street Journal. The rules will reportedly allow providers to charge for preferential treatment so long as they offer that treatment to all interested parties on “commercially reasonable” terms, with the FCC deciding whether the terms are reasonable on a case-by-case basis. Providers will not be able to block individual websites, however. The goal of net neutrality rules is to prevent service providers from discriminating between different content, allowing all types of data and all companies’ data to be treated equally. While it appears that outright blocking of individual services won’t be allowed, the Journal reports that some forms of discrimination will be allowed, though that will apparently not include slowing down websites.
  • Re/code summarizes the discontent with these proposed rules: Consumer groups have complained about that plan because they’re worried that Wheeler’s rules may not hold up in court either. A federal appeals court rejected two previous versions of net neutrality rules after finding fault in the FCC’s legal reasoning. During the latest smackdown, however, the court suggested that the FCC had some authority to impose net neutrality rules under a section of the law that gives the agency the ability to regulate the deployment of broadband lines. Internet activists would prefer that the FCC just re-regulate Internet lines under old rules designed for telephone networks, which they say would give the agency clear authority to police Internet lines. Wheeler has rejected that approach for now. Phone and cable companies, including Comcast, AT&T and Verizon, have vociferously fought that idea over the past few years.
  • ...2 more annotations...
  • The Chicago Tribune reports on the process directing these rules: The five-member regulatory commission may vote as soon as May to formally propose the rules and collect public comment on them. Virtually all large Internet service providers, such as Verizon Communications Inc. and Time Warner Cable Inc., have pledged to abide by the principles of open Internet reinforced by these rules. But critics have raised concerns that, without a formal rule, the voluntary pledges could be pulled back over time and also leave the door open for deals that would give unequal treatment to websites or services.
  • I wrote about the European Union’s attempts to defend the free Internet: The legislation is meant to provide access to online services ‘without discrimination, restriction or interference, independent of the sender, receiver, type, content, device, service or application.’ For example, ISPs would be barred from slowing down or ‘throttling’ the speed at which one service’s videos are delivered while allowing other services to stream at normal rates. To bastardize Gertrude Stein: a byte is a byte is a byte. Such restrictions would prevent deals like the one Comcast recently made with Netflix, which will allow the service’s videos to reach consumers faster than before. Comcast is also said to be in talks with Apple for a deal that would allow videos from its new streaming video service to reach consumers faster than videos from competitors. The Federal Communications Commission’s net neutrality laws don’t apply to those deals, according to FCC Chairman Tom Wheeler, so they are allowed to continue despite the threat they pose to the free Internet.
  •  
    Cute. Deliberately not using the authority the court of appeals said it could use to impose net neutrality. So Europe can have net neutrality, but the U.S. cannot.
Paul Merrell

LEAKED: Secret Negotiations to Let Big Brother Go Global | Wolf Street - 0 views

  • Much has been written, at least in the alternative media, about the Trans-Pacific Partnership (TPP) and the Transatlantic Trade and Investment Partnership (TTIP), two multilateral trade treaties being negotiated between the representatives of dozens of national governments and armies of corporate lawyers and lobbyists (on which you can read more here, here and here). However, much less is known about the decidedly more secretive Trade in Services Agreement (TiSA), which involves more countries than either of the other two. At least until now, that is. Thanks to a leaked document jointly published by the Associated Whistleblowing Press and Filtrala, the potential ramifications of the treaty being hashed out behind hermetically sealed doors in Geneva are finally seeping out into the public arena.
  • The leaked documents confirm our worst fears that TiSA is being used to further the interests of some of the largest corporations on earth (…) Negotiation of unrestricted data movement, internet neutrality and how electronic signatures can be used strike at the heart of individuals’ rights. Governments must come clean about what they are negotiating in these secret trade deals. Fat chance of that, especially in light of the fact that the text is designed to be almost impossible to repeal, and is to be “considered confidential” for five years after being signed. What that effectively means is that the U.S. approach to data protection (read: virtually non-existent) could very soon become the norm across 50 countries spanning the breadth and depth of the industrial world.
  • If signed, the treaty would affect all services, ranging from electronic transactions and data flow to veterinary and architecture services. It would almost certainly open the floodgates to the final wave of privatization of public services, including the provision of healthcare, education and water. Meanwhile, already-privatized companies would be barred from re-transfer to the public sector by a so-called “ratchet clause” – even if the privatization failed. More worrisome still, the proposal stipulates that no participating state can stop the use, storage and exchange of personal data relating to their territorial base. Here’s more from Rosa Pavanelli, general secretary of Public Services International (PSI):
  • ...1 more annotation...
  • The main players in the top-secret negotiations are the United States and all 28 members of the European Union. However, the broad scope of the treaty also includes Australia, Canada, Chile, Colombia, Costa Rica, Hong Kong, Iceland, Israel, Japan, Liechtenstein, Mexico, New Zealand, Norway, Pakistan, Panama, Paraguay, Peru, South Korea, Switzerland, Taiwan and Turkey. Combined they represent almost 70 percent of all trade in services worldwide. An explicit goal of the TiSA negotiations is to overcome the exceptions in GATS that protect certain non-tariff trade barriers, such as data protection. For example, the draft Financial Services Annex of TiSA, published by Wikileaks in June 2014, would allow financial institutions, such as banks, the free transfer of data, including personal data, from one country to another. As Ralf Bendrath, a senior policy advisor to the MEP Jan Philipp Albrecht, writes in State Watch, this would constitute a radical carve-out from current European data protection rules:
Gary Edwards

Petabytes on a budget: How to build cheap cloud storage | Backblaze Blog - 0 views

  •  
    Amazing must read!  Backblaze offers unlimited cloud storage/backup for $5 per month. Now they are releasing the "storage" aspect of their service as an open source design. The discussion introducing the design is simple to read and follow - which in itself is an achievement. They held back on open-sourcing the Backblaze cloud software system, which is understandable. But they do disclose a Debian Linux OS running Tomcat over Apache Server 5.4 with JFS and HTTPS access. This is exciting stuff. I hope the CAR MLS-Cloud guys take notice.

    Intro: At Backblaze, we provide unlimited storage to our customers for only $5 per month, so we had to figure out how to store hundreds of petabytes of customer data in a reliable, scalable way, and keep our costs low. After looking at several overpriced commercial solutions, we decided to build our own custom Backblaze Storage Pods: 67 terabyte 4U servers for $7,867. In this post, we'll share how to make one of these storage pods, and you're welcome to use this design. Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us. Evolving and lowering costs is critical to our continuing success at Backblaze.
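The quoted numbers imply the raw economics of the pod design; a quick back-of-envelope check:

```python
# Back-of-envelope cost per gigabyte from the quoted pod numbers:
# 67 TB of raw storage for $7,867.
pod_price_usd = 7867
pod_capacity_tb = 67

cost_per_gb = pod_price_usd / (pod_capacity_tb * 1000)   # decimal TB -> GB
print(f"${cost_per_gb:.3f} per GB raw")                  # roughly $0.12/GB
```

At roughly twelve cents per raw gigabyte (before redundancy, power, and bandwidth), it is clear how a $5/month unlimited-backup price can pencil out against the "overpriced commercial solutions" the post mentions.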
Gary Edwards

The Advantage of Cloud Infrastructure: Servers are Software - ReadWriteCloud - 0 views

  •  
    Excellent discussion and capture of the importance of cloud computing! Guest author Joe Masters Emison, VP of research and development at BuildFax, writes for ReadWriteWeb. Excerpt: More and more companies are moving from traditional servers to virtual servers in the cloud, and many new service-based deployments are starting in the cloud. However, despite the overwhelming popularity of the cloud here, deployments in the cloud look a lot like deployments on traditional servers. Companies are not changing their systems architecture to take advantage of some of the unique aspects of being in the cloud. The key difference between remotely-hosted, virtualized, on-demand-by-API servers (the definition of the "cloud" for this post) and any other hardware-based deployment (e.g., dedicated, co-located, or not-on-demand-by-API virtualized servers) is that servers are software on the cloud. Software applications traditionally differ from server environments in several key ways:

    - Traditional servers require humans and hours, if not days, to launch; software launches automatically and on demand in seconds or minutes.
    - Traditional servers are physically limited; companies have a finite number available to them. Software, as a virtual/information resource, has no such physical limitation.
    - Traditional servers are designed to serve many functions (often because of the above-mentioned physical limitations); software is generally designed to serve a single function.
    - Traditional servers are not designed to be discarded; software is built around the idea that it runs ephemerally and can be terminated at any moment.

    On the cloud, these differences can disappear.
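The "on-demand-by-API" point is easiest to see in code. The client below is a hypothetical in-memory stand-in (the class and method names are invented for illustration), but it mirrors the shape of real cloud APIs such as EC2's run/terminate calls: a fleet is launched in a loop and discarded when done, exactly the "servers are software" lifecycle described above.

```python
import uuid

# Hypothetical in-memory stand-in for a cloud provider's API client.
# Method names are invented; real providers expose equivalent calls.
class CloudClient:
    def __init__(self):
        self.servers = {}

    def launch(self, image: str) -> str:
        """Provision a server from an image -- seconds, not days."""
        server_id = str(uuid.uuid4())
        self.servers[server_id] = {"image": image, "state": "running"}
        return server_id

    def terminate(self, server_id: str) -> None:
        """Discard a server; it was always meant to be ephemeral."""
        self.servers[server_id]["state"] = "terminated"

cloud = CloudClient()
# Scale out for a traffic spike, then throw the fleet away.
fleet = [cloud.launch("web-app-image") for _ in range(10)]
assert all(cloud.servers[s]["state"] == "running" for s in fleet)
for s in fleet:
    cloud.terminate(s)
```

Treating capacity as a function call, rather than a procurement process, is the architectural shift the article says most companies have not yet made.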
Gary Edwards

FeedHenry Secures $9M Funding Led By Intel Capital To Feed Boom in Mobile Enterprise | ... - 0 views

  •  
    FeedHenry provides a cloud mobile application platform that simplifies the development, integration, deployment and management of secure mobile apps for business. This mobile platform-as-a-service (PaaS) allows apps to be developed in HTML5, JavaScript, and CSS and deployed to multiple mobile devices from a single code base. The node.js backend service offers a complete range of APIs designed to simplify and secure the connectivity of mobile apps to backend and third-party systems. The platform can be deployed to private, public or hybrid clouds. FeedHenry's PaaS offers developers speed of development, instant scalability, device and cloud independence, and the ability to easily integrate to backend information.

    If, say, a company uses both SharePoint and Salesforce inside a mobile app, to get that data into one app they need multiple levels of API integration. Because of the enormous boom in mobile and tablet apps, so-called "back-end as a service" (BaaS) platforms like FeedHenry - which solve these problems - are hugely expanding. Thus, today FeedHenry has secured $9M (€7M) in a funding round led by Intel Capital, alongside a "seven figure" investment from existing investor Kernel Capital. Other existing investors VMware Inc., Enterprise Ireland and private investors also participated and were joined by new investment from ACT Venture Capital. The funds will be used on an international roll-out. FeedHenry's mobile application platform - built between Ireland and the U.S. - helps businesses build mobile apps that integrate securely to their business through the cloud. This is a competitive market that includes StackMob, Usergrid, Appcelerator, Sencha.io, Applicasa, Parse, CloudMine, CloudyRec, iKnode, yorAPI, Buddy and ScottyApp.
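The SharePoint-plus-Salesforce problem mentioned above is what a BaaS collapses into one server-side endpoint. The sketch below is hypothetical (the fetch functions are invented stand-ins, not real SharePoint or Salesforce API calls): the mobile app makes a single request, and the backend fans out to the separate systems.

```python
# Hypothetical sketch of a BaaS-style aggregation endpoint. The two
# fetch_* functions are invented stand-ins for real API integrations.

def fetch_sharepoint(user_id: str) -> dict:
    return {"documents": ["q3-report.docx"]}   # stand-in for a real API call

def fetch_salesforce(user_id: str) -> dict:
    return {"leads": ["Acme Corp"]}            # stand-in for a real API call

def mobile_dashboard(user_id: str) -> dict:
    """The one endpoint the mobile app calls; integration lives server-side."""
    merged = {}
    merged.update(fetch_sharepoint(user_id))
    merged.update(fetch_salesforce(user_id))
    return merged

print(mobile_dashboard("u1"))
```

Keeping credentials, API quirks, and merging logic behind one backend endpoint is why a single HTML5/JavaScript code base can stay thin across devices.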
Gary Edwards

ShareFile Integrates Cloud File Share With Desktop Folders - PCWorld Business Center - 0 views

  •  
    Making a cloud-based file transfer service easier. An improved FTP alternative for small and medium-sized businesses. A ShareFile user's customers can access files on a company-branded Web portal, one of the business-friendly features that helps set the service apart from the likes of Dropbox, according to Steve Chiles, chief marketing officer at ShareFile. Users can also allow their customers to log in to the service and upload files from their own website. ShareFile comes with reporting features that allow users to see who has uploaded and downloaded files, and when they were transferred. The addition of Sync will help automate the process of uploading files, instead of having to do a lot of the upload work manually. The feature allows for both one-way and two-way synchronization of files. The user just has to drag and drop the file they want to synchronize into a designated folder. A folder can also be configured to send content to many recipients.
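The core of a one-way "designated folder" sync like the one described is small: copy a file to the destination when it is missing or newer. A minimal local-folder sketch (real products like ShareFile sync against a remote API, handle conflicts, and watch for changes; none of that is shown here):

```python
import os
import shutil
import tempfile

def sync_one_way(src: str, dst: str) -> list:
    """Copy files from src to dst when missing or newer (one-way sync)."""
    copied = []
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if os.path.isfile(s):
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # copy2 preserves timestamps
                copied.append(name)
    return copied

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "report.txt"), "w") as f:
    f.write("draft 1")

assert sync_one_way(src, dst) == ["report.txt"]
assert sync_one_way(src, dst) == []   # already in sync, nothing copied
```

Two-way sync is the same comparison run in both directions, plus a conflict rule for files changed on both sides.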
Gary Edwards

RuleLab.Net Server: Web system for design, implementation and management of business pr... - 0 views

  •  
    RuleLab.Net is a web-based system for designing and implementing the business rules that operate on an application's XML data. Extend your existing applications by adding rule-building and Business Rules Engine (BRE) capabilities. Consolidate your business logic in an easy-to-read format; build, test, share, and deploy your rules using the web browser; and integrate them into your system via the BRE. An intuitive GUI, English-like syntax, and a centralized repository empower business users with direct access to the rules.

    In the RuleLab.Net system, business rules are composed and managed over the Internet or an intranet using the web-based Rules Designer. It allows users to associate an application XML data template with rules, create a vocabulary of natural terms, graphically build complex logical expressions, test the rules on data samples, and store the rules in a database. Features include strong data types, reasoning, rule priorities and dependencies, calculation formulas, looping-data-structure support, and a built-in set of computational, aggregate and other data-processing functions. Rules and other system objects are stored in XML files that can be downloaded, modified, and uploaded to the online repository. Rule changes made online can be instantly deployed for runtime use by the applications integrated with the BRE. The forward-chaining BRE parses XML application data against the ruleset, updates your XML data document, and returns it back to the application along with comprehensive state information. Written in .NET, the BRE component can be utilized as a managed assembly, a COM object, or through the Web Service.
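A forward-chaining engine over XML, the pattern the description refers to, can be sketched in a few lines: each rule is a condition/action pair over the document, and the engine keeps firing rules until no rule changes anything. This toy sketch (not RuleLab.Net's actual engine, and Python rather than .NET) uses the stdlib XML module:

```python
import xml.etree.ElementTree as ET

# Toy forward-chaining rule engine over an XML document: each rule is
# a (condition, action) pair on the tree; keep firing until stable.
def run_rules(root: ET.Element, rules: list) -> ET.Element:
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(root):
                action(root)
                changed = True
    return root

order = ET.fromstring('<order total="1500" tier=""/>')

rules = [
    # Condition: big order not yet tiered. Action: mark it "gold".
    # Note the action falsifies its own condition, so the loop halts.
    (lambda r: float(r.get("total")) > 1000 and r.get("tier") == "",
     lambda r: r.set("tier", "gold")),
]

run_rules(order, rules)
assert order.get("tier") == "gold"
```

The production engine adds what the description lists on top of this core loop: priorities, dependencies, strong typing, and looping-data-structure support.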
Gary Edwards

Google News - 0 views

  •  
    Prepare to be blown away. I viewed a demo of Numecent today and then did some research. There is no doubt in my mind that this is the end of the shrink-wrapped Microsoft business model. It's also perhaps the end of software application design and construction as we know it. Mobile apps in particular will get blasted by the Numecent "cloud paging" concept. Extraordinary stuff. I'll leave a few useful links on Diigo "Open Web".

    "Numecent, a company that has a new kind of cloud computing technology that could potentially completely reorganize the way software is delivered and handled - upending the business as we know it - has another big feather in its cap. The company is showing how enterprises can use this technology to instantly put all of their enterprise software in the cloud, without renegotiating contracts and licenses with their software vendors. It signed $3 billion engineering construction company Parsons as a customer. Parsons is using Numecent's tech to deliver 4 million huge computer-aided design (CAD) files to its nearly 12,000 employees around the world. CAD drawings are bigger than video files, and they can only be opened and edited by specific CAD apps like AutoCAD. Numecent offers a tech called "cloud paging" which instantly "cloudifies" any Windows app. Instead of being installed on a PC, the enterprise setup can deliver the app over the cloud. Unlike similar cloud technologies (called virtualization), this makes the app run faster and continue working even when the Internet connection goes down. "It offers a 95% reduction in download times and a 95% reduction in download network usage," CEO Osman Kent told Business Insider. "It makes 8G of memory work like 800G." It also lets enterprises check in and check out software, like a library book, so more PCs can legally share software without violating licensing terms, saving money on software license fees, Kent says. Parsons is using it to let employees share over 700 huge applications such as Au
  •  
    Sounds like Microsoft must-buy-or-kill technology.
Paul Merrell

Cover Pages: Content Management Interoperability Services (CMIS) - 0 views

  • On October 06, 2008, OASIS issued a public call for participation in a new technical committee chartered to define specifications for use of Web services and Web 2.0 interfaces to enable information sharing across content management repositories from different vendors. The OASIS Content Management Interoperability Services (CMIS) TC will build upon existing specifications to "define a domain model and bindings that are designed to be layered on top of existing Content Management systems and their existing programmatic interfaces. The TC will not prescribe how specific features should be implemented within those Enterprise Content Management (ECM) systems. Rather it will seek to define a generic/universal set of capabilities provided by an ECM system and a set of services for working with those capabilities." As of February 17, 2010, the CMIS technical work had received broad support through TC participation, industry analyst opinion, and declarations of interest from major companies. Some of these include Adobe, Adullact, AIIM, Alfresco, Amdocs, Anakeen, ASG Software Solutions, Booz Allen Hamilton, Capgemini, Citytech, Content Technologies, Day Software, dotCMS, Ektron, EMC, EntropySoft, ESoCE-NET, Exalead, FatWire, Fidelity, Flatirons, fme AG, Genus Technologies, Greenbytes GmbH, Harris, IBM, ISIS Papyrus, KnowledgeTree, Lexmark, Liferay, Magnolia, Mekon, Microsoft, Middle East Technical University, Nuxeo, Open Text, Oracle, Pearson, Quark, RSD, SAP, Saperion, Structured Software Systems (3SL), Sun Microsystems, Tanner AG, TIBCO Software, Vamosa, Vignette, and WeWebU Software. Early commentary from industry analysts and software engineers is positive about the value proposition in standardizing an enterprise content-centric management specification. The OASIS announcement of November 17, 2008 includes endorsements. 
Principal use cases motivating the CMIS technical work include collaborative content applications, portals leveraging content management repositories, mashups, and searching a content repository.
  •  
    I should have posted before about CMIS, an emerging standard with a lot of buy-in by vendors large and small. I've been watching the buzz grow via Robin Cover's Daily XML links service. It's now on my "need to watch" list.
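The "generic/universal set of capabilities" the charter describes surfaced in the finished spec as, among other things, a SQL-like query language over a common document model. A small sketch of what a cross-vendor query looks like (the helper function is mine; the syntax follows CMIS QL, and a client would submit the string to whatever query endpoint a given repository exposes):

```python
# Builds a CMIS QL query string. The helper is illustrative; the
# SELECT/FROM/WHERE syntax and cmis:* property names follow the
# CMIS specification's query language.
def cmis_query(name_pattern: str) -> str:
    return (
        "SELECT cmis:name, cmis:lastModificationDate "
        "FROM cmis:document "
        f"WHERE cmis:name LIKE '{name_pattern}'"
    )

print(cmis_query("%report%"))
```

Because `cmis:document` and `cmis:name` are part of the shared domain model, the same query can run against any compliant repository, which is the interoperability point of the TC's work.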
Gary Edwards

CPU Wars - Intel to Play Fab for an ARM Chipmaker: Understanding What the Altera Deal M... - 0 views

  • Intel wants x86 to conquer all computing spaces -- including mobile -- and is trying to leverage its process lead to make that happen.  However, it's been slowed by a lack of inclusion of 4G cellular modems on-die and difficulties adapting to the mobile market's low component prices.  ARM, meanwhile, wants a piece of the PC and server markets, but has received a lukewarm response from consumers due to software compatibility concerns. The disappointing sales of (x86) tablet products using Microsoft Corp.'s (MSFT) Windows 8 and the flop of Windows RT (ARM) products somewhat unexpectedly had the net result of maintaining the status quo, allowing neither company to gain much ground.  For Intel, its partnership with Microsoft (the historic "Wintel" combo) has damaged its mobile efforts, as Windows 8 flopped in the tablet market.  Likewise, ARM's efforts to score PC market share were stifled by the flop of Windows RT, which led to OEMs killing off ARM-based laptops and convertibles.
  • Both companies seem to have learned their lesson and are migrating away from Windows towards other platforms -- in ARM's case Chromebooks, and in Intel's case Android tablets/smartphones. But suffice it to say, ARM Holdings and Intel are still very much bitter enemies from a sales perspective.
  • III. Profit vs. Risk -- Understanding the Modern CPU Food Chain
  • ...16 more annotations...
  • Whether it's tablets or PCs, the processor is still one of the most expensive components onboard.  Aside from the discrete GPU -- if a device has one -- the CPU has the greatest earning potential for a large company like Intel because the CPU is the most complex component. Other components like the power supply or memory tend to either be lower margin or have more competitors.  The display, memory, and storage components are all sensitive to process, but see profit split between different parties (e.g. the company who makes the DRAM chips and the company who sells the stick of DRAM) and are primarily dependent on process technology. CPUs and GPUs remain the toughest products to make, as it's not enough to simply have the best process; you must also have the best architecture and the best optimization of that architecture for the space you're competing in. There are essentially five points of potential profit on the processor food chain:

1. [CPU] Fabrication
2. [CPU] Architecture design
3. [CPU] Optimization
4. OEM
5. OS platform

Of these, the fabrication and OS points are the most profitable (but are dependent on the number of OEM adopters).  The second most profitable niche is optimization (which again is dependent on OEM adopter market share), followed by OEM markups.  In terms of expense, fabrication and operating system design require the greatest capital investment and carry the highest risk.
  • In terms of difficulty/risk, the fabrication and operating system are the most difficult/risky points.  Hence in terms of combined risk, cost, and profitability, the ranking of which points are "best" is arguably:

1. Optimization
2. Architecture design
3. OS platform
4. OEM
5. Fabrication

...with the fabrication point being last largely because it's so high risk. In other words, the last thing Intel wants is to settle into a niche of playing fab for everybody else's products, as that's an unsound approach.  If you can't keep up in terms of chip design, you typically spin off your fabs and opt for a different architecture direction -- just look at Advanced Micro Devices, Inc.'s (AMD) spinoff of GlobalFoundries and upcoming ARM product to see that.
  • IV. Top Firms' Role on That Food Chain
  • Apple has seen unbelievable profits due to this fundamental premise.  It controls the two most desirable points on the food chain -- OS and optimization -- while sharing some profit with its architecture designer (ARM Holdings) and a bit with the fabricator (Samsung Electronics Co., Ltd. (KSC:005930)).  By choosing to play operating system maker, too, it adds to its profits, but also its risk.  Note that nearly every other first-party exclusive smartphone platform has failed or is about to fail (i.e. BlackBerry, Ltd. (TSE:BB) and the now-dead Palm).
  • Intel controls points 1, 2, and 5, currently, on the food chain.  Compared to Apple, Intel's points of control offer less risk, but also slightly less profitability. Its architecture control may be at risk, but even so, it's currently the top in its most risky/expensive point of control (fabrication), whereas Apple's most risky/expensive point of control (OS development) is much less of a clear leader (as Android has surpassed Apple in market share).  Hence Apple might be a better short-term investment, but Intel certainly appears a better long-term investment.
  • Samsung is another top company in terms of market dominance and profit.  It occupies points 1, 3, 4, and 5 -- sometimes.  Sometimes Samsung's devices use third-party optimization firms like Qualcomm Inc. (QCOM) and NVIDIA Corp. (NVDA), which hurts profitability by removing one of the most profitable roles.  But Samsung makes up for this by being one of the largest and most successful third party manufacturers.
  • Microsoft enjoys a lot of profit due to its OS dominance, as does Google Inc. (GOOG); but both companies are limited to controlling only one point, which they monetize in different ways (Microsoft by direct sales; Google by giving its OS away for free in return for web services market share and, by proxy, search advertising revenue).
  • Qualcomm and NVIDIA are also quite profitable operating solely as optimizers, as is ARM Holdings, which serves as architecture maker to Qualcomm, NVIDIA, Apple, and Samsung.
  • V. Four Scenarios in the x86 vs. ARM Competition
  • Scenario one is that x86 proves dominant in the mobile space, assuming a comparable process.
  • A second scenario is that x86 and ARM are roughly tied, assuming a comparable process.
  • A third scenario is that x86 is inferior to ARM at a comparable process, but comparable or superior to ARM when the x86 chip is built using a superior process.  From the benchmarks I've seen to date, I personally believe this is most likely.
  • A fourth scenario is that x86 is so drastically inferior to ARM architecturally that a process lead by Intel can't make up for it.
  • This is perhaps the most interesting scenario to think through in terms of how Intel would react, even if it is not especially likely.  If Intel were faced with this scenario, I believe it would simply bite the bullet and start making ARM chips, leveraging its process lead to become the dominant ARM chipmaker.  To make up for the revenue lost to ARM Holdings' licensing fees, it could focus its efforts in the OS space (its Tizen Linux OS project with Samsung hints at that).  Or it could look to make up for lost revenue by expanding its production of other basic process-sensitive components (e.g. DRAM).  I think this would be Intel's best and most likely option in this scenario.
  • VI. Why Intel is Unlikely to Play Fab For ARM Chipmakers (Even if ARM is Better)
  • From Intel's point of view, there is an entrenched but declining market for x86 chips because of Windows, and Intel will continue to support Atom chips (which are required to run Windows 8 tablets).  But growth on the desktop will come from 64-bit desktop/server-class non-Windows ARM devices: Chromebooks, Android laptops, and possibly Apple's desktop products as well, given that Apple is moving to 64-bit ARM for its future iPhones. Even Windows has been trying (unsuccessfully) to transition to ARM. Again, the Windows server market is tied to x86, but Linux and FreeBSD servers run on ARM as well, and ARM will take a chunk out of the server market once a decent 64-bit ARM server chip is available.
  •  
    Excellent article explaining the CPU war for the future of computing, as Intel and ARM square off.  Intel's x86 architecture dominates the era of client/server computing, with the famed WinTel alliance monopolizing desktop, notebook, and server implementations.  But Microsoft was a no-show in the emerging mobile computing market, and now ARM is in position to transition from its mobile dominance to challenge the desktop, notebook, and server markets.  WinTel lost its shot at the mobile computing market, and now its legacy platforms are in play.  Good article, well worth the read time.
Gary Edwards

Introducing CloudStack - 0 views

  •  
    CloudStack Manifesto: Before getting into the framework specifics, it's worth covering some of the design principles we had in mind while building CloudStack:
    CloudStack brings together the best of the web and the desktop: We strongly believe in the convergence of the desktop and the web and will continually strive to expose more services that bring out the best of both.
    CloudStack enables rapid application development and deployment: Out of the box, CloudStack provides a fully brandable and deployable shell application that can be used as a starting point to jumpstart application development. CloudStack also provides a scalable deployment environment for hosting your applications.
    CloudStack leverages existing web technologies: We built the CloudStack P2WebServer container on the J2EE-compliant Jetty web server. As a result, CloudStack applications are built using standard web technologies like AJAX, HTML, JavaScript, Flash, and Flex.
    CloudStack does not reinvent the wheel: We strive to reuse as much as possible from other open source projects and standards. By creatively stringing together seemingly disparate pieces, like P2P and HTTP, it's amazing to create something that's really much greater than the sum of the parts.
    CloudStack does aim to simplify existing technologies: We will abstract and simplify existing interfaces where needed. For example, we built simpler abstractions for JXTA (P2P) and Jena (RDF store).
    CloudStack encourages HTML-based interfaces: We believe the web browser is the most portable desktop application container, with HTML being the lingua franca of the web. Rather than writing a native widget interface for the local desktop application and another web-based interface for the remote view, we encourage writing a single interface that can be reused across both local and remote views.
HTML-based interfaces are inherently cross-platform and provide good decoupling of design from code (versus having the UI as compiled
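The last principle above -- one HTML interface reused across both the local and remote views -- can be sketched roughly as follows. This is a hypothetical illustration of the idea, not CloudStack's actual API; the `renderStatus` function and its model fields are invented for the example:

```javascript
// Hypothetical sketch of a shared HTML view (not CloudStack's real API):
// a single rendering function is the one source of truth for the markup.
function renderStatus(model) {
  return `<div class="status"><h1>${model.title}</h1>` +
         `<p>${model.peers} peers online</p></div>`;
}

// Local view: an embedded browser shell would inject this markup directly.
const localHtml = renderStatus({ title: "My Node", peers: 3 });

// Remote view: the same function would back an HTTP response body,
// so both views render identical HTML and cannot drift apart.
const remoteHtml = renderStatus({ title: "My Node", peers: 3 });

console.log(localHtml === remoteHtml); // true: one template, two views
```

The point of the pattern is that the local desktop shell and the remote web server share one template function, rather than maintaining a native widget UI and a separate web UI.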