
Cloud Computing / Group items tagged "balancer"


DJHell .

Amazon adds Load balancing, Monitoring, and Auto-Scaling « RightScale Blog - 0 views

  •  
    Announced late last year, Amazon tonight launched load balancing, monitoring, and auto-scaling for the Elastic Compute Cloud (EC2). These features have been requested many times by EC2 users, and with this release Amazon continues to show that it listens and responds to feedback.
Alex Mikhalev

Amazon Elastic Compute Cloud - 0 views

  •  
    Amazon introduced Auto Scaling, Load Balancing and Monitoring. You don't need any third-party tools anymore.
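The load-balancing feature announced here spreads incoming requests across a pool of EC2 instances. As a rough illustration of the idea only (this is not Amazon's implementation or its API, and the instance addresses are hypothetical), the core round-robin behavior can be sketched in a few lines of Python:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy sketch of round-robin load balancing, not the ELB service."""

    def __init__(self, backends):
        self._backends = list(backends)
        self._pool = cycle(self._backends)

    def next_backend(self):
        # Hand out backends in a fixed rotation, one per incoming request.
        return next(self._pool)

# Hypothetical backend instance addresses, for illustration only.
lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
assigned = [lb.next_backend() for _ in range(6)]
print(assigned)  # each backend receives two of the six requests
```

A real ELB also performs health checks and removes unhealthy backends from the rotation; this sketch shows only the distribution step.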
DJHell .

Amazon Web Services Blog: New Features for Amazon EC2: Elastic Load Balancing, Auto Sca... - 0 views

  •  
    We are working to make it even easier for you to build sophisticated, scalable, and robust web applications using AWS. As soon as you launch some EC2 instances, you want visibility into resource utilization and overall performance. You want your application to be able to scale on demand based on traffic and system load. You want to spread the incoming traffic across multiple web servers for high availability and better performance. You want to focus on building an application that takes advantage of the powerful infrastructure available in the cloud, while avoiding system administration and operational burdens ("The Muck," as Jeff Bezos once called it).
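The monitor-then-scale loop described above (watch resource utilization, add or remove instances with traffic) reduces to a small decision function. The sketch below is the control logic only, with invented thresholds; a real deployment would drive this from CloudWatch metrics and the Auto Scaling service rather than hand-rolled code:

```python
def desired_capacity(current, avg_cpu, low=30.0, high=70.0, min_n=1, max_n=10):
    """Return a new instance count for a simple threshold policy.

    avg_cpu is the fleet's average CPU utilization in percent; the
    thresholds and bounds here are illustrative, not AWS defaults.
    """
    if avg_cpu > high:
        current += 1   # scale out under load
    elif avg_cpu < low:
        current -= 1   # scale in when idle
    return max(min_n, min(max_n, current))

print(desired_capacity(2, 85.0))  # 3: scale out
print(desired_capacity(2, 10.0))  # 1: scale in
print(desired_capacity(1, 10.0))  # 1: lower bound respected
```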
Rich Hintz

Dr. Dobb's | Q&A: Parallel Programming | February 21, 2009 - 0 views

  •  
    Parallelism and performance go hand-in-hand. But achieving maximum performance can be a balancing act, as Intel senior engineer James Reinders explains to Dr. Dobb's editor in chief Jonathan Erickson.
DJHell .

Automating the management of Amazon EC2 using Amazon CloudWatch, Auto Scaling and Elast... - 0 views

  •  
    The Amazon Elastic Compute Cloud (Amazon EC2) embodies much of what makes infrastructure as a service such a powerful technology; it enables our customers to build secure, fault-tolerant applications that can scale up and down with demand, at low cost. Core to achieving these levels of efficiency and fault tolerance is the ability to acquire and release compute resources in a matter of minutes, and in different Availability Zones.
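The fault-tolerance point above rests on spreading instances across Availability Zones, so losing one zone removes only a fraction of capacity. A minimal sketch of that placement arithmetic (the zone names are real AZ identifiers used purely as labels; this is not an AWS API call):

```python
def spread_across_zones(n, zones):
    """Distribute n instances across zones as evenly as possible, so the
    loss of any single zone removes at most ceil(n / len(zones)) instances."""
    placement = {z: n // len(zones) for z in zones}
    # Hand the remainder out one instance at a time to the first zones.
    for z in zones[: n % len(zones)]:
        placement[z] += 1
    return placement

print(spread_across_zones(5, ["us-east-1a", "us-east-1b", "us-east-1c"]))
# {'us-east-1a': 2, 'us-east-1b': 2, 'us-east-1c': 1}
```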
Maluvia Haseltine

AWS Free Usage Tier - 4 views

  • will be able to run a free Amazon EC2 Micro Instance for a year,
  • launch new applications, test existing applications in the cloud, or simply gain hands-on experience with AWS.
  • Elastic Load Balancer
  • 750 hours of Amazon EC2
  • the AWS Management Console is available at no charge to help you build and manage your application on AWS.
  • ** These free tiers do not expire after 12 months and are available to both existing and new AWS customers indefinitely.
  • only available to new AWS customers
  • 10 GB of Amazon Elastic Block Storage,
  •  
    Wow-Wow-Wow-Wow-Wow
DJHell .

OpenSocial in the Cloud - OpenSocial - 0 views

  • Apps can grow especially fast on social networks, so before you launch your next social app, you should think about how to scale up quickly if your app takes off.
  • Unfortunately, scaling is a complex problem that's hard to solve quickly and expensive to implement.
  • If this app grows to serve millions of users and photos, shared hosting or even a dedicated server won't have the bandwidth or CPU cycles to handle all of the requests. We could invest in more servers and network infrastructure, shard the database, and load-balance requests, but that takes time, money, and expertise. If you'd rather work on the new features of the app, it's time to move into the cloud.
  • It's important to focus on the interactions between the app and your server when designing an application that will run in the cloud. If we standardize the communication protocol and data format, we can easily change the server side implementation without modifying the OpenSocial app.
  • You can configure the makeRequest method to digitally sign the requests your app makes to your server using OAuth's algorithm for parameter signing. This means that when your server receives a request, it can verify that the request came from your application hosted in a specific container. To implement this, the calls to makeRequest in the OpenSocial app spec XML specify that the request should be signed, and the code that handles requests on the server side verifies that a signature is included and valid.
  • When our server receives a request, we can verify that it came from our application by checking that the digital signature was signed by a valid container and that the application ID is correct.
  • Since our server isn't storing any relationship data, the app will need to send us a list of user IDs so we can fetch the appropriate photos.
  • Although it's outside the scope of this article, we could provide a mechanism for our OpenSocial app to request a one-time-use token that it would include in the request to upload a photo.
  • Note that the post data is URL-encoded in the request so the post method uses urllib.unquote before splitting the comma-separated list of person IDs.
  • Since the server doesn't store any relationship data, the PhotosHandler class checks the post data of the request for a list of IDs from the container.
  • A common misconception when coding in the cloud is that storage space, CPU cycles, and bandwidth are unlimited. While the cloud hosting provider can, in theory, provide all the resources your app needs, hosting in the cloud isn't free, so these resources are limited by your budget. Luckily, OpenSocial provides several mechanisms to cache images and data that will reduce the load on your server.
  • In addition to reducing traffic to our server, this technique has the added benefit of being fast—requesting data from the Persistence API is much faster than making the round trip to your server.
  •  
    Some OpenSocial apps can be written entirely with client-side JavaScript and HTML, leveraging the container to serve the page and store application data. In this case, the app can scale effortlessly because the only request hitting your server is for the gadget specification, which is typically cached by the container anyway. However, there are several reasons to consider using your own server:
    * Allows you to write code in the programming language of your choice.
    * Puts you in control of how much application data you can store.
    * Lets you combine data from users on multiple social networks.
    * Enables interaction with the OpenSocial REST API.
    Setting up an OpenSocial app that uses a third-party server is fairly simple. There are a few gotchas and caveats, but the real issues come up when your app becomes successful, serving millions of users and sending thousands of requests per second. Apps can grow especially fast on social networks, so before you launch your next social app, you should think about how to scale up quickly if your app takes off. Unfortunately, scaling is a complex problem that's hard to solve quickly and expensive to implement. Luckily, several companies provide cloud computing resources: places where you can store data or run processes on virtual machines. These computing solutions manage huge infrastructures so you can focus on your applications and let the "cloud" handle all the requests and data at scale. This tutorial focuses on a simple photo-sharing app that uses a third-party server to host photos and associated metadata. If this app is going to host millions of images and support many requests per second, we won't be able to run it on a single dedicated host. We'll break the app down and analyze the interactions between the OpenSocial app and the back-end server. Then we'll implement the app in the cloud, first using Google App Engine, then leveraging Amazon's S3 data storage service. Finally, we'll look at s
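The signed-request flow highlighted above boils down to two steps: the container signs the request parameters with a shared secret, and the server recomputes the signature and compares. The sketch below is deliberately simplified (HMAC-SHA1 over sorted, URL-encoded parameters; the secret and parameter names are hypothetical). A real server would follow the full OAuth signature-base-string rules, including HTTP method and URL, typically via a library such as oauthlib:

```python
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign(params, secret):
    # Canonicalize: sort the parameters, then URL-encode them into one string.
    base = urlencode(sorted(params.items()), quote_via=quote)
    return hmac.new(secret.encode(), base.encode(), hashlib.sha1).hexdigest()

def verify(params, signature, secret):
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign(params, secret), signature)

secret = "consumer-secret"  # shared with the container (hypothetical value)
params = {"opensocial_app_id": "123", "opensocial_viewer_id": "42"}
sig = sign(params, secret)
print(verify(params, sig, secret))                                  # True
print(verify({**params, "opensocial_app_id": "999"}, sig, secret))  # False
```

Tampering with any parameter, such as the application ID, changes the canonical string and invalidates the signature, which is exactly the check the article's server performs.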
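One of the highlights notes that the POST body is URL-encoded, so the server must unquote it before splitting the comma-separated person IDs. The article's App Engine code used Python 2's urllib.unquote; in Python 3 the equivalent handler logic looks like this (the function name is mine, for illustration):

```python
from urllib.parse import unquote

def parse_person_ids(raw):
    """Decode a URL-encoded, comma-separated ID list from a POST body."""
    decoded = unquote(raw)  # e.g. "101%2C102%2C103" -> "101,102,103"
    return [pid for pid in decoded.split(",") if pid]

print(parse_person_ids("101%2C102%2C103"))  # ['101', '102', '103']
```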