Handling Traffic Spikes Without Getting One in the Back

In building out our OpenStack®-based public cloud services, one of our goals is to make it easy and painless for our users to deploy their applications. In creating our new HP Cloud Load Balancer service, we wanted to achieve the same ease of setup, deployment and management as we had with our other cloud services. Our Load Balancing as a Service (LBaaS) had to relieve users of the burden of managing their own load balancing layer while delivering more value than self-deployed load balancers. Our key design targets in planning this service were best-in-class time to deployment, easy service management, and enterprise-class performance and stability. We have just released the first iteration of the HP Cloud Load Balancer service as a free early access, so you can begin to work with it and give us your feedback.

Load balancing as a building block has been around for a long time. Load balancers can improve the scalability, reliability and performance of online applications by distributing incoming network traffic across multiple back-end application servers. Of particular importance, this capability can be used to maintain the quality of your users' experience with your application during traffic "spikes", such as the flood of website visitors that can occur during holiday sales, breaking news, sporting events or new product releases. Reliability is key for load balancers, since they sit on the front lines of your application deployment. In developing our cloud-based load balancing service, our development team set out to build an LBaaS that allows developers to 'set it and forget it'. To make that a reality, our service, at a minimum, would need to be:

  • Quick to provision
  • Easy to configure via the API or CLI (see the sketch below)
  • Easy to manage as your deployment's needs change
  • Reliable
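
To give a flavor of the "easy to configure" goal, here is a minimal sketch of what provisioning a load balancer through an Atlas-style REST call might look like from Python. The endpoint URL, authentication token, and field names below are illustrative assumptions rather than the published HP Cloud API, so check the service documentation for the exact request format.

    # Hypothetical sketch of creating a load balancer via an Atlas-style REST API.
    # The endpoint, token, and payload fields are placeholders, not the official API.
    import requests

    ENDPOINT = "https://lbaas.example.hpcloud.net/v1.1/loadbalancers"  # placeholder URL
    AUTH_TOKEN = "<your-auth-token>"                                   # placeholder token

    payload = {
        "name": "web-lb",
        "protocol": "HTTP",
        "port": 80,
        # Back-end application servers that will receive the distributed traffic.
        "nodes": [
            {"address": "10.0.0.4", "port": 80},
            {"address": "10.0.0.5", "port": 80},
        ],
    }

    response = requests.post(ENDPOINT, json=payload, headers={"X-Auth-Token": AUTH_TOKEN})
    response.raise_for_status()
    print(response.json())  # details of the newly created load balancer

Once a load balancer exists, scaling out is meant to be as simple as adding another node entry, which is what "easy to manage as your deployment's needs change" is intended to capture.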

Since there was no ready-made OpenStack project for load balancing, we had to start by considering which currently available ingredients (if any) best fit our needs, which parts we needed to develop internally, and how best to tie them all together to deliver our solution. Using the Atlas API definition, HAProxy as our core load balancer, and some fantastic development work from our engineering team, I believe what we've created will be well received by the users of HP Cloud. What we are releasing in this initial free phase is the core functionality of the service. HP Cloud will continue to build on this foundation and expand the capabilities of our load balancing service.
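
For context, the sketch below shows roughly the kind of HAProxy configuration the service manages on your behalf: a front end that accepts incoming traffic and a round-robin pool of back-end application servers. The names and addresses are purely illustrative, and the configuration the service actually generates may differ.

    # Illustrative HAProxy configuration (placeholder names and addresses):
    # a front end listening on port 80 distributes incoming HTTP requests
    # round-robin across two back-end application servers.
    frontend web_frontend
        bind *:80
        default_backend web_servers

    backend web_servers
        balance roundrobin
        server app1 10.0.0.4:80 check
        server app2 10.0.0.5:80 check

With the managed service, you never edit a file like this yourself; the idea is that calls to the API (or the CLI) drive the equivalent changes for you.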

Ultimately it's you, our users, who confirm whether or not we are headed in the right direction. I encourage you to sign up for the free early access to HP Cloud Load Balancers and give us your feedback. We will continue to build out the service's feature set, and we would love your help in making sure it meets your needs!

Our goal with all of our managed services is to enable developers to focus on their brilliant applications and not on infrastructure administration.  So take our free early access HP Cloud Load Balancer Service for a spin and let us know if we have achieved that goal.
