
Friday, July 9, 2010

Cloud-based Message Queueing and Persistence

I think, given the choice, most developers would rather deal with synchronous than asynchronous processes. It’s just much easier to wait for an immediate response and continue down your path than to queue requests and set up a secondary process to poll for replies, perhaps correlate them to the original requests, and then process them. However, since you can’t always control every application you need to work with, you will find that having a place to temporarily park messages for later processing or pickup is necessary. Sometimes you need UPS to retry delivering your package, and sometimes you’d rather go pick it up at the distribution center.

As an example, if you’ve ever worked with Salesforce’s Workflow and Outbound Messaging feature, you’ve probably had to figure out a way to store the outbound notification from Salesforce while your Great Plains or Oracle EBS instance is down for maintenance. Salesforce’s Outbound Messaging service will only retry a finite number of times before the notification is lost, so having a place to temporarily store the notification until your application is back online can be quite necessary.

For most use cases, you have plenty of options for persisting your messages, from simple database tables to more sophisticated message queueing software. If you’re looking to reduce internal IT costs and headaches, Amazon Simple Queue Service (SQS) can be a good option too. It’s simple to set up, it’s fully managed, and it’s very inexpensive. You have several choices of geographic location for the service: US East/West, EU, and APAC. The API is straightforward, and since it uses standard Cloud-based protocols (RESTful and SOAP web services over HTTPS), you can be up and running in no time.
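
To give a feel for how little code is involved, here’s a minimal sketch of the basic round trip using the boto3 Python SDK (which postdates this post); the queue name, region, and message body are just placeholders:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Create a queue and capture its URL.
    queue_url = sqs.create_queue(QueueName="outbound-notifications")["QueueUrl"]

    # Park a message on the queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody="order #12345 updated")

    # Later (or from another process), pick messages back up.
    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
    for message in response.get("Messages", []):
        print(message["Body"])
        # Delete each message once it has been processed successfully.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])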

Amazon SQS uses a distributed architecture with multiple queue nodes, and as with most Cloud-based services, it sometimes takes a little time for all the nodes to sync up and return the expected response to your request. Here are some of the behaviors I noticed while playing around.

  • There was almost always a delay (10-30 seconds) before newly created queues became available. I would create a queue and then use the list queues service to see if it was there. The same was true when I deleted queues: they still showed up in the list of queues for 10-30 seconds after I had a confirmation of deletion.
  • Message retrieval seemed somewhat random. I would send a number of messages in a row and then try to retrieve them. I would get my messages back, but in what seemed like random groupings. I never lost a message, so that was great news. (There’s a rough sketch of both behaviors after this list.)
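
Here’s a rough sketch, again with boto3, of the two behaviors above: polling until a new queue shows up in the queue list, then draining a batch of messages that come back in arbitrary groupings. The queue name and timings are illustrative assumptions, not measurements:

    import time
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.create_queue(QueueName="consistency-test")["QueueUrl"]

    # A newly created queue may take a little while to show up in ListQueues.
    while queue_url not in sqs.list_queues().get("QueueUrls", []):
        time.sleep(5)

    # Send a handful of messages in a row...
    for i in range(10):
        sqs.send_message(QueueUrl=queue_url, MessageBody=f"message {i}")

    # ...then drain the queue. Messages come back in arbitrary groupings,
    # so keep polling until nothing is returned for a few tries.
    received = []
    empty_polls = 0
    while empty_polls < 3:
        batch = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = batch.get("Messages", [])
        if not messages:
            empty_polls += 1
            time.sleep(2)
            continue
        for m in messages:
            received.append(m["Body"])
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])

    print(received)  # all ten messages, but not necessarily in send order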

Overall, SQS was very reliable, and as long as you don’t care about ordered delivery and you don’t have messages larger than 8 KB, it’s a viable Cloud-based choice.

Microsoft’s Azure platform offers a similar service which I haven’t played around with, but it looks almost identical to Amazon’s SQS. I guess borrowing is the best form of flattery. The operations appear to map 1:1 to SQS, and here’s a direct quote from their documentation:
"A queue can contain an unlimited number of messages, each of which can be up to 8 KB in size. Messages are generally added to the end of the queue and retrieved from the front of the queue, although first in, first out (FIFO) behavior is not guaranteed.“ 
So pretty much the same as Amazon's SQS. It's great when your competition doubles as your PM.
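
For comparison, here’s a sketch of the same round trip against Azure’s queue service using the azure-storage-queue Python package (which also postdates this post); the connection string and queue name are placeholders:

    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<your-storage-connection-string>",
        queue_name="outbound-notifications",
    )
    queue.create_queue()

    queue.send_message("order #12345 updated")

    # Like SQS, retrieval is roughly FIFO but ordering is not guaranteed.
    for message in queue.receive_messages():
        print(message.content)
        queue.delete_message(message)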

If you’re looking for a Cloud-based message persistence service to complement your Cloud-based integration platform, Amazon SQS and (probably) Microsoft’s Azure both offer worthwhile solutions.

Thursday, May 27, 2010

Is Your Cloud Elastic or Plastic?

The discussions over multi- vs. single-tenant Cloud application design seem to be winding down, with multi-tenancy clearly the winner. Although customers should frankly not care about or be impacted by either architecture, vendors would be wise to follow a path that leads to easier maintenance and lower operational costs. As the trend in SaaS and Cloud computing puts downward pressure on license prices, application vendors need to adopt technologies and architectural models that let them operate a Cloud application in the most cost-effective manner. That's assuming you have a for-profit, or at least an against-loss, business model.

Now, it seems natural that the next big debate in Cloud computing architecture is going to be around elasticity vs plasticity. 

Is your Cloud computing infrastructure able to scale up or down based on demand? Or do you have to plan for peak demand and deploy enough hardware to make sure you have the capacity to meet your demand and SLAs?

If it's the latter, then it's like a hotel keeping the lights on in every room just in case someone checks in. How about turning the lights on only when you need them?! The amount of resources (aka moolah) wasted on constantly running infrastructure sized for peak demand can potentially negate any cost savings achieved through multi-tenancy.

The only arguments that seem to make any sense for plastic computing are about SLAs and the latency involved in bringing new hardware/software online to support increases in demand. To that, I say: invest in some operational analytics. I'm sure if you can track your usage, you can see patterns of use and plan for them. For example, if you're running a sales or finance application, it seems natural that the end of the month or quarter would be the busiest time, while the beginning through the middle of the month is not so busy. Turn off the lights when you don't need them.
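
As a purely illustrative example of "turning off the lights" on a schedule, here's a sketch using EC2 Auto Scaling scheduled actions via boto3; the Auto Scaling group name, sizes, and cron expressions are placeholders you'd derive from your own usage analytics:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Scale up ahead of the end-of-month crunch...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="finance-app",
        ScheduledActionName="month-end-scale-up",
        Recurrence="0 0 25 * *",  # midnight on the 25th of each month
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=8,
    )

    # ...and back down once the rush is over.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="finance-app",
        ScheduledActionName="month-start-scale-down",
        Recurrence="0 0 3 * *",  # midnight on the 3rd of each month
        MinSize=1,
        MaxSize=4,
        DesiredCapacity=2,
    )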