My Introduction to Service Bus Architecture

Dominic Burford
3 min read · Dec 21, 2016

Over the last couple of weeks I’ve been looking at service bus architectures, specifically with regard to Azure Service Bus. Since deploying our ASP.NET Web API services into the Azure cloud, I have wanted to ensure that they are resilient and scalable, so I spent some time getting to grips with service bus architectures, both the concepts and theory and the practice of using them.

The biggest mind-shift is moving from direct service-to-service communication to completely decoupled services. Whereas previously every service communicated directly with the others, in a service bus architecture no such direct communication exists. This takes a little getting used to. When you need to update another service, you place the request on the service bus queue and get on with the next task.
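To make that concrete, here is a minimal sketch of what the sending side might look like. This is purely illustrative rather than the code from my proof of concept; it assumes the WindowsAzure.ServiceBus NuGet package (the Microsoft.ServiceBus.Messaging namespace), and the connection string, queue name and payload are all placeholders.

```csharp
using Microsoft.ServiceBus.Messaging;

class Sender
{
    static void Main()
    {
        // Placeholder values - in a real application these would come from configuration.
        const string connectionString = "Endpoint=sb://...";
        const string queueName = "update-requests";

        // The only endpoint the sender needs to know about is the service bus itself.
        var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

        // Drop the request onto the queue and get on with the next task.
        client.Send(new BrokeredMessage("{ \"customerId\": 42, \"action\": \"update\" }"));

        client.Close();
    }
}
```

Notice that no receiving service is addressed anywhere in that code; which process eventually handles the message is entirely the concern of the service bus.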

This architecture has many benefits. Firstly, it provides consistency. No matter which service you need to communicate with, you always do so by sending messages to, and receiving messages from, the service bus. The only endpoint you need to be interested in is that of the service bus itself. Whilst the messages will differ, the endpoint and the architecture remain the same.

Secondly, from the client application’s perspective, the service request will appear highly responsive. This is because the service endpoint has simply dropped the request onto the service bus queue and is immediately available to service another request. The actual processing is undertaken later, when the request is picked up from the queue by a separate process. Exactly when that happens depends on how the service bus has been configured, but suffice it to say that it will be processed within a time-frame acceptable to the business.
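As a sketch of that separate process (again against the Microsoft.ServiceBus.Messaging classes, with placeholder names), the receiver registers a callback and the message pump hands it messages as they are taken off the queue:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class Receiver
{
    static void Main()
    {
        // Placeholder values - the same queue the sender writes to.
        const string connectionString = "Endpoint=sb://...";
        const string queueName = "update-requests";

        var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

        // The message pump invokes this callback for each message taken off the queue.
        // It runs in its own process, on its own schedule, entirely decoupled from the sender.
        client.OnMessage(message =>
        {
            var body = message.GetBody<string>();
            Console.WriteLine("Processing request: " + body);
        });

        Console.WriteLine("Listening for messages - press Enter to exit.");
        Console.ReadLine();
    }
}
```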

Scaling up the number of requests you are able to process becomes an almost trivial matter and, importantly, an infrastructure problem. It is no longer a problem that the software developer needs to solve. Yes, the developer needs to write code capable of sending messages to and receiving messages from the service bus queue, but how quickly those messages are handled and how many can be processed within a specified time-frame is largely an infrastructure concern.

Ensuring that requests are still processed in the event of a failure is also an infrastructure problem. Instead of implementing retry patterns in your code, you simply configure the retry behaviour of the service bus. Service bus architectures allow a failed message to be placed back on the queue, where it can be retried at a later time. So if the database fails to update due to a deadlock or other lock contention, you fail the request, put it back onto the service bus queue, and try it again later.
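On the receiving side this maps onto the message completion model. A hedged sketch, with the real processing stood in by a console write: completing a message removes it from the queue, abandoning it puts it back to be retried, and once the queue’s MaxDeliveryCount is exceeded the service bus moves the message to the dead-letter queue automatically.

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class ResilientReceiver
{
    static void Main()
    {
        const string connectionString = "Endpoint=sb://...";  // placeholder
        const string queueName = "update-requests";           // placeholder

        var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

        var options = new OnMessageOptions
        {
            AutoComplete = false,   // we decide explicitly whether each message succeeded
            MaxConcurrentCalls = 5  // how many messages this process handles in parallel
        };

        client.OnMessage(message =>
        {
            try
            {
                // Stand-in for the real work, e.g. applying the update to the database.
                Console.WriteLine("Updating database with: " + message.GetBody<string>());

                message.Complete();  // success: the message is removed from the queue
            }
            catch (Exception)
            {
                // Failure (e.g. a deadlock): put the message back on the queue so it can be
                // retried later. Once the queue's MaxDeliveryCount is exceeded, the service
                // bus moves the message to the dead-letter queue automatically.
                message.Abandon();
            }
        }, options);

        Console.ReadLine();  // keep the worker alive while the message pump runs
    }
}
```

MaxConcurrentCalls is about the only scaling decision that shows up in the code; the rest, from the queue’s delivery counts to how many instances of the worker you run, is configuration and infrastructure.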

A service bus architecture turns what was previously difficult to implement in software into a matter of infrastructure configuration.

I’ve been working with Azure Service Bus and have developed a simple proof of concept and associated unit tests. It has been surprisingly easy to work with. As you would expect from Microsoft, all the tooling needed to work with Azure Service Bus is available within the .NET ecosystem. Suffice it to say that I will be using Azure Service Bus from now on, including in my current project.

I will leave the details of how I have developed the applications that send and receive messages from the service bus for a future article.

