Other than Pub/Sub as a distribution mechanism, the other important idea in Ranadivé’s Information Bus was that it combined data from wildly different sources. Market data, financial news, internal analysis, reports and more easily added up to a single trader wanting access to tens of different systems simultaneously, each using its own data format and transmitting to its own software, which was often run on an individual dedicated terminal sitting on the trader’s desk. Using middleware simplified this by sitting in-between, talking to each of the data sources in its own language, and presenting a common interface to a single piece of software which consumed whatever data was needed and presented it to the user.
Many companies outside of finance had developed a similar mish-mash of services: systems from various vendors, databases and data warehouses, and feeds of information. Rather than data dissemination being the dominant requirement though, enterprise systems were often primarily made up of services that could be called upon to perform actions. Though there were many potential endpoints, a given trader’s flow was reasonably straightforward - receiving data and issuing orders to be executed in one of a handful of large exchanges. Many enterprises had logistics, point of sale, order processing, customer service and myriad other systems in place, each shuttling data back and forth, each needing to be accessed by many different staff and job functions, and very few of them communicating with each other.
These systems didn’t operate like a trading exchange - if a user made a request, it was not uncommon for them to be the only party interested in the response. However, these systems often exposed RPC style interfaces, to allow enterprise integrators to take advantage of their functionality and build new products and services. Here again there was a need for middleware to standardise and simplify, but rather than using Pub/Sub, this type of integration required point to point or direct messaging - a more general form of the client-server principle, where request and response were delivered via messages, and could be routed, distributed and stored as needed.
One of the first companies to really latch on to this particular aspect of messaging was Progress Software with their SonicMQ product in the early 2000s, and it is their term - Enterprise Service Bus - that has seen the most use in describing this pattern of architecture.
ESBs solved a really common, and expensive, problem for many companies. As part of scoping any new work within a business, it was often found that the implementors would need to interact with several different services in order to deliver any significant new functionality. These disparate applications were often under the control of different departments, with little ability for the application developer to make any changes, and often very long change cycles if changes could be made at all. On the flip side, the sheer scale of large enterprises meant that some degree of change was nearly constantly occurring, creating ongoing work for any integrator. This led to growing notification and sign-off chains for any change, out of fear of breaking some legacy system.
For example, a company may have built a stock management system, a cataloguing system for storing product information and images, a fulfilment system that managed the shipping process, and a billing system for orders. Each system would speak its own language, each expose services in a different way. If customers were ordering from a printed catalogue, and orders were sent to a delivery department via a note or email, then this all probably worked. As soon as a single website was expected to manage the whole process to allow proper online ordering, it would start to get tricky - the web developer had to integrate with all four systems, which entailed four integrations, and four service level agreements with the teams running them. If a second ordering platform was added to allow sales people in the field to register orders, the same work might have to be done again.
An enterprise service bus places a message broker at the heart of the organisation, with the aim of hiding details of the systems from implementors. Each implementor only has to implement the messaging interface with the broker, and there is only a single implementation of each service to the broker itself. This allows the developer of new products not to have to worry about where a system is, the quirks of its individual services, and whether client libraries are available. They can action whatever is needed via the message bus. This is often referred to as Enterprise Application Integration, or EAI.
Messages are routed to the appropriate services based on a routing key, or service name. This kind of direct routing is preferred over the Pub/Sub model because in an EAI situation it is more common to be moving different types of message to very different types of system. The communication is point to point, though usually still asynchronous via the use of a message queue, but with the advantage of allowing the middleware to manage the load on the service and distribute the work intelligently.
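This style of direct routing can be sketched as a broker holding one queue per routing key, so each message goes to exactly one service rather than to every subscriber. This is a minimal illustration, with hypothetical service names like "billing" and "fulfilment", not any particular product’s API:

```python
from collections import defaultdict, deque

class DirectBroker:
    """Point-to-point routing: one queue per routing key."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def send(self, routing_key, message):
        # Enqueue for the single service bound to this key.
        self.queues[routing_key].append(message)

    def receive(self, routing_key):
        # A worker for that service pulls the next message; the broker
        # buffers load rather than the caller blocking on the service.
        q = self.queues[routing_key]
        return q.popleft() if q else None

broker = DirectBroker()
broker.send("billing", {"order_id": 42, "amount": 19.99})
broker.send("fulfilment", {"order_id": 42, "items": ["book"]})
assert broker.receive("billing")["amount"] == 19.99
```

Because the broker sits between the two parties, it is free to hold messages while a service is busy, or to hand them out across several workers bound to the same key.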
Responses from the service were generally routed back to the calling application via a middleware queue, where they could be consumed as needed. This neatly separated the calling application and service, requiring no changes when adding a new consumer or when splitting messages to be load balanced across multiple services. It allowed business decisions, such as sending orders over a certain size down a different flow, to be added to a system without compromising or even modifying most of what was already running. Messages could simply be routed to an extra queue, or switched based on certain keys, which provided a significant amount of extra flexibility.
Where ESB abstractions tend to run into trouble is the format of the data that is sent back and forth. Different systems will often represent the same data in different ways. For example an ordering system might expect a document with a certain structure, or a catalogue and a fulfilment system may both have product IDs, but one expects them as strings and the other as integers.
The data format or data structure is the specific detail of the way data is represented. For example, an address might be a single opaque string, or it might be five strings for building, street, city, county and postcode. The shape of the data is often very specific to certain systems, so many ESB implementations would define a canonical format which included all data at the maximum level of granularity required, then design message transformers for each interface which would convert the canonical format into the specific format required at the time it was passed to an existing service. Implementors would only ever have to implement a single transformation, from their preferred representation to the canonical format and back. The downside of this approach is that the canonical format could often turn into a battleground of political infighting between different factions, and be difficult and expensive to change if required.
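The canonical-format idea can be sketched as a pair of small transformers per system: each adapter converts only between its local shape and the one shared canonical shape, never directly to another system’s format. The field names below ("sku", "product_id" and so on) are illustrative, not taken from any real system:

```python
def catalogue_to_canonical(local):
    # The catalogue system uses string IDs; the canonical form uses integers
    # at the maximum granularity any consumer needs.
    return {"product_id": int(local["sku"]), "name": local["title"]}

def canonical_to_fulfilment(canon):
    # The fulfilment system expects the same data under different keys.
    return {"item_no": canon["product_id"], "description": canon["name"]}

# One hop through the canonical format connects the two systems, and adding
# a third system means writing one new pair of transformers, not N.
msg = catalogue_to_canonical({"sku": "1001", "title": "Widget"})
out = canonical_to_fulfilment(msg)
assert out == {"item_no": 1001, "description": "Widget"}
```

With n systems this needs 2n transformers rather than the n(n-1) direct translations a fully meshed integration would require, which is the economic argument for the canonical format despite the politics around defining it.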
“[O]ur own experience in the INFINET applications group had led us to realize the shortcomings of message queuing as a middleware paradigm. The primary issue was that our MQ tools had no formal interface definition (message formats were essentially determined and documented by source code of the sending program).”
- Erik Townsend
Often ESBs defined not only the formats transferred, but also different types of messages. The systems may differentiate between command messages, event messages and document messages, which can be managed and queued in different ways. They may define different channels or queues for different data types, potentially pushing to multiple queues from a single service to allow a consumer to choose their appropriate format. These different types of communication form the informal protocol of the system, and will often come with constraints on how and when messages can be sent.
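The command/event/document distinction can be made concrete with a small sketch; the channel names here are illustrative of how a bus might queue each kind differently:

```python
from dataclasses import dataclass, field

@dataclass
class Command:        # asks a service to do something
    action: str
    payload: dict = field(default_factory=dict)

@dataclass
class Event:          # announces that something has happened
    name: str
    payload: dict = field(default_factory=dict)

@dataclass
class Document:       # carries data, with no implied behaviour
    body: dict = field(default_factory=dict)

def channel_for(message):
    # Route each kind of message to its own queue, as many ESBs did.
    return {Command: "commands", Event: "events", Document: "documents"}[type(message)]

assert channel_for(Event("order.created", {"id": 7})) == "events"
```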
The ESB therefore provided a layer on top of the messaging system, with its own protocol syntax defining the data and message formats, its own routing via the configuration of the ESB, and with error and flow control handled by the message brokers involved.
Companies often also required that their messaging middleware address security issues. Security in messaging systems revolves around the same basic concerns as security in almost any environment - users need some way of verifying a message’s integrity, so that they can be sure it hasn’t been tampered with. They need to be confident of message confidentiality so that they know their data can’t be snooped on. To ensure the messages are delivered and created legitimately the parties involved may need to be authenticated to confirm their identities, and authorised to see or access data or services.
Of course, not every system requires all, or sometimes any, of these properties, and there are levels of complexity and coverage for each. For example, there is a difference between the type of integrity check required to ensure that a message hasn’t been accidentally modified by an error on the line, and the checking needed to detect one that has been intentionally manipulated by a motivated attacker. To handle the latter case, enterprise message systems tend to include some support for encryption technologies. Often messages must be encrypted in transport so that the message can only be read by the intended receiver, or cryptographically signed by the sender so they cannot be forged. They may be routed or filtered based on some kind of managed policy for which endpoints can receive, send, or process messages. This requires authenticated endpoints and some kind of access control list to determine who can receive the data.
All of this is usually accomplished with the aid of public key encryption, an invention that goes back to the mid-70s. In this type of cryptography, as used by SSL and TLS, rather than both sender and receiver sharing a single secret key or password that allows them to encrypt and decrypt messages for each other, the key is split into public and private parts.
The public key could be used to encrypt messages so that only the holder of the private key could read them, and the private key could be used to encrypt messages so that any holder of the public key could decrypt them, verifying that the private key holder had genuinely been the one to create the message. This latter usage is generally used for signing a message; the encryption is usually over a hash or summary of the message rather than the entire contents, allowing the receiver to verify the message without the overhead of having the entire message encrypted. This is used in a variety of ways - for example, the messaging based configuration and execution management tool Salt uses TLS security in order to sign the commands sent via it, so that servers do not execute bogus commands sent or tampered with by an attacker.
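The hash-then-sign process can be illustrated with a toy example: hash the message, then "encrypt" the digest with the private exponent, and let any public-key holder verify it. The tiny textbook RSA numbers below (n=3233, e=17, d=2753) are for demonstration only and offer no security whatsoever; real systems use keys thousands of bits long and padded signature schemes:

```python
import hashlib

# Toy RSA key: n = 61 * 53, public exponent e, private exponent d.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash first, so only a short digest needs the expensive private-key step.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)               # "encrypt" the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest  # recover the hash with the public key

sig = sign(b"ship order 42")
assert verify(b"ship order 42", sig)
```

A receiver recomputes the hash itself, so any change to the message body makes verification fail without the sender’s private key ever leaving their machine.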
The structure used to manage these keys and the chains of trust (as a key holder can sign keys from other users they trust) is generally referred to as a public key infrastructure, and the management of it is often a major component of secure distribution of data around an organisation.
The ESB enabled an architectural pattern known as the Service Oriented Architecture. Variants on this model had been in use back into the early 80s, but beginning in the 90s the term (coined by Yefim V. Natis at Gartner) gained favour to describe the idea of building functionality as a series of small independent services, which together provided all the processing needed by an application. Rather than having the consuming applications implement these services directly, they would be contacted via a messaging system, for all of the EAI-type benefits mentioned before.
This produces a very decoupled and flexible system, that can easily be extended as new requirements and functionality arise, and has been popular in many spheres for exactly this reason. By driving the communication via messaging, even if at the end the broker is making an RPC type service call, client and server can be kept very distinct from each other - the message broker is the only one that needs to know where they are and precisely how they communicate. This decoupling makes it easier to scale, cache and distribute systems. It gives the messaging system the possibility to load balance among multiple servers, or to log messages for later view. In some systems this is taken to a practical extreme - in an event sourced model messages are always stored before being actioned. This allows replaying events later for debugging purposes, or running a parallel system for comparison, testing or validation.
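The event sourced model mentioned above can be sketched in a few lines: every message is appended to a log before being applied, so the current state can always be rebuilt by replaying the log. The account/balance domain here is purely illustrative:

```python
class Account:
    def __init__(self):
        self.log = []           # durable record of every event, in order
        self.balance = 0

    def apply(self, event):
        self.log.append(event)  # store the message first...
        self._handle(event)     # ...then action it

    def _handle(self, event):
        kind, amount = event
        self.balance += amount if kind == "deposit" else -amount

    @classmethod
    def replay(cls, log):
        # Rebuild state from scratch - e.g. for debugging, or to run a
        # parallel system for comparison or validation.
        fresh = cls()
        for event in log:
            fresh.apply(event)
        return fresh

acct = Account()
acct.apply(("deposit", 100))
acct.apply(("withdraw", 30))
rebuilt = Account.replay(acct.log)
assert rebuilt.balance == acct.balance == 70
```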
This type of messaging is also used in Business Process Management - the automation and coordination of the day to day processes that go on inside a business. BPM systems are often setup with messaging at the centre, in combination with a business rules engine that responds to events delivered by the messaging system. For some organisations, enough event driven functionality may push them towards a Complex Event Processing system, where events communicated over messages are run against real time queries and within windowed buffers to track complex interactions and trigger appropriate action. These can be used for direct, operational decisions (such as mailing someone a promotion when they have visited their account page after calling customer services) or to trigger further events within an organisation.
Enterprise Integration Patterns
ESBs and SOAs were some of the key driving forces in the development and adoption of messaging infrastructure, so it is of no surprise that one of the preeminent books on practical messaging is Gregor Hohpe’s Enterprise Integration Patterns1. Hohpe and co-author Bobby Woolf documented the patterns prevalent in building messaging systems for integration, but in doing so captured many of the basic messaging use cases for other spheres. These range from straightforward ideas, like including a Return Address in request/response type messages that describes where responses should be delivered to, to more complex systems of aggregation, such as the Auction pattern where a message is distributed to many processors, and a single “winning” response is returned by an aggregator.
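The Return Address pattern is simple enough to sketch directly: each request carries the name of the queue where its reply should be delivered, so the responder needs no built-in knowledge of its callers. The queue names below are illustrative:

```python
import queue

# One request queue shared by all callers, plus per-caller reply queues.
queues = {"requests": queue.Queue(), "replies.web": queue.Queue()}

def responder():
    req = queues["requests"].get()
    result = {"total": sum(req["prices"])}
    # Route the reply via the return address named in the request itself.
    queues[req["reply_to"]].put(result)

queues["requests"].put({"prices": [2, 3], "reply_to": "replies.web"})
responder()
assert queues["replies.web"].get()["total"] == 5
```

A second caller only needs to name a different reply queue; the responder’s code is untouched.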
One important set of patterns are covered under the idea of message channels. The authors discuss the challenges around the fundamental act of sending a message. For example, who determines whether there is a channel to send a message through (such as a topic)? It could be the sender, the messaging middleware, or the receiver. How are undeliverable messages handled? They can be retried to some degree, but eventually need to either be dropped to the floor or delivered to a dead letter queue where they can be manually reviewed. Reviews require that there are staff available to investigate, which is only likely to be the case for high value, high reliability requirement systems.
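The retry-then-dead-letter behaviour described above can be sketched as a bounded loop: retry delivery a few times, then park the message for manual review. The retry limit and the always-failing handler here are illustrative:

```python
from collections import deque

MAX_RETRIES = 3          # illustrative bound on redelivery attempts
dead_letters = deque()   # queue of messages awaiting human review

def deliver(message, handler):
    for attempt in range(MAX_RETRIES):
        try:
            return handler(message)
        except Exception:
            continue                 # transient failure: try again
    dead_letters.append(message)     # give up: park on the dead letter queue

def always_fails(msg):
    raise RuntimeError("endpoint down")

deliver({"id": 1}, always_fails)
assert list(dead_letters) == [{"id": 1}]
```

Whether anyone actually drains that queue is the operational question the authors raise: a dead letter queue only helps if staff exist to review it.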
How messages are routed around channels is also discussed, introducing ideas such as the topic or content based routing, and a pipes and filters model where a chain of processors are hooked together with the message broker (pipes) and process or transform messages (filters) on the way. This processing can include message translation, or transforming the format of a message so that local consumers have the message in a style they understand. All of this addresses the way that communication is controlled, and the way the endpoints need to react.
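The pipes and filters model reduces to chaining small message-processing functions, with the broker playing the role of the pipe between them. The decode and translate steps below are illustrative filters, the second mirroring the message-translator idea:

```python
def decode(raw):
    # Filter 1: parse the wire format into a structured message.
    key, value = raw.split("=")
    return {key: value}

def translate(msg):
    # Filter 2: convert to the shape a local consumer understands
    # (field names here are purely illustrative).
    return {"product_id": int(msg["sku"])}

def pipeline(message, filters):
    # The "pipe": feed each filter's output into the next.
    for f in filters:
        message = f(message)
    return message

assert pipeline("sku=1001", [decode, translate]) == {"product_id": 1001}
```

In a real deployment each filter would sit behind its own queue, so filters can be added, reordered or scaled independently without touching their neighbours.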
These patterns become particularly important as messaging scales, because messaging, like all distributed communication, is fraught with opportunities for failure. What if a reply is lost along the way, the system processing it crashes, or just takes a really long time (for some definition of “really long”)? Do you retry, can you retry? To define how these communications should happen, and in what order, some systems define conversation policies - effectively state machines that detail which actions are legitimate given some input and the current state of the system. This becomes a form of protocol design, covering the error handling, flow control and routing of messages around enterprise integrations. Products such as IBM’s Message Broker and others have developed whole suites of visual management tools to define these patterns. The power of building up conversations of related messages out of common building blocks of simple messaging patterns is significant though, as we will see explored further in other messaging standards and systems.
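A conversation policy can be sketched as a transition table: it lists which message is legitimate in each state, so out-of-order or unexpected messages are rejected rather than silently actioned. The states and message names below are illustrative of a simple request/reply conversation with timeouts:

```python
# (current state, incoming message) -> next state
POLICY = {
    ("idle", "request"):           "awaiting_reply",
    ("awaiting_reply", "reply"):   "done",
    ("awaiting_reply", "timeout"): "retrying",
    ("retrying", "request"):       "awaiting_reply",
}

def step(state, message):
    try:
        return POLICY[(state, message)]
    except KeyError:
        # Anything not in the table is a protocol violation.
        raise ValueError(f"{message!r} not allowed in state {state!r}")

state = step("idle", "request")    # a request opens the conversation
state = step(state, "timeout")     # no reply arrived in time
state = step(state, "request")     # so the request is retried
assert step(state, "reply") == "done"
```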