Decoupling responsibilities with Pedestal-like interceptors in Ruby

Laerti Papa
9 min read · Jan 11, 2019

Motivation

As developers we are always looking for chances to apply new ideas. Normally I tend to keep things simple and rarely plan good design and abstractions far ahead. Having the big picture upfront and planning your code ahead takes experience and a lot of past failures to learn from.

Recently I came across a codebase where we had to add a new feature on top of an existing implementation. It had been implemented a couple of months earlier and it kept growing and growing. An equivalent UML diagram can be found below (class and method names / calls have been modified to simplify the example without exposing any business / domain rules).

Class diagram for a user request processing

From the diagram above we can see that the entry point (UserRequestCreatedProcessor) is tightly coupled with its collaborators, and moreover we have low cohesion due to the strong composition relationships between the objects involved. Although the responsibilities were decoupled somewhat “okayish” across different classes, there was still room for improvement: a better API definition and a better separation of concerns.

Another argument for refactoring back then was that we were using exceptions to handle control flow, transitioning a business entity into a failure state for different failure reasons. There are various arguments about exceptions in control flow; Martin Fowler has a nice article about replacing exceptions with Notification classes. We liked the idea, although we ended up following a monoid approach to handle results, something we will write about in a later post.

The exception handling was done in UserRequestCreatedProcessor, and the exceptions could be raised by any object deep in the stack, like RemoteFileExtractor, Transformer, UserDataBuilder etc. These exceptions represented business failures that we needed to handle later on. Thus the code looked something like this:

class UserRequestCreatedProcessor
  # ...
  def execute
    return entity.error!("URL is missing") if url.nil?

    builder = UserDataBuilder.build(url, entity)
    data = builder.data.to_json
    http_service.put!("some_path", request_body: data)
    entity.complete!
  rescue MissingUserAttributeError
    entity.error("user_attribute_missing")
  rescue NoDataFound
    entity.error("user_data_not_found")
  rescue APINotFoundError
    # ...
  rescue UnprocessableEntity
    # ...
  ensure
    builder&.cleanup
  end
end

You get the point.

First ideas for improvements

I started thinking about different approaches to improving the above flow. In the end we wanted to achieve the following:

We wanted high cohesion, in order to have classes that are easier to maintain and change less frequently. Such classes are more reusable, since they are designed with a single responsibility in mind and a well-focused purpose.

Of course, raising cohesion tends to lower coupling and vice versa. A good explanation and example of what low coupling and high cohesion mean can be found in the following Stack Overflow question.

I already had some ideas that I had explored a couple of years ago and had seen used in some applications at my current company, but I wasn’t sure we would want to go in that direction.

Being a big fan of clean code and design principles, the first thing that came to my mind was to open the Gang of Four book and do some research. Since we had undo functionality in almost all our classes, we thought the command pattern would apply perfectly here. It encapsulates a request as an object, so we can parameterize clients with different requests, and it supports undoable operations.
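As a quick illustration of the pattern, here is a minimal sketch with hypothetical names (InMemoryRepository, CreateUserCommand), not our actual classes:

```ruby
# A minimal command-pattern sketch; InMemoryRepository and
# CreateUserCommand are hypothetical names, not our real classes.

class InMemoryRepository
  attr_reader :records

  def initialize
    @records = []
  end

  def create(attributes)
    @records << attributes
    attributes
  end

  def delete(record)
    @records.delete(record)
  end
end

class CreateUserCommand
  def initialize(repository, attributes)
    @repository = repository
    @attributes = attributes
  end

  # The request is encapsulated as an object, so callers can be
  # parameterized with different commands.
  def execute
    @created = @repository.create(@attributes)
  end

  # Undo works because the command remembers what it created.
  def undo
    @repository.delete(@created) if @created
  end
end

repo = InMemoryRepository.new
command = CreateUserCommand.new(repo, name: "Jane")
command.execute # repo.records => [{ name: "Jane" }]
command.undo    # repo.records => []
```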

Another thought was to use the mediator pattern, which allows loose coupling between classes by being the only class with detailed knowledge of their methods. I had seen the pattern recently in the kubernetes-deploy gem and wanted to give it a shot. It was a good fit, since we wanted to delegate interactions to a mediator object instead of interacting with each object directly. We avoided this approach because in practice mediators tend to become more and more complex.

Actual approach: pipes and filters architecture

Stepping back a little, we realized we also wanted to introduce a new approach to building future use cases. On the other hand, we could have tried some heavier architectural patterns, but that would have been too much effort and too complex for our scenario; it would have complicated things and added extra overhead we wanted to avoid. So we wanted something simple that would not affect our daily development style: easier to understand, easier to change, and most importantly, simple.

A couple of years ago I was playing with Pedestal, a Clojure framework for building reliable services and APIs. What I liked back then was the idea of interceptors. One can think of interceptors as an implementation of the “pipes and filters” architecture pattern. More information can be found in the Pedestal guides. Below we briefly explain how the pattern works.

Interceptors in Pedestal are executables (in Java they would probably implement some Runnable interface) that are run by an “executor” passing a context object; in other words, a middleware. A UML class diagram adapted from Pedestal’s data structures (it uses a queue and a stack to implement the middleware) can be found below.

UML diagram of Interceptor pattern

Hiding some details about the Context object, you can assume it is an associative array (a Hash / dictionary) or an object like OpenStruct in Ruby.
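For example, with a plain Hash or an OpenStruct as the context (the keys below are illustrative, not from our domain):

```ruby
require "ostruct"

# The context can be a plain Hash that interceptors read and write...
context = { entity: "SomeEntity" }
context[:extracted_file_path] = "destination.csv" # an interceptor adds data
context.delete(:entity)                           # or removes it

# ...or an OpenStruct, which exposes the same data as methods.
context = OpenStruct.new(entity: "SomeEntity")
context.extracted_file_path = "destination.csv"
```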

The execution process is as follows:

  • Enter phase: a Client instance creates an Executor instance with the set of interceptors we want to run. Interceptors can be passed in the Executor’s initialization or added via a register method. Every interceptor gets registered in the middleware: at this point, every interceptor is queued in the enterQueue, leaving the leaveStack empty for now. Calling execute on the Executor instance will dequeue the enqueued interceptors one by one from enterQueue and call the onEnter method on each interceptor, passing the context. Each interceptor executes its logic with access to the context instance. The context represents the data flow (pipes and filters), and each interceptor can read from it, add items to it, or change it (addition, deletion).
  • Once the first interceptor has run successfully (no exception raised), the first interceptor instance is pushed onto the leave stack. Repeating the same process for the remaining interceptors in enterQueue, we dequeue them one by one and send the onEnter message to each instance, passing the corresponding context. Each interceptor can similarly read / write the context. (Things to consider: the queue could be built based on heuristics, or as a priority queue... that would be interesting!)
  • Leave phase: once all interceptors in the enter queue have been executed, we repeat the process for the leave stack and start popping the interceptors that were pushed onto it, one by one. Note that the interceptors at this point are executed in reverse order (LIFO). The leave phase may include additional logic like cleanup (which was one of our requirements), encoding before sending the result back to the client, or mapping the context back to result objects (like an Optional monad).
  • Error phase: this phase is triggered once an exception is raised. The Executor catches any runtime error and calls the onError method on all interceptors in the stack, passing the context as usual. At this point each interceptor can handle the error gracefully by resolving it if applicable, logging, applying retry logic, etc.
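Putting the three phases together, a minimal Ruby sketch of such an executor could look like the following. The names are my own (Pedestal uses :enter / :leave / :error keys; here I use the on_enter / on_leave / on_error methods that appear later in this article), and the Result wrapper is a hypothetical convenience, not part of Pedestal.

```ruby
module Interceptors
  # Base class: interceptors override only the phases they care about.
  class Base
    def on_enter(context); context; end
    def on_leave(context); context; end
    def on_error(context); context; end
  end

  # A tiny result wrapper returned by the executor (my own sketch).
  Result = Struct.new(:data, :error) do
    def success?; error.nil?; end
    def failure?; !success?; end
  end

  class Executor
    def initialize(*interceptors)
      @queue = interceptors # enter queue (FIFO)
    end

    def register(interceptor)
      @queue << interceptor
      self
    end

    def execute(context = {})
      stack = [] # leave stack (LIFO)
      # Enter phase: dequeue, call on_enter, push onto the stack.
      until @queue.empty?
        interceptor = @queue.shift
        context = interceptor.on_enter(context)
        stack.push(interceptor)
      end
      # Leave phase: pop in reverse order and call on_leave.
      context = stack.pop.on_leave(context) until stack.empty?
      Result.new(context, nil)
    rescue StandardError => e
      # Error phase: on_error runs on every interceptor already entered.
      context = stack.pop.on_error(context) until stack.empty?
      Result.new(context, e)
    end
  end
end

# Usage: a trivial interceptor that doubles a value in the context.
class Doubler < Interceptors::Base
  def on_enter(context)
    context[:value] *= 2
    context
  end
end

res = Interceptors::Executor.new.register(Doubler.new).execute(value: 21)
res.success? # => true
res.data     # => { value: 42 }
```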

We immediately saw some pros in the described approach:

  • Simple to implement and use.
  • The queue and call stack are exposed as pure data, which gives us good control over the execution process.
  • Try/catch logic for error handling is avoided.
  • It applies well in our case, since we could easily do cleanup in the leave phase when needed, or equivalently in the error phase. The same applies for logging and error-transition phases.
  • The whole process and the parties involved in the user request can easily be seen in the caller client class, as shown below.
class UserRequestCreatedProcessor
  attr_reader :payload

  def initialize(payload)
    @payload = payload
  end

  def execute
    # executor = Interceptors::Executor.new(
    #   A.new("whatever"),
    #   B.new,
    #   C.new,
    #   D.new)

    executor = Interceptors::Executor.new
    executor.register(RemoteFileExtractor.new(url))
    executor.register(
      DomainFileCreator.new(TranslationService.instance)
    )
    executor.register(RemoteFileUploader.instance)
    executor.register(
      UserDataCreator.new(DataPreviewReport.new)
    )
    executor.register(ApiNotificationService.instance)

    res = executor.execute(entity: domain_entity)
    res.success? # true on success
    res.failure? # true on failure
    res.data # the context object
  end

  private

  def url
    payload[:url]
  end

  def domain_entity
    "SomeEntity"
  end
end

The client has all the “glue” needed to implement the business use case. These are common questions for every developer: Where is this called? Where is this used? In which context is this executed? How many grep commands do you have to run to understand where something is used? DCI tries to solve this, without claiming that our approach solves the problem. Each interceptor does one thing only and encapsulates our business logic. None of the interceptors knows about the others, so we promote loose coupling. If they need to collaborate, they can communicate by passing data / messages through the context. We also increased cohesion, since there is a separate class for each specific job, which results in better reusability and maintenance.

Last but not least, from a readability and understanding perspective, reading UserRequestCreatedProcessor you can immediately see what the class does. From the example above it is easy to see that we extract a remote file from somewhere, create our own file (probably translating some attribute in it), upload the file somewhere, and call UserDataCreator, which builds the JSON payload for an API that ApiNotificationService will send it to. I tend to think of the above as data-driven programming, although I may be wrong about the term.

Although there are big benefits, one should take care to update the context carefully and not override any value by accident. It can be difficult to trace who changed the context, although different approaches can help with debugging. The level of complexity also increases as you add more interceptors to the middleware; on the other hand, the definition of complexity is a little misleading, since applications grow and become complex, and we keep doing our best to improve, refactor, experiment, fail, succeed and learn. An ex-colleague and good friend of mine once said to me:

Έτσι είναι μεγαλώνει και ασχημαινει — Jason

It’s in Greek, and it was his reply when I once told him: “Man, I look at a class and I can’t believe what I was thinking when I wrote it, or the direction it’s going...”. And he replied: “That’s how it goes. It gets bigger and it gets uglier.” Which is kind of true!

Let’s see a possible implementation of the RemoteFileExtractor interceptor.

class RemoteFileExtractor < Interceptors::Base
  MissingRemoteURLError = Class.new(StandardError)
  DESTINATION_FILE_PATH = "destination.csv"

  def initialize(url)
    @url = url
    @extractor = Domain::Extractor.new(
      Domain::Fetcher.new(@url),
      DESTINATION_FILE_PATH
    )
  end

  def on_enter(context)
    raise MissingRemoteURLError, "Url for entity #{context[:entity].inspect} is missing" if @url.blank?

    context[:extracted_file_path] = @extractor.call
    context
  end

  def on_leave(context)
    @extractor.cleanup
  end
  alias_method :on_error, :on_leave
end

The above is a simple example from our use case. Once RemoteFileExtractor is executed in the enter phase, it extracts the remote file and updates the context with the extracted file path. Interested parties like DomainFileCreator can then access the context to get the extracted file path.

class DomainFileCreator < Interceptors::Base
  # ...
  def on_enter(context)
    # ... process context[:extracted_file_path]
    context
  end
end

Similar logic applies to the other interceptors. We will see an example and an implementation in Ruby in the next article. One may ask why not observables, but I think the flexibility this design provides is better. It also gives you a handy way to handle post-action and error cases without introducing more abstractions. The drawback, though, is that you need to keep in mind where the exceptions come from; if you don’t have a good picture of your pipeline, or you don’t think of your entity’s state as a finite-state machine, things may get complex. You also need to take care if you need object initialisation upfront without waiting for the executor’s execution, although this can be handled.

Related Ruby Libraries

While discussing the Pedestal interceptors described above with a colleague of mine, he mentioned that there is a gem called interactor which provides almost the same functionality. It is indeed similar, and the gem is pretty small and well written, so I would definitely give it a shot. It’s quite popular and also ranks 1st on ruby-toolbox in the service objects category.

Conclusion and future steps

This was a first introduction to pedestal.io interceptors and their common interface for creating a “pipes and filters” architecture. In the next part we will see a simple implementation in Ruby, where we will implement the middleware using a deque data structure.
