How HubSpot Uses Apache Kafka Swimlanes for Timely Processing of Workflow Actions


HubSpot adopted routing messages from the same producer over multiple Kafka topics, called swimlanes, to avoid the build-up of consumer group lag and prioritize the processing of real-time traffic. Using a combination of automatic and manual detection of traffic spikes, the company ensures that the majority of customers’ workflows execute without delays.

HubSpot offers a business process automation platform that, at its core, employs a workflow engine to power action execution. The platform handles millions of active workflows, executing hundreds of millions of actions every day at rates reaching tens of thousands of actions per second.

Workflow Engine Overview (Source: HubSpot Engineering Blog)

Most of the processing is triggered asynchronously, with Apache Kafka used for messaging, which decouples the sources/triggers of actions from the components executing them. The platform uses many Kafka topics responsible for delivering action data from a variety of sources. The potential downside of using a message broker is that if messages are published faster than consumers can process them, a backlog of unprocessed messages builds up, known as consumer lag.
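Consumer lag can be observed per partition as the difference between the log-end offset and the consumer group’s committed offset. The sketch below illustrates the idea rather than HubSpot’s actual tooling; it uses Kafka’s Java AdminClient, and the bootstrap address and group ID workflow-actions-consumers are assumed names:

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LagMonitor {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address

        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for the (hypothetical) consumer group.
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("workflow-actions-consumers")
                     .partitionsToOffsetAndMetadata().get();

            // Latest (log-end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                admin.listOffsets(latestSpec).all().get();

            // Lag per partition = log-end offset minus committed offset.
            committed.forEach((tp, meta) -> {
                long lag = latest.get(tp).offset() - meta.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```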

Angus Gibbs, engineering lead at HubSpot, describes challenges with ensuring that message processing happens in near real-time:

If there’s a sudden burst of messages produced onto our topic, there will be a backlog of messages that we have to work through. We could scale our number of consumer instances up, but that would increase our infrastructure costs; we could add autoscale, but adding new instances takes time, and customers generally expect workflows to process enrollments in near real-time.

The team recognized that the root of the problem was using the same topic for all messages of the same type or from the same source. Since the platform serves many customers, if one customer or a small group of customers starts producing a large volume of messages, all traffic on the topic is delayed, and the user experience suffers for every customer.

To address this concern, the developers opted to use multiple topics, which they called swimlanes, each with a dedicated pool of consumers. The simplest way to apply this pattern is to use two topics: one responsible for real-time traffic and the other for overflow traffic. The two swimlanes process traffic in exactly the same way, but each topic has independent consumer lag, so with appropriate routing of messages between them, it’s possible to ensure the real-time swimlane avoids any, or at least any significant, delays.
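As a minimal sketch of the pattern (the topic names and the isBulk flag below are assumptions, not HubSpot’s actual schema), a producer only needs to pick a topic per message:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SwimlaneProducer {
    // Hypothetical topic names; both lanes carry the same message type
    // and are processed by identical consumer logic.
    private static final String REALTIME_TOPIC = "workflow-actions-realtime";
    private static final String OVERFLOW_TOPIC = "workflow-actions-overflow";

    private final Producer<String, String> producer;

    public SwimlaneProducer(Properties props) {
        this.producer = new KafkaProducer<>(props);
    }

    // Route to the overflow lane when the message is flagged as bulk traffic,
    // keying by customer ID so each customer's actions stay ordered per lane.
    public void send(String customerId, String payload, boolean isBulk) {
        String topic = isBulk ? OVERFLOW_TOPIC : REALTIME_TOPIC;
        producer.send(new ProducerRecord<>(topic, customerId, payload));
    }
}
```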

Kafka Swimlanes (Source: HubSpot Engineering Blog)

Where possible, automatic routing of messages between swimlanes was implemented based on metadata extracted from published messages. For instance, some bulk import or refill operations were clearly marked as such within the message schema, and the routing logic could publish those to the overflow swimlane with ease. Additionally, the developers introduced per-customer configuration for rate-limiting the traffic and set appropriate thresholds based on the max throughput metrics of message consumers.
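A router combining both rules might look like the sketch below. The window size and per-customer budget are illustrative values, not HubSpot’s actual thresholds, and the topic names carry over from the earlier example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SwimlaneRouter {
    private static final long WINDOW_MS = 1_000;
    // Assumed budget, derived in practice from consumer max-throughput metrics.
    private static final long MAX_PER_WINDOW = 500;

    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    public String route(String customerId, boolean isBulkImport) {
        // Metadata-based rule: bulk imports always go to the overflow lane.
        if (isBulkImport) {
            return "workflow-actions-overflow";
        }
        // Rate-based rule: a customer exceeding its budget spills to overflow.
        maybeResetWindow();
        long count = counts.computeIfAbsent(customerId, k -> new AtomicLong())
                           .incrementAndGet();
        return count > MAX_PER_WINDOW ? "workflow-actions-overflow"
                                      : "workflow-actions-realtime";
    }

    private synchronized void maybeResetWindow() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= WINDOW_MS) {
            counts.clear();
            windowStart = now;
        }
    }
}
```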

Another angle for deciding how to route messages between swimlanes was looking at action-execution times: fast actions would be routed to one swimlane and slow ones to the other. This is particularly relevant for the HubSpot platform, where customers can create custom actions executing arbitrary Node or Python code.
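One way to implement this, sketched below under assumed values (the 200 ms cutoff, decay factor, and lane names are illustrative), is to keep a moving average of execution time per action type and route on it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExecutionTimeRouter {
    private static final double SLOW_THRESHOLD_MS = 200.0; // assumed cutoff
    private static final double ALPHA = 0.2; // weight for exponential moving average

    private final Map<String, Double> avgMillis = new ConcurrentHashMap<>();

    // Called after each action completes to update its type's moving average.
    public void recordExecution(String actionType, long elapsedMillis) {
        avgMillis.merge(actionType, (double) elapsedMillis,
            (old, cur) -> old + ALPHA * (cur - old));
    }

    // Called when publishing: fast actions stay on the fast lane.
    public String route(String actionType) {
        double avg = avgMillis.getOrDefault(actionType, 0.0);
        return avg > SLOW_THRESHOLD_MS ? "workflow-actions-slow"
                                       : "workflow-actions-fast";
    }
}
```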

Lastly, the team developed the means to manually route all traffic from specific customers to a dedicated swimlane, in case a customer’s traffic unexpectedly starts creating lag on the primary (real-time) swimlane and none of the automatic routing mechanisms has kicked in. That way, the traffic can be isolated while the team troubleshoots the cause of the delays.
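Such an escape hatch can be as simple as a shared set of flagged customer IDs consulted before any automatic rules; the sketch below is illustrative, with a hypothetical isolated-lane topic name:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ManualOverride {
    private final Set<String> isolatedCustomers = ConcurrentHashMap.newKeySet();

    // Operators flag a customer whose traffic is causing lag...
    public void isolate(String customerId) {
        isolatedCustomers.add(customerId);
    }

    // ...and release it once the cause of the delays is resolved.
    public void release(String customerId) {
        isolatedCustomers.remove(customerId);
    }

    // Checked before any automatic routing rules.
    public String routeOrNull(String customerId) {
        return isolatedCustomers.contains(customerId)
            ? "workflow-actions-isolated"
            : null; // fall through to automatic routing
    }
}
```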
