Tuesday, August 9, 2022

Distributed System Design with .NET ecosystem and Azure - Part 2

In my previous post in this series, I discussed the client's high-level requirements, the technology stack we chose to provide the solution, the primary data structure, and a bit about the data flow design. If you haven't read it, I highly recommend doing so to get the context for this article.

Redis as the Primary Database

There is a bit of history behind how we reached a consensus on choosing Redis as the primary database.

Our client had an existing desktop-based system, a .NET WinForms application using Redis as its database. Honestly, this was the first time I saw Redis used as a primary database. I had mixed feelings seeing this. The design looked fascinating and scary at the same time.

The reason to worry was data persistence!

Redis is basically an in-memory data structure store. In layman's terms, this means that if you write a piece of data to Redis, it is stored in RAM and not on disk. Hence it is mostly used as a cache rather than a database. Then I learned that there are configurations in Redis through which you can choose to persist the data to disk. By default, it persists data to disk with a snapshotting (RDB) strategy. However, you can choose to persist using the AOF (append-only file) strategy instead, or both combined.

When I looked at the configuration of the existing system, it was configured to use the snapshotting strategy with the value save 20 1 (i.e., take a snapshot every 20 seconds if there is at least 1 key change).
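
As a side note, such settings can also be inspected at runtime. Here is a minimal C# sketch using StackExchange.Redis (an assumed client library; the legacy app's actual code was not available to me, and the connection details are illustrative):

    using System;
    using StackExchange.Redis;

    // Sketch: inspecting the persistence configuration of a running Redis.
    // Host/port and the allowAdmin flag are assumptions for illustration.
    var muxer  = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
    var server = muxer.GetServer("localhost", 6379);

    foreach (var kv in server.ConfigGet("save"))
        Console.WriteLine($"{kv.Key} = {kv.Value}"); // e.g. save = 20 1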

Still, this was not convincing enough to use Redis as a primary data store, because with the above configuration, if your system happened to crash, in the worst case you would lose up to 20 seconds of data. And in sports, every second matters! Especially when you are the official governing body of a specific sport (I cannot reveal the name here due to NDA constraints).

Unfortunately, it was not possible to speak to any of the technical people who developed this system. The system had, however, been running successfully in production for a couple of years.

Digging further into the materials shared by the client, I figured out that there was a second Redis instance running as a backup system, receiving the data via pub/sub. With this, you can achieve a certain degree of data safety in terms of durability. If not perfect, it is perhaps a good enough kind of safety for the occasional unfortunate failure scenario.
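
To illustrate the idea (not the legacy system's actual code, which I never saw), here is a minimal pub/sub sketch in C# with StackExchange.Redis, where a backup process re-applies each published write locally; the channel name and message format are assumptions:

    using StackExchange.Redis;

    // Sketch: a backup process subscribes to a channel on the primary Redis
    // and re-applies each write to a local backup instance.
    var primary = ConnectionMultiplexer.Connect("primary-host:6379");
    var backup  = ConnectionMultiplexer.Connect("localhost:6379");

    primary.GetSubscriber().Subscribe("writes", (channel, message) =>
    {
        // Message format "key|value" is a made-up convention for this sketch.
        var parts = ((string)message!).Split('|', 2);
        backup.GetDatabase().StringSet(parts[0], parts[1]);
    });

Note that Redis pub/sub is fire-and-forget (a disconnected subscriber simply misses messages), which is why this only gives "good enough" rather than perfect durability.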

But as you guessed, the performance was blazing fast!

Now, this set the benchmark for our cloud-based solution, which had to offer all the features of the desktop app, plus some feature enhancements, plus integration with external systems at sub-second latency.

So I definitely didn't want to move away from Redis, but I did want to introduce further measures to ensure durability. After doing some research, we arrived at the following configuration -

  1. Kept the snapshotting configuration as is, with the value save 20 1 (i.e., take a snapshot every 20 seconds if there is at least 1 key change)
  2. Enabled the AOF strategy with the value appendfsync everysec
  3. Enabled master-slave replication with asynchronous configuration
Note: The master-slave replication we used here is not for syncing data from on-premise to cloud or vice versa. It was only to have a local backup of the data in case the primary system hosting Redis crashes. A configuration sketch follows.
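
For illustration, here is how these settings could be applied at runtime from C# with StackExchange.Redis; the equivalent redis.conf directives appear in the comments, and the connection details are assumptions:

    using StackExchange.Redis;

    // Sketch: applying the persistence settings above to a running Redis.
    var muxer  = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
    var server = muxer.GetServer("localhost", 6379);

    server.ConfigSet("save", "20 1");            // redis.conf: save 20 1 (RDB snapshot)
    server.ConfigSet("appendonly", "yes");       // redis.conf: appendonly yes (enable AOF)
    server.ConfigSet("appendfsync", "everysec"); // redis.conf: appendfsync everysec

    // On the backup instance, asynchronous replication from the primary would
    // be enabled with the REPLICAOF command (replicaof in redis.conf).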

With this setup, in the worst case we would lose at most 1 second of data (the appendfsync everysec fsync window), and we got the client's approval for that.

This was one of the key reference articles which helped me gain better insights into these configurations.

Why not Redis's in-built replication to sync data from on-premise to cloud and vice versa?

To understand this, it's important to know the following facts about the application -

  1. In a Redis server, each database is just a logical separation.
  2. In our cloud Redis server, each logical Redis database holds the data of one specific tournament.
  3. The on-premise Redis server holds the data of only the one tournament running at that location.
  4. On-premise and cloud run in a toggling active-passive mode for a tournament. Meaning, at any given moment in time, for a given tournament, writes can happen either to the cloud or on-premise but not to both (see the sketch after this list).
  5. The entire setup runs in a Linux environment on a Docker engine, both in the cloud and on-premise.
Note: The above constraints were imposed by us after considering various application features and requirements, and were agreed with the client.
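
Here is a small C# sketch of the logical-database and active-passive conventions described above; the database index, key names, and the source of the toggle flag are assumptions:

    using StackExchange.Redis;

    // Sketch: one logical db per tournament in the cloud; db 0 on-premise.
    var cloud  = ConnectionMultiplexer.Connect("cloud-redis-host:6379");
    var onPrem = ConnectionMultiplexer.Connect("localhost:6379");

    // GetDatabase(n) selects a logical database - same server, logical partition.
    IDatabase cloudTournamentDb = cloud.GetDatabase(4);  // hypothetical tournament #4
    IDatabase localTournamentDb = onPrem.GetDatabase(0); // the single local tournament

    // Active-passive toggle: writes for a tournament go to exactly one side.
    bool isCloudActive = true; // in reality driven by control data (see SQL section)
    IDatabase writeTarget = isCloudActive ? cloudTournamentDb : localTournamentDb;
    writeTarget.StringSet("match:1:score", "3-2");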

Diagrammatically it looks something like this -

[Diagram: cloud Redis with one logical db per tournament, and per-location on-premise Redis instances]
So, basically, the following are the challenges in using the in-built replication -
  1. The on-premise side needs only one tournament's data, but the cloud deals with multiple tournaments' data, each in its own logical db. Redis replication works at the instance level (it copies all logical databases), not per logical database.
  2. Since there are multiple on-premise instances at various physical locations, each running a single tournament, there is no clear single node which we can consider the master node.
  3. Since everything runs inside a Docker engine, there is additional configuration overhead in terms of network mapping.
Hence we decided to implement the data sync logic explicitly. In short, this is how it was implemented -
  1. Cloud to on-premise: This is a straightforward download of data from the cloud to on-premise for the specific tournament. The on-premise db gets overwritten by the backup file.
  2. On-premise to cloud: Each on-premise Redis transaction is delivered to Azure Redis in the cloud via Kafka and an Azure WebJob. Basically, on-premise pushes the transaction messages to Kafka, and the Azure WebJob in the cloud consumes these messages and commits them to the Redis db in the cloud (sketched below).
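
Here is a minimal sketch of that on-premise-to-cloud path in C#, using Confluent.Kafka and StackExchange.Redis; the topic name, message format, group id, and connection details are all assumptions:

    using Confluent.Kafka;
    using StackExchange.Redis;

    // On-premise side: push each Redis transaction as a Kafka message.
    using var producer = new ProducerBuilder<string, string>(
        new ProducerConfig { BootstrapServers = "kafka-host:9092" }).Build();

    await producer.ProduceAsync("tournament-transactions",
        new Message<string, string> { Key = "match:1:score", Value = "3-2" });

    // Cloud side (inside the Azure WebJob): consume and commit to Azure Redis.
    using var consumer = new ConsumerBuilder<string, string>(
        new ConsumerConfig
        {
            BootstrapServers = "kafka-host:9092",
            GroupId = "cloud-redis-sync",
            AutoOffsetReset = AutoOffsetReset.Earliest
        }).Build();

    consumer.Subscribe("tournament-transactions");
    var cloudDb = ConnectionMultiplexer.Connect("cloud-redis-host:6379").GetDatabase(4);

    while (true)
    {
        var result = consumer.Consume();
        cloudDb.StringSet(result.Message.Key, result.Message.Value);
    }

In the real system the producer and consumer halves run in different processes, of course; they are shown together here only for brevity.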

Why Kafka?

Our application needed to integrate with a couple of external systems, including pulling data from and pushing data to our systems. Our consumers reside in various geographical areas, and hence we needed a platform which has inherent support for geographical distribution and can deliver data with sub-second latency. We considered a couple of options for this. They are -
  1. Azure Cosmos DB - This is a cloud-only solution, and that's the main reason we dropped it for our application. Also, it's more of a distributed storage system than a message streaming platform. Hence, had we gone with it, we would have needed an additional PaaS service or a custom implementation to deliver the data to external systems.
  2. Azure Event Hubs - This was one of the toughest decisions to make, because Kafka and Event Hubs had almost equal capabilities. Compared to Kafka, Event Hubs was fairly new to the market. Kafka is a mature framework with wide community support, and our client was more inclined towards Kafka for this reason. For me, the only worry was how well it would cope with the rest of the technology stack. Fortunately, it has good C# client libraries to work with.
  3. Redis Enterprise - This was again a promising choice. It too had inherent scaling capabilities along with message delivery capabilities. Since Redis is our primary database, it made more sense to go with it, but unfortunately their licensing strategy was not aligned with how our application was designed to operate. It was becoming insanely expensive, which made us drop it.
During the phase of making these decisions, we were in continuous touch with the pre-sales teams of Microsoft, Redis Enterprise, and Confluent. Everyone proposed different architectural solutions with the different tech they had to offer. For us, it was Confluent Kafka that met all our criteria. We did a couple of POCs and the results were excellent. We proceeded with it, dropping the other options.


The overall message delivery was within a second across several concurrent matches. It was a satisfying outcome.

Where does SQL Server fit in?

If you went through my previous article, you will find a mention of SQL Server. We basically used SQL Azure to maintain user details, roles, permissions, and the control data for the active-passive switching of Redis. Many of these details were required on-premise as well, but to avoid additional overhead on-premise, we just used another Redis db. This data was only updatable from the cloud counterpart and was needed on-premise only as read-only data (except for a few tables, which in any case did not need to sync back to the cloud). Technically, we achieved this by having a unified repository interface with concrete Redis and SQL implementations, as sketched below.
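
A minimal sketch of what such a unified repository could look like in C#; the entity, interface, and member names are illustrative, not the actual code:

    using System.Threading.Tasks;

    public record User(string Id, string Name, string Role);

    // One contract, two concrete implementations.
    public interface IUserRepository
    {
        Task<User?> GetByIdAsync(string id);
        Task SaveAsync(User user);
    }

    // Cloud: backed by SQL Azure (e.g. via EF Core or Dapper).
    public class SqlUserRepository : IUserRepository
    {
        public Task<User?> GetByIdAsync(string id) =>
            Task.FromResult<User?>(null);  // query SQL here
        public Task SaveAsync(User user) =>
            Task.CompletedTask;            // upsert into SQL here
    }

    // On-premise: same contract, backed by a Redis logical db, mostly read-only.
    public class RedisUserRepository : IUserRepository
    {
        public Task<User?> GetByIdAsync(string id) =>
            Task.FromResult<User?>(null);  // read from Redis here
        public Task SaveAsync(User user) =>
            Task.CompletedTask;            // no-op / restricted on-premise
    }

The calling code depends only on the interface, so the cloud and on-premise deployments can swap implementations via dependency injection.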

It's worth mentioning the various technical scenarios we covered around data synchronization, including recovering from an app crash, intermittent internet disconnections, and switching over to a different machine when the primary machine is not recoverable.

I will cover these implementation aspects in the next article. Stay tuned!

