Decoding the Tech Battle: Monoliths and Microservices Clash in the Digital Arena

Monolith:

Advantages:

  1. Single Codebase: All components of the app exist in one codebase.
  2. Easy Development: Simpler to add new features.
  3. Easy Testing: Easier to simulate and test scenarios.
  4. Easy Deployment: Easy to deploy the entire platform.
  5. Easy Debugging: Easier to trace bugs.
  6. Easy Performance Monitoring: Easier to monitor performance of all features.

Disadvantages:

  1. Slower Development Speed: As the codebase grows, adding new features slows down.
  2. Scalability Issues: Scaling issues can arise when the user base grows.
  3. Reliability: A bug in one part can bring down the entire system.
  4. Flexibility Issues: Can’t add a feature if it requires a different tech stack.
  5. Deployment Complexity: Small changes require complete deployment.

Microservices:

Advantages:

  1. Agile Development: Faster development, can update services independently.
  2. Scalability: Can scale only necessary services.
  3. Highly Testable & Maintainable: Each microservice can be tested and maintained separately.
  4. Flexibility: Different services can use different technology stacks.
  5. Independent Deployment: Each microservice can be deployed independently.

Disadvantages:

  1. Management Overhead: Managing and maintaining many separate services can be complex.
  2. Infrastructure Costs: Can increase due to separate databases and servers for each service.
  3. Organizational Issues: Communication challenges can arise among teams working on different services.
  4. Debugging Issues: Requires advanced tools for efficient debugging.
  5. Lack of Standardization: Different standards among services can create integration issues.
  6. Lack of Code Ownership: Potential issues in shared code areas due to divided responsibilities.

Monolith to microservices migration:

Strangler Fig Pattern

The Strangler Fig Pattern can be an effective method for migrating a monolithic system to microservices. Here’s how it can be applied to a stock market application:

  1. Identify Part to Migrate: Identify a part of the existing system that needs migration. For instance, we may choose the ‘Buy/Sell transaction’ functionality in the monolith application that we wish to replace.
  2. Implement New Microservice: Implement this functionality into a new microservice. We could create a new ‘Transaction Service’ that handles all the buy/sell operations independently.
  3. Redirect Requests: Start gradually redirecting requests from the old monolithic system to the new ‘Transaction Service’. This can be done using a routing mechanism that routes a specific portion of requests to the new service. The new microservice then starts handling real-world requests, and its performance can be monitored and any issues corrected before it fully takes over the functionality from the monolith.
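The routing step above can be sketched in Python. This is a minimal, in-process illustration: the handler names and the order shape are hypothetical, and in a real system the router would sit in an API gateway or reverse proxy rather than in application code.

```python
import random

# Hypothetical handlers standing in for the monolith and the new
# 'Transaction Service' microservice.
def monolith_handle(order):
    return f"monolith processed {order['side']} of {order['symbol']}"

def transaction_service_handle(order):
    return f"transaction-service processed {order['side']} of {order['symbol']}"

class StranglerRouter:
    """Routes a configurable fraction of requests to the new service."""

    def __init__(self, rollout_fraction=0.0, seed=None):
        self.rollout_fraction = rollout_fraction
        self._rng = random.Random(seed)

    def route(self, order):
        # Increase rollout_fraction from 0.0 toward 1.0 as confidence grows;
        # the rest of the traffic still flows to the monolith.
        if self._rng.random() < self.rollout_fraction:
            return transaction_service_handle(order)
        return monolith_handle(order)

router = StranglerRouter(rollout_fraction=0.2, seed=42)
order = {"side": "buy", "symbol": "INFY"}
results = [router.route(order) for _ in range(1000)]
new_share = sum(r.startswith("transaction-service") for r in results) / len(results)
```

Dialing `rollout_fraction` up over successive deployments, while watching error rates on the new service, is what makes the migration gradual and reversible.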

Branch By Abstraction Pattern

  1. Abstraction: Identify the ‘Buy/Sell transaction’ part of the monolithic system. Create an interface called ‘TransactionService’ that defines the operations like ‘buy’ and ‘sell’. The existing monolith codebase would implement this interface.
  2. New Implementation: Now, start developing the new microservice which will also implement the ‘TransactionService’ interface. This new microservice is designed to handle the ‘Buy/Sell transaction’ operations independently.
  3. Switch: Once the microservice is ready and thoroughly tested, gradually start redirecting the ‘Buy/Sell transaction’ requests from the monolithic system to the new microservice. This could be accomplished through feature toggles or a routing mechanism, which allows you to control which requests are processed by the new microservice.
  4. Remove Legacy Code: When the new microservice has fully taken over the ‘Buy/Sell transaction’ operations and is working as expected, the legacy ‘Buy/Sell transaction’ code in the monolith system can be safely removed.

Branch by Abstraction allows this transition to happen smoothly, without disrupting the functioning of the system. The old and new systems can coexist and operate in parallel during the transition, reducing risks and enabling continuous delivery.
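The four steps above can be sketched with a small Python example. The interface, class names, and the feature toggle are hypothetical; in practice the "microservice" implementation would make a network call to the new service rather than run in-process.

```python
from abc import ABC, abstractmethod

class TransactionService(ABC):
    """Step 1: the abstraction both implementations share."""

    @abstractmethod
    def buy(self, symbol: str, qty: int) -> str: ...

    @abstractmethod
    def sell(self, symbol: str, qty: int) -> str: ...

class LegacyTransactionService(TransactionService):
    """The existing code path inside the monolith."""
    def buy(self, symbol, qty):
        return f"legacy buy {qty} {symbol}"
    def sell(self, symbol, qty):
        return f"legacy sell {qty} {symbol}"

class MicroTransactionService(TransactionService):
    """Step 2: the new implementation. In reality this would call the
    remote 'Transaction Service' over the network."""
    def buy(self, symbol, qty):
        return f"microservice buy {qty} {symbol}"
    def sell(self, symbol, qty):
        return f"microservice sell {qty} {symbol}"

# Step 3: a feature toggle controls which implementation serves requests.
USE_NEW_TRANSACTION_SERVICE = True

def get_transaction_service() -> TransactionService:
    if USE_NEW_TRANSACTION_SERVICE:
        return MicroTransactionService()
    return LegacyTransactionService()

service = get_transaction_service()
result = service.buy("INFY", 10)
```

Once the toggle has been on for everyone and the new path is stable, step 4 is simply deleting `LegacyTransactionService` and the toggle itself.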

Remote Procedure Call (RPC):

A Remote Procedure Call (RPC) is similar to a function call, but it’s used in the context of networked applications. It allows a program running on one machine to call a function on a different machine (a remote server) as if it were a local function.

For example, consider a client-server application where the server provides a function to add two numbers. But instead of calling this function locally, a client on a different machine can use RPC to call this function on the server.
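This add-two-numbers example can be demonstrated with Python's built-in `xmlrpc` module, which is one concrete RPC implementation (gRPC, discussed next, is another). The host and the use of an ephemeral port are incidental details of this sketch.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose add() as a remotely callable procedure.
# Port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes the remote add() look like a local call.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)  # executed on the server, not locally

server.shutdown()
```

The key idea is that `client.add(2, 3)` reads like a local function call, while underneath the arguments are serialized, sent over the network, executed remotely, and the return value is shipped back.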

gRPC:

gRPC is a modern, open-source, high-performance RPC framework developed by Google. It uses Protocol Buffers (protobuf) as its interface definition language, which describes the service interface and the structure of the payload messages. This is an efficient binary format that provides a simpler and faster data exchange compared to JSON and XML.

Protocol Buffers:

Protocol Buffers (Protobuf) is a binary encoding format that lets you specify a schema for your data using a specification language. From this schema, code can be generated in various languages, and the resulting serialized data is compact and quick to encode and decode.
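A Protobuf schema for the buy/sell operations discussed earlier might look like the following sketch. The message and service names are hypothetical; the `protoc` compiler would generate client and server code from a file like this.

```proto
syntax = "proto3";

// Hypothetical schema for the 'Transaction Service'.
message TradeRequest {
  string symbol = 1;   // e.g. "INFY"
  int32 quantity = 2;
  double price = 3;
}

message TradeResponse {
  bool accepted = 1;
  string order_id = 2;
}

service TransactionService {
  rpc Buy (TradeRequest) returns (TradeResponse);
  rpc Sell (TradeRequest) returns (TradeResponse);
}
```

The numbers (`= 1`, `= 2`, …) are field tags used in the binary wire format, which is why Protobuf messages are much smaller than the equivalent JSON.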

gRPC in Action:

Here’s a simplified version of how gRPC works in a client-server architecture:

  1. gRPC Client: The process starts from the gRPC client. The client makes a call through a client stub, which has the same methods as the server. The data for the call is serialized using Protobuf into a binary format.
  2. Transport: The serialized data is then sent over the network via the underlying transport layer.
  3. HTTP/2: gRPC utilizes HTTP/2 as its transport protocol. One significant benefit of HTTP/2 is that it allows multiplexing, which is the ability to send multiple streams of messages over a single, long-lived TCP connection. This reduces latency and increases the performance of network communication.
  4. gRPC Server: The server receives the serialized data, deserializes it back into the method inputs, and executes the method. The result is then sent back in the reverse direction: serialized and sent back to the client via HTTP/2 and the transport layer, then deserialized by the client stub.
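The four steps above can be simulated in plain Python to make the flow concrete. This is a toy, in-process model: JSON stands in for Protobuf, a direct function call stands in for the HTTP/2 transport, and all names are illustrative.

```python
import json

def serialize(payload) -> bytes:      # stand-in for Protobuf encoding
    return json.dumps(payload).encode()

def deserialize(data: bytes):
    return json.loads(data.decode())

# --- server side ----------------------------------------------------------
def buy(symbol: str, qty: int) -> dict:
    return {"accepted": True, "order_id": f"ORD-{symbol}-{qty}"}

HANDLERS = {"Buy": buy}

def server_receive(request_bytes: bytes) -> bytes:
    request = deserialize(request_bytes)                    # step 4: deserialize
    result = HANDLERS[request["method"]](*request["args"])  # execute the method
    return serialize(result)                                # serialize the reply

# --- client side ----------------------------------------------------------
class TransactionStub:
    """Step 1: the client stub mirrors the server's methods."""

    def Buy(self, symbol: str, qty: int) -> dict:
        request_bytes = serialize({"method": "Buy", "args": [symbol, qty]})
        response_bytes = server_receive(request_bytes)  # steps 2-3: transport
        return deserialize(response_bytes)

stub = TransactionStub()
response = stub.Buy("INFY", 10)
```

Real gRPC generates the stub from the `.proto` file, so the serialization and transport plumbing shown here is invisible to application code.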

The adoption of gRPC in web client and server communication has been slower due to a few significant factors:

  1. Browser Compatibility: Browsers do not expose the low-level HTTP/2 features gRPC relies on, such as trailers, to JavaScript, so native gRPC cannot be spoken directly from a web page.
  2. gRPC-Web: While gRPC-Web, a JavaScript implementation of gRPC for browsers, does exist, it doesn’t support all the features of gRPC, such as bidirectional streaming, and is less mature than other gRPC libraries.
  3. Text-Based Formats: In the context of web development, formats like JSON and XML are very common and convenient for data interchange. They’re directly compatible with JavaScript and are human-readable. gRPC, on the other hand, defaults to Protocol Buffers, a binary format that’s more efficient but not as straightforward to use on the web.
  4. Firewalls and Proxies: Some internet infrastructure might not support HTTP/2 or might block gRPC traffic, causing potential network issues.
  5. REST Familiarity: REST over HTTP is a well-understood model with broad support in many programming languages, frameworks, and tools. It’s simpler to use and understand, which can speed up development and debugging.
  6. Increased Complexity: While gRPC has performance benefits, it also adds complexity to the system. The performance gain might not always be worth the added complexity, particularly for applications that don’t require high-performance inter-service communication.

Webhooks and Event-Driven Architecture:

Webhooks are a method of augmenting or altering the behavior of a web page or application with custom callbacks. These callbacks can be maintained, modified, and managed by third-party users and developers who may not necessarily be affiliated with the originating website or application.

In the context of a stock market application like Zerodha, this translates to the following:

Zerodha, a brokerage platform, wants to stay updated on price changes from the stock exchange (SEBI in this example). To achieve this, Zerodha provides a webhook, essentially a callback URL, to SEBI. This URL is designed to be hit whenever the specific event of interest occurs, such as a particular stock reaching a certain price.

This is an example of an Event-Driven Architecture where communication happens based on events, rather than constant polling or maintaining a persistent connection.

Here’s the sequence of steps in more detail:

  1. Register: Zerodha first registers a webhook with SEBI. This is a callback URL that Zerodha exposes and asks SEBI to call when a certain event happens. In this case, when a particular stock price reaches a specified value.
  2. Trigger Event: When the stock price reaches the specified value, the event is triggered on the SEBI side.
  3. Invoke Webhook: SEBI then sends an HTTP request (usually a POST request) to the registered webhook URL provided by Zerodha. The request would contain information about the event in its body, typically formatted in JSON or XML.
  4. Receive and Process: Zerodha receives the HTTP request and processes the data contained in the body of the request. Based on the information received, it can take necessary action, such as notifying the user about the price change.

This event-driven method allows efficient communication and helps Zerodha stay updated with real-time changes in stock prices. It avoids the need for long polling and persistent connections, which could be expensive and not scalable when dealing with millions of clients.

Other examples:

  1. CI/CD Deployment Actions
  2. MailChimp
  3. Zapier
  4. Stripe