Posts

Client-Server Communication Model (Part-1)

The foundation of the internet lies in the communication between computers. Computers acting as servers own resources and can provide them to other computers, which act as clients. These roles shift over time: one moment a computer may own resources, while at another it may require resources from others. Communication is essential for exchanging data between them. The predominant protocol facilitating Client-Server communication is the HyperText Transfer Protocol (HTTP). HTTP is a protocol for fetching web pages, documents, images, media files, binaries, etc. Pretty much the whole of the internet runs on HTTP. We have discussed HTTP in detail here. A typical HTTP request flow is as follows:

1. A client opens a connection and requests a resource from a server.
2. The server computes the response.
3. The server sends the response to the client on the same open connection.

Some of the most popular Client-Server communication models are: Polling / Short Polling / AJAX Polling, Long Po...
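To make the request flow above concrete, here is a minimal sketch using Python's standard-library http.client; the host example.com and the path /index.html are placeholders, not anything referenced in the post.

```python
# Minimal sketch of the HTTP request flow described above.
import http.client

# 1. The client opens a connection to the server.
conn = http.client.HTTPSConnection("example.com")

# 2. The client requests a resource; the server computes the response.
conn.request("GET", "/index.html")

# 3. The server sends the response back on the same open connection.
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK
body = response.read()                   # the requested resource

conn.close()
```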

Does Google Spanner Provide High Availability and Strong Consistency, Defying CAP Theorem

I was going through one of the official introduction videos of Google Spanner. It mentions: "Google Spanner is a mission-critical relational database service built from the ground up and battle-tested at Google for Strong Consistency and High Availability at a global scale." A few questions popped into my mind after this statement: How does a database guarantee High Availability and Strong Consistency on a global scale? Ensuring Partition Tolerance is necessary when building distributed systems; on top of that, how does Spanner provide High Availability and Strong Consistency simultaneously? If it provides all three guarantees, does it break the CAP theorem? The short answer is that Google Spanner does not break the CAP theorem. Before going deep, let us revisit the CAP theorem. As per Wikipedia, any distributed data store can provide only two of the following three guarantees: Consistency - every read receives the most recent write or an error. Each read operation returns ...
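To make the Consistency guarantee concrete, here is a small, hypothetical quorum-read sketch (not Spanner's actual protocol): a read either returns the most recent write acknowledged by a majority of replicas, or fails with an error when a partition prevents reaching a quorum.

```python
# Hypothetical illustration of "every read receives the most recent write
# or an error"; this is not Spanner's actual algorithm.
from dataclasses import dataclass

@dataclass
class Replica:
    value: str
    version: int
    reachable: bool  # False simulates a network partition

def quorum_read(replicas):
    """Return the highest-versioned value seen by a majority, or raise."""
    quorum = len(replicas) // 2 + 1
    reachable = [r for r in replicas if r.reachable]
    if len(reachable) < quorum:
        # Choosing consistency over availability: fail rather than
        # risk returning stale data.
        raise RuntimeError("quorum not reached; read unavailable")
    return max(reachable, key=lambda r: r.version).value

replicas = [Replica("v2", 2, True), Replica("v2", 2, True), Replica("v1", 1, False)]
print(quorum_read(replicas))  # "v2" - the most recent acknowledged write
```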

Caching Strategy Part-1

Caching is a widely used technique to enhance system performance. In essence, frequently accessed content is stored in a faster temporary store called a cache. This allows the content to be retrieved directly from the cache rather than fetched from the actual source every time. Retrieving content from the cache improves performance, as it is faster than retrieving it from the actual source. The actual data source can be a service, a database, or any other system. Although the following illustrations take a database as the data source, the same concept applies to other systems. Accessing a database can be an expensive operation, and frequent access to the same data can have performance implications for both the application and the database. Caching can reduce response time for the application and decrease the load on the underlying database. There are different caching strategies, and the choice among them depends on how the data is used, such as how it...
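As one concrete illustration of checking a cache before hitting the database, here is a minimal cache-aside style sketch; the in-memory dict standing in for a real cache and the get_user_from_db helper are assumptions for the example, not part of the post.

```python
# Minimal cache-aside sketch: check the cache first, fall back to the
# database on a miss, then populate the cache for subsequent reads.

cache = {}  # in-memory dict standing in for a cache such as Redis

def get_user_from_db(user_id):
    # Placeholder for an expensive database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id in cache:               # cache hit: skip the database
        return cache[user_id]
    user = get_user_from_db(user_id)   # cache miss: go to the source
    cache[user_id] = user              # store it for the next request
    return user

print(get_user(42))  # first call hits the database
print(get_user(42))  # second call is served from the cache
```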