Client-Server Model
This post explains the server and client concepts.
Data or code is not always stored in a file or database on a single computer/server.
We can send it over the internet using protocols, and the data or code can be hosted on the cloud too.
APIs are used to send requests and receive responses. The moment one computer starts sharing the data it holds with others, the client-server model is in play.
Eg: If an application needs to reach millions of users, we need brilliant engineering on the server side, because the server should never fail. Even if the hardware fails, we should keep multiple copies of the same server in the system, and changing the number of copies should be easy so that we can add or remove servers. While maintaining multiple copies, each server in the system should be identical; otherwise, a user might get contradicting information when a request lands on a different server.
The cloud is a set of computers that can be rented for a price. If we pay a cloud service, for example AWS, it provides computation power. AWS is a solution provider where the configuration, settings, and reliability are taken care of to a large extent.
Computation power is essentially a machine that runs my/our algorithm. If we log in remotely, we can use its services.
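The request-response flow described above can be sketched with Python's standard `http.server` and `urllib` modules. This is a minimal toy: the handler class, port choice, and response text are all invented for the example, not part of any real deployment.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DataHandler(BaseHTTPRequestHandler):
    """A server that shares the data it holds with any client that asks."""
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence default per-request logging

# Bind to port 0 so the OS picks a free port for this sketch.
server = HTTPServer(("localhost", 0), DataHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sends a request over the network and receives the response.
with urllib.request.urlopen(f"http://localhost:{port}/") as resp:
    reply = resp.read().decode()
print(reply)  # hello from the server
server.shutdown()
```

Even though client and server run on one machine here, the communication still goes through a protocol (HTTP), which is exactly what happens when they sit on different machines.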
System Design Concepts
1. Caching
2. Load Balancing
3. Partitioning
4. Proxies
5. Messaging Queue or Message Passing
6. CAP Theorem
7. Databases (NoSQL Databases)
Caching
Why do we need a cache?
When the number of requests is high, there are a lot of read and write operations on the database, which slows down the performance of the system. A cache handles these requests by reducing the need to access the underlying, slower storage layer.
We stock grocery essentials in our pantry, which saves a lot of trips to the store while cooking. Similarly, if we need to access a certain piece of data often, we store it in a cache and retrieve it from there, because accessing data from primary memory (RAM) is faster than accessing it from secondary memory (disk).
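The pantry idea can be sketched as a tiny read-through cache in front of a slow store. The store contents, keys, and the sleep that stands in for disk/database latency are all made up for illustration.

```python
import time

# Stands in for a slow database or disk (contents are invented).
slow_store = {"user:1": "Alice", "user:2": "Bob"}

def read_from_store(key):
    time.sleep(0.05)  # simulate the cost of hitting secondary storage
    return slow_store[key]

cache = {}  # fast in-memory layer (our "pantry")

def get(key):
    if key in cache:
        return cache[key]          # cache hit: no slow access needed
    value = read_from_store(key)   # cache miss: fall through to the store
    cache[key] = value             # remember it for next time
    return value

get("user:1")  # miss: pays the slow-store cost once
get("user:1")  # hit: served straight from memory
```

Real systems add eviction policies (LRU, TTL, etc.) on top of this, since memory is limited; in Python, `functools.lru_cache` provides a ready-made version of the same pattern for function results.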
Load Balancing
Scalability
Buying more machines, or buying bigger machines, to handle more requests is called scaling.
Buying a bigger machine - vertical scaling
Buying more machines - horizontal scaling
Horizontal Scaling
- It requires some load-balancing.
- Resilient - if one machine fails, requests can be redirected to the others.
- All communication happens over the network (network calls - RPC), so it is slower.
- Data can become inconsistent across machines.
- It scales well when the number of users increases.
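The load-balancing and resilience points above can be sketched with a simple round-robin balancer. This is one common strategy among many (others weigh servers by load or latency), and the server names here are invented.

```python
import itertools

class RoundRobinBalancer:
    """Spread requests evenly across identical server copies."""
    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def pick(self):
        # Each incoming request goes to the next server in rotation.
        return next(self._cycle)

    def remove(self, server):
        # Resilience: drop a failed copy and keep routing to the rest.
        self.servers.remove(server)
        self._cycle = itertools.cycle(self.servers)

lb = RoundRobinBalancer(["server-a", "server-b", "server-c"])
picks = [lb.pick() for _ in range(3)]       # one request to each copy
lb.remove("server-b")                       # server-b fails
picks_after = [lb.pick() for _ in range(2)] # traffic continues on the rest
```

Because every copy is supposed to be identical, it does not matter which server a request lands on - which is exactly why keeping the copies consistent matters.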
Vertical Scaling
- No load-balancing is required.
- Single point of failure.
- Inter-process communication, so this is fast.
- One system where all the data resides, so this is consistent.
- Hardware limitation - we can't keep making a single machine bigger forever.