Guide to System Design for Freshers

Last Updated : 21 Mar, 2024

Starting your journey into the system design domain can be exciting and difficult, especially for freshers. In this article, we’ll provide a simplified model of system design aimed specifically at freshers/new grads. Whether you’re a new grad or transitioning into a tech career, understanding system design fundamentals is crucial for building scalable, efficient, and reliable software solutions.


So, let’s start with the guide for system design in a way that’s easy to learn and apply.

1. What is System Design?

System design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. It involves translating user requirements into a detailed blueprint that guides the implementation phase. The goal is to create a well-organized and efficient structure that meets the intended purpose while considering factors like scalability, maintainability, and performance.

2. How to Design a System?

System design is the process of designing the architecture and components of a software system to meet specific business requirements. The process involves defining the system’s architecture, components, modules, and interfaces, and identifying the technologies and tools that will be used to implement the system. Here are some steps to get started with system design:

  1. Understand the requirements: Before you begin designing the system, you need to understand the requirements. This involves talking to stakeholders and users, reviewing existing documentation, and analyzing the business processes that the system will support.
  2. Define the system architecture: Once you have a clear understanding of the requirements, you can begin defining the system architecture. This involves identifying the major components of the system and the interfaces between them.
  3. Choose the technology stack: Based on the requirements and the system architecture, you can select the technology stack. This includes choosing the programming language, database, frameworks, and libraries that will be used to implement the system.
  4. Design the modules: Next, you need to design the modules that will make up the system. This involves defining the functions that each module will perform and the data that it will manipulate.
  5. Plan for scalability: As you design the system, you need to consider how it will scale. This involves identifying potential bottlenecks and designing the system to handle increased loads.
  6. Consider security and privacy: Security and privacy should be key considerations in system design. This involves identifying potential security threats and designing the system to mitigate them.
  7. Test and validate: Once the system design is complete, you need to test and validate it. This involves creating test cases and scenarios that simulate real-world usage and verifying that the system meets the requirements.

3. System Design Life Cycle

The System Design Life Cycle (SDLC) is a comprehensive process that outlines the steps involved in designing and developing a system, be it a software application, hardware solution, or an integrated system combining both. The System Design Life Cycle (SDLC) involves the following phases:

  • Stage 1. Planning
    • Objective: Define the project scope, goals, and resources.
    • Example: Imagine a company initiating a project to develop a new customer relationship management (CRM) system. The planning phase would involve outlining the required functionalities and budget constraints, and identifying the team responsible.
  • Stage 2. Feasibility Study
    • Objective: Assess the practicality of the proposed system.
    • Example: Before committing to the CRM project, a feasibility study would analyze factors like technical, operational, and economic viability. This involves evaluating whether the benefits outweigh the costs.
  • Stage 3. System Design
    • Objective: Develop a blueprint of the system architecture and components.
    • Example: For the CRM system, this involves creating a detailed design that outlines the database structure, user interfaces, and system functionalities. It serves as a guide for the developers during the coding phase.
  • Stage 4. Implementation
    • Objective: Transform the design into an operational system.
    • Example: Developers write the code for the CRM system based on the design specifications. This phase involves rigorous testing to identify and rectify any bugs or errors.
  • Stage 5. Testing
    • Objective: Verify that the system meets the specified requirements.
    • Example: The CRM system undergoes various testing procedures, such as unit testing, integration testing, and user acceptance testing, to ensure its functionality, performance, and security.
  • Stage 6. Deployment
    • Objective: Introduce the system to its intended environment.
    • Example: The CRM system is deployed for use by the organization’s employees. This may involve training sessions to familiarize users with the new system.
  • Stage 7. Maintenance and Support
    • Objective: Ensure the ongoing functionality and address any issues that arise.
    • Example: Regular updates, bug fixes, and user support for the CRM system to adapt to changing business requirements and address any emerging issues.

4. SOLID Principles

The SOLID principles are a set of five design principles in object-oriented programming intended to make software designs more understandable, flexible, and maintainable. SOLID is an acronym formed from the five principles given below:

  1. Single Responsibility Principle
    • This principle states that “A class should have only one reason to change” which means every class should have a single responsibility or single job or single purpose. In other words, a class should have only one job or purpose within the software system.
  2. Open/Closed Principle
    • This principle states that “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification” which means you should be able to extend a class behavior, without modifying it.
  3. Liskov’s Substitution Principle
    • The principle was introduced by Barbara Liskov in 1987 and according to this principle “Derived or child classes must be substitutable for their base or parent classes“. This principle ensures that any class that is the child of a parent class should be usable in place of its parent without any unexpected behavior.
  4. Interface Segregation Principle
    • This principle is the first principle in SOLID that applies to interfaces instead of classes, and it is similar to the single responsibility principle. It states that “do not force any client to implement an interface which is irrelevant to them“. Here your main goal is to avoid fat interfaces and give preference to many small, client-specific interfaces. You should prefer many client-specific interfaces rather than one general interface, and each interface should have a specific responsibility.
  5. Dependency Inversion Principle
    • The Dependency Inversion Principle (DIP) is a principle in object-oriented design that states that “High-level modules should not depend on low-level modules. Both should depend on abstractions“. Additionally, abstractions should not depend on details. Details should depend on abstractions.
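
The interplay of several of these principles can be sketched in a few lines of Python. In this hypothetical example, ReportService (a high-level module) depends only on the Notifier abstraction (DIP), each class has a single job (SRP), and new channels are added by extension rather than modification (OCP):

```python
from abc import ABC, abstractmethod

# Abstraction that both high- and low-level modules depend on (DIP).
class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

# Low-level detail: one concrete way to notify.
class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"EMAIL: {message}"

# Adding a new channel extends behavior without modifying existing code (OCP).
class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"SMS: {message}"

# High-level module with a single job: publishing reports (SRP).
class ReportService:
    def __init__(self, notifier: Notifier):
        self.notifier = notifier  # depends on the abstraction, not a concrete class

    def publish(self, report: str) -> str:
        return self.notifier.send(f"report ready: {report}")

print(ReportService(EmailNotifier()).publish("Q1"))  # EMAIL: report ready: Q1
print(ReportService(SmsNotifier()).publish("Q1"))    # SMS: report ready: Q1
```

Because ReportService never names a concrete notifier, swapping channels requires no change to the high-level code.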

5. What is Monolith Architecture?

Monolithic architecture is a traditional software architecture style where all the components of a software application are tightly integrated into a single, unified executable or deployable unit. In a monolithic architecture, the entire application, including its user interface, business logic, and data access layers, is developed, deployed, and scaled as a single unit.


Key characteristics of monolithic architecture include:

  1. Tight Coupling: Components within the application are tightly coupled, meaning changes to one component may require modifications to other components.
  2. Single Codebase: The entire application is built from a single codebase, often resulting in larger and more complex codebases as the application grows.
  3. Single Deployment Unit: The application is deployed as a single unit, typically on a single server or set of servers.
  4. Scalability Challenges: Scaling a monolithic application can be challenging, as the entire application needs to be scaled together rather than scaling individual components independently.
  5. Technological Homogeneity: Components within the application typically use the same technology stack and programming language.

While monolithic architecture has been widely used and is relatively straightforward to develop and deploy, it can present challenges in terms of flexibility, scalability, and maintainability, particularly as applications grow larger and more complex.

6. What is Microservices Architecture?

Microservices architecture is a software development approach where a large application is decomposed into smaller, independent services that are loosely coupled and independently deployable. Each service is responsible for a specific business function and communicates with other services through well-defined APIs. Key characteristics of microservices architecture include:

  1. Decomposition: The application is broken down into smaller, focused services, each responsible for a specific business capability or functionality.
  2. Loose Coupling: Services are loosely coupled, meaning they can be developed, deployed, and scaled independently of each other. Changes to one service do not require changes to other services.
  3. Independently Deployable: Each service can be deployed independently, allowing for faster release cycles and reducing the risk associated with deploying changes.
  4. Technology Diversity: Services can be built using different programming languages, frameworks, and technologies, allowing teams to choose the best tools for each service’s specific requirements.
  5. Scalability: Services can be scaled independently based on demand, allowing for more efficient resource utilization and improved performance.


Microservices architecture offers several benefits, including improved agility, scalability, resilience, and flexibility. However, it also introduces challenges such as distributed system complexity, increased operational overhead, and the need for robust service communication mechanisms.
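
To make the decomposition idea concrete, here is a minimal, illustrative in-process sketch: two hypothetical services (users and orders) own separate data and interact only through a narrow API contract. The dict-based call function stands in for the network; in a real deployment each service would run as its own process behind an HTTP endpoint:

```python
# Each "service" owns one business capability and a private data store.
def user_service(request):
    users = {1: "alice", 2: "bob"}  # data private to the user service
    return {"name": users[request["user_id"]]}

def order_service(request, call):
    # The order service knows only the user service's API contract,
    # not its internal data structures.
    user = call("users", {"user_id": request["user_id"]})
    return {"order": request["item"], "for": user["name"]}

# Registry + call() stand in for service discovery and HTTP.
registry = {"users": user_service}

def call(service, request):
    return registry[service](request)

print(order_service({"user_id": 1, "item": "book"}, call))
# {'order': 'book', 'for': 'alice'}
```

Because the only coupling is the request/response contract, either service could be rewritten or redeployed independently without touching the other.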

7. Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a distributed network of servers that work together to deliver content (like images, videos, and static files) to users faster and more efficiently.

  • These servers, called edge servers, are strategically positioned across various geographical locations.
  • CDNs help improve the performance, reliability, and scalability of websites and web applications by caching content closer to users, reducing latency, and offloading traffic from origin servers.


Importance of Content Delivery Network(CDN)

CDNs offer several key benefits that make them important for delivering content over the internet:

  • Faster Content Delivery: By caching content in servers located closer to end users, CDNs reduce the physical distance that data must travel, thereby decreasing latency and improving load times.
  • Improved Website Performance: Faster load times lead to better overall website performance, which can result in higher user engagement, longer visit durations, and increased conversion rates.
  • Scalability: CDNs help websites handle traffic spikes and high loads by distributing the load across multiple servers. This scalability is especially crucial for websites with global audiences or those experiencing sudden surges in traffic.
  • Redundancy and Reliability: CDNs offer redundancy by storing copies of content across multiple servers. If one server fails, another server can seamlessly take over, ensuring continuous availability of the content.
  • Global Reach: CDNs have servers located around the world, allowing websites to reach users in different geographic locations with reduced latency and improved performance.

Type of Content Delivery Network(CDN)

CDNs can be classified into several types based on their architecture and functionality:

  • Public CDNs: A public CDN is a CDN that is available for use by anyone on the internet. These CDNs typically have a large network of servers located around the world and are used to deliver content, such as images, videos, and other static files, to users quickly and efficiently.
  • Private CDNs: A private CDN is a CDN that is used exclusively by a single organization or company. These CDNs are often deployed within a company’s own infrastructure or on a private cloud and are used to deliver content to internal users or customers.
  • Peer-to-Peer (P2P) CDNs: These CDNs utilize peer-to-peer networking technology to distribute content directly between users, reducing reliance on centralized servers.
  • Hybrid CDNs: A hybrid CDN combines elements of both public and private CDNs. In a hybrid CDN, some content is delivered using a public CDN, while other content is delivered using a private CDN.
  • Push CDNs: In a push CDN, content is uploaded or “pushed” to the CDN’s servers in advance of when it is needed. This can help improve performance by ensuring that content is available closer to end users when they request it.
  • Pull CDNs: In a pull CDN, content is requested or “pulled” from the CDN’s servers when it is needed. This approach is more efficient for delivering content that is frequently updated or dynamically generated. Pull CDNs are often used for delivering dynamic content, such as web pages or API responses.
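
The pull model can be sketched as an edge server that fetches from the origin only on a cache miss and serves every later request locally. Everything here (the paths, the EdgeServer class) is illustrative:

```python
ORIGIN = {"/logo.png": b"logo-bytes", "/app.js": b"js-bytes"}  # origin server content

class EdgeServer:
    """Pull-style edge: fetches from the origin on a cache miss, serves hits locally."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}
        self.hits = self.misses = 0

    def get(self, path):
        if path in self.cache:
            self.hits += 1
            return self.cache[path]       # served from the edge, near the user
        self.misses += 1
        content = self.origin[path]       # "pull" from the origin server
        self.cache[path] = content        # keep a copy for subsequent requests
        return content

edge = EdgeServer(ORIGIN)
edge.get("/logo.png")   # miss: pulled from the origin
edge.get("/logo.png")   # hit: served from the edge cache
print(edge.hits, edge.misses)  # 1 1
```

Only the first nearby user pays the origin round trip; everyone after them is served from the edge.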

8. Object-Oriented Programming(OOP) for System Design

Object-oriented programming (OOP) is important for system design because it promotes modular, scalable, and maintainable code. By organizing code into objects with defined behaviors and interactions, OOP facilitates the creation of complex systems that are easier to understand, extend, and modify.

Why OOPs is important in System Design

  • This paradigm fosters code reuse, encapsulation, and abstraction, which are essential principles for building robust and adaptable software architectures.
  • In system design, OOP enables developers to effectively manage complexity, enhance code readability, and streamline collaboration among team members.


  1. Class:
    • A class is a blueprint or template for creating objects. It defines the attributes (data fields or properties) and behaviors (methods or functions) that all objects of that class will have.
    • Think of a class as a blueprint for creating multiple instances of similar objects.
    • For example, a “Car” class may define attributes such as make, model, and color, and behaviors such as driving and braking.
  2. Object:
    • An object is an instance of a class. It represents a specific realization of the class blueprint, with its own unique set of attribute values.
    • Objects encapsulate both data and behavior.
    • Using the example above, if we create an object of the “Car” class called “myCar,” it might have attributes like make=”Toyota”, model=”Camry”, and color=”blue”, and it can perform actions like driving and braking.
  3. Encapsulation
    • Encapsulation is the concept of bundling data (attributes or properties) and methods (functions or procedures) that operate on the data into a single unit, called a class.
    • It allows the internal state of an object to be accessed and modified only through well-defined interfaces provided by the class, while hiding the implementation details from the outside world.
    • This mechanism protects the integrity of the data and ensures that the object remains in a valid state, promoting modularity, security, and code maintainability.
    • Encapsulation helps developers control access to sensitive data and behavior, facilitating robust and reliable software design.
  4. Inheritance
    • Inheritance enables a new class (called a subclass or derived class) to inherit attributes and methods from an existing class (called a superclass or base class).
    • The subclass inherits the properties and behaviors of the superclass, allowing it to reuse code and extend functionality.
    • Inheritance promotes code reuse, modularity, and abstraction, as it allows developers to create hierarchical relationships between classes, organize code more effectively, and enhance the flexibility and scalability of software systems.
  5. Polymorphism
    • Polymorphism refers to the ability of different objects to respond to the same message or method invocation in different ways.
    • There are two main types of polymorphism: compile-time (or static) polymorphism and runtime (or dynamic) polymorphism.
      • Compile-time polymorphism:
        • This is achieved through method overloading and operator overloading.
        • Method overloading allows a class to have multiple methods with the same name but different parameters, and the appropriate method is selected at compile time based on the arguments passed.
        • Operator overloading enables the operators to behave differently based on the operands they operate on.
      • Runtime polymorphism:
        • This is achieved through method overriding, which occurs when a subclass provides a specific implementation of a method that is already defined in its superclass.
        • When a method is invoked on an object of the superclass, the runtime system determines the actual type of the object and executes the corresponding overridden method in the subclass.
    • Polymorphism promotes flexibility, extensibility, and code reusability by enabling objects to exhibit different behaviors based on their specific types or contexts.
    • It allows for more generic and flexible code and simplifies the implementation of complex systems.
  6. Abstraction
    • Abstraction is the process of representing complex real-world entities as simplified models within a software system.
    • It involves focusing on the essential characteristics of an object while hiding unnecessary details. In OOP, abstraction is achieved primarily through the use of classes and interfaces.
    • Classes serve as templates for creating objects, encapsulating both data (attributes or properties) and behaviors (methods or functions) relevant to a particular entity.
    • Interfaces define a contract for classes to implement, specifying a set of methods without providing their implementation details.
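
These concepts can be shown together in a short, illustrative Python snippet: Vehicle encapsulates its state behind methods and abstracts the essential interface, Car and Truck inherit from it, and overriding sound() gives runtime polymorphism:

```python
class Vehicle:                        # abstraction: the essential interface
    def __init__(self, make, model):
        self._make = make             # encapsulation: state kept behind methods
        self._model = model

    def describe(self):
        return f"{self._make} {self._model}"

    def sound(self):
        return "generic vehicle noise"

class Car(Vehicle):                   # inheritance: Car reuses Vehicle's code
    def sound(self):                  # runtime polymorphism via overriding
        return "vroom"

class Truck(Vehicle):
    def sound(self):
        return "rumble"

fleet = [Car("Toyota", "Camry"), Truck("Volvo", "FH16")]
for v in fleet:                       # same call, type-specific behavior
    print(v.describe(), "->", v.sound())
```

The caller loops over a list of Vehicle objects without knowing or caring which concrete subclass each one is.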

9. Object-Oriented Analysis and Design(OOAD)

Object-Oriented Analysis and Design (OOAD) is a software engineering methodology that employs object-oriented principles to model and design complex systems. It involves analyzing the problem domain, representing it using objects and their interactions, and then designing a modular and scalable solution. It helps create systems that are easier to understand, maintain, and extend by organizing functionality into reusable and interconnected components.

1. Object-Oriented Analysis

Object-Oriented Analysis (OOA) is the first technical activity performed as part of object-oriented software engineering. OOA introduces new concepts to investigate a problem. It is based on a set of basic principles, which are as follows:

  • The information domain is modeled: Let’s say you’re building a game. OOA helps you figure out all the things you need to know about the game world – the characters, their features, and how they interact. It’s like making a map of everything important.
  • Behavior is represented: OOA also helps you understand what your game characters will do. If a character jumps when you press a button, OOA helps describe that action. It’s like writing down a script for each character.
  • The function is described: Every program has specific tasks or jobs it needs to do. OOA helps you list and describe these jobs. In our game, it could be tasks like moving characters or keeping score. It’s like making a to-do list for your software.
  • Data, functional, and behavioral models are divided to uncover greater detail: OOA is smart about breaking things into different parts. It splits the job into three categories: things your game knows (like scores), things your game does (like jumping), and how things in your game behave (like characters moving around).
  • Starting Simple, Getting Detailed: OOA knows that at first, you just want to understand the big picture. So, it starts with a simple version of your game or program. Later on, you add more details to make it work perfectly. It’s like sketching a quick drawing before adding all the colors and details.

2. Object-Oriented Design

In the object-oriented software development process, the analysis model, which is initially formed through object-oriented analysis (OOA), undergoes a transformation during object-oriented design (OOD). This evolution is crucial because it shapes the analysis model into a detailed design model, essentially serving as a blueprint for constructing the software.

The outcome of object-oriented design, or OOD, manifests in a design model characterized by multiple levels of modularity. This modularity is expressed in two key ways:

  • Subsystem Partitioning:
    • At a higher level, major components of the system are organized into subsystems.
    • This practice is similar to creating modules at the system level, providing a structured and organized approach to managing the complexity of the software.
  • Object Encapsulation:
    • A more granular form of modularity is achieved through the encapsulation of data manipulation operations into objects. It’s like putting specific tasks (or operations) and the data they need into little boxes called “objects.”
    • Each object does its job neatly and keeps things organized. So, if our game has a character jumping, we put all the jumping stuff neatly inside an object.
    • It’s like having a box for each task, making everything easier to handle and understand.

10. What is High-Level Design(HLD)?

High-level design (HLD) describes the overall system: it documents the system architecture and design at a generic level, and includes:

  • System architecture
  • Database design
  • Brief description of systems, services, platforms, and relationships among modules.

High-level design is also known as macro level designing.

Characteristics of High-Level Design

  • The HLD includes a diagram for each design aspect, based on business requirements and anticipated results.
  • It describes hardware and software interfaces, as well as user interfaces.
  • The workflow of the user’s typical process is detailed, along with performance specifications.
  • It captures the project’s overall architecture and design.

11. What is Availability?

Availability refers to the readiness and accessibility of a system or service to users at any given time. It measures the percentage of time a system remains operational and usable. High availability ensures that users can access the system without significant interruptions or downtime, typically achieved through redundancy, fault tolerance, and efficient recovery mechanisms.

Importance in System Design

  1. User Experience: Availability ensures that users can access the system and its services when needed, leading to a positive user experience. Systems that are frequently unavailable or experience downtime frustrate users and may lead to dissatisfaction, loss of trust, and even abandonment of the system in favor of alternatives.
  2. Business Continuity: Availability is essential for maintaining business continuity and ensuring uninterrupted operations. For businesses that rely on their systems to deliver services or conduct transactions, even brief periods of downtime can result in significant financial losses, damage to reputation, and legal liabilities.
  3. Service Level Agreements (SLAs): Many organizations commit to meeting specific availability targets through SLAs with their customers or stakeholders. Failure to meet these SLAs can result in financial penalties or contractual obligations. Therefore, designing systems with high availability is crucial for meeting SLA requirements and maintaining customer satisfaction.
  4. Competitive Advantage: High availability can be a competitive differentiator for businesses, particularly in industries where uptime and reliability are critical factors. Systems that offer better availability compared to competitors are more likely to attract and retain customers, leading to a competitive advantage in the market.
  5. Regulatory Compliance: In many industries, there are regulatory requirements or standards that mandate a minimum level of system availability. Failure to comply with these regulations can result in legal consequences, fines, or sanctions. Therefore, designing systems with high availability is essential for ensuring regulatory compliance.
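
Availability targets are usually quoted in “nines”. A quick, illustrative calculation converts an availability percentage into the downtime it permits per year:

```python
def allowed_downtime_minutes_per_year(availability_pct):
    """Minutes of downtime per year permitted by a given availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> ~{allowed_downtime_minutes_per_year(pct):.1f} min of downtime/year")
```

Two nines allows roughly 3.65 days of downtime a year, while four nines allows under an hour, which is why each extra nine is dramatically harder to engineer.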

12. What is Scalability?

Scalability refers to the ability of a system to handle increasing amounts of workload or requests without sacrificing performance or incurring excessive costs.

It’s crucial to understand scalability in system design because it directly impacts the system’s ability to grow and adapt to changing requirements over time. Failing to design for scalability can lead to bottlenecks, performance issues, and ultimately, system failures as demand increases.

There are two main types of scalability:

1. Vertical Scaling

Vertical scaling, also known as scaling up, refers to the process of increasing the capacity or capabilities of an individual hardware or software component within a system. You can add more power to your machine by adding better processors, increasing RAM, or other power-increasing adjustments. Vertical scaling aims to improve the performance and capacity of the system to handle higher loads or more complex tasks without changing the fundamental architecture or adding additional servers.

2. Horizontal Scaling

Horizontal scaling, also known as scaling out, refers to the process of increasing the capacity or performance of a system by adding more machines or servers to distribute the workload across a larger number of individual units. In this approach, there is no need to increase the capacity of an existing server or replace it. Also, unlike vertical scaling, there is no downtime while adding more servers to the network.
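
A common building block for horizontal scaling is deciding which of the added servers should handle a given key. This sketch uses naive hash-based distribution (the server names are illustrative; a production system would prefer consistent hashing so that adding a server does not remap most keys):

```python
import hashlib

def shard_for(key, servers):
    """Deterministically map a key to one of N servers (simple hash sharding)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["app-1", "app-2", "app-3"]
for user in ["alice", "bob", "carol"]:
    # the same key always lands on the same server, spreading load across all three
    print(user, "->", shard_for(user, servers))
```

Because the mapping is deterministic, every component in the system can independently compute which server owns a key without coordination.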

13. What is Reliability?

Reliability in system design refers to the ability of a system to consistently perform its intended functions under expected conditions over a specified period. It encompasses several key aspects:

  1. Availability: The system should be available and accessible to users when needed, minimizing downtime and ensuring uninterrupted service.
  2. Fault Tolerance: The system should be able to continue operating in the presence of faults or failures, either by detecting and recovering from failures automatically or by gracefully degrading functionality to maintain essential services.
  3. Resilience: The system should be able to recover quickly from failures or disruptions and return to a fully functional state with minimal impact on users and operations.
  4. Scalability: The system should be able to handle increasing workloads or user demands by scaling resources (such as processing power, storage, or network bandwidth) horizontally or vertically without compromising performance or reliability.
  5. Redundancy: Redundant components or systems should be employed to mitigate the impact of failures and ensure continuous operation. This may include redundant servers, data centers, network paths, or power supplies.
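
Fault tolerance is often implemented as retries around transient failures. Here is a minimal sketch: the flaky function simulates a network fault that clears on the third attempt, and a production version would also back off between attempts:

```python
def call_with_retry(operation, attempts=3):
    """Retry a flaky operation a few times before giving up (basic fault tolerance)."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts:
                raise  # retries exhausted: surface the failure
            # a production version would back off here, e.g. time.sleep(2 ** attempt)

calls = {"n": 0}

def flaky():
    """Simulated dependency that fails twice, then recovers."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network fault")
    return "ok"

print(call_with_retry(flaky))  # ok (succeeds on the third attempt)
```

Retries, backoff, and redundancy work together: retries mask transient faults, while redundant servers absorb lasting ones.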

14. CAP Theorem

The three letters in CAP refer to three desirable properties of distributed systems with replicated data: consistency, availability, and partition tolerance. The CAP theorem states that it is not possible to guarantee all three of these properties at the same time in a distributed system with data replication: a networked shared-data system can provide only two of the three at any given time.

15. What is Latency?

Latency refers to the time it takes for a request to travel from its point of origin to its destination and receive a response.

  • Latency represents the delay between an action and its corresponding reaction
  • It can be measured in various units like seconds, milliseconds, and nanoseconds depending on the system and application.


Latency has several contributing factors, such as processing time, time to travel over the network between components, and queuing time.

  • Round Trip Time: This includes the time taken for the request to travel to the server, processing time at the server, and the response time back to the sender.
  • Different Components: Processing time, transmission time (over the network or between components), queueing time (waiting in line for processing), and even human reaction time can all contribute to overall latency.
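
The latency of a single operation can be measured with a monotonic clock. This sketch times a simulated request; the 20 ms sleep is a stand-in for real network, queueing, and processing delays:

```python
import time

def measure_latency(operation):
    """Wall-clock latency of a single operation, in milliseconds."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000

def simulated_request():
    time.sleep(0.02)  # stand-in for network + processing + queueing time

latency_ms = measure_latency(simulated_request)
print(f"round trip took ~{latency_ms:.1f} ms")
```

time.perf_counter is used rather than time.time because it is monotonic and high-resolution, so the measurement cannot be skewed by system clock adjustments.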

16. What is a Load Balancer?

A load balancer works as a “traffic cop” sitting in front of your servers and routing client requests across all of them. It distributes the set of requested operations (database write requests, cache queries) effectively across multiple servers and ensures that no single server bears too many requests, which would degrade the application’s overall performance. A load balancer can be a physical device, a virtualized instance running on specialized hardware, or a software process.


Characteristics of Load Balancers

  1. Traffic Distribution: Load balancers evenly distribute incoming requests among multiple servers, preventing any single server from being overloaded.
  2. High Availability: By distributing traffic across multiple servers, load balancers enhance the availability and reliability of applications. If one server fails, the load balancer redirects traffic to healthy servers.
  3. Scalability: Load balancers facilitate horizontal scaling by easily accommodating new servers or resources to handle increasing traffic demands.
  4. Optimization: Load balancers optimize resource utilization, ensuring efficient use of server capacity and preventing bottlenecks.
  5. Health Monitoring: Load balancers often monitor the health of servers, directing traffic away from servers experiencing issues or downtime.
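
A round-robin load balancer with basic health awareness fits in a few lines. This is an illustrative in-memory model (the server names are invented), not a production balancer:

```python
import itertools

class RoundRobinBalancer:
    """Distribute requests across healthy servers in rotation."""
    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        self.healthy.discard(server)  # health check failed: stop routing here

    def route(self, request):
        # advance the rotation, skipping servers marked unhealthy
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return f"{server} handled {request}"
        raise RuntimeError("no healthy servers")

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print(lb.route("GET /home"))   # web-1 handled GET /home
lb.mark_down("web-2")
print(lb.route("GET /home"))   # web-3 handled GET /home (web-2 skipped)
```

Real balancers layer on active health probes, weighted or least-connections policies, and TLS termination, but the core routing loop looks much like this.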

17. SQL vs NoSQL: Which Database to Choose in System Design?

Let us compare SQL and NoSQL on different domains:

| Aspect | SQL | NoSQL |
|--------|-----|-------|
| Data Model and Schema | Enforces a structured schema with predefined tables and relationships. | Embraces a flexible schema, allowing for dynamic and evolving data structures. |
| Scalability and Performance | Traditionally scales vertically by adding more resources to a single server. | Excels in horizontal scaling, distributing data across multiple servers to handle increasing loads seamlessly. |
| Query Language and Transactions | Standardized SQL language for querying data. Transaction handling follows ACID principles. | Varied query languages, with some using SQL and others adopting unique approaches. Consistency guarantees vary by database. |
| Flexibility and Schema Evolution | Rigid schema may require complex changes with potential downtime. | Dynamic schema evolution allows adaptation to changing data requirements without significant disruption. |
| Use Cases and Applications | Suitable for complex transactions, strict data integrity, and well-defined relationships. | Ideal for applications demanding high scalability, large volumes of unstructured data, and rapid development cycles. Hybrid approaches are common. |
| Data Integrity and Relationships | Maintains data integrity through ACID transactions. Relationships are well-defined. | Often requires denormalization for efficient relationship handling, and the level of consistency may vary. |
| Security and Authentication | Well-established security mechanisms, including user roles, access controls, and encryption. | Varies in security features; access controls must be implemented to protect sensitive data. |
| Decision Factors for System Design | Align the choice with project requirements, considering data structures, scalability needs, and development pace. | Evaluate team expertise in SQL or NoSQL, and consider long-term scalability and adaptability aligned with project growth. |
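To make the schema contrast concrete, here is a small sketch showing the same user data modeled both ways: a fixed relational schema via Python's built-in `sqlite3`, and a schemaless document style represented with plain dicts (standing in for a document store such as MongoDB). The table and field names are invented for the example:

```python
import sqlite3
import json

# SQL: the schema is declared up front; every row has the same columns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT)")
db.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Alice", "alice@example.com"))
row = db.execute("SELECT name, email FROM users WHERE id = 1").fetchone()
print(row)  # ('Alice', 'alice@example.com')

# NoSQL (document style): each record is self-describing; fields can vary
# per document without any schema migration.
users = {
    "u1": {"name": "Alice", "email": "alice@example.com"},
    "u2": {"name": "Bob", "tags": ["admin"]},  # extra field, no ALTER TABLE needed
}
print(json.dumps(users["u2"]))
```

Adding a `tags` column to the SQL table would require an `ALTER TABLE` (and possibly a backfill), whereas the document model absorbs the new field silently, which is the schema-evolution trade-off in the table above.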

18. What is Caching?

Caching in system design involves storing frequently accessed data or computations in a temporary storage location, called a cache, to reduce access latency and improve system performance. When a requested piece of data or computation result is found in the cache, it can be served directly to the user or application without the need to retrieve it from the original data source or perform expensive computations again.

Key aspects of caching in system design include:

  1. Improved Performance: Caching helps reduce the time required to access data or compute results by serving them from a faster storage medium, such as memory or a local disk, rather than retrieving them from a slower or remote data source.
  2. Reduced Latency: By minimizing the round-trip time required to fetch data or compute results, caching reduces latency and improves responsiveness, leading to a better user experience.
  3. Scalability: Caching can help alleviate the load on backend systems by serving frequently accessed data directly from the cache, reducing the overall demand on resources and improving system scalability.
  4. Cache Invalidation and Refresh: To ensure data consistency, caches need mechanisms to invalidate or refresh cached entries when the underlying data changes. This may involve using expiration policies, cache invalidation signals, or explicit cache refresh mechanisms.

19. What is Low-Level Design(LLD)?

LLD, or Low-Level Design, is a phase in the software development process where detailed system components and their interactions are specified. It involves converting the high-level design into a more detailed blueprint that addresses specific algorithms, data structures, and interfaces.

  • LLD serves as a guide for developers during coding, ensuring the accurate and efficient implementation of the system’s functionality.
  • LLD deliverables include class diagrams that show classes, their methods, and the relationships between them, along with program specifications.
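As a concrete example of LLD-level detail: a high-level requirement like "the system supports a shopping cart" is refined into specific classes, method signatures, and data structures. The sketch below is purely illustrative; the names (`CartItem`, `Cart`, `add_item`) are not from any real system:

```python
from dataclasses import dataclass, field

@dataclass
class CartItem:
    sku: str
    unit_price: float
    quantity: int = 1

@dataclass
class Cart:
    # LLD pins down the data structure too: a dict keyed by SKU gives
    # O(1) lookup when the same item is added twice.
    items: dict = field(default_factory=dict)  # sku -> CartItem

    def add_item(self, item: CartItem) -> None:
        if item.sku in self.items:
            self.items[item.sku].quantity += item.quantity
        else:
            self.items[item.sku] = item

    def total(self) -> float:
        return sum(i.unit_price * i.quantity for i in self.items.values())

cart = Cart()
cart.add_item(CartItem("book-1", 12.5, 2))
cart.add_item(CartItem("book-1", 12.5, 1))  # merges with the existing line item
print(cart.total())  # 37.5
```

This is exactly the level of detail a class diagram in an LLD document captures: attributes, method signatures, and the relationship between `Cart` and `CartItem`.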

20. Unified Modeling Language(UML)

Unified Modeling Language (UML) is a general-purpose modeling language. The main aim of UML is to define a standard way to visualize how a system has been designed. It is quite similar to the blueprints used in other fields of engineering. UML is not a programming language; rather, it is a visual language.

  • We use UML diagrams to portray the behavior and structure of a system.
  • UML helps software engineers, business stakeholders, and system architects with modeling, design, and analysis.

There are two types of UML Diagrams:

1. Structural UML Diagrams

  • Class Diagram: Class diagrams are the main building blocks of every object-oriented method. A class diagram shows the system’s classes and interfaces along with the relationships between them, such as association and collaboration.
  • Package Diagram: A package diagram is mainly used to represent the organization and structure of a system in the form of packages. A package is a container that organizes the elements of the system into more manageable units.
  • Object Diagram: An object diagram can be thought of as a snapshot of the instances in a system and the relationships that exist between them. Since object diagrams depict behavior once objects have been instantiated, they let us study the behavior of the system at a particular instant.
  • Component Diagram: Component diagrams represent how the physical components of a system are organized. We use them for modeling implementation details. They depict the structural relationships between software system elements and help us verify that the functional requirements are covered by the planned development.
  • Composite Structure Diagram: A composite structure diagram represents the relationship between parts and their configuration, which determines how the classifier (a class, component, or deployment node) behaves. It shows the internal structure of a structured classifier using parts, ports, and connectors.
  • Deployment Diagram: Deployment diagrams represent the system hardware and its software. They tell us which hardware components exist and which software components run on them. They illustrate the system architecture as the distribution of software artifacts over deployment targets.

2. Behavioral UML Diagrams

  • State Machine Diagrams: They are used to represent the condition of the system or part of the system at finite instances of time. It’s a behavioral diagram and it represents the behavior using finite state transitions. 
  • Activity Diagrams: We use Activity Diagrams to illustrate the flow of control in a system. We can also use an activity diagram to refer to the steps involved in the execution of a use case. We model sequential and concurrent activities using activity diagrams. So, we basically depict workflows visually using an activity diagram.
  • Use Case Diagrams: Use Case Diagrams are used to depict the functionality of a system or a part of a system. They are widely used to illustrate the functional requirements of the system and its interaction with external agents (actors). A use case diagram represents the different scenarios in which the system can be used.
  • Sequence Diagram: A sequence diagram simply depicts interaction between objects in a sequential order i.e. the order in which these interactions take place. We can also use the terms event diagrams or event scenarios to refer to a sequence diagram. Sequence diagrams describe how and in what order the objects in a system function. 
  • Communication Diagram: It is used to show sequenced messages exchanged between objects. A communication diagram focuses primarily on objects and their relationships. 
  • Time Sequence Diagram: They are a special form of Sequence diagrams which are used to depict the behavior of objects over a time frame. We use them to show time and duration constraints which govern changes in states and behavior of objects.
  • Interaction Overview Diagram: An Interaction Overview Diagram models a sequence of actions and helps us simplify complex interactions into simpler occurrences. It is a mixture of activity and sequence diagrams.

21. Design Patterns

Design patterns are basically defined as reusable solutions to the common problems that arise during software design and development. They are general templates or best practices that guide developers in creating well-structured, maintainable, and efficient code. They help in addressing the recurring issues in software design and provide a shared vocabulary for developers to communicate and collaborate effectively.

Below are the important Design Patterns for freshers:

  • Decorator Pattern
    • It allows us to dynamically add functionality and behavior to an object without affecting the behavior of other objects of the same class.
    • Unlike inheritance, which extends a class’s behavior at compile time for every instance of that class, the decorator adds behavior to individual objects at runtime through composition.
  • Abstract Factory Pattern
    • The Abstract Factory pattern is similar to the Factory pattern and is considered another layer of abstraction over it.
    • Abstract Factory patterns work around a super-factory that creates other factories.
  • Strategy Pattern
    • The Strategy pattern allows the behavior of an object to be selected at runtime. It is one of the Gang of Four (GoF) design patterns, which are widely used in object-oriented programming.
    • The Strategy pattern is based on the idea of encapsulating a family of algorithms into separate classes that implement a common interface.
  • Adapter Pattern
    • The adapter pattern converts the interface of a class into another interface that clients expect.
    • Adapter lets classes work together that otherwise couldn’t because of incompatible interfaces.
  • Builder Pattern
    • Builder pattern aims to “Separate the construction of a complex object from its representation so that the same construction process can create different representations.”
    • It is used to construct a complex object step by step and the final step will return the object.
  • Factory Pattern
    • The Factory Method pattern is used to create objects without specifying the exact class of object that will be created.
    • This pattern is useful when you need to decouple the creation of an object from its implementation.
  • Singleton Pattern
    • The Singleton method or Singleton Design pattern is one of the simplest design patterns.
    • It ensures a class only has one instance, and provides a global point of access to it.
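To show one of these patterns in action, here is a minimal Strategy sketch: each compression algorithm lives in its own class behind a common interface, and the caller injects the behavior at runtime. The class names (`FileUploader`, `ZlibStrategy`, `NoopStrategy`) are invented for the example:

```python
import zlib
from abc import ABC, abstractmethod

class CompressionStrategy(ABC):
    """Common interface that every concrete strategy implements."""
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibStrategy(CompressionStrategy):
    def compress(self, data):
        return zlib.compress(data)

class NoopStrategy(CompressionStrategy):
    def compress(self, data):
        return data  # e.g. for inputs that are already compressed

class FileUploader:
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy  # behavior injected, selectable at runtime

    def upload(self, data: bytes) -> int:
        payload = self.strategy.compress(data)
        return len(payload)       # imagine a network call here

raw = b"a" * 1000
print(FileUploader(NoopStrategy()).upload(raw))  # 1000 bytes sent as-is
print(FileUploader(ZlibStrategy()).upload(raw))  # far fewer bytes after zlib
```

Swapping the algorithm requires no change to `FileUploader`, which is the whole point: the family of algorithms varies independently of the code that uses them.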

22. Design a URL Shortening Service (TinyURL)

A URL shortening service allows users to enter a long URL and returns a shorter, unique URL. Given a long URL, the service generates a short URL; given that short URL, it returns the original long URL.

[Figure: Low-Level Design of a URL Shortening Service]

Things to discuss and analyze

  • Given a long URL, the service should generate a shorter, unique alias for it.
  • When the user hits a short link, the service should redirect to the original URL.
  • Consider scalability: thousands of URL-shortening requests may arrive every second.
  • The service should handle redirects efficiently.
  • Support custom short URLs.
  • Track click statistics.
  • Delete expired URLs.
  • The system should be highly available.

You need to consider three things while designing this service. 

  1. API (REST API) – Discuss how the client communicates with the service, along with the load balancer that fronts the service.
  2. Application Layer – Discuss how the worker threads or hosts take the long URL, generate the tiny URL, and store both URLs in the database.
  3. Persistence Layer – Discuss the database choice and how the long-URL/short-URL mappings are stored and retrieved.
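A common technique for the application layer, worth mentioning in an interview, is base62 encoding: an auto-incrementing database ID is encoded over the alphabet `0-9a-zA-Z` to produce a short token, and decoding the token recovers the ID for the redirect lookup. The sketch below illustrates the encoding only; a real service would add collision handling for custom aliases, persistence, and caching:

```python
import string

# 62 characters: digits, lowercase, uppercase.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_base62(n: int) -> str:
    """Turn an auto-incrementing database ID into a short URL token."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def decode_base62(token: str) -> int:
    """Recover the database ID from a short token for the redirect lookup."""
    n = 0
    for ch in token:
        n = n * 62 + ALPHABET.index(ch)
    return n

print(encode_base62(125))  # '21'
assert decode_base62(encode_base62(9876543)) == 9876543  # round-trips
```

Seven base62 characters cover 62^7 (about 3.5 trillion) IDs, which is why short links of this length are sufficient even at very large scale.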


