
ThrottleX System Design Documentation

Welcome to the ThrottleX project wiki! Below you'll find a detailed overview of the architecture, flow, and components of ThrottleX, including diagrams created using Mermaid.js to illustrate the system's design and interactions.

Note: All diagrams were created with Mermaid.js, which renders directly in GitHub Markdown files. It's pretty cool, check them out!


1. System Architecture

This diagram provides an overview of the architecture of ThrottleX, showing how it interacts with clients, in-memory storage, and Redis.

```mermaid
graph TD;
    Client -->|Sends API Request| ThrottleX
    ThrottleX -->|Check Rate Limit| InMemory[(In-Memory Store)]
    ThrottleX -->|Check Rate Limit| Redis[(Redis)]
```

Description:
This diagram shows the flow of a client request through the ThrottleX system. ThrottleX checks the request against rate-limit state held in either the in-memory store or Redis, depending on the configuration.
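
As a rough sketch of how this backend choice might look in Go, the limiter logic can sit behind a small storage interface and be wired to whichever backend the deployment needs. The `Store`, `Limiter`, and `New` names below are illustrative only, not the actual ThrottleX API:

```go
package throttlex

import "time"

// Store is a hypothetical abstraction over the rate-limit backend, so the
// limiter logic is identical whether counts live in process memory or Redis.
type Store interface {
	// Increment adds one request to the counter for key, creating it with the
	// given time-to-live if it does not exist, and returns the updated count.
	Increment(key string, ttl time.Duration) (int, error)
}

// Limiter checks requests against a configured limit using any Store.
type Limiter struct {
	store  Store
	limit  int
	window time.Duration
}

// New wires the limiter to a backend chosen at configuration time,
// e.g. an in-memory store for a single instance or Redis for a cluster.
func New(store Store, limit int, window time.Duration) *Limiter {
	return &Limiter{store: store, limit: limit, window: window}
}

// Allow reports whether the request identified by key is within the limit.
func (l *Limiter) Allow(key string) (bool, error) {
	count, err := l.store.Increment(key, l.window)
	if err != nil {
		return false, err
	}
	return count <= l.limit, nil
}
```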


2. Request Flow

This diagram shows the step-by-step flow of a single API request as it passes through ThrottleX, and how the rate limit is checked against either the in-memory store or Redis, depending on the configured backend.

```mermaid
sequenceDiagram
    participant C as Client
    participant T as ThrottleX
    participant IM as In-Memory Store
    participant R as Redis

    C->>T: API Request
    alt Using In-Memory Store
        T->>IM: Check Rate Limit
        alt Limit Exceeded
            IM-->>T: Rate Limit Exceeded
            T-->>C: 429 Too Many Requests
        else Within Limit
            IM-->>T: Request Allowed
            T-->>C: Forward Response
        end
    else Using Redis Store
        T->>R: Check Rate Limit
        alt Limit Exceeded
            R-->>T: Rate Limit Exceeded
            T-->>C: 429 Too Many Requests
        else Within Limit
            R-->>T: Request Allowed
            T-->>C: Forward Response
        end
    end
```

Description:
This flow diagram details the interactions for each request. Depending on the configured storage backend, ThrottleX checks whether the request exceeds the rate limit using either the in-memory store or Redis. If the request is within the limit, it is forwarded and the response is returned to the client; otherwise ThrottleX responds with 429 Too Many Requests.
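
To make the 429 path concrete, here is a minimal sketch of Go HTTP middleware that consults a rate-limit check before forwarding a request. The `allowFunc` type and the use of the client IP as the rate-limit key are assumptions for illustration, not ThrottleX's actual interface:

```go
package main

import (
	"log"
	"net"
	"net/http"
)

// allowFunc is any rate-limit check: it reports whether the request
// identified by key may proceed.
type allowFunc func(key string) bool

// rateLimit wraps next and rejects over-limit requests with 429 Too Many Requests.
func rateLimit(allow allowFunc, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			key = r.RemoteAddr // fall back to the raw address
		}
		if !allow(key) {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r) // within limit: forward to the real handler
	})
}

func main() {
	// Toy limiter that allows everything; in practice this would be a ThrottleX check.
	allow := func(key string) bool { return true }

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})

	log.Fatal(http.ListenAndServe(":8080", rateLimit(allow, mux)))
}
```

Rejected requests short-circuit with 429 and never reach the wrapped handler, matching the two branches in the sequence diagram above.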


3. Data Storage and Rate Limiting Policy

This diagram illustrates how both in-memory and Redis are used to store rate limits and how the data is structured.

```mermaid
erDiagram
    InMemory {
        string key PK
        int request_count
        timestamp expiration_time
    }

    Redis {
        string key PK
        int request_count
        timestamp expiration_time
    }
```

Description:

  • In-Memory Store: Stores rate-limiting data, such as the current request count and expiration time for each API key, within the application memory. Suitable for single-instance setups.
  • Redis: Stores rate-limiting data in a distributed manner, allowing multiple instances of ThrottleX to share the rate-limiting state. Suitable for distributed systems. The stored entry is sketched below.
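
Both backends hold the same logical record per key, as in the ER diagram above. A minimal Go sketch of that entry (field names are illustrative, not the actual ThrottleX schema):

```go
package throttlex

import "time"

// Entry mirrors the record in the ER diagram: one row per rate-limit key.
type Entry struct {
	Key            string    // API key or client identifier (primary key)
	RequestCount   int       // requests seen in the current window
	ExpirationTime time.Time // when the window resets and the entry can be dropped
}
```

In Redis, one common way to realize the same record is a counter keyed by the API key, combining an atomic `INCR` with an `EXPIRE` TTL so the request count and expiration are shared by every ThrottleX instance.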

4. Component Interaction

This diagram highlights the interaction between ThrottleX and the various system components, providing a high-level view of the data and request flow.

```mermaid
flowchart LR
    Client --> ThrottleX
    ThrottleX --> InMemory[(In-Memory Store)]
    ThrottleX --> Redis[(Redis)]
```

Description:
This component interaction diagram shows how the different components of the ThrottleX system interact:

  • Client sends requests to ThrottleX.
  • ThrottleX queries either In-Memory Store or Redis for rate-limiting data.

5. Rate Limiting Algorithm Design

ThrottleX currently implements three core rate-limiting algorithms:

Fixed Window Rate Limiting

  • Description: Fixed Window rate limiting allows a fixed number of requests within a specific time window (e.g., 10 requests per minute). Once the limit is reached, any additional requests are blocked until the window resets (see the sketch after the diagram).

```mermaid
flowchart TD
    Request -->|Check Rate Limit| Store
    Store -->|Within Limit?| Decision[Decision]
    Decision -->|Yes| Allow[Allow Request]
    Decision -->|No| Block[Block Request]
```
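
A minimal in-memory sketch of such a fixed-window counter (illustrative only; the names and structure are assumptions, not ThrottleX's actual implementation):

```go
package throttlex

import (
	"sync"
	"time"
)

// FixedWindow allows at most limit requests per key within each fixed window
// (e.g. 10 per minute); further requests are blocked until the window resets.
type FixedWindow struct {
	mu      sync.Mutex
	limit   int
	window  time.Duration
	windows map[string]*fixedEntry
}

type fixedEntry struct {
	count   int
	resetAt time.Time
}

func NewFixedWindow(limit int, window time.Duration) *FixedWindow {
	return &FixedWindow{limit: limit, window: window, windows: make(map[string]*fixedEntry)}
}

// Allow reports whether one more request for key fits in the current window.
func (f *FixedWindow) Allow(key string) bool {
	f.mu.Lock()
	defer f.mu.Unlock()

	now := time.Now()
	e, ok := f.windows[key]
	if !ok || now.After(e.resetAt) {
		// Start a fresh window for this key.
		e = &fixedEntry{resetAt: now.Add(f.window)}
		f.windows[key] = e
	}
	if e.count >= f.limit {
		return false // limit reached: block until resetAt
	}
	e.count++
	return true
}
```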

Sliding Window Rate Limiting

  • Description: Sliding Window rate limiting provides more granular control by maintaining a sliding window of time to smooth out request bursts. The limit is calculated over a rolling period, which helps distribute the load more evenly (see the sketch after the diagram).

```mermaid
flowchart TD
    Request -->|Check Rate Limit| Store
    Store -->|Calculate Sliding Window| Sliding[Sliding Window Logic]
    Sliding -->|Within Limit?| Decision[Decision]
    Decision -->|Yes| Allow[Allow Request]
    Decision -->|No| Block[Block Request]
```
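
One simple way to realize the rolling window is to keep the timestamps of recent requests per key and count only those that fall inside the window. The sketch below is illustrative and unbounded in memory; a production version would typically use a bounded approximation such as weighted counts over two adjacent windows:

```go
package throttlex

import (
	"sync"
	"time"
)

// SlidingWindow allows at most limit requests per key in any rolling window,
// smoothing out the bursts a fixed window permits at its boundaries.
type SlidingWindow struct {
	mu     sync.Mutex
	limit  int
	window time.Duration
	log    map[string][]time.Time // timestamps of recent requests, per key
}

func NewSlidingWindow(limit int, window time.Duration) *SlidingWindow {
	return &SlidingWindow{limit: limit, window: window, log: make(map[string][]time.Time)}
}

// Allow drops timestamps that fell out of the rolling window, then admits the
// request only if fewer than limit requests remain inside it.
func (s *SlidingWindow) Allow(key string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()

	cutoff := time.Now().Add(-s.window)
	recent := s.log[key][:0]
	for _, t := range s.log[key] {
		if t.After(cutoff) {
			recent = append(recent, t)
		}
	}
	if len(recent) >= s.limit {
		s.log[key] = recent
		return false // still at the limit over the rolling period
	}
	s.log[key] = append(recent, time.Now())
	return true
}
```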

Token Bucket Rate Limiting

  • Description: The Token Bucket algorithm allows for a burst of requests if there are enough tokens in the bucket. Tokens are refilled at a steady rate, and requests consume tokens until they are depleted (see the sketch after the diagram).

```mermaid
flowchart TD
    Request -->|Consume Token| Bucket[Token Bucket]
    Bucket -->|Tokens Available?| Decision[Decision]
    Decision -->|Yes| Allow[Allow Request]
    Decision -->|No| Block[Block Request]
```
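
A compact sketch of a per-key token bucket along these lines (illustrative; the capacity and refill rate would be configuration values, and the names are assumptions rather than ThrottleX's actual API):

```go
package throttlex

import (
	"sync"
	"time"
)

// TokenBucket admits bursts of up to capacity tokens per key and refills
// tokens at a steady rate; each request consumes one token until the bucket is empty.
type TokenBucket struct {
	mu       sync.Mutex
	capacity float64
	rate     float64 // tokens added per second
	buckets  map[string]*bucket
}

type bucket struct {
	tokens   float64
	lastSeen time.Time
}

func NewTokenBucket(capacity, rate float64) *TokenBucket {
	return &TokenBucket{capacity: capacity, rate: rate, buckets: make(map[string]*bucket)}
}

// Allow refills the key's bucket based on the time elapsed since the last
// request, then spends one token if any are available.
func (t *TokenBucket) Allow(key string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()

	now := time.Now()
	b, ok := t.buckets[key]
	if !ok {
		b = &bucket{tokens: t.capacity, lastSeen: now} // new clients start with a full bucket
		t.buckets[key] = b
	}

	// Steady refill proportional to elapsed time, capped at capacity.
	b.tokens += now.Sub(b.lastSeen).Seconds() * t.rate
	if b.tokens > t.capacity {
		b.tokens = t.capacity
	}
	b.lastSeen = now

	if b.tokens < 1 {
		return false // bucket depleted: block the request
	}
	b.tokens--
	return true
}
```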

These rate-limiting algorithms are designed to handle different use cases, providing flexibility for managing API traffic effectively.


Future Updates:

  • Detailed designs for additional rate-limiting policies and caching mechanisms.
  • gRPC and WebSocket support diagrams (future extensions).

Final Notes:

These diagrams and descriptions provide an overview of ThrottleX’s system architecture. The project is still in the design stage and in very early development. As the project evolves, this documentation will be updated to reflect new features and enhancements.