A structured memory system for AI assistants that enhances coding capabilities through database integration, built for Claude Desktop and MCP servers.
This system leverages multiple database types to create a comprehensive memory system for coding assistance:
- Qdrant Vector Database: For semantic search and retrieval of code patterns
- SQLite Database: For structured algorithm storage and versioning
- Knowledge Graph: For representing relationships between coding concepts
For storing and retrieving code snippets, patterns, and solutions by semantic meaning.
What to store:
- Reusable code patterns with explanations
- Solutions to complex problems
- Best practices and design patterns
- Documentation fragments and explanations
Enhanced Metadata:
- Language and framework details
- Complexity level (simple, intermediate, advanced)
- Dependencies and requirements
- Quality metrics (cyclomatic complexity, documentation coverage)
- User feedback and ratings
Example Usage:
```python
# Storing a code pattern
information = {
    "type": "code_pattern",
    "language": "python",
    "name": "Context Manager Pattern",
    "code": "class MyContextManager:\n    def __enter__(self):\n        # Setup code\n        return self\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        # Cleanup code\n        pass",
    "explanation": "Context managers provide a clean way to manage resources like file handles.",
    "tags": ["python", "resource management", "context manager"],
    "complexity": "intermediate",
    "quality_metrics": {
        "cyclomatic_complexity": 2,
        "documentation_coverage": 0.85
    },
    "user_rating": 4.5
}

# Store in Qdrant
qdrant-store-memory(json.dumps(information))
```
For maintaining a structured catalog of algorithms with proper versioning.
Database Schema:
- `algorithms`: Basic algorithm information (name, description)
- `algorithm_versions`: Different versions of algorithm implementations
- `algorithm_categories`: Categories like Sorting, Searching, Graph, etc.
- `performance_metrics`: Performance data for different implementations
- `improvements`: Tracked improvements between versions
- `change_logs`: Detailed logs of changes with rationale and context
Version Diffing:
- Store diffs between algorithm versions
- Track performance improvements across versions
- Document rationale behind changes
Example Query:
```sql
-- Find all sorting algorithms with performance metrics
SELECT a.name, a.description, v.version_number, p.time_complexity, p.space_complexity
FROM algorithms a
JOIN algorithm_versions v ON a.id = v.algorithm_id
JOIN performance_metrics p ON v.id = p.version_id
JOIN algorithm_category_mapping m ON a.id = m.algorithm_id
JOIN algorithm_categories c ON m.category_id = c.id
WHERE c.name = 'Sorting'
ORDER BY a.name, v.version_number DESC;
```

```sql
-- Get change logs for a specific algorithm
SELECT v.version_number, c.change_description, c.rationale, c.created_at
FROM algorithm_versions v
JOIN change_logs c ON v.id = c.version_id
WHERE v.algorithm_id = 5
ORDER BY v.version_number;
```
For representing complex relationships between coding concepts, patterns, and solutions.
Advanced Ontology:
- Algorithm
- DesignPattern
- CodeConcept
- ProblemType
- Solution
- Framework
- Library
- Language
Rich Relation Types:
- IMPLEMENTS (Algorithm → CodeConcept)
- SOLVES (DesignPattern → ProblemType)
- OPTIMIZES (Algorithm → Performance)
- RELATED_TO (Any → Any)
- IMPROVES_UPON (Solution → Solution)
- ALTERNATIVELY_SOLVES (Solution → ProblemType)
- EXTENDS (Pattern → Pattern)
- DEPENDS_ON (Solution → Library)
- COMPATIBLE_WITH (Framework → Language)
Graph Analytics:
- Identify frequently co-occurring patterns
- Discover emerging trends in coding practices
- Map problem domains to solution approaches
When facing a new coding problem:
- Context Gathering:
  - Clearly define the problem and constraints
  - Identify performance requirements and environment details
  - Document project-specific considerations
- Memory Querying:
  - Break down the problem using sequential thinking
  - Query Qdrant for similar solutions: `qdrant-find-memories("efficient way to traverse binary tree")`
  - Filter results by language, complexity, and quality metrics
  - Check the algorithm database for relevant algorithms: `SELECT * FROM algorithms WHERE name LIKE '%tree%'`
  - Explore the knowledge graph for related concepts and alternative approaches
- Solution Application:
  - Test and verify the solution in a REPL
  - Document performance characteristics
  - Compare against alternatives
- Feedback Loop:
  - Store the successful solution back in Qdrant with detailed metadata
  - Log performance metrics and usage context
  - Update knowledge graph connections
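The filtering step above can be sketched in a few lines. This is a hypothetical helper, not part of the server code; the field names (`language`, `complexity`, `user_rating`) mirror the enhanced metadata described earlier.

```python
# Hypothetical helper: narrow raw memory-search results using the
# metadata fields described above (language, complexity, user rating).
def filter_memories(results, language=None, complexity=None, min_rating=0.0):
    """Return only the results matching the requested metadata filters."""
    filtered = []
    for item in results:
        if language and item.get("language") != language:
            continue
        if complexity and item.get("complexity") != complexity:
            continue
        if item.get("user_rating", 0.0) < min_rating:
            continue
        filtered.append(item)
    return filtered

# Example: keep only well-rated Python results
results = [
    {"name": "BFS traversal", "language": "python",
     "complexity": "intermediate", "user_rating": 4.5},
    {"name": "Tree walk", "language": "java",
     "complexity": "simple", "user_rating": 3.0},
]
best = filter_memories(results, language="python", min_rating=4.0)
```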
When discovering a useful pattern:
- Automated Documentation:
  - Generate initial documentation using AI tools
  - Include detailed usage examples
  - Document edge cases and limitations
- Quality Assessment:
  - Run linters and static analyzers to ensure code quality
  - Calculate and store quality metrics
  - Validate against best practices
- Metadata Enrichment:
  - Document the pattern with clear examples
  - Add comprehensive metadata (language, complexity, dependencies)
  - Apply consistent tagging from a controlled vocabulary
- Knowledge Integration:
  - Store in Qdrant with appropriate tags and explanation
  - Create knowledge graph connections to related concepts
  - Add to the SQL database if it's an algorithm implementation
  - Suggest automatic connections based on content similarity
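The knowledge-integration step boils down to composing two payloads: one for the vector store and one for the graph link. The builders below are a sketch; the exact field names should be checked against the Pydantic models in `mcp_server/core/models.py`.

```python
# Sketch only: payload shapes follow the MemoryItem and
# LinkMemoryToConceptRequest models described in the API section,
# but exact field names are assumptions.
def build_store_payload(name, code, explanation, language, tags, complexity):
    """Assemble a memory item for POST /memory/store."""
    return {
        "memory_item": {
            "type": "code_pattern",
            "name": name,
            "code": code,
            "explanation": explanation,
            "language": language,
            "tags": tags,
            "complexity": complexity,
        }
    }

def build_kg_link(memory_id, memory_name, concept_name, concept_type, rel_type):
    """Assemble a link request for POST /kg/link_memory_to_concept."""
    return {
        "memory_id": memory_id,
        "memory_name": memory_name,
        "memory_type": "code_pattern",
        "concept_name": concept_name,
        "concept_type": concept_type,
        "relationship_type": rel_type,
    }

payload = build_store_payload(
    "Retry with backoff", "def retry(): ...",
    "Retries a call with exponential backoff.",
    "python", ["python", "resilience"], "intermediate",
)
link = build_kg_link("mem-123", "Retry with backoff",
                     "Python", "language", "USES_LANGUAGE")
```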
When starting a new project:
- Template Selection:
  - Choose from a library of project templates
  - Customize based on project requirements
  - Select language, framework, and testing tools
- Automated Setup:
  - Generate the project structure with a proper directory layout
  - Set up version control with an appropriate .gitignore
  - Configure linting and code quality tools
  - Initialize the testing framework
- Best Practices Integration:
  - Query the memory system for relevant boilerplate code
  - Retrieve best practices for the specific project type
  - Use stored documentation templates for initial setup
  - Configure CI/CD based on project requirements
- Access Controls:
  - Role-based access for sensitive code repositories
  - Permissions for viewing vs. modifying memories
- Backup & Recovery:
  - Regular backups of the Qdrant and SQLite databases
  - Version control for the knowledge graph
  - Recovery procedures for data corruption
- Sensitive Information:
  - Sanitize code examples to remove sensitive data
  - Validate code snippets before storage
  - Flag and restrict access to sensitive patterns
- Usage Tracking:
  - Monitor which patterns are most frequently retrieved
  - Track search query patterns to identify knowledge gaps
  - Log user ratings and feedback
- Performance Metrics:
  - Monitor database response times
  - Track memory usage and scaling requirements
  - Optimize queries based on usage patterns
- Quality over Quantity: Only store high-quality, well-documented code
- Regular Review: Periodically review and update stored patterns
- Contextual Storage: Include usage context with each stored pattern
- Versioning: Track improvements and versions in SQLite
- Tagging Consistency: Use controlled vocabulary for better retrieval
- Performance Optimization: Regularly optimize database queries
- Feedback Integration: Update patterns based on usage feedback
- Store your first code memory:

  ```python
  qdrant-store-memory(json.dumps({
      "type": "code_pattern",
      "name": "Python decorator pattern",
      "code": "def my_decorator(func):\n    def wrapper(*args, **kwargs):\n        # Do something before\n        result = func(*args, **kwargs)\n        # Do something after\n        return result\n    return wrapper",
      "explanation": "Decorators provide a way to modify functions without changing their code.",
      "tags": ["python", "decorator", "metaprogramming"],
      "complexity": "intermediate"
  }))
  ```

- Retrieve it later:

  ```python
  qdrant-find-memories("python decorator pattern")
  ```
- Advanced code quality assessment before storage
- Integration with version control systems
- Learning from usage patterns to improve retrieval
- Automated documentation generation
- Custom IDE plugins for seamless access
- Multi-modal storage (code, diagrams, explanations)
- Natural language interface for querying
- Performance benchmark database
- Install script for MCP Servers and DB
The MCP (Model Context Protocol) server provides a standardized interface for AI models to interact with the Coder DB memory system. It is built using FastAPI and Uvicorn.
The `mcp_server` directory contains the FastAPI application:
```
mcp_server/
├── core/                    # Core logic, Pydantic models, configuration
│   ├── __init__.py
│   ├── config.py            # Application settings
│   └── models.py            # Pydantic data models
├── database/                # Database connectors and SQLAlchemy models
│   ├── __init__.py
│   ├── qdrant_connector.py  # Logic for Qdrant vector database
│   ├── sqlite_connector.py  # Logic for SQLite relational database
│   └── sql_models.py        # SQLAlchemy ORM models for SQLite
├── routers/                 # FastAPI routers for different API endpoints
│   ├── __init__.py
│   ├── algorithm.py         # Endpoints for /algorithm
│   ├── health.py            # Endpoint for /health
│   └── memory.py            # Endpoints for /memory
├── tests/                   # Unit and integration tests
│   ├── __init__.py
│   ├── test_algorithm_api.py
│   ├── test_health.py
│   └── test_memory_api.py
├── main.py                  # Main FastAPI application setup and startup
└── pyproject.toml           # Project dependencies and metadata (using Poetry)
```
(Note: `requirements.txt` is part of the old setup and can be removed if `pyproject.toml` and Poetry are used exclusively.)
This project uses Poetry for dependency management and packaging.
- Install Poetry (if you haven't already): Follow the instructions on the Poetry website.
- Install Dependencies: Navigate to the directory containing `pyproject.toml` (this should be the `mcp_server` directory if you created it as a self-contained Poetry project, or the root of this repository if `mcp_server` is a sub-package of a larger Poetry project) and run:

  ```shell
  poetry install --with dev  # --with dev includes testing dependencies
  ```

  This will create a virtual environment (if one doesn't exist for this project) and install all dependencies.
- Environment Configuration (Optional but Recommended): The application uses settings defined in `mcp_server/core/config.py`. You can override these by creating a `.env` file in the same directory where you run `uvicorn` (typically the root of the repository, or `mcp_server/` if running from there). Example `.env` file content:

  ```shell
  # mcp_server/.env or project_root/.env
  # QDRANT_HOST="your_qdrant_host_if_not_localhost"  # Overrides default 'localhost'
  # QDRANT_PORT=6334                                 # Overrides default 6333
  # QDRANT_API_KEY="your_qdrant_api_key_if_any"
  # SQLITE_DATABASE_URL="sqlite+aiosqlite:///./custom_mcp_data.db"  # Overrides default ./mcp_server.db
  # SQLITE_ECHO_LOG=True                             # To see SQLAlchemy logs
  ```

  Refer to `mcp_server/core/config.py` for all available settings.
- Run the Development Server: Ensure your Poetry environment is active (e.g., by running `poetry shell` in the directory with `pyproject.toml`) or prepend commands with `poetry run`. From the root of the repository (the directory containing the `mcp_server` folder):

  ```shell
  poetry run uvicorn mcp_server.main:app --reload --host 0.0.0.0 --port 8000
  ```

  The server will be available at `http://127.0.0.1:8000`.
  - Interactive API documentation (Swagger UI): `http://127.0.0.1:8000/docs`
  - Alternative API documentation (ReDoc): `http://127.0.0.1:8000/redoc`
With development dependencies installed (`poetry install --with dev`), run the following from the root of the repository:

```shell
poetry run pytest
```

This will discover and run tests located in the `mcp_server/tests/` directory.
The server currently exposes the following main API endpoints. For detailed request/response models and to try them out, please visit the `/docs` URL when the server is running.
- System Endpoints (Tag: `System`)
  - `GET /`: Provides basic information about the server.
  - `GET /health`: A simple health check endpoint. Returns `{"status": "OK"}`.
- Memory Management Endpoints (Tag: `Memory Management`, Prefix: `/memory`)
  - `POST /memory/store`: Stores a new memory item (e.g., code pattern, solution, documentation snippet) into the Qdrant vector database.
    - Request Body: `StoreMemoryRequest` (JSON object containing a `MemoryItem`).
    - Response: `StoreMemoryResponse` (JSON object with the ID of the stored item and status).
  - `POST /memory/find`: Searches for memory items in Qdrant based on a natural language query and/or filters.
    - Request Body: `FindMemoryRequest` (JSON object with query string, limit, and optional filters like language, tags, complexity).
    - Response: `FindMemoryResponse` (JSON object containing a list of matching `MemoryItem`s and a count).
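A minimal sketch of composing a `/memory/find` request body. The top-level filter field names here are assumptions based on the description above; verify them against the interactive docs at `/docs`.

```python
import json

# Assumed FindMemoryRequest shape; confirm field names at /docs.
find_request = {
    "query": "efficient way to traverse binary tree",
    "limit": 5,
    "language": "python",
    "tags": ["tree", "traversal"],
    "complexity": "intermediate",
}
body = json.dumps(find_request)

# With the server running (see "Run the Development Server"), POST it
# with any HTTP client, e.g.:
#   curl -X POST http://127.0.0.1:8000/memory/find \
#        -H "Content-Type: application/json" -d "$body"
```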
- Algorithm Management Endpoints (Tag: `Algorithm Management`, Prefix: `/algorithm`)
  - `POST /algorithm/store`: Stores a new algorithm or a new version of an existing algorithm in the SQLite database.
    - Request Body: `StoreAlgorithmRequest` (JSON object containing `Algorithm` data, including its versions).
    - Response: `StoreAlgorithmResponse` (JSON object with the ID of the stored/updated algorithm and status).
  - `POST /algorithm/find`: Searches for algorithms in the SQLite database by name or category.
    - Request Body: `FindAlgorithmRequest` (JSON object with optional `name` and `category` fields).
    - Response: `FindAlgorithmResponse` (JSON object containing a list of matching `Algorithm`s and a count).
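A sketch of what a `StoreAlgorithmRequest` body might look like, combining the schema tables described earlier (versions, performance metrics). The exact nesting is an assumption; check the models at `/docs` before relying on it.

```python
import json

# Assumed StoreAlgorithmRequest shape; the nesting of versions and
# performance metrics mirrors the SQLite schema described above.
store_request = {
    "algorithm": {
        "name": "Merge Sort",
        "description": "Stable divide-and-conquer sorting algorithm.",
        "category": "Sorting",
        "versions": [
            {
                "version_number": 1,
                "implementation": "def merge_sort(items): ...",
                "performance_metrics": {
                    "time_complexity": "O(n log n)",
                    "space_complexity": "O(n)",
                },
            }
        ],
    }
}
body = json.dumps(store_request)
# POST to http://127.0.0.1:8000/algorithm/store
# with Content-Type: application/json.
```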
- Collection Management Endpoints (Tag: `Collection Management`, Prefix: `/coder/collections`)
  - `POST /coder/collections/create`: Creates a new Qdrant collection.
    - Request Body: `CreateCollectionRequest` (JSON object with `collection_name` and optional `model_name`). If `model_name` is omitted, the default embedding model specified in the server configuration is used. The model's name and vector size are stored in the collection's metadata.
    - Response: Confirmation message including the collection name and the embedding model information that was applied.
  - `GET /coder/collections/{collection_name}/info`: Retrieves information about a specific Qdrant collection, including its configuration and associated embedding model metadata.
    - Path Parameter: `collection_name`.
    - Response: JSON object with collection details (status, point/vector counts, configuration) and embedding model metadata.
The server now uses actual text embeddings (via the FastEmbed library) for storing and searching memories in Qdrant.

- Default Model: The server is configured with a default embedding model (e.g., `sentence-transformers/all-MiniLM-L6-v2`). This model's dimension is used as the default vector size for new collections if no specific model is indicated.
- Collection-Specific Models: When creating a collection via the `/coder/collections/create` endpoint, you can specify a different (supported) FastEmbed model. The chosen model's name and its specific vector dimension will be associated with the collection and stored in its metadata. This dimension will be used when configuring the vector parameters for that collection in Qdrant.
- Dynamic Embedding: When storing or searching memories:
  - The system determines the correct embedding model (and its vector dimension) by checking the metadata of the target Qdrant collection.
  - If a memory is stored or searched in a collection that does not have this specific model metadata (e.g., a collection created externally or before this feature was implemented), the server's configured default embedding model and its dimension are used.
  - The `store_memory` operation, if targeting a non-existent collection, will attempt to create it using `create_coder_collection`. If no model is specified for this implicit creation, the default model and its settings will be applied to the new collection.
  - The actual text (e.g., from `memory_item.explanation` and/or `memory_item.code`) is then embedded using the determined provider before being sent to Qdrant.
- Supported Models: The `EnhancedEmbeddingModelManager` has a predefined list of supported FastEmbed models and their properties. This list can be expanded.
The server now includes an initial integration with Neo4j as a knowledge graph database. This is the third core database type, enabling the representation and querying of complex relationships between coding concepts, stored memory items, and algorithms.
- Purpose:
  - To model relationships like "MemoryItem X uses Language Y," "Algorithm Z is a type of Concept A," "Concept B is related to Concept C."
  - To enable more advanced contextual understanding, discovery, and navigation of stored knowledge.
  - To serve as a foundation for future analytical capabilities (e.g., identifying highly connected concepts, suggesting related memories based on graph paths).
- Setup Requirements:
  - A running Neo4j instance (version 4.4 or 5.x recommended for `IF NOT EXISTS` clause support in constraints).
  - Connection parameters (URL, user, password, database name) must be configured in `mcp_server/core/config.py` or via environment variables (e.g., `NEO4J_URL`, `NEO4J_USER`, `NEO4J_PASSWORD`, `NEO4J_DATABASE_NAME`). The defaults are `bolt://localhost:7687`, user `neo4j`, password `password`, database `neo4j`.
  - The `neo4j` Python driver (e.g., `neo4j>=5.17.0`) is now a project dependency (see `pyproject.toml`).
- Initialization:
  - On application startup, the server initializes an asynchronous Neo4j driver.
  - It also attempts to create basic schema constraints to ensure uniqueness for key node properties (e.g., `KGConcept.name`, `KGMemory.memory_id`, `KGAlgorithm.algorithm_id`). These operations are idempotent.
- Current Knowledge Graph Schema Nodes (Initial Definition):
  - `:KGConcept`: Represents abstract concepts.
    - Properties: `name` (string, unique), `type` (string, e.g., 'language', 'framework', 'tag', 'methodology', 'algorithm_category', 'memory_type'), `description` (optional string), `created_at` (datetime).
  - `:KGMemory`: Represents a `MemoryItem` that has been stored in Qdrant, linking it into the graph.
    - Properties: `memory_id` (string, unique, typically the Qdrant point ID or internal UUID), `name` (string, name of the memory item), `type` (string, type of the memory item like 'code_pattern'), `created_at` (datetime).
  - `:KGAlgorithm`: Represents an `Algorithm` stored in SQLite, linking it into the graph.
    - Properties: `algorithm_id` (integer, unique, from SQLite primary key), `name` (string), `category` (string), `created_at` (datetime).
- Relationships (Initial Definition):
  - Memory items are linked to concepts using a generic relationship `[:KG_RELATED_TO {type: "SPECIFIC_REL_TYPE"}]`. The `type` property on the relationship stores the semantic meaning. Examples:
    - `(:KGMemory)-[:KG_RELATED_TO {type: "USES_LANGUAGE"}]->(:KGConcept {name: "Python"})`
    - `(:KGMemory)-[:KG_RELATED_TO {type: "TAGGED_WITH"}]->(:KGConcept {name: "asyncio"})`
    - `(:KGMemory)-[:KG_RELATED_TO {type: "IS_OF_TYPE"}]->(:KGConcept {name: "code_pattern"})`
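The relationship model above implies a three-way `MERGE` in Cypher. The parameterized query below is an illustration of that pattern, not the server's actual code; the usage comment assumes the official `neo4j` Python driver.

```python
# Illustrative Cypher for the KG_RELATED_TO pattern described above.
# MERGE makes the operation idempotent: nodes and the relationship are
# created only if they don't already exist.
LINK_MEMORY_TO_CONCEPT = (
    "MERGE (m:KGMemory {memory_id: $memory_id}) "
    "MERGE (c:KGConcept {name: $concept_name}) "
    "MERGE (m)-[r:KG_RELATED_TO {type: $rel_type}]->(c) "
    "RETURN m, r, c"
)

# With a neo4j driver session (sketch; requires a running Neo4j instance):
#   session.run(LINK_MEMORY_TO_CONCEPT, memory_id="mem-123",
#               concept_name="Python", rel_type="USES_LANGUAGE")
```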
- API Endpoints for Knowledge Graph (Prefix: `/kg`):
  - `POST /kg/concept`: Creates or merges a `:KGConcept` node.
    - Request Body: `KGConcept` model (`name`, `type`, `description`).
    - Response: Details of the created/merged concept.
  - `POST /kg/link_memory_to_concept`: Links a memory item to a concept. Creates the `:KGMemory` and `:KGConcept` nodes if they don't exist, then creates/merges the relationship.
    - Request Body: `LinkMemoryToConceptRequest` (`memory_id`, `memory_name`, `memory_type`, `concept_name`, `concept_type`, `relationship_type`).
    - Response: Confirmation of the link.
  - `GET /kg/memory_item/{memory_id}/related_concepts`: Retrieves all `:KGConcept` nodes directly linked to a given `:KGMemory` node (identified by `memory_id`), along with the type of relationship.
    - Response: List of concepts with their names, types, descriptions, and the relationship type.
- Automatic Knowledge Graph Updates:
  - When a new `MemoryItem` is stored via the `POST /memory/store` endpoint:
    - A corresponding `:KGMemory` node is created/merged in Neo4j using the `memory_id` from Qdrant.
    - `:KGConcept` nodes are created/merged for the memory item's `language` (if present), each of its `tags`, and its own `type` (e.g., 'code_pattern').
    - The `:KGMemory` node is then linked to these `:KGConcept` nodes using appropriate `KG_RELATED_TO` relationships (e.g., with `type: "USES_LANGUAGE"`, `type: "TAGGED_WITH"`, `type: "IS_OF_TYPE"`).
  - This ensures that new memories are automatically contextualized within the knowledge graph as they are added.
This initial Neo4j integration provides the foundational structure for building more sophisticated knowledge representation and graph-based query capabilities as outlined in the Coder DB vision. Future work will involve expanding the schema, adding more relationship types, integrating algorithms into the graph, and developing more complex analytical queries.