CogNetics Architect is an advanced simulation platform designed to model interactions among multiple entities within a grid environment. Leveraging FastAPI for the backend, Redis for in-memory data storage, Supabase for persistent storage, and Logfire for logging and monitoring, this application provides a robust framework for simulating complex systems.
- Configuration Management: Utilizes Pydantic for structured and validated configuration management, ensuring type safety and maintainability.
- Simulation Control: API endpoints to reset, initialize, perform steps, and stop the simulation.
- Entity Management: Manages entities with positions and memory, allowing interactions based on proximity.
- Logging and Monitoring: Integrates Logfire for comprehensive logging and monitoring without altering existing Logfire setup.
- Concurrency Control: Manages concurrent requests to prevent rate limiting and ensure system stability (see the sketch after this list).
- WebSocket Support: Enables real-time communication with connected clients.
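As an illustrative sketch of the concurrency-control idea, a semaphore caps in-flight LLM calls while a delay paces them. The limit and delay mirror the `MAX_CONCURRENT_REQUESTS` and `REQUEST_DELAY` settings described below; `send_llm_request` is stubbed here and stands in for the real helper used later in this document:

```python
import asyncio

semaphore = asyncio.Semaphore(5)  # mirrors MAX_CONCURRENT_REQUESTS

async def send_llm_request(prompt: str) -> str:
    # Stub standing in for the real LLM call used elsewhere in this project.
    await asyncio.sleep(0.1)
    return "stay"

async def throttled_llm_call(prompt: str) -> str:
    # Cap concurrent calls and pace requests to avoid provider rate limits.
    async with semaphore:
        result = await send_llm_request(prompt)
        await asyncio.sleep(1.2)  # mirrors REQUEST_DELAY (seconds)
        return result
```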
Configuration is managed using Pydantic models within `main.py`, leveraging inheritance and nesting for a structured setup. Sensitive information is handled using `SecretStr` to ensure security.
The application continues to use environment variables defined in your `.env` file for configuration. Ensure the following environment variables are set:
- `SUPABASE_URL`
- `SUPABASE_KEY`
- `GROQ_API_KEY`
- `GROQ_API_ENDPOINT`
- `REDIS_ENDPOINT`
- `REDIS_PASSWORD`
- `AUTH_TOKEN`
- `E2B_API_KEY`
- `LOGFIRE_API_KEY`
- `LOGFIRE_ENDPOINT`
- `LOGFIRE_ENABLED` (set to `true` or `false`)
- `GRID_SIZE` (default: `15`)
- `NUM_ENTITIES` (default: `3`)
- `MAX_STEPS` (default: `100`)
- `CHEBYSHEV_DISTANCE` (default: `5.0`)
- `LLM_MODEL` (default: `"llama-3.3-70b-versatile"`)
- `LLM_MAX_TOKENS` (default: `2048`)
- `LLM_TEMPERATURE` (default: `0.7`)
- `REQUEST_DELAY` (default: `1.2`)
- `MAX_CONCURRENT_REQUESTS` (default: `5`)
- `LOG_LEVEL` (default: `"DEBUG"`)
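For reference, a minimal `.env` might look like the following (all values are placeholders, not real credentials, and only a subset of the variables is shown):

```
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-supabase-key
GROQ_API_KEY=your-groq-api-key
REDIS_ENDPOINT=your-redis-host:6379
REDIS_PASSWORD=your-redis-password
AUTH_TOKEN=your-auth-token
LOGFIRE_ENABLED=true
GRID_SIZE=15
NUM_ENTITIES=3
MAX_STEPS=100
```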
Configurations are organized using nested Pydantic models for better structure and inheritance. The `Settings` class aggregates all configuration sections.
```python
class Settings(BaseModel):
    database: DatabaseSettings
    api: APISettings
    redis: RedisSettings
    auth: AuthSettings
    logfire: LogfireSettings
    simulation: SimulationSettingsModel
```
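As a minimal sketch, one of the nested sections might look like this (the field names are assumptions derived from the environment variables above, not the project's exact code):

```python
from pydantic import BaseModel, SecretStr

class DatabaseSettings(BaseModel):
    # Hypothetical section model; field names assumed from the .env variables.
    supabase_url: str
    supabase_key: SecretStr  # SecretStr keeps the key from printing in plain text
```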
All sensitive environment variables are managed using Pydantic's `SecretStr`. When accessing these variables, use the `.get_secret_value()` method to retrieve their actual values securely.
```python
supabase_key = settings.database.supabase_key.get_secret_value()
```
Avoid logging or printing sensitive information directly. If logging is necessary, mask the value first.
```python
def mask_secret(secret: SecretStr) -> str:
    return "***"

logfire.info(f"Supabase Key: {mask_secret(settings.database.supabase_key)}")
```
```bash
git clone https://github.com/CogNetSys/cognetics-architect.git
cd cognetics-architect
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Create a `.env` file in the root directory and populate it with the necessary environment variables as outlined above.
```bash
doppler run --debug -- python3 run.py
```
The application will be accessible at `http://0.0.0.0:8000`. Prometheus is available at `http://0.0.0.0:9090` and Grafana at `http://0.0.0.0:3000`.
- Endpoint: `/reset_and_initialize`
- Method: `POST`
- Description: Resets the simulation state and initializes entities.

- Endpoint: `/step`
- Method: `POST`
- Description: Performs a specified number of simulation steps.

- Endpoint: `/stop`
- Method: `POST`
- Description: Stops the ongoing simulation.
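As a hypothetical usage sketch (the exact request schema is not documented here; a JSON body with a `steps` field is assumed for `/step`):

```python
import requests

BASE_URL = "http://0.0.0.0:8000"

# Reset the simulation state and initialize entities.
requests.post(f"{BASE_URL}/reset_and_initialize").raise_for_status()

# Perform 10 simulation steps (the 'steps' field is an assumption).
requests.post(f"{BASE_URL}/step", json={"steps": 10}).raise_for_status()

# Stop the ongoing simulation.
requests.post(f"{BASE_URL}/stop").raise_for_status()
```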
Logging is handled by Logfire, which captures and manages logs based on the configuration provided. Ensure that `LOGFIRE_ENABLED` is set to `true` in your `.env` file to enable logging.
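As a minimal sketch of gating logging on that flag (this assumes the standard `logfire.configure()` entry point and is not meant to alter the project's existing Logfire setup):

```python
import os
import logfire

# Only configure Logfire when LOGFIRE_ENABLED is set to "true" in .env.
if os.getenv("LOGFIRE_ENABLED", "false").lower() == "true":
    logfire.configure()  # picks up its credentials from the environment
    logfire.info("Logfire logging enabled")
```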
- Sensitive Information: All sensitive configurations are managed using `SecretStr` to prevent accidental exposure.
- `.env` Protection: The `.env` file is excluded from version control via `.gitignore` to protect sensitive data.
Contributions are welcome! Please fork the repository and submit a pull request for any enhancements or bug fixes.
This project is licensed under the MIT License. See the LICENSE file for details.
ToDo
- Batch Processing
- Multiple Models
ToDo - Level 1 Improvements
- Security, Safeguards, and Failsafe protocol.
The Takata et al. paper uses a combination of visualization and analysis techniques to explore the emergent behaviors of their LLM-based agents. Here are some of the key techniques:
- Spatial Plots:
- Visualizing the positions of agents in a 2D space over time.
- Tracking the movement patterns and clustering behavior of agents.
- Word Clouds:
- Analyzing the frequency of words used in agent communications.
- Identifying common themes and emerging "hallucinations" or shared concepts.
- UMAP Plots:
- Reducing the dimensionality of agent memories and messages to visualize their similarity and clustering.
- Understanding the evolution of agent "personalities" and communication styles.
- Clustering Analysis:
- Identifying groups of agents based on their spatial proximity and communication patterns.
- Analyzing the characteristics and behaviors of different clusters.
- Sentiment Analysis:
- Using natural language processing techniques to extract emotions from agent messages.
- Tracking changes in the emotional states of agents over time.
- Personality Analysis:
- Applying personality tests like the MBTI to LLM agents to assess their individual characteristics.
- Analyzing the evolution of personality traits over time.
- Communication Network Analysis:
- Mapping the communication patterns between agents to identify hubs, isolates, and information flow.
- Understanding the influence of different agents on the overall network structure.
To create a comprehensive solution for your simulation, consider incorporating the following techniques:
- Time Series Analysis:
- Visualizing the evolution of key metrics over time (e.g., agent positions, message frequency, sentiment).
- Identifying trends and patterns in the data.
- Network Analysis:
- Creating network graphs to visualize the relationships between agents.
- Analyzing network metrics like centrality, clustering coefficient, and modularity.
- Machine Learning Techniques:
- Using clustering algorithms (e.g., K-means, DBSCAN) to identify groups of agents with similar behaviors (a minimal sketch follows below).
- Applying topic modeling (e.g., LDA) to analyze the thematic structure of agent conversations.
- Interactive Visualizations:
- Using libraries like D3.js, Plotly, or Bokeh to create interactive visualizations.
- Allowing users to explore the data and uncover insights.
By combining these techniques, you can gain a deeper understanding of the emergent behaviors of your LLM-based agents and uncover insights that might not be apparent from simple observation.
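For example, a minimal clustering sketch along those lines, assuming each agent's behavior has already been summarized as a numeric feature vector (here, a made-up `[x, y, messages_sent]`):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per agent, e.g. [x, y, messages_sent].
features = np.array([
    [1.0, 2.0, 5.0],
    [1.5, 2.2, 4.0],
    [9.0, 8.5, 1.0],
    [8.7, 9.1, 2.0],
])

# Group agents into two behavioral clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)  # cluster assignment per agent, e.g. [0 0 1 1]
```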
- Behavior Clustering Visualizations:
  - Use clustering techniques (e.g., k-means, DBSCAN) to group agents based on behavior or states.
  - Visualize the clusters in a 2D or 3D plot.
- Real-time Dashboards:
  - Create a dynamic visualization to monitor agent metrics in real time.
  - Include agent states, average interactions, resource utilization, etc.
- Outcome Heatmaps:
  - Show where certain outcomes (e.g., successful tasks) occur.
  - Overlay success rates or key results on the environment map.
- Flow Charts or Decision Trees:
  - Visualize decision-making processes or agent states.
  - Help in understanding how agents transition between states.
- Diversity Metrics Visualization:
  - Plot diversity in agent strategies, goals, or actions over time.
  - Use bar charts, radar plots, or heatmaps.
- Communication Density Heatmaps:
  - Show areas or times where communication between agents is dense.
- Efficiency Graphs:
  - Compare the actual efficiency of agents (e.g., resource collection or task completion) against theoretical optima.
- Conflict/Collision Maps:
  - Highlight areas or times where agents collide or conflict over resources.
- 3D Spatial Visualization:
  - If agents operate in a 3D environment, use a 3D plot to show movements or key dynamics.
- Performance Comparison Plots:
  - Compare different agent strategies, initializations, or parameters across runs.
- Agent-Level Statistics:
  - Compute average, max, and min metrics per agent.
  - Example: number of tasks completed, time spent idle.
- System-Level Metrics:
  - Aggregate measures such as total task completion time, global energy efficiency, or collective success rate.
- Correlation Analysis:
  - Analyze relationships between agent actions, outcomes, and system performance.
  - Example: correlation between communication frequency and task success.
- Divergence Metrics:
  - Use measures like Jensen-Shannon divergence or entropy to assess behavioral variance across agents.
- Graph Analysis for Interactions:
  - Compute network metrics (e.g., degree centrality, clustering coefficient) for the interaction graph (see the sketch after this list).
- Sensitivity Analysis:
  - Examine how changes in parameters (e.g., communication range, task density) affect system performance.
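As a sketch of the interaction-graph analysis mentioned above (the edge list is hypothetical; in practice it would come from logged agent-to-agent messages):

```python
import networkx as nx

# Hypothetical interaction graph: one edge per communicating agent pair.
G = nx.Graph()
G.add_edges_from([
    ("agent_0", "agent_1"),
    ("agent_1", "agent_2"),
    ("agent_0", "agent_2"),
    ("agent_2", "agent_3"),
])

print(nx.degree_centrality(G))  # how connected each agent is
print(nx.clustering(G))         # local clustering coefficient per agent
```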
Recommendations for Next Steps
- Implement real-time agent location tracking (you already started this) with options for heatmaps and trajectory overlays.
- Add an endpoint for interaction network visualizations (e.g., D3.js for interactivity).
- Include system-level metric dashboards that allow filtering by agent, time, or region.
- Implement behavior clustering analysis to understand emergent patterns.
- Create comparison endpoints to contrast agent behaviors across different simulation runs.
Updates
- Pydantic AI - Brought the code up to date with Pydantic's latest agent library, `pydantic_ai`. It's fantastic.
- Implemented WebSocket Server for streaming real-time logs, updates, and information.
- External settings file (`config.py`).
- Created Entity Management API with `/entities/~`, various utility functions, and settings APIs.
- Created World Management Interface with `/reset`, `/initialize`, `/step`, and `/stop`.
- Set up FastAPI, Swagger UI, and Uvicorn as a local development environment.
- ~Nov. 22nd, 2024. Migrated to Vercel from Cloudflare.
- ~Nov. 15th, 2024. Implemented a barebones version of Takata et al. (without the visualizations or analysis) in JavaScript. Used a Cloudflare Worker to host the app, a Supabase Postgres database with Upstash Redis as cache, and Groq Cloud for the LLM. I watched a video on the "Discover AI" YouTube channel called "AI Agents Create a New World - MBTI Personalities". It inspired me to immediately implement the idea for experimentation.
Batch Processing
- Generate unique prompts for each entity.
- Join them together to form a single batch prompt.
- Ensure the response instruction includes something like: "Provide your final answer as a comma-separated list of actions, one per entity, in the same order as the prompts above."
- Include a system prompt, ideally Pydantic AI compliant, that instructs the model on how to process the payload.
- `movement_result` should contain the `movement` field with actions for each entity.
- Maintain prompt order so each returned action maps back to the correct entity.
- Validate the returned actions before applying them.
Below is an example of how you can assemble a batch of movement prompts for each entity using the `DEFAULT_MOVEMENT_GENERATION_PROMPT`. Each entity gets a unique prompt that includes its own memory, position, and the shared grid description. This approach leverages your existing `construct_prompt` function and `entities` list.
- We use `construct_prompt` to inject each entity's unique parameters: `entityId`, `(x, y)` position, `memory`, and the shared `grid_description`.
- Since `construct_prompt` replaces placeholders with entity-specific details, each entity's final prompt is customized based on its current state and memory.
- After generating individual prompts, we join them together to form a single batch prompt. The LLM can then process all entities at once.
- Ensure the LLM understands that it should return a comma-separated list of responses, one per entity, in the order presented. You may need to add an instruction at the end of the batch prompt telling the LLM to respond in a corresponding comma-separated format.
```python
# Assuming 'construct_prompt', 'entities', and 'DEFAULT_MOVEMENT_GENERATION_PROMPT' are already defined.

# Generate individual movement prompts for each entity.
movement_prompts = [
    construct_prompt(DEFAULT_MOVEMENT_GENERATION_PROMPT, entity, [])
    for entity in entities
]
# Each entry in 'movement_prompts' is now a fully formed prompt with the entity's memory, position, etc.

# Combine all entity prompts into one batch prompt.
batch_prompt = "\n\n".join(movement_prompts)

# (Optional) Add instructions for how the LLM should return the responses.
# For example, you could append something like:
# "Provide your final answer as a comma-separated list of actions, one per entity, in the same order as the prompts above."

# Send this 'batch_prompt' to your LLM.
movement_result = await send_llm_request(batch_prompt)

# 'movement_result' should contain the 'movement' field with actions for each entity.
# You will then parse these actions, validate them, and apply them to your entities as before.
```
- Maintain Order: The order of prompts in `movement_prompts` should match the order of entities. This makes it easier to correlate each returned action with the correct entity.
- Clear Instructions for LLM Response: If you want the LLM to produce a single combined response for all entities, consider updating the prompt (e.g., at the end of `batch_prompt`) to say something like: "After reading all the entities' situations, respond with their chosen actions in a single line, separated by commas, in the order of the entities presented above. Provide no explanations or extra text, just the actions."
- Validation: After receiving the LLM response, split it by commas, trim spaces, and validate each action against the expected movement set (`{"x+1", "x-1", "y+1", "y-1", "stay"}`). A sketch of this parsing step follows this list.
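As a minimal sketch of that validation step (assuming `movement_result` arrives as a plain comma-separated string; the actual response shape may differ):

```python
VALID_ACTIONS = {"x+1", "x-1", "y+1", "y-1", "stay"}

def parse_batch_actions(response: str, expected_count: int) -> list[str]:
    """Split, trim, and validate the LLM's comma-separated batch response."""
    actions = [a.strip() for a in response.split(",")]
    if len(actions) != expected_count:
        raise ValueError(f"expected {expected_count} actions, got {len(actions)}")
    invalid = [a for a in actions if a not in VALID_ACTIONS]
    if invalid:
        raise ValueError(f"invalid actions in response: {invalid}")
    return actions

# Example: parse_batch_actions("x+1, stay, y-1", 3) -> ['x+1', 'stay', 'y-1']
```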
By following this approach, each entity's movement prompt is generated uniquely based on its memory and state, and all prompts are assembled into one final batch prompt for the LLM to process collectively.