A First-Place Award-Winning Graduation Project from the Faculty of Science, Zagazig University, Computer Science Department.
Cyber Vision AI is an advanced, open-source AI-powered assistant designed specifically for the cybersecurity community. This project, which won first place in the university's scientific conference for the Mathematics Division, was developed to overcome the limitations of conventional AI tools that often restrict security-related queries.
Our platform provides an unrestricted yet ethically-safeguarded environment for tasks such as vulnerability analysis, exploit explanation, and proof-of-concept code generation.
- Fine-tuning Dataset: The model was fine-tuned on a specialized dataset available at saberbx/X-mini-datasets.
- Core Model: The primary model used is saberbx/XO.
- Deep Thinking Mode: Delivers nuanced, analytically rich responses for complex cybersecurity problems.
- Retrieval-Augmented Generation (RAG): Provides highly personalized answers by searching your private vector database.
- Deep Search & Multi-Agent Framework: Conducts comprehensive, multi-stage searches and compiles in-depth, structured reports.
- Voice Interaction: Features high-accuracy speech-to-text and text-to-speech capabilities.
- MindMap Generation: Dynamically visualizes complex topics and plans as interactive mind maps using Markmap-CLI.
- Live Web Search & Multi-AI Support: Augments answers with real-time web data and insights from multiple top-tier AI models.
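The RAG feature pairs the model with a private vector store. As a minimal illustration of the retrieval step only (toy data and fake embeddings; the project uses its own vector database and embedding model), cosine similarity ranks stored chunks against a query vector:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, top_k=2):
    """Return the top_k stored text chunks most similar to the query vector."""
    ranked = sorted(store, key=lambda item: cosine_similarity(query_vec, item["vec"]),
                    reverse=True)
    return [item["text"] for item in ranked[:top_k]]

# Toy vector store: each entry pairs a text chunk with a (fake) embedding.
store = [
    {"text": "SQL injection basics",  "vec": [0.9, 0.1, 0.0]},
    {"text": "XSS payload filtering", "vec": [0.1, 0.8, 0.1]},
    {"text": "Buffer overflow notes", "vec": [0.0, 0.2, 0.9]},
]

print(retrieve([0.85, 0.15, 0.0], store, top_k=1))  # the SQL injection chunk ranks first
```

The retrieved chunks are then prepended to the prompt so the model can answer from your own documents.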
Before you begin, ensure you have the following installed and configured:
- Python 3.12+ and pip.
- Ollama: Install from ollama.com.
- Markmap-CLI: Required for MindMap generation. Install it globally:

```bash
npm install -g markmap-cli
```
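As a rough sketch of how an application can drive Markmap-CLI (hypothetical file names; the project's own integration may differ), a script writes Markdown to disk and shells out to the `markmap` binary to render an interactive HTML mind map:

```python
import subprocess
from pathlib import Path

def build_mindmap(markdown: str, md_path="mindmap.md", html_path="mindmap.html"):
    """Write Markdown to disk and return the markmap-cli command that renders it."""
    Path(md_path).write_text(markdown, encoding="utf-8")
    cmd = ["markmap", md_path, "-o", html_path, "--no-open"]
    # subprocess.run(cmd, check=True)  # uncomment once markmap-cli is installed
    return cmd

cmd = build_mindmap("# Recon\n## Ports\n## Services\n")
print(" ".join(cmd))
```

Each Markdown heading level becomes a branch in the rendered mind map.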
- Create the Model in Ollama: The project expects a model named `X`. A ready-to-use Modelfile is available on the model's Hugging Face page.
- Run the following command in a new terminal to create the model. Ollama will automatically download it.
  Note: Replace the path with the actual location of your Modelfile.

```bash
ollama create X -f /path/to/your/Modelfile
```

- Verify: Confirm the model is installed by running:

```bash
ollama list
```

You should see `X` in the list.
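Besides `ollama list`, a running Ollama server exposes a REST endpoint (`GET /api/tags`) that lists installed models, which is handy for checking the setup from a script. The sketch below parses a sample payload shaped like that response; the live request is shown commented out:

```python
import json
# import urllib.request  # uncomment for a live check against a running server

def model_installed(tags_payload, wanted="X"):
    """True if a model named `wanted` (any tag) appears in an /api/tags payload."""
    names = [m["name"] for m in tags_payload.get("models", [])]
    return any(n == wanted or n.startswith(wanted + ":") for n in names)

# Live check (requires `ollama serve` to be running):
# with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
#     payload = json.load(resp)

# Sample payload shaped like Ollama's /api/tags response:
payload = {"models": [{"name": "X:latest"}, {"name": "llama3:8b"}]}
print(model_installed(payload))  # True
```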
- Clone the Repository:

```bash
git clone https://github.com/bx0-0/CyberVisionAI.git
cd CyberVisionAI
```
- Create and Activate a Virtual Environment:
  Important: Ensure the virtual environment (venv) is activated in every new terminal you use for running project commands.

```bash
python -m venv venv

# On Windows
.\venv\Scripts\activate

# On Linux/macOS
source venv/bin/activate
```
- Install Dependencies:

```bash
pip install -r requirements.txt
```
- Create a .env file: Inside the `chat/streamlit_stricture` directory, create a new file named `.env`.
- Add a Secret Key: Open the `.env` file and add the following line. This key is used to encrypt and secure the initial communication between the Django and Streamlit servers. You can use the provided key or generate your own secure key.

```
SECRET_KEY=121f9d63c642ddd73325274068f4196aacd110b5f9ff3f882ff537046e81698b
```
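If you prefer to generate your own key, Python's standard-library `secrets` module produces a 64-character hex string like the sample above:

```python
import secrets

def generate_secret_key(num_bytes=32):
    """Return a random hex key suitable for SECRET_KEY (2 hex chars per byte)."""
    return secrets.token_hex(num_bytes)

key = generate_secret_key()
print(f"SECRET_KEY={key}")  # paste this line into the .env file
```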
- Database Migrations:

```bash
# (Ensure venv is activated)
python manage.py makemigrations
python manage.py migrate
```
- Create a Superuser:

```bash
python manage.py createsuperuser
```

Follow the prompts to set up your admin credentials.
To run Cyber Vision AI, you need to start several services in separate terminals.
- Terminal 1: Start Ollama Server

```bash
ollama serve
```
- Terminal 2: Start Django Backend Server

```bash
# (Ensure venv is activated)
python manage.py runserver 8080
```

The backend will now be running at http://localhost:8080.
- Terminal 3: Start Streamlit Frontend

```bash
# (Ensure venv is activated)
cd chat/streamlit_stricture
streamlit run streamlit_chat.py --server.enableXsrfProtection false
```

The frontend will now be running at http://localhost:8501.
If you wish to use the Retrieval-Augmented Generation (RAG) feature, you will need to run the following two servers in two additional terminals.
- Terminal 4: Start Redis Server (we recommend installing and running it via WSL):

```bash
wsl redis-server
```
- Terminal 5: Start Celery Worker
  From the project's root directory:

```bash
# (Ensure venv is activated)
celery -A project worker --pool=solo --loglevel=info
```
- Start with the Django Interface:
- Open your browser and navigate to http://localhost:8080.
- Log in or create a new account.
- Automatic Redirect:
- Upon successful login, the Django interface will automatically redirect you to the Streamlit application (http://localhost:8501).
- Your authentication token will be passed securely in the background, ensuring you are logged in and ready to use the app without any manual steps.
- `project/settings.py`: Main Django configuration. Here you can adjust the database, the Redis broker URL for Celery, and other backend settings.
- `chat/streamlit_stricture/streamlit_chat.py`: Customize the Streamlit frontend, change the default Ollama model name, or adjust other UI/AI parameters.
- `chat/streamlit_stricture/.env`: Stores the secret key for securing the initial server-to-server communication.
- The Text-to-Speech (TTS) feature in the current code was designed for an older version of the Kokoro model.
- The application will run perfectly without the TTS feature.
- If you wish to enable it, you will need to adapt the code in `tts.py` to work with the new version of the model.
This project is licensed under the MIT License.