Predictive Prompt is a simple Large Language Model (LLM) chat window with retro styling. It dynamically populates a dropdown with the models available from a local Ollama instance and uses Ollama's streaming API to generate and display results in real time. The output is rendered as markdown with syntax highlighting support.
- Dynamic Model Selection: Automatically fetches available models from your local Ollama instance (a minimal fetch sketch follows this feature list).
- Retro Loading Bar: Displays a nostalgic loading bar while the model processes your request.
- Real-time Text Rendering: Text is rendered in chunks as the model generates the output.
- Markdown Support: The output includes markdown formatting and syntax highlighting for code snippets.
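The dynamic model selection works against Ollama's local HTTP API. The sketch below is a minimal illustration rather than the project's actual source: it assumes Ollama on its default port `11434`, and the `<select id="model-select">` element id is purely hypothetical.

```typescript
// Minimal sketch: fill a <select> with the models reported by a local Ollama
// instance via GET /api/tags. Assumes the default Ollama port (11434) and a
// hypothetical <select id="model-select"> element in the page.
interface OllamaTagsResponse {
  models: { name: string }[];
}

async function populateModelDropdown(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) {
    throw new Error(`Ollama responded with status ${res.status}`);
  }
  const data: OllamaTagsResponse = await res.json();

  const select = document.getElementById("model-select") as HTMLSelectElement;
  select.innerHTML = ""; // drop any placeholder options
  for (const model of data.models) {
    const option = document.createElement("option");
    option.value = model.name; // e.g. "llama3.1:latest"
    option.textContent = model.name;
    select.appendChild(option);
  }
}

populateModelDropdown().catch(console.error);
```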
Follow these steps to set up Predictive Prompt:
Before starting, ensure that Ollama is installed and running on your local machine with the streaming API enabled.
- **Download Ollama**
  - Go to ollama.com and download the version suited for your operating system.
- **Install Ollama**
  - Follow the step-by-step installation guide for your specific OS:
    - For Windows, use the Windows installation guide.
    - For macOS and Linux, check the Ollama download page for instructions.
- **Start Ollama**
  - Once installed, start Ollama and ensure the streaming API is running on `http://localhost:11434` (default).
  - You can pull a model from the command line using the format `ollama pull <model_name>:<version>`. For example: `ollama pull llama3.1:latest`
  - Visit https://ollama.com/library to explore other Large Language Models.
- **Install Node.js and npm**
  - If Node.js and npm are not already installed, download and install them from nodejs.org. npm is included with Node.js.
- **Install Project Dependencies**
  - In your terminal, navigate to the project directory where `package.json` is located.
  - Run `npm install` or `yarn install` to install all necessary dependencies.
- **Start the Server**
  - In your terminal, run `npm run dev` or `yarn dev`. This will start the development server.
- **Open the Chat Window**
  - Open your browser and navigate to `http://localhost:3000`.
- **Select a Model**
  - Use the dropdown menu to choose a model from your local Ollama instance.
- **Enter Your Prompt**
  - Type your prompt into the input field.
- **Generate Output**
  - Press the 'Send' button to submit your prompt. The model will begin processing, and the output will be displayed in markdown format as it is generated (a minimal streaming sketch follows these steps).
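To make the streamed rendering concrete, here is a minimal sketch of consuming Ollama's streaming `/api/generate` endpoint and re-rendering the accumulated text as chunks arrive. It is an illustration under assumptions, not the project's code: `renderMarkdown` is a hypothetical helper standing in for the markdown/syntax-highlighting step, and the `#output` element id is made up.

```typescript
// Minimal sketch: stream a completion from Ollama and update the page as
// chunks arrive. With stream: true, /api/generate returns newline-delimited
// JSON objects, each carrying the next piece of text in its "response" field.
declare function renderMarkdown(markdown: string): string; // hypothetical helper

async function streamCompletion(model: string, prompt: string): Promise<void> {
  const output = document.getElementById("output")!; // assumed output container
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: true }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let fullText = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Each complete line is one JSON chunk; keep any partial line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line) as { response?: string };
      fullText += chunk.response ?? "";
    }

    // Re-render the accumulated markdown after every batch of chunks.
    output.innerHTML = renderMarkdown(fullText);
  }
}
```

Wiring `streamCompletion(selectedModel, promptText)` to the 'Send' button would reproduce the chunk-by-chunk rendering described above.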
- Ollama Not Running: If you receive an error indicating that Ollama is not running, ensure that Ollama is installed correctly and the streaming API is enabled on `http://localhost:11434` (a quick connectivity check is sketched below).
- Model Selection Issues: If models do not appear in the dropdown, make sure your Ollama instance is running properly. Restart Ollama and refresh the chat window if necessary.
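If you want to confirm connectivity from code rather than the browser, a request to Ollama's `/api/tags` endpoint is a cheap health check. This is a hedged sketch; the warning text is illustrative.

```typescript
// Minimal sketch: check whether a local Ollama instance is reachable by
// querying its /api/tags endpoint. Returns false on network errors.
async function isOllamaRunning(baseUrl = "http://localhost:11434"): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok;
  } catch {
    return false; // network error: Ollama is most likely not running
  }
}

isOllamaRunning().then((running) => {
  if (!running) {
    console.warn("Ollama does not appear to be running on http://localhost:11434.");
  }
});
```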