Predictive Prompt

Demo

Predictive Prompt is a simple Large Language Model (LLM) chat window with retro styling. It dynamically populates a dropdown with available models from a local instance of Ollama and uses the streaming API to generate and display results in real time. The output is rendered as Markdown with syntax-highlighting support.

Features

  • Dynamic Model Selection: Automatically fetches available models from your local Ollama instance.
  • Retro Loading Bar: Displays a nostalgic loading bar while the model processes your request.
  • Real-time Text Rendering: Text is rendered in chunks as the model generates the output (see the streaming sketch after this list).
  • Markdown Support: The output includes markdown formatting and syntax highlighting for code snippets.
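
To illustrate how the real-time rendering works, here is a minimal TypeScript sketch of consuming Ollama's streaming /api/generate endpoint. It is not taken from this project's source; the function name and prompt are illustrative, and a production version would also buffer JSON lines that arrive split across chunk boundaries.

    // Minimal sketch (not this project's actual code) of reading Ollama's
    // streaming /api/generate endpoint with the built-in fetch API (Node 18+ or a browser).
    async function streamCompletion(model: string, prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model, prompt, stream: true }),
      });
      if (!res.ok || !res.body) throw new Error(`Ollama request failed: ${res.status}`);

      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      let output = "";

      // Ollama streams newline-delimited JSON; each object carries a "response"
      // fragment, and the final object sets "done" to true.
      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        for (const line of decoder.decode(value, { stream: true }).split("\n")) {
          if (!line.trim()) continue;
          const chunk = JSON.parse(line);
          output += chunk.response ?? "";
          // Append the fragment to the chat window here to render it as it arrives.
        }
      }
      return output;
    }

    streamCompletion("llama3.1:latest", "Explain recursion in one paragraph.")
      .then((text) => console.log(text));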

Getting Started

Follow these steps to set up Predictive Prompt:

1. Setting Up Ollama

Before starting, ensure that Ollama is installed and running on your local machine with the streaming API enabled.

  1. Download Ollama

    • Go to ollama.com and download the version suited for your operating system.
  2. Install Ollama

    • Run the installer you downloaded and follow the prompts for your operating system.
  3. Start Ollama

    • Once installed, start Ollama and ensure the streaming API is running on http://localhost:11434 (the default); a quick check sketch follows these steps.
    • You can pull a model from the command line using the following format: ollama pull <model_name>:<version>. For example: ollama pull llama3.1:latest
    • Visit https://ollama.com/library to explore other Large Language Models.
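
As a quick sanity check, the short TypeScript sketch below (illustrative, not part of this repository) confirms that Ollama is reachable and prints the models you have already pulled, using Ollama's /api/tags endpoint.

    // Sanity check (illustrative): list the models a local Ollama instance has pulled.
    async function listLocalModels(): Promise<void> {
      try {
        const res = await fetch("http://localhost:11434/api/tags");
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        const data = (await res.json()) as { models: { name: string }[] };
        console.log("Ollama is running. Installed models:");
        for (const m of data.models) console.log(` - ${m.name}`);
      } catch (err) {
        console.error("Could not reach Ollama at http://localhost:11434:", err);
      }
    }

    listLocalModels();

If the list is empty, the model dropdown in the chat window will also be empty, so pull at least one model before continuing.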

2. Installing Project Dependencies

  1. Install Node.js and npm

    • If Node.js and npm are not already installed, download and install them from nodejs.org. npm is included with Node.js.
  2. Install Project Dependencies

    • In your terminal, navigate to the project directory where package.json is located.
    • Run the command:
      npm install
      or
      yarn install
      to install all necessary dependencies.

3. Running the Development Server

  1. Start the Server

    • In your terminal, run:
      npm run dev
      or
      yarn dev
      This will start the development server.
  2. Open the Chat Window

    • Open your browser and navigate to http://localhost:3000.

Usage

  1. Select a Model

    • Use the dropdown menu to choose a model from your local Ollama instance.
  2. Enter Your Prompt

    • Type your prompt into the text input area.

  3. Generate Output

    • Press the 'Send' button to submit your prompt. The model will begin processing, and the output will be displayed in Markdown format as it is generated (see the rendering sketch below).
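
The sketch below shows one way the accumulated response text could be turned into HTML as it streams in. The "marked" package and the commented element and variable names are assumptions for illustration only; this project may use a different Markdown or syntax-highlighting library (e.g. highlight.js) for its actual rendering.

    // Illustrative only: re-render the full accumulated text after every chunk so that
    // Markdown that was incomplete mid-stream (e.g. an unclosed code fence) is fixed
    // up once the closing characters arrive. Assumes the "marked" package.
    import { marked } from "marked";

    function renderResponse(accumulated: string): string {
      return marked.parse(accumulated) as string;
    }

    // In the chat window, something like this would run after each streamed chunk:
    // outputElement.innerHTML = renderResponse(accumulatedText);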

Troubleshooting

  • Ollama Not Running: If you receive an error indicating that Ollama is not running, ensure that Ollama is installed correctly and the streaming API is enabled on http://localhost:11434.
  • Model Selection Issues: If models do not appear in the dropdown, make sure your Ollama instance is running and that you have pulled at least one model, since the dropdown is populated from Ollama's list of installed models. Restart Ollama and refresh the chat window if necessary.
