Access your Ollama inference server running on your computer from anywhere. Set up with NextJS + Langchain JS LCEL + Ngrok

developersdigest/ollama-anywhere


Ollama Anywhere

[Developers Digest demo GIF]

Ollama Anywhere is a proof-of-concept project that lets you interact with Ollama and the LLMs you have installed from anywhere, using any device. Models such as Llama 2, Mistral, and Mixtral run locally on your computer, yet become accessible for inference from any other computer or device. The project is crafted with responsiveness in mind, ensuring a smooth user experience whether you're on a phone, tablet, or laptop.

Project Structure

The project is divided into two main components:

1. next-web-app

  • A Next.js application that can be deployed to Vercel.
  • Features a chat UI for real-time interaction with the LLMs you have set up.
  • Responsive design to work seamlessly across devices.
  • To get started, deploy the Next.js app to Vercel following standard procedures.
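The chat UI ultimately sends requests to the Ollama API exposed through the ngrok tunnel. The repo wires this up with LangChain JS (LCEL), but as a rough sketch of the underlying request shape, here is a hypothetical helper targeting Ollama's /api/chat endpoint — the endpoint and its fields come from Ollama's REST API, while the function names and baseUrl handling are illustrative, not taken from the repo:

```javascript
// Hypothetical sketch of how the chat UI could call Ollama's /api/chat
// endpoint through the ngrok tunnel. The actual app uses LangChain JS (LCEL);
// the names here are illustrative.

// Build the JSON body Ollama's /api/chat endpoint expects.
function buildChatRequest(model, messages) {
  return {
    model,         // e.g. "mistral" — any model pulled via Ollama
    messages,      // [{ role: "system" | "user" | "assistant", content: string }]
    stream: false, // request one JSON response instead of a stream
  };
}

// Send the request to the Ollama server exposed by ngrok (Node 18+ fetch).
async function chat(baseUrl, model, messages) {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  const data = await res.json();
  return data.message.content; // Ollama returns { message: { role, content }, ... }
}
```

Here `baseUrl` would be the public https URL printed by the ngrok server when it starts.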

2. ngrok-server

  • A local server that creates a secure tunnel to your machine, enabling access to your LLMs from anywhere.
  • Requires a free ngrok account and an auth token.
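The repo's actual index.js isn't reproduced here, but a minimal sketch of such a tunnel server, assuming the `ngrok` npm package and an NGROK_AUTHTOKEN entry in .env (both names are assumptions — only Ollama's default port 11434 is a documented fact), might look like:

```javascript
// Minimal sketch of what ngrok-server/index.js could look like — the real
// file may differ. Assumes the `ngrok` npm package, with the auth token
// loaded into the environment as NGROK_AUTHTOKEN (name is an assumption).

// Build the tunnel options from the environment.
// Ollama listens on port 11434 by default.
function tunnelConfig(env) {
  return {
    addr: Number(env.OLLAMA_PORT || 11434),
    authtoken: env.NGROK_AUTHTOKEN,
  };
}

// Only open the tunnel when a token is configured, so the helper above
// can be exercised without ngrok installed.
if (process.env.NGROK_AUTHTOKEN) {
  const ngrok = require('ngrok'); // lazy-require keeps the rest testable
  ngrok
    .connect(tunnelConfig(process.env))
    .then((url) => console.log(`Ollama is now reachable at ${url}`))
    .catch((err) => {
      console.error('Failed to open tunnel:', err);
      process.exit(1);
    });
}
```

The URL `ngrok.connect` resolves to is the public address other devices use to reach your local Ollama instance.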

Getting Started

Ollama Setup

  1. Visit Ollama's website to download the application for macOS & Linux (Windows coming soon).
  2. After installation, select and download at least one model for inference from Ollama's library.
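Once a model is downloaded, you can confirm Ollama sees it via its /api/tags endpoint, which returns an object of the form `{ models: [{ name, ... }] }`. A small sketch (the helper names are illustrative; the endpoint and response shape are from Ollama's API):

```javascript
// Quick check that Ollama sees your downloaded models.
// GET /api/tags returns { models: [{ name, size, ... }] };
// this helper pulls out just the model names.
function modelNames(tagsResponse) {
  return (tagsResponse.models || []).map((m) => m.name);
}

// Query a running Ollama instance (Node 18+ fetch).
async function listInstalledModels(baseUrl = 'http://localhost:11434') {
  const res = await fetch(`${baseUrl}/api/tags`);
  return modelNames(await res.json());
}
```

If `listInstalledModels()` returns an empty array, no model has been pulled yet.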

ngrok Setup

  1. Sign up for a free ngrok account at ngrok.com.
  2. Obtain your ngrok auth token from the ngrok dashboard.
  3. Place the auth token in the .env file within the ngrok-server directory.
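The .env file is a plain key=value file. The exact variable name depends on what ngrok-server/index.js reads; NGROK_AUTHTOKEN below is an assumption:

```
# ngrok-server/.env — variable name is an assumption;
# match whatever ngrok-server/index.js actually reads
NGROK_AUTHTOKEN=<your ngrok auth token>
```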

Installation

Install dependencies for both the ngrok-server and the next-web-app directories by running npm i in each.

Usage

Open each project in a separate terminal tab.

  • To run the ngrok-server, execute node index.js.
  • To run the Next.js app, use npm run dev.

Deployment

  • For a staging instance, run the vercel command.
  • For a production deployment, use vercel --prod.

Once everything is set up and the app is running locally or deployed to Vercel, starting the ngrok server prints a public URL in the terminal; that URL is how the deployed (or local) app reaches the Ollama instance on your machine. From there, enjoy interacting with your LLMs through Ollama Anywhere.
