ChatCraft - open-source web companion for coding with LLMs. #656

Open · 1 task
irthomasthomas opened this issue Feb 28, 2024 · 2 comments
Labels

  • AI-Chatbots: Topics related to advanced chatbot platforms integrating multiple AI models
  • base-model: llm base models not finetuned for chat
  • CLI-UX: Command Line Interface user experience and best practices
  • code-generation: code generation models and tools like copilot and aider
  • llm: Large Language Models
  • openai: OpenAI APIs, LLMs, Recipes and Evals

Comments

@irthomasthomas (Owner)

New Chat - ChatCraft

DESCRIPTION:
Welcome to ChatCraft, your open-source web companion for coding with Large Language Models (LLMs). Designed with developers in mind, ChatCraft transforms the way you interact with GPT models, making it effortless to read, write, debug, and enhance your code.

We think ChatCraft is the best platform for learning, experimenting, and getting creative with code. Here are a few of the reasons why we think you'll agree:

Feature comparison with ChatGPT and Copilot (ChatCraft offers all of the following):

  • Optimized for conversations about code
  • Work with models from multiple AI vendors
  • Previews for Mermaid Diagrams, HTML
  • Edit Generated AI Replies
  • Use Custom System Prompts
  • Easy to retry with different AI models
  • Edit/Run Generated Code and Custom Functions
  • Open Source

Learn more about ChatCraft on GitHub

Quick Start Instructions
You can begin using ChatCraft today by following these steps:

  1. Choose an AI provider below: we support both OpenAI and OpenRouter. OpenAI supports various versions of ChatGPT (gpt-3.5-turbo) and GPT-4 models, while OpenRouter adds support for even more models from vendors like Anthropic, Google, and Meta. It's easy to switch providers later, or go back and forth.
  2. Enter an API Key. ChatCraft is a "bring your own API Key" web app. No matter which provider you choose, ChatCraft needs an API Key to start making API calls on your behalf. API Keys are never shared, and are stored in your browser's local storage (a quick way to sanity-check a key is sketched after this list).
  3. Start chatting with AI! Type your question in the textbox at the bottom of the screen and click the Ask button to prompt a particular model (switch to a different model whenever you like).
  4. Copy, edit, delete, or retry any AI response with a different model until you get the results you need.
  5. Every chat is saved to a local, offline database in your browser, which you can search (top of UI) or navigate by opening the sidebar with the hamburger menu in the top-left.
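If you want to confirm a key actually works before pasting it into ChatCraft (step 2 above), here is a minimal sketch using the providers' public APIs directly; the endpoints and headers are standard OpenAI/OpenRouter API usage and are not part of ChatCraft itself:

```bash
# Sanity-check an OpenAI API key: list the models the key can access.
# A 401 response means the key is invalid or revoked.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head -n 20

# OpenRouter exposes an OpenAI-compatible API under /api/v1; its public
# model catalogue can be listed the same way (no key required for this call).
curl -s https://openrouter.ai/api/v1/models | head -n 20
```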

Suggested labels


irthomasthomas commented Feb 28, 2024

Related issues

#418: openchat/openchat-3.5-1210 · Hugging Face

### Details

Similarity score: 0.91

- [ ] [openchat/openchat-3.5-1210 · Hugging Face](https://huggingface.co/openchat/openchat-3.5-1210#conversation-templates)

Using the OpenChat Model

We highly recommend installing the OpenChat package and using the OpenChat OpenAI-compatible API server for an optimal experience. The server is optimized for high-throughput deployment using vLLM and can run on a consumer GPU with 24GB RAM.

  • Installation Guide: Follow the installation guide in our repository.

  • Serving: Use the OpenChat OpenAI-compatible API server by running the serving command from the table below. To enable tensor parallelism, append --tensor-parallel-size N to the serving command.

    Model: OpenChat 3.5 1210 · Size: 7B · Context: 8192 · Weights: https://huggingface.co/openchat/openchat-3.5-1210
    Serving: python -m ochat.serving.openai_api_server --model openchat/openchat-3.5-1210 --engine-use-ray --worker-use-ray
  • API Usage: Once started, the server listens at localhost:18888 for requests and is compatible with the OpenAI ChatCompletion API specifications. Here's an example request:

    curl http://localhost:18888/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
            "model": "openchat_3.5",
            "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
          }'
  • Web UI: Use the OpenChat Web UI for a user-friendly experience.

Online Deployment

If you want to deploy the server as an online service, use the following options:

  • --api-keys sk-KEY1 sk-KEY2 ... to specify allowed API keys
  • --disable-log-requests --disable-log-stats --log-file openchat.log for logging only to a file.

For security purposes, we recommend using an HTTPS gateway in front of the server.
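Putting those options together, here is a sketch of a complete command for an online deployment; it combines the base serving command from the table above with the flags listed here (the API keys are placeholders, and the tensor-parallel size of 2 is only an illustration for a two-GPU machine):

```bash
# Serve OpenChat 3.5 1210 as an online service.
# --api-keys restricts access to the listed keys (placeholders below);
# the logging flags route logging to openchat.log only, as described above.
python -m ochat.serving.openai_api_server \
  --model openchat/openchat-3.5-1210 \
  --engine-use-ray --worker-use-ray \
  --tensor-parallel-size 2 \
  --api-keys sk-KEY1 sk-KEY2 \
  --disable-log-requests --disable-log-stats --log-file openchat.log
```

As the note above says, put an HTTPS gateway (e.g. a reverse proxy) in front of the server before exposing it publicly.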

Mathematical Reasoning Mode

The OpenChat model also supports mathematical reasoning mode. To use this mode, include condition: "Math Correct" in your request.

```bash
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "openchat_3.5",
        "condition": "Math Correct",
        "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}]
      }'
```
Conversation Templates

We provide several pre-built conversation templates to help you get started.

  • Default Mode (GPT4 Correct):

    GPT4 Correct User: Hello<|end_of_turn|>
    GPT4 Correct Assistant: Hi<|end_of_turn|>
    GPT4 Correct User: How are you today?<|end_of_turn|>
    GPT4 Correct Assistant:
  • Mathematical Reasoning Mode:

    Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>
    Math Correct Assistant:

    NOTE: Remember to set <|end_of_turn|> as the end-of-generation token.

  • Integrated Tokenizer: The default (GPT4 Correct) template is also available as the integrated tokenizer.chat_template, which can be used instead of manually specifying the template.

Suggested labels

{ "label": "chat-templates", "description": "Pre-defined conversation structures for specific modes of interaction." }

#305: Home - LibreChat

### Details

Similarity score: 0.89

- [ ] [Home - LibreChat](https://docs.librechat.ai/index.html)

LibreChat

🪶 Features

🖥️ UI matching ChatGPT, including Dark mode, Streaming, and 11-2023 updates
💬 Multimodal Chat:
Upload and analyze images with GPT-4 and Gemini Vision 📸
More filetypes and Assistants API integration in Active Development 🚧
🌎 Multilingual UI:
English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro, Русский
日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands
🤖 AI model selection: OpenAI API, Azure, BingAI, ChatGPT, Google Vertex AI, Anthropic (Claude), Plugins
💾 Create, Save, & Share Custom Presets
🔄 Edit, Resubmit, and Continue messages with conversation branching
📤 Export conversations as screenshots, markdown, text, json.
🔍 Search all messages/conversations
🔌 Plugins, including web access, image generation with DALL-E-3 and more
👥 Multi-User, Secure Authentication with Moderation and Token spend tools
⚙️ Configure Proxy, Reverse Proxy, Docker, many Deployment options, and completely Open-Source
📃 All-In-One AI Conversations with LibreChat

LibreChat brings together the future of assistant AIs with the revolutionary technology of OpenAI's ChatGPT. Celebrating the original styling, LibreChat gives you the ability to integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates and plugins.

With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.

Suggested labels

"ai-platform"

#459: llama2

### Details

Similarity score: 0.89

- [ ] [llama2](https://ollama.ai/library/llama2)

Llama 2

The most popular model for general use.

265.8K Pulls
Updated 4 weeks ago

Overview

Llama 2 is released by Meta Platforms, Inc. This model is trained on 2 trillion tokens, and by default supports a context length of 4096. Llama 2 Chat models are fine-tuned on over 1 million human annotations, and are made for chat.

CLI

Open the terminal and run

ollama run llama2

API

Example using curl:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
 }'

API documentation

Memory requirements

  • 7b models generally require at least 8GB of RAM
  • 13b models generally require at least 16GB of RAM
  • 70b models generally require at least 64GB of RAM

If you run into issues with higher quantization levels, try using the q4 model or shut down any other programs that are using a lot of memory.

Model variants

  • Chat: fine-tuned for chat/dialogue use cases. These are the default in Ollama, and for models tagged with -chat in the tags tab.

    Example: ollama run llama2

  • Pre-trained: without the chat fine-tuning. This is tagged as -text in the tags tab.

    Example: ollama run llama2:text

By default, Ollama uses 4-bit quantization. To try other quantization levels, please use the other tags. The number after the q represents the number of bits used for quantization (i.e. q4 means 4-bit quantization). The higher the number, the more accurate the model is, but the slower it runs, and the more memory it requires.
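For example, the following commands correspond to the variants described above; the first two tags come straight from this page, while the last (higher-precision) tag follows the library's q-suffix naming and should be confirmed against the llama2 tags tab before pulling:

```bash
# Default: the chat-tuned model at 4-bit quantization (q4).
ollama run llama2

# Pre-trained variant, without the chat fine-tuning.
ollama run llama2:text

# Illustrative higher-precision tag (8-bit): more accurate, but slower
# and more memory-hungry; check the exact tag name on the tags tab.
ollama run llama2:7b-chat-q8_0
```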


Suggested labels

{ "label-name": "llama2-model", "description": "A powerful text model for chat, dialogue, and general use.", "repo": "ollama.ai/library/llama2", "confidence": 91.74 }

#485: Docs | OpenRouter

### Details

Similarity score: 0.88

- [ ] [Docs | OpenRouter](https://openrouter.ai/docs#models)

Title: Docs | OpenRouter

Description: The future will bring us hundreds of language models and dozens of providers for each. How will you choose the best?

Benefit from the race to the bottom. OpenRouter finds the lowest price for each model across dozens of providers. You can also let users pay for their own models via OAuth PKCE.

Standardized API. No need to change your code when switching between models or providers.

The best models will be used the most. Evals are flawed. Instead, compare models by how often they're used, and soon, for which purposes. Chat with multiple at once in the Playground.

URL: https://openrouter.ai/docs#models

Key Features

  • Lowest Price Guarantee: OpenRouter finds the lowest price for each model across dozens of providers.
  • Standardized API: No need to change your code when switching between models or providers.
  • Usage-Based Comparison: Compare models by how often they're used, and soon, for which purposes.
  • User-Paid Models: Allow users to pay for their own models via OAuth PKCE.
  • Playground: Chat with multiple models at once in the Playground.

OpenRouter is the future of language model selection and usage. Benefit from a wide range of models and providers, while ensuring the best models are used the most.
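Because the API is OpenAI-compatible, pointing existing code at OpenRouter is mostly a matter of swapping the base URL and key. A minimal sketch is below; the endpoint path follows OpenRouter's standard API, and the model slug is only an example (pick any model from the models list):

```bash
# Send a chat completion request through OpenRouter's OpenAI-compatible API.
# $OPENROUTER_API_KEY is your own key; the model slug is illustrative.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
        "model": "mistralai/mixtral-8x7b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```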

Suggested labels

{ "label-name": "language-models", "description": "Information about language models and providers", "repo": "openrouter.ai", "confidence": 96.2 }

#552: LargeWorldModel/LWM-Text-Chat-1M · Hugging Face

### Details

Similarity score: 0.87

- [ ] [LargeWorldModel/LWM-Text-Chat-1M · Hugging Face](https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M)

LargeWorldModel/LWM-Text-Chat-1M · Hugging Face

DESCRIPTION:

LWM-Text-1M-Chat Model Card

Model details

Model type: LWM-Text-1M-Chat is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data. It is an auto-regressive language model, based on the transformer architecture.

Model date: LWM-Text-1M-Chat was trained in December 2023.

Paper or resources for more information: https://largeworldmodel.github.io/

URL: https://huggingface.co/LargeWorldModel/LWM-Text-Chat-1M

Suggested labels

{'label-name': 'Open-source Models', 'label-description': 'Models that are publicly available and open-source for usage and exploration.', 'gh-repo': 'huggingfaceco/LargeWorldModel/LWM-Text-Chat-1M', 'confidence': 56.11}

@williamewing9087

nice
