This repository collects awesome free and open-source alternatives to ChatGPT, along with open pilot training courseware. These projects give developers and enthusiasts accessible tools for building and enhancing conversational AI models and pilot training materials, and using them supports the community-driven innovation behind open AI research and development.
Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM.
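To make the RLHF idea concrete, here is a minimal sketch of the reward shaping commonly used in the RL stage: the reward model's score minus a KL penalty that keeps the tuned policy close to the original model. All names and the coefficient value are illustrative, not this repo's API.

```python
# Hypothetical sketch of per-response reward shaping in RLHF's RL stage.
# The KL term (approximated here by the log-prob gap on the sampled
# response) discourages the policy from drifting far from the reference.

def shaped_reward(reward_model_score, policy_logprob, ref_logprob, kl_coef=0.1):
    """Reward-model score penalized by the policy/reference log-prob gap."""
    kl_penalty = kl_coef * (policy_logprob - ref_logprob)
    return reward_model_score - kl_penalty

# A well-rated response generated with probabilities close to the
# reference model keeps most of its reward:
r = shaped_reward(reward_model_score=1.0, policy_logprob=-2.0, ref_logprob=-2.1)
```

In practice the penalty is applied per token and the policy is updated with PPO, but the score-minus-KL structure is the core of the objective.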
OpenChatKit provides a powerful, open-source base to create both specialized and general-purpose chatbots for various applications.
A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author’s Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model.
- Pointnetwork/point-alpaca: released weights recreated from Stanford Alpaca, an experiment in fine-tuning LLaMA on a synthetic instruction dataset.
- Nomic-ai/gpt4all: demo, data, and code to train an assistant-style large language model with ~800k GPT-3.5-Turbo generations, based on LLaMA.
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.
ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters.
Related:
- Slim version (removes the 20K image tokens to reduce memory usage): silver/chatglm-6b-slim
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF), supporting online RL for models up to 20B parameters and offline RL for larger models. Basically what you would use to fine-tune GPT into ChatGPT.
Script to fine-tune the GPT-J 6B model on the Alpaca dataset. Insightful if you want to fine-tune LLMs.
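Fine-tuning on Alpaca means rendering each instruction/input/output record into a fixed prompt template before training. A sketch of the template along the lines of the one reported in the Stanford Alpaca repo (exact wording may differ between releases):

```python
# Builds an Alpaca-style training prompt from one dataset record.
# The template text below follows the commonly cited Alpaca format;
# treat it as an approximation, not the canonical string.

def alpaca_prompt(instruction, inp=""):
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n"
            "### Response:"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:"
    )
```

The model's target output is appended after `### Response:`, and the same template is used at inference time so the model sees prompts in the distribution it was trained on.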
A minimal example of aligning language models with RLHF, similar to ChatGPT.
Also:
Seven open-source GPT-3-style models with parameter counts ranging from 111 million to 13 billion, trained using the Chinchilla formula.
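The "Chinchilla formula" refers to the scaling result that compute-optimal training uses roughly 20 training tokens per model parameter. A back-of-the-envelope helper, with the caveat that the 20:1 ratio is an approximation rather than an exact constant:

```python
# Rough compute-optimal token budget per the Chinchilla scaling result
# (~20 tokens per parameter; the exact ratio depends on the fit used).

def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    return n_params * tokens_per_param

# The largest model in this family, at 13B parameters, would want on
# the order of 260B training tokens:
tokens = chinchilla_optimal_tokens(13_000_000_000)  # 260_000_000_000
```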
A curated list of open-source resources and tools for pilot training and aviation enthusiasts. These resources cover various aspects of aviation, including flight simulators, navigation tools, and aircraft design analysis.
A sophisticated and open-source flight simulator for desktop systems, featuring realistic physics and a large library of aircraft. [Website]
An open-source aviation navigation app that offers flight planning, charts, and weather information.
An open-source tool for analyzing and optimizing aircraft structures and aerodynamics, developed by the Stanford University Aerospace Design Lab.
A utility for converting airspace and waypoint data between different formats used in aviation.
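One common wrinkle in converting waypoint data is that formats disagree on coordinate notation: some store decimal degrees, others degrees and decimal minutes. A minimal sketch of that conversion (the helpers are illustrative and not taken from any particular tool):

```python
# Convert between decimal degrees and degrees + decimal minutes,
# the two coordinate notations most waypoint formats disagree on.

def decimal_to_deg_min(coord):
    """Split decimal degrees into (degrees, decimal minutes), sign on degrees."""
    sign = -1 if coord < 0 else 1
    coord = abs(coord)
    degrees = int(coord)
    minutes = (coord - degrees) * 60
    return sign * degrees, minutes

def deg_min_to_decimal(degrees, minutes):
    """Recombine degrees and decimal minutes into signed decimal degrees."""
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60)
```

A round trip should reproduce the original value, which makes this easy to unit-test when wiring it into a real converter.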