Chat with any AI model with one line of Python.
Build agents, chatbots, and apps that just work with no downtime.

LitAI is the easiest way to chat with any model (ChatGPT, Claude, etc.) in one line of Python. LitAI handles retries, fallbacks, billing, and logging - so you can build agents, chatbots, or apps without managing flaky APIs or writing wrapper code.
✅ Use any AI model (OpenAI, etc.) ✅ 20+ public models ✅ Bring your model API keys ✅ Unified usage dashboard ✅ No subscription ✅ Auto retries and fallback ✅ Deploy dedicated models on-prem ✅ Start instantly ✅ No MLOps glue code
Quick start • Features • Examples • Performance • FAQ • Docs
Install LitAI via pip (more options):
pip install litai
Add AI to any Python program in 3 lines:
from litai import LLM
llm = LLM(model="openai/gpt-4")
answer = llm.chat("who are you?")
print(answer)
# I'm an AI by OpenAI
What we love about LitAI is that you can build agents, chatbots and apps in plain Python - no heavy frameworks or magic. Agents can be just simple Python programs with a few decisions made by a model.
Here's a simple agent that reports the latest news:
import re, requests
from litai import LLM
llm = LLM(model="openai/gpt-4o")
# fetch the page and strip the HTML tags to get plain text
website_url = "https://text.npr.org/"
website_text = re.sub(r'<[^>]+>', ' ', requests.get(website_url).text)
response = llm.chat(f"Based on this, what is the latest: {website_text}")
print(response)
We believe the best way to build agents is with normal Python programs and simple “agentic if statements.” That way, 90% of the logic stays deterministic, and the model only steps in when needed. No complex abstractions, no framework magic - just code you can trust and debug.
from litai import LLM
llm = LLM(model="openai/gpt-3.5-turbo")
product_review = "This TV is terrible."
response = llm.chat(f"Is this review good or bad? Reply only with 'good' or 'bad': {product_review}").strip().lower()
if response == "good":
    print("good review")
else:
    print("bad review")
Agentic workflows mostly come down to agentic if statements or classification decisions. You can do this yourself with llm.chat, but we provide two simple shortcuts:
from litai import LLM
llm = LLM()
# shortcut for agentic if statement (can do this yourself with llm.chat if needed)
product_review = "This TV is terrible."
if llm.if_(product_review, "is this a positive review?"):
    print("good review")
else:
    print("bad review")
# shortcut for agentic classification (can do this yourself with llm.chat if needed)
sentiment = llm.classify("This movie was awful.", ["positive", "negative"])
print("Sentiment:", sentiment)
Track usage and spending in your Lightning AI dashboard. Model calls are paid for with Lightning AI credits.
✅ No subscription ✅ 15 free credits (~37M tokens) ✅ Pay as you go for more credits
✅ Use 20+ models (ChatGPT, Claude, etc.)
✅ Monitor all usage in one place
✅ Async support
✅ Auto retries on failure
✅ Auto model switch on failure
✅ Switch models per call
✅ Multi-turn conversation logs
✅ Streaming
✅ Bring your own model (connect your API keys, coming soon...)
✅ Chat logs (coming soon...)
Model APIs can flake or go down entirely. LitAI automatically retries failed calls, and after repeated failures it can fall back to other models you specify, so your app keeps working even when a provider has an outage.
from litai import LLM
llm = LLM(
model="openai/gpt-4",
fallback_models=["google/gemini-2.5-flash", "anthropic/claude-3-5-sonnet-20240620"],
max_retries=4,
)
print(llm.chat("What is a fun fact about space?"))
Streaming
Real-time chat applications feel faster when words appear as they are generated. Streaming delivers the response chunk by chunk so you can display it as it arrives.
from litai import LLM
llm = LLM(model="openai/gpt-4")
for chunk in llm.chat("hello", stream=True):
    print(chunk, end="", flush=True)
Use your own client (like OpenAI)
If you already have your own SDK for calling LLMs (like the OpenAI SDK), you can still use LitAI through the https://lightning.ai/api/v1 endpoint, which tracks usage, billing, and more.
from openai import OpenAI
client = OpenAI(
    base_url="https://lightning.ai/api/v1",
    api_key="LIGHTNING_API_KEY",
)
completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "user", "content": "What is a fun fact about space?"}
    ],
)
print(completion.choices[0].message.content)
Concurrency with async
Python programs that handle multiple requests at once rely on async to avoid blocking. LitAI works with async code, which is especially useful in high-throughput applications like chatbots, APIs, or agent loops.
To enable async behavior, set enable_async=True when initializing the LLM class, then use await llm.chat(...) inside an async function.
import asyncio
from litai import LLM
async def main():
    llm = LLM(model="openai/gpt-4", teamspace="lightning-ai/litai", enable_async=True)
    print(await llm.chat("who are you?"))

if __name__ == "__main__":
    asyncio.run(main())
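Async really pays off when you fan out several requests at once. Here's a minimal sketch (assuming the same enable_async=True setup as above) that uses asyncio.gather to run three chats concurrently:
import asyncio
from litai import LLM

async def main():
    llm = LLM(model="openai/gpt-4", enable_async=True)
    questions = [
        "What is a fun fact about space?",
        "What is a fun fact about the ocean?",
        "What is a fun fact about the brain?",
    ]
    # launch all requests concurrently and wait for every answer
    answers = await asyncio.gather(*(llm.chat(q) for q in questions))
    for question, answer in zip(questions, answers):
        print(question, "->", answer)

if __name__ == "__main__":
    asyncio.run(main())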
Multi-turn conversations
Models only see the messages you send them. To give a model memory of earlier turns, send related messages under the same conversation. This is useful for assistants, summarizers, or research tools that need multi-turn chat history.
Each conversation is identified by a unique name. LitAI stores conversation history separately for each name.
from litai import LLM
llm = LLM(model="openai/gpt-4")
# Continue a conversation across multiple turns
llm.chat("What is Lightning AI?", conversation="intro")
llm.chat("What can it do?", conversation="intro")
print(llm.get_history("intro")) # View all messages from the 'intro' thread
llm.reset_conversation("intro") # Clear conversation history
Create multiple named conversations for different tasks.
from litai import LLM
llm = LLM(model="openai/gpt-4")
llm.chat("Summarize this text", conversation="summarizer")
llm.chat("What's a RAG pipeline?", conversation="research")
print(llm.list_conversations())
Switch models on each call
In some applications you may want to call ChatGPT for one message and Claude for another, using the best model for each task. LitAI lets you switch models dynamically at request time.
Set a default model when initializing LLM and override it with the model parameter only when needed.
from litai import LLM
llm = LLM(model="openai/gpt-4")
# Uses the default model (openai/gpt-4)
print(llm.chat("Who created you?"))
# >> I am a large language model, trained by OpenAI.
# Override the default model for this request
print(llm.chat("Who created you?", model="google/gemini-2.5-flash"))
# >> I am a large language model, trained by Google.
# Uses the default model again
print(llm.chat("Who created you?"))
# >> I am a large language model, trained by OpenAI.
Multiple models, same conversation
One way to cut chat costs with LitAI is to mix models within the same conversation: use a cheap model for easy questions and a more expensive model when a task needs more intelligence.
from litai import LLM
llm = LLM(model="openai/gpt-4")
# use a cheap model for this question
llm.chat("Is this a number or word: '5'", model="google/gemini-2.5-flash", conversation="story")
# go back to the expensive model
llm.chat("Create a story about that number like Lord of the Rings", conversation="story")
print(llm.get_history("story")) # View all messages from the 'story' thread
LitAI does smart routing across a global network of servers, adding only ~25 ms of overhead per API call.
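If you want to sanity-check that overhead on your own workload, a rough approach is to time the same prompt through LitAI's OpenAI-compatible endpoint (shown above) and directly against the provider. This is only a sketch: LIGHTNING_API_KEY and OPENAI_API_KEY are placeholders for your own keys, and per-request model latency will dominate the numbers, so average over several runs.
import time
from openai import OpenAI

def avg_latency(client, model, prompt, runs=5):
    # average wall-clock seconds per request over a few runs
    start = time.perf_counter()
    for _ in range(runs):
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
    return (time.perf_counter() - start) / runs

litai = OpenAI(base_url="https://lightning.ai/api/v1", api_key="LIGHTNING_API_KEY")
direct = OpenAI(api_key="OPENAI_API_KEY")

prompt = "Reply with the word 'hi' only."
print(f"via LitAI: {avg_latency(litai, 'openai/gpt-4o', prompt):.3f}s")
print(f"direct:    {avg_latency(direct, 'gpt-4o', prompt):.3f}s")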
Do I need a subscription to use LitAI? (Nope)
Nope. You can start instantly without a subscription. LitAI is pay-as-you-go and lets you use your own model API keys (like OpenAI, Anthropic, etc.).
Do I need an OpenAI account? (Nope)
Nope. You get access to all models and providers without creating an account with each one.
What happens if a model API fails or goes down?
LitAI automatically retries the same model and can fall back to other models you specify. You’ll get the best chance of getting a response, even during outages.
Can I bring my own API keys for OpenAI, Anthropic, etc.? (Yes)
Yes. You can plug in your own keys for any OpenAI-compatible API.
Can I connect private models? (Yes)
Yes. You can connect any endpoint that supports the OpenAI spec.
Can you deploy a dedicated, private model like Llama for me? (Yes)
Yes. We can deploy dedicated models on any cloud (Lambda, AWS, etc.).
Can you deploy models on-prem? (Yes)
Yes. We can deploy in a dedicated VPC on any cloud or in your own physical data center.
Do deployed models support Kubernetes? (Yes)
Yes. We can deploy with Kubernetes or with the Lightning AI orchestrator, which is custom-built for AI. Whichever you prefer!
How do I pay for the model APIs?
Buy Lightning AI credits on Lightning to pay for the APIs.
Do you add fees?
At this moment we don't add fees on top of the API calls, but that might change in the future.
Are you SOC2, HIPAA compliant? (Yes)
LitAI is built by Lightning AI, whose enterprise AI platform powers teams from Fortune 100 companies to startups. Our platform is fully SOC2 and HIPAA compliant.
LitAI is a community project accepting contributions. Let's build the world's most advanced AI routing engine.