Liquid4All/liquid_client
Python Client for the Liquid API

📦 Installation

pip install -U liquidai

API Key

To access the API, you need to create an API key at labs.liquid.ai with your account. You can find your API key in the profile tab of the Liquid platform (bottom-left icon in the navigation bar).

OpenAI compatible API

See the documentation for the OpenAI- and LangChain-compatible API.
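Because the endpoint is OpenAI-compatible, you can also talk to it with plain HTTP. The sketch below builds such a request with the standard library; the `/chat/completions` path is an assumption based on the OpenAI API shape, not something the Liquid docs above state.

```python
import json
import os
import urllib.request

def build_chat_request(messages, model, base_url="https://labs.liquid.ai/api/v1"):
    """Build a POST request for the (assumed) OpenAI-compatible chat route."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # The key comes from the same LIQUID_API_KEY variable used below.
            "Authorization": f"Bearer {os.environ.get('LIQUID_API_KEY', '')}",
        },
        method="POST",
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) then works like any other OpenAI-style chat completion call.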

💬 Other Liquid API endpoints

  • /complete — a retrieval-augmented completion endpoint that also accepts files.

Set the environment variables LIQUID_URL and LIQUID_API_KEY to the URL and the API key of your Liquid AI subscription, respectively.

🔐 API Keys

The most secure way is to set the environment variables, which the Liquid client picks up automatically.

export LIQUID_URL="https://labs.liquid.ai/api/v1"
export LIQUID_API_KEY="9cba1....."

Alternatively, pass the base_url and api_key parameters to the Client constructor.

To use a different model, pass the model argument (for instance model='sajj-upscale-120-2.0') to the client.complete call.
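The precedence sketched below — explicit constructor arguments first, then the environment variables — is an assumption about how the client resolves its settings; `resolve_liquid_config` is an illustrative helper, not part of the liquidai package.

```python
import os

def resolve_liquid_config(base_url=None, api_key=None):
    """Explicit arguments win; otherwise fall back to the environment.

    Mirrors the assumed behaviour of liquidai.Client, which reads
    LIQUID_URL and LIQUID_API_KEY when no arguments are given.
    """
    base_url = base_url or os.environ.get("LIQUID_URL")
    api_key = api_key or os.environ.get("LIQUID_API_KEY")
    if not base_url or not api_key:
        raise RuntimeError(
            "Set LIQUID_URL and LIQUID_API_KEY, or pass base_url/api_key explicitly."
        )
    return base_url, api_key
```

Failing fast with a clear message beats letting a request die later with an opaque authentication error.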

from liquidai import Client
# Create a client object with the API URL and API key
client = Client()
print("Models: ", client.list_models()) # List all models
# Create a conversation with the model (a list of messages)
chat = [{"role": "user", "content": "Hello world in python!"}]
response = client.complete(chat)
print(f"Response: {response['message']['content']}")

Output:

>>> Models:  ['liquid-preview-0.1', 'liquid-edge-base-0.1', 'liquid-cloud-0.1', 'sajj-upscale-120-2.0']

>>> Response: Here is how to code a Hello World program in Python: print("Hello, world!")

Multi-turn conversations:

chat.append(response["message"]) # add assistant message to conversation
chat.append({"role": "user", "content": "And in C++?"})
response = client.complete(chat)
print(f"Response: {response['message']['content']}")

Output:

>>> #include <iostream>
>>>
>>> int main() {
>>>     std::cout << "Hello, World!" << std::endl;
>>>     return 0;
>>> }
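The multi-turn bookkeeping above can be wrapped in a small helper: each turn appends the assistant's reply and the next user message to the same list. `extend_chat` is a convenience defined here for illustration, not part of the liquidai package.

```python
def extend_chat(chat, response, user_message):
    """Append the assistant reply from `response`, then the next user turn.

    Returns the same list, ready for another client.complete(chat) call.
    """
    chat.append(response["message"])
    chat.append({"role": "user", "content": user_message})
    return chat
```

With it, the example above becomes `chat = extend_chat(chat, response, "And in C++?")` followed by another `client.complete(chat)`.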

📚 Adding Knowledge Bases to the Model

# Let's create an example knowledge base
test_file = "test.txt"
with open(test_file, "w") as f:
    f.write("The name of the CEO of Liquid is Ramin Hasani.")
# Upload the file to the server
response = client.upload_file(test_file)
print(f"Uploaded {test_file} to {response['filename']}")
files = client.list_files()
print(f"Files: {files}")

Output:

>>> Uploaded test.txt to test.txt
>>> Files: ['test.txt']

Next, we can tell the model to use the document we just uploaded:

chat = [
    {"role": "user", "content": "Who is the CEO of Liquid?", "files": ["test.txt"]}
]
response = client.complete(chat)
print(f"Response: {response['message']['content']}")

Output:

>>> Response: The CEO of Liquid is Ramin Hasani.

Removing files: Finally, we can delete the file from the server:

client.delete_file(test_file)
print(f"Deleted {test_file}")

files = client.list_files()
print(f"Files: {files}")

Output:

>>> Deleted test.txt
>>> Files: []
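The upload / ask / delete sequence is easy to bundle so the file is always removed from the server, even if the completion fails. `ask_with_file` is a helper written here for illustration, built only from the calls shown above; it is not a liquidai API.

```python
def ask_with_file(client, path, question):
    """Upload `path`, ask `question` grounded on it, and always clean up."""
    client.upload_file(path)
    try:
        chat = [{"role": "user", "content": question, "files": [path]}]
        return client.complete(chat)["message"]["content"]
    finally:
        client.delete_file(path)
```

The try/finally guarantees `delete_file` runs whether or not `complete` raises, so stale documents never accumulate on the server.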

Evals

Evaluate Liquid models using the EleutherAI lm-evaluation-harness.

  • tasks: evaluation tasks, e.g. mmlu, gsm8k
  • limit: run only the first N examples of each eval dataset (useful for a quick partial test)
  • model: the name of a Liquid model

Steps

  • Install the eval harness:
    pip install lm-eval[openai]
  • Set your Liquid API key:
    export OPENAI_API_KEY=<your liquid api key>
  • Run the evaluation:
    lm_eval --model local-chat-completions --tasks gsm8k --limit 10 --model_args model=liquid-edge-base-0.1,base_url=https://labs.liquid.ai/api/v1

📌 Full Examples
