diff --git a/README.md b/README.md
index 30119d4..62534f3 100644
--- a/README.md
+++ b/README.md
@@ -24,10 +24,10 @@
 Using this convenience API, requesting text completion from an
 already loaded LLM is as straightforward as:
 
 ```python
-import lmstudio as lm
+import lmstudio as lms
 
-llm = lm.llm()
-llm.complete("Once upon a time,")
+model = lms.llm()
+model.complete("Once upon a time,")
 ```
 
 Requesting a chat response instead only requires the extra step of
@@ -35,19 +35,19 @@
 setting up a `Chat` helper to manage the chat history and include
 it in response prediction requests:
 
 ```python
-import lmstudio as lm
+import lmstudio as lms
 
 EXAMPLE_MESSAGES = (
     "My hovercraft is full of eels!",
     "I will not buy this record, it is scratched."
 )
 
-llm = lm.llm()
-chat = lm.Chat("You are a helpful shopkeeper assisting a foreign traveller")
+model = lms.llm()
+chat = lms.Chat("You are a helpful shopkeeper assisting a foreign traveller")
 
 for message in EXAMPLE_MESSAGES:
     chat.add_user_message(message)
     print(f"Customer: {message}")
-    response = llm.respond(chat)
+    response = model.respond(chat)
     chat.add_assistant_response(response)
     print(f"Shopkeeper: {response}")
 ```
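
For trying the renamed API end to end, here is a minimal sketch that combines the two updated snippets into a single interactive loop. It assumes only the calls visible in this patch (`lms.llm()`, `lms.Chat`, `respond`, `add_user_message`, `add_assistant_response`); the `while` loop and the blank-line exit condition are illustrative additions, not part of the library.

```python
# Minimal sketch exercising the renamed API from this patch.
# Only calls shown in the diff above are used; the interactive
# loop and blank-input exit condition are illustrative.
import lmstudio as lms

model = lms.llm()  # handle for an already loaded model, as in the patch
chat = lms.Chat("You are a helpful shopkeeper assisting a foreign traveller")

while True:
    message = input("Customer: ")
    if not message:  # blank line ends the session
        break
    chat.add_user_message(message)
    response = model.respond(chat)
    chat.add_assistant_response(response)
    print(f"Shopkeeper: {response}")
```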