{/* This file is autogenerated, do not edit manually, see: https://github.com/basetenlabs/truss/tree/main/docs/chains/doc_gen */}
APIs for creating user-defined Chainlets.
Base class for all chainlets.
Inheriting from this class adds validations to make sure subclasses adhere to the chainlet pattern and facilitates remote chainlet deployment.
Refer to the docs and this example chainlet for more guidance on how to create subclasses.
Sets a “symbolic marker” to indicate to the framework that a chainlet is a dependency of another chainlet. The return value of depends is intended to be used as a default argument in a chainlet’s __init__ method.
When deploying a chain remotely, a corresponding stub to the remote is injected in its place. In run_local mode an instance of a local chainlet is injected.
Refer to the docs and this example chainlet for more guidance on how to make one chainlet depend on another chainlet.
Despite the type annotation, this does not immediately provide a chainlet instance. Only when deploying remotely or using run_local is a chainlet instance provided.
Parameters:
Name | Type | Description |
---|---|---|
chainlet_cls | Type[ChainletT] | The chainlet class of the dependency. |
retries | int | The number of times to retry the remote chainlet in case of failures (e.g. due to transient network issues). For streaming, retries are only made if the request fails before streaming any results back. Failures mid-stream are not retried. |
timeout_sec | int | Timeout for the HTTP request to this chainlet. |
use_binary | bool | Whether to send data in binary format. This can give a parsing speedup and message size reduction (~25%) for numpy arrays. Use NumpyArrayField as a field type on pydantic models for integration and set this option to True. For simple text data, there is no significant benefit. |
- Returns: A “symbolic marker” to be used as a default argument in a chainlet’s initializer.
- Return type: ChainletT
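For illustration, a minimal sketch of how depends wires one chainlet into another (the class names are made up for this example):
import truss_chains as chains

class Tokenizer(chains.ChainletBase):
    def run_remote(self, text: str) -> list[str]:
        return text.split()

class Pipeline(chains.ChainletBase):
    # The marker returned by `depends` is replaced at instantiation time:
    # remotely with a stub, in `run_local` with a local Tokenizer instance.
    def __init__(self, tokenizer=chains.depends(Tokenizer, retries=2)):
        self._tokenizer = tokenizer

    def run_remote(self, text: str) -> int:
        return len(self._tokenizer.run_remote(text))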
Sets a “symbolic marker” for injecting a context object at runtime.
Refer to the docs and this example chainlet for more guidance on the __init__ signature of chainlets.
Despite the type annotation, this does not immediately provide a context instance. Only when deploying remotely or using run_local is a context instance provided.
- Returns: A “symbolic marker” to be used as a default argument in a chainlet’s initializer.
- Return type: DeploymentContext
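A minimal sketch of the typical pattern (the secret name is illustrative):
import truss_chains as chains

class NeedsToken(chains.ChainletBase):
    def __init__(self, context: chains.DeploymentContext = chains.depends_context()):
        # At runtime `context` is a real DeploymentContext instance.
        self._access_token = context.secrets["hf_access_token"]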
Bases: pydantic.BaseModel
Bundles config values and resources needed to instantiate Chainlets.
The context can optionally be added as a trailing argument in a Chainlet’s __init__ method and then used to set up the chainlet (e.g. using a secret as an access token for downloading model weights).
Parameters:
Name | Type | Description |
---|---|---|
data_dir | Path \| None | The directory where the chainlet can store and access data, e.g. for downloading model weights. |
chainlet_to_service | Mapping[str, [DeployedServiceDescriptor](#truss_chains.DeployedServiceDescriptor)] | A mapping from chainlet names to service descriptors. This is used to create RPC sessions to dependency chainlets. It contains only the chainlet services that are dependencies of the current chainlet. |
secrets | MappingNoIter[str,str] | A mapping from secret names to secret values. It contains only the secrets that are listed in remote_config.assets.secret_keys of the current chainlet. |
environment | [Environment](#truss_chains.definitions.Environment) \| None | The environment that the chainlet is deployed in. None if the chainlet is not associated with an environment. |
chainlet_to_service : Mapping[str, DeployedServiceDescriptor]
environment : Environment | None
- Return type: str
- Parameters: chainlet_name (str)
- Return type: DeployedServiceDescriptor
Bases: pydantic.BaseModel
The environment the chainlet is deployed in.
- Parameters: name (str) – The name of the environment.
Bases: pydantic.BaseModel
Parameters:
Name | Type | Description |
---|---|---|
enable_b10_tracing | bool | Enables baseten-internal trace data collection. This helps baseten engineers better analyze chain performance in case of issues. It is independent of a potentially user-configured tracing instrumentation. Turning this on could add performance overhead. |
env_variables | Mapping[str,str] | Static environment variables available to the deployed chainlet. |
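As a sketch, these options are set on a chainlet’s RemoteConfig (documented below); the environment variable is illustrative:
import truss_chains as chains

class MyChainlet(chains.ChainletBase):
    remote_config = chains.RemoteConfig(
        options=chains.ChainletOptions(
            enable_b10_tracing=True,
            env_variables={"OMP_NUM_THREADS": "2"},
        )
    )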
Bases: pydantic.BaseModel
Options to customize RPCs to dependency chainlets.
Parameters:
Name | Type | Description |
---|---|---|
retries | int | The number of times to retry the remote chainlet in case of failures (e.g. due to transient network issues). For streaming, retries are only made if the request fails before streaming any results back. Failures mid-stream are not retried. |
timeout_sec | int | Timeout for the HTTP request to this chainlet. |
use_binary | bool | Whether to send data in binary format. This can give a parsing speedup and message size reduction (~25%) for numpy arrays. Use NumpyArrayField as a field type on pydantic models for integration and set this option to True. For simple text data, there is no significant benefit. |
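For example (values illustrative), options can be constructed once and passed to a stub via StubBase.from_url, documented below:
import truss_chains as chains

options = chains.RPCOptions(retries=3, timeout_sec=60, use_binary=True)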
Decorator to mark a chainlet as the entrypoint of a chain.
This decorator can be applied to one chainlet in a source file; the CLI push command then simplifies: only the file, not the class within it, must be specified.
Optionally, a display name for the Chain (not the Chainlet) can be set (effectively giving a custom default value for the --name arg of the CLI push command).
Example usage:
import truss_chains as chains
@chains.mark_entrypoint
class MyChainlet(ChainletBase):
...
# OR with custom Chain name.
@chains.mark_entrypoint("My Chain Name")
class MyChainlet(ChainletBase):
...
These data structures specify for each chainlet how it gets deployed remotely, e.g. dependencies and compute resources.
Bases: pydantic.BaseModel
Bundles config values needed to deploy a chainlet remotely.
This is specified as a class variable for each chainlet class, e.g.:
import truss_chains as chains
class MyChainlet(chains.ChainletBase):
remote_config = chains.RemoteConfig(
docker_image=chains.DockerImage(
pip_requirements=["torch==2.0.1", ...]
),
compute=chains.Compute(cpu_count=2, gpu="A10G", ...),
assets=chains.Assets(secret_keys=["hf_access_token"], ...),
)
Parameters:
Name | Type | Description |
---|---|---|
docker_image | [DockerImage](#truss_chains.DockerImage) | |
compute | [Compute](#truss_chains.Compute) | |
assets | [Assets](#truss_chains.Assets) | |
name | str \| None | |
options | [ChainletOptions](#truss_chains.ChainletOptions) | |
assets : Assets
compute : Compute
docker_image : DockerImage
- Return type: AssetSpec
- Return type: ComputeSpec
options : ChainletOptions
Bases: pydantic.BaseModel
Configures the docker image in which a remote chainlet is deployed.
Any paths are relative to the source file where DockerImage is defined and must be created with the helper function make_abs_path_here. This allows you, for example, to organize chainlets in different (potentially nested) modules and keep their requirements files right next to their Python source files.
Parameters:
Name | Type | Description |
---|---|---|
base_image | [BasetenImage](#truss_chains.BasetenImage) \| [CustomImage](#truss_chains.CustomImage) | The base image used by the chainlet. Other dependencies and assets are included as additional layers on top of that image. You can choose a Baseten default image for a supported python version (e.g. BasetenImage.PY311), which will also include GPU drivers if needed, or provide a custom image (e.g. CustomImage(image="python:3.11-slim")). |
pip_requirements_file | AbsPath \| None | Path to a file containing pip requirements. The file content is naively concatenated with pip_requirements. |
pip_requirements | list[str] | A list of pip requirements to install. The items are naively concatenated with the content of the pip_requirements_file. |
apt_requirements | list[str] | A list of apt requirements to install. |
data_dir | AbsPath \| None | Data from this directory is copied into the docker image and accessible to the remote chainlet at runtime. |
external_package_dirs | list[AbsPath] \| None | A list of directories containing additional python packages outside the chain’s workspace dir, e.g. a shared library. This code is copied into the docker image and importable at runtime. |
base_image : BasetenImage | CustomImage
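A sketch of a typical configuration, assuming a requirements.txt file next to the chainlet source file:
import truss_chains as chains

docker_image = chains.DockerImage(
    base_image=chains.BasetenImage.PY311,
    pip_requirements_file=chains.make_abs_path_here("requirements.txt"),
    pip_requirements=["torch==2.0.1"],
    apt_requirements=["ffmpeg"],
)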
Bases: Enum
Default images, curated by baseten, for different python versions. If a Chainlet uses GPUs, drivers will be included in the image.
Bases: pydantic.BaseModel
Configures the usage of a custom image hosted on dockerhub.
Parameters:
Name | Type | Description |
---|---|---|
image | str | |
python_executable_path | str \| None | |
docker_auth | DockerAuthSettings \| None | |
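A sketch using a custom image (the executable path is illustrative):
import truss_chains as chains

docker_image = chains.DockerImage(
    base_image=chains.CustomImage(
        image="python:3.11-slim",
        python_executable_path="/usr/local/bin/python3",
    )
)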
Specifies which compute resources a chainlet has in the remote deployment.
Not all combinations can be exactly satisfied by available hardware; in some cases more powerful machine types are chosen to make sure requirements are met, which can mean over-provisioning. Refer to the baseten instance reference.
Parameters:
Name | Type | Description |
---|---|---|
cpu_count | int | Minimum number of CPUs to allocate. |
memory | str | Minimum memory to allocate, e.g. “2Gi” (2 gibibytes). |
gpu | str \| Accelerator \| None | GPU accelerator type, e.g. “A10G”, “A100”, refer to the truss config for more choices. |
gpu_count | int | Number of GPUs to allocate. |
predict_concurrency | int \| Literal['cpu_count'] | Number of concurrent requests a single replica of a deployed chainlet handles. |
Concurrency concepts are explained in this guide. It is important to understand the difference between predict_concurrency and the concurrency target (used for autoscaling, i.e. adding or removing replicas). Furthermore, the predict_concurrency of a single instance is implemented in two ways:
- Via Python’s asyncio, if run_remote is an async def. This requires that run_remote yields to the event loop.
- With a threadpool if it’s a synchronous function. This requires that the threads don’t have significant CPU load (due to the GIL).
- Return type: ComputeSpec
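A sketch with illustrative values:
import truss_chains as chains

compute = chains.Compute(
    cpu_count=2,
    memory="16Gi",
    gpu="A10G",
    gpu_count=1,
    predict_concurrency=8,
)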
Specifies which assets a chainlet can access in the remote deployment.
For example, model weight caching can be used like this:
import truss_chains as chains
from truss.base import truss_config
mistral_cache = truss_config.ModelRepo(
repo_id="mistralai/Mistral-7B-Instruct-v0.2",
allow_patterns=["*.json", "*.safetensors", ".model"]
)
chains.Assets(cached=[mistral_cache], ...)
See truss caching guide for more details on caching.
Parameters:
Name | Type | Description |
---|---|---|
cached | Iterable[ModelRepo] | One or more truss_config.ModelRepo objects. |
secret_keys | Iterable[str] | Names of secrets stored on baseten, that the chainlet should have access to. You can manage secrets on baseten here. |
external_data | Iterable[ExternalDataItem] | Data to be downloaded from public URLs and made available in the deployment (via context.data_dir). See [here](https://docs.baseten.co/reference/config#external-data) for more details. |
Returns parsed and validated assets.
- Return type: AssetSpec
General framework and helper functions.
Deploys a chain remotely (with all dependent chainlets).
Parameters:
Name | Type | Description |
---|---|---|
entrypoint | Type[ABCChainlet] | The chainlet class that serves as the entrypoint to the chain. |
chain_name | str | The name of the chain. |
publish | bool | Whether to publish the chain as a published deployment (it is a draft deployment otherwise). |
promote | bool | Whether to promote the chain to be the production deployment (this implies publishing as well). |
only_generate_trusses | bool | Used for debugging purposes. If set to True, only the underlying truss models for the chainlets are generated in /tmp/.chains_generated. |
remote | str | Name of a remote config in .trussrc. If not provided, it is prompted for interactively. |
environment | str \| None | The name of an environment to promote deployment into. |
progress_bar | Type[progress.Progress] \| None | Optional rich.progress.Progress if output is desired. |
- Returns: A chain service handle to the deployed chain.
- Return type: BasetenChainService
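A sketch of programmatic deployment and invocation (entrypoint class and payload are illustrative):
import truss_chains as chains

# `MyChainlet` is the entrypoint chainlet defined elsewhere in the file.
service = chains.push(MyChainlet, chain_name="my-chain", publish=True)
response = service.run_remote({"text": "hello"})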
Bases: ABC
Handle for a deployed chain.
A ChainService is created and returned when using push. It bundles the individual services for each chainlet in the chain, and provides utilities to query their status, invoke the entrypoint, etc.
Parameters:
Name | Type | Description |
---|---|---|
name | str | |
entrypoint_service | TrussService | |
Fake JSON example data that matches the entrypoint’s input schema. This property must be externally populated.
- Raises: ValueError – If fake data was not set.
Queries the statuses of all chainlets in the chain.
- Returns: List of DeployedChainlet for each chainlet.
- Return type: list[DeployedChainlet]
Invokes the entrypoint with JSON data.
- Returns: The JSON response.
- Parameters: json (Dict)
- Return type: Any
URL to invoke the entrypoint.
Link to status page on Baseten.
Helper to specify file paths relative to the immediately calling module.
E.g. if you have a project structure like this:
root/
  chain.py
  common_requirements.txt
  sub_package/
    chainlet.py
    chainlet_requirements.txt
You can now, in root/sub_package/chainlet.py, point to the requirements files like this:
shared = make_abs_path_here("../common_requirements.txt")
specific = make_abs_path_here("chainlet_requirements.txt")
This helper uses the directory of the immediately calling module as an absolute reference point for resolving the file location. Therefore, you MUST NOT wrap the instantiation of make_abs_path_here into a function (e.g. applying decorators) or use dynamic code execution.
Ok:
def foo(path: AbsPath):
abs_path = path.abs_path
foo(make_abs_path_here("./somewhere"))
Not Ok:
def foo(path: str):
dangerous_value = make_abs_path_here(path).abs_path
foo("./somewhere")
- Parameters: file_path (str)
- Return type: AbsPath
Context manager for local debug execution of a chain.
The arguments only need to be provided if the chainlets explicitly access any of the corresponding fields of DeploymentContext.
Parameters:
Name | Type | Description |
---|---|---|
secrets | Mapping[str,str] \| None | A dict of secret keys and values to provide to the chainlets. |
data_dir | Path \| str \| None | Path to a directory with data files. |
chainlet_to_service | Mapping[str, [DeployedServiceDescriptor](#truss_chains.DeployedServiceDescriptor)] | A dict of chainlet names to service descriptors. |
- Return type: ContextManager[None]
Example usage (as trailing main section in a chain file):
import os
import truss_chains as chains
class HelloWorld(chains.ChainletBase):
...
if __name__ == "__main__":
with chains.run_local(
secrets={"some_token": os.environ["SOME_TOKEN"]},
chainlet_to_service={
"SomeChainlet": chains.DeployedServiceDescriptor(
name="SomeChainlet",
display_name="SomeChainlet",
predict_url="https://...",
options=chains.RPCOptions(),
)
},
):
hello_world_chain = HelloWorld()
result = hello_world_chain.run_remote(max_value=5)
print(result)
Refer to the local debugging guide for more details.
Bases: ServiceDescriptor
Parameters:
Name | Type | Description |
---|---|---|
name | str | |
display_name | str | |
options | [RPCOptions](#truss_chains.RPCOptions) | |
predict_url | str | |
Bases: BasetenSession, ABC
Base class for stubs that invoke remote chainlets.
Extends BasetenSession with methods for data serialization, de-serialization and invoking other endpoints.
It is used internally for RPCs to dependency chainlets, but it can also be used in user code for wrapping a deployed truss model into the Chains framework. It flexibly supports JSON and pydantic inputs and outputs. Example usage:
from typing import Any

import pydantic
import truss_chains as chains

class WhisperInput(pydantic.BaseModel):
    ...

class WhisperOutput(pydantic.BaseModel):
    ...

class DeployedWhisper(chains.StubBase):
    # Input JSON, output JSON.
    async def run_remote(self, audio_b64: str) -> Any:
        resp = await self.predict_async(inputs={"audio": audio_b64})
        # resp == {"text": ..., "language": ...}
        return resp

    # OR input JSON, output pydantic model.
    async def run_remote(self, audio_b64: str) -> WhisperOutput:
        return await self.predict_async(
            inputs={"audio": audio_b64}, output_model=WhisperOutput)

    # OR input and output are pydantic models.
    async def run_remote(self, data: WhisperInput) -> WhisperOutput:
        return await self.predict_async(data, output_model=WhisperOutput)

class MyChainlet(chains.ChainletBase):
    def __init__(self, ..., context=chains.depends_context()):
        ...
        self._whisper = DeployedWhisper.from_url(
            WHISPER_URL,
            context,
            options=chains.RPCOptions(retries=3),
        )

    async def run_remote(self, ...):
        await self._whisper.run_remote(...)
Parameters:
Name | Type | Description |
---|---|---|
service_descriptor |
[DeployedServiceDescriptor](#truss_chains.DeployedServiceDescriptor | Contains the URL and other configuration. |
api_key |
str | A baseten API key to authorize requests. |
Factory method, convenient to use in a chainlet’s __init__ method.
Parameters:
Name | Type | Description |
---|---|---|
predict_url | str | URL to the predict endpoint of another chain / truss model. |
context | [DeploymentContext](#truss_chains.DeploymentContext) | Deployment context object, obtained in the chainlet’s __init__. |
options | [RPCOptions](#truss_chains.RPCOptions) | RPC options, e.g. retries. |
- Parameters: inputs (InputT)
- Return type: AsyncIterator[bytes]
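The fragment above documents the stub’s streaming endpoint; a sketch of forwarding such a stream, assuming the method is named predict_async_stream (as in the truss_chains stub API) and that it is awaited to obtain the iterator:
from typing import AsyncIterator

import truss_chains as chains

class DeployedStreamer(chains.StubBase):
    async def run_remote(self, text: str) -> AsyncIterator[bytes]:
        # Obtain the byte stream from the dependency and forward chunks.
        stream = await self.predict_async_stream({"text": text})
        async for chunk in stream:
            yield chunk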
Bases: pydantic.BaseModel
When a remote chainlet raises an exception, this pydantic model contains information about the error and stack trace and is included in JSON form in the error response.
Parameters:
Name | Type | Description |
---|---|---|
exception_cls_name | str | |
exception_module_name | str \| None | |
exception_message | str | |
user_stack_trace | list[StackFrame] | |
Format the error for printing, similar to how Python formats exceptions with stack traces.
- Return type: str