Merge pull request #60 from microsoft/python

Python

sethjuarez authored Aug 14, 2024
2 parents bdd6bdb + d4ff57e commit 4214b3d
Showing 28 changed files with 2,247 additions and 107 deletions.
119 changes: 98 additions & 21 deletions runtime/prompty/README.md
@@ -19,7 +19,10 @@ authors:
model:
api: chat
configuration:
azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
azure_deployment: ${env:AZURE_OPENAI_DEPLOYMENT:gpt-35-turbo}
sample:
firstName: Jane
lastName: Doe
@@ -46,66 +49,140 @@ Download the [VS Code extension here](https://marketplace.visualstudio.com/items


## Using this Prompty Runtime
The Python runtime is a simple way to run your prompts in Python. The runtime is available as a Python package and can be installed using pip. Depending on the type of prompt you are running, you may need to install additional dependencies. The runtime is designed to be extensible and can be customized to fit your needs.

```bash
# base package
pip install prompty
# with the azure invoker dependencies
pip install prompty[azure]
```

Simple usage example:

```python
import prompty
# import invoker
import prompty.azure

# execute the prompt
response = prompty.execute("path/to/prompty/file")

print(response)
```
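
You can also supply inputs explicitly instead of relying on the `sample` values from the front matter. This mirrors the `inputs` usage shown in the tracing example later in this README (the field names here are just the `firstName`/`lastName` sample fields from the front matter above):

```python
import prompty
# import invoker
import prompty.azure

# supply inputs at execution time instead of using the front matter sample
response = prompty.execute(
    "path/to/prompty/file",
    inputs={"firstName": "Jane", "lastName": "Doe"},
)

print(response)
```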

## Available Invokers
The Prompty runtime comes with a set of built-in invokers that can be used to execute prompts. These include:

- `azure`: Invokes the Azure OpenAI API
- `openai`: Invokes the OpenAI API
- `serverless`: Invokes serverless models (like the ones on GitHub) using the [Azure AI Inference client library](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-inference-readme?view=azure-python-preview); currently only key-based authentication is supported, with managed identity support coming soon
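
Each invoker registers itself when its module is imported, so the import at the top of your script selects which API is called. A minimal sketch (the `prompty.serverless` module name is an assumption, mirroring the `prompty.azure` import shown earlier):

```python
import prompty
# importing an invoker module registers it with the runtime; pick the one
# that matches the `model` configuration in your .prompty file
import prompty.serverless  # assumed module name, mirroring prompty.azure

response = prompty.execute("path/to/serverless/prompty/file")
print(response)
```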


## Using Tracing in Prompty
Prompty supports tracing to help you understand the execution of your prompts. This functionality is customizable, so you can trace the execution of your prompts in whatever way makes sense to you. Prompty has two default tracers built in: `console_tracer` and `PromptyTracer`. The `console_tracer` writes the trace to the console, and the `PromptyTracer` writes the trace to a JSON file. You can also create your own tracer by creating your own hook.

```python
import prompty
# import invoker
import prompty.azure
from prompty.tracer import trace, Tracer, console_tracer, PromptyTracer

# add console tracer
Tracer.add("console", console_tracer)

# add PromptyTracer
json_tracer = PromptyTracer(output_dir="path/to/output")
Tracer.add("PromptyTracer", json_tracer.tracer)

# execute the prompt
response = prompty.execute("path/to/prompty/file")

print(response)
```

You can also bring your own tracer by creating your own tracing hook. The `console_tracer` is the simplest example of a tracer; it writes the trace to the console.
This is what it looks like:

```python
import contextlib
import json
from typing import Any, Callable, Iterator

@contextlib.contextmanager
def console_tracer(name: str) -> Iterator[Callable[[str, Any], None]]:
    try:
        print(f"Starting {name}")
        # the yielded function writes a single key/value entry to the console
        yield lambda key, value: print(f"{key}:\n{json.dumps(value, indent=4)}")
    finally:
        print(f"Ending {name}")
```

It uses a context manager to define the start and end of the trace so you can do whatever setup and teardown you need. The `yield` statement returns a function that you can use to write the trace. The `console_tracer` writes the trace to the console using the `print` function.

The `PromptyTracer` is a more complex example of a tracer: it manages its internal state using a full class. Here's an example of the class-based approach that writes each function trace to a JSON file:

```python
import contextlib
import json
import os
from typing import Any, Callable, Iterator

class SimplePromptyTracer:
    def __init__(self, output_dir: str):
        self.output_dir = output_dir

    @contextlib.contextmanager
    def tracer(self, name: str) -> Iterator[Callable[[str, Any], None]]:
        trace = {}
        try:
            # collect key/value pairs emitted during the traced scope
            yield lambda key, value: trace.update({key: value})
        finally:
            # write the accumulated trace to a JSON file named after the scope
            with open(os.path.join(self.output_dir, f"{name}.json"), "w") as f:
                json.dump(trace, f, indent=4)
```
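
Registering the class-based tracer uses the same `Tracer.add` hook mechanism as the built-in tracers:

```python
# register the class-based hook under a name of your choosing
simple_tracer = SimplePromptyTracer(output_dir="path/to/output")
Tracer.add("simple", simple_tracer.tracer)
```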

The tracing mechanism is supported for all of the prompty runtime internals and can be used to trace the execution of the prompt along with all of the parameters. There is also a `@trace` decorator that can be used to trace the execution of any function external to the runtime. This is provided as a facility to trace the execution of the prompt and whatever supporting code you have.

```python
import prompty
# import invoker
import prompty.azure
from prompty.tracer import trace, Tracer, PromptyTracer

json_tracer = PromptyTracer(output_dir="path/to/output")
Tracer.add("PromptyTracer", json_tracer.tracer)

@trace
def get_customer(customerId):
    return {"id": customerId, "firstName": "Sally", "lastName": "Davis"}

@trace
def get_response(customerId, question, prompt):
    customer = get_customer(customerId)

    result = prompty.execute(
        prompt,
        inputs={"question": question, "customer": customer},
    )
    return {"question": question, "answer": result}
```
In this case, whenever this code is executed, a `.ptrace` file will be created in the `path/to/output` directory. This file will contain the trace of the execution of the `get_response` function, the execution of the `get_customer` function, and the prompty internals that generated the response.
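
Because the trace is written as JSON, you can inspect the output with standard tooling. A minimal sketch, assuming the `.ptrace` files land in the output directory configured above (the exact document structure is version-dependent):

```python
import glob
import json

# read back whatever trace files the run produced
for path in glob.glob("path/to/output/*.ptrace"):
    with open(path) as f:
        trace_data = json.load(f)
    print(path)
    print(json.dumps(trace_data, indent=4))
```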

## OpenTelemetry Tracing
You can add OpenTelemetry tracing to your application using the same hook mechanism. In your application, you might create something like `trace_span` to trace the execution of your prompts:

```python
import contextlib
import json

from opentelemetry import trace as oteltrace

_tracer = "prompty"

@contextlib.contextmanager
def trace_span(name: str):
    tracer = oteltrace.get_tracer(_tracer)
    with tracer.start_as_current_span(name) as span:
        # write each traced key/value pair as a span attribute
        yield lambda key, value: span.set_attribute(
            key, json.dumps(value).replace("\n", "")
        )

# adding this hook to the prompty runtime
Tracer.add("OpenTelemetry", trace_span)
```

This will produce spans during the execution of the prompt that can be sent to an OpenTelemetry collector for further analysis.
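
For the spans to go anywhere, your application still needs to configure an OpenTelemetry tracer provider and exporter. A minimal sketch using the OpenTelemetry SDK, with a console exporter standing in for an OTLP exporter pointed at your collector:

```python
from opentelemetry import trace as oteltrace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# route spans produced by trace_span to an exporter
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
oteltrace.set_tracer_provider(provider)
```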

## CLI
The Prompty runtime also comes with a CLI tool that allows you to run prompts from the command line. The CLI tool is installed with the Python package.
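
A typical invocation looks something like this (a sketch; the exact flags are an assumption, so check `prompty --help` for your version):

```bash
# execute a prompty file from the command line
prompty -s path/to/prompty/file
```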
33 changes: 31 additions & 2 deletions runtime/prompty/pdm.lock


6 changes: 2 additions & 4 deletions runtime/prompty/prompty/__init__.py
@@ -3,8 +3,8 @@
from pathlib import Path
from typing import Dict, List, Union

from .tracer import trace
from .core import (
from prompty.tracer import trace
from prompty.core import (
Frontmatter,
InvokerFactory,
ModelSettings,
@@ -16,8 +16,6 @@

from .renderers import *
from .parsers import *
from .executors import *
from .processors import *


def load_global_config(
3 changes: 3 additions & 0 deletions runtime/prompty/prompty/azure/__init__.py
@@ -0,0 +1,3 @@
# __init__.py
from .executor import AzureOpenAIExecutor
from .processor import AzureOpenAIProcessor
runtime/prompty/prompty/azure/executor.py
@@ -2,15 +2,16 @@
import importlib.metadata
from typing import Iterator
from openai import AzureOpenAI
from .core import Invoker, InvokerFactory, Prompty, PromptyStream
from ..core import Invoker, InvokerFactory, Prompty, PromptyStream

VERSION = importlib.metadata.version("prompty")


@InvokerFactory.register_executor("azure")
@InvokerFactory.register_executor("azure_openai")
class AzureOpenAIExecutor(Invoker):
""" Azure OpenAI Executor """
"""Azure OpenAI Executor"""

def __init__(self, prompty: Prompty) -> None:
super().__init__(prompty)
kwargs = {
@@ -40,7 +41,7 @@ def __init__(self, prompty: Prompty) -> None:

self.client = AzureOpenAI(
default_headers={
"User-Agent": f"prompty{VERSION}",
"User-Agent": f"prompty/{VERSION}",
"x-ms-useragent": f"prompty/{VERSION}",
},
**kwargs,
@@ -51,7 +52,7 @@ def __init__(self, prompty: Prompty) -> None:
self.parameters = self.prompty.model.parameters

def invoke(self, data: any) -> any:
""" Invoke the Azure OpenAI API
"""Invoke the Azure OpenAI API
Parameters
----------
runtime/prompty/prompty/azure/processor.py
@@ -1,22 +1,14 @@
from typing import Iterator
from pydantic import BaseModel
from openai.types.completion import Completion
from openai.types.chat.chat_completion import ChatCompletion
from .core import Invoker, InvokerFactory, Prompty, PromptyStream
from ..core import Invoker, InvokerFactory, Prompty, PromptyStream, ToolCall
from openai.types.create_embedding_response import CreateEmbeddingResponse


class ToolCall(BaseModel):
id: str
name: str
arguments: str


@InvokerFactory.register_processor("openai")
@InvokerFactory.register_processor("azure")
@InvokerFactory.register_processor("azure_openai")
class OpenAIProcessor(Invoker):
"""OpenAI/Azure Processor"""
class AzureOpenAIProcessor(Invoker):
"""Azure OpenAI Processor"""

def __init__(self, prompty: Prompty) -> None:
super().__init__(prompty)
@@ -62,10 +54,13 @@ def invoke(self, data: any) -> any:

def generator():
for chunk in data:
if len(chunk.choices) == 1 and chunk.choices[0].delta.content != None:
if (
len(chunk.choices) == 1
and chunk.choices[0].delta.content != None
):
content = chunk.choices[0].delta.content
yield content

return PromptyStream("OpenAIProcessor", generator())
return PromptyStream("AzureOpenAIProcessor", generator())
else:
return data