
feat(backend): Track LLM token usage + LLM blocks cleanup #8367

Merged · 10 commits · Oct 22, 2024
autogpt_platform/backend/backend/blocks/__init__.py (5 changes: 3 additions & 2 deletions)

@@ -2,6 +2,7 @@
 import os
 import re
 from pathlib import Path
+from typing import Type

 from backend.data.block import Block

@@ -24,7 +25,7 @@
     AVAILABLE_MODULES.append(module)

 # Load all Block instances from the available modules
-AVAILABLE_BLOCKS = {}
+AVAILABLE_BLOCKS: dict[str, Type[Block]] = {}


 def all_subclasses(clz):

@@ -76,6 +77,6 @@ def all_subclasses(clz):
     if block.disabled:
         continue

-    AVAILABLE_BLOCKS[block.id] = block
+    AVAILABLE_BLOCKS[block.id] = cls

 __all__ = ["AVAILABLE_MODULES", "AVAILABLE_BLOCKS"]
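The change above switches the registry from storing block *instances* (`= block`) to storing block *classes* (`= cls`), with the type annotation `dict[str, Type[Block]]` making that explicit. A minimal self-contained sketch of the same pattern, using hypothetical stand-in classes (`Block`, `LlmBlock`, `BrokenBlock` are simplifications, not the actual backend code):

```python
from typing import Type


class Block:
    """Stand-in for backend.data.block.Block (hypothetical simplification)."""
    id: str = ""
    disabled: bool = False


class LlmBlock(Block):
    id = "llm-block"


class BrokenBlock(Block):
    id = "broken-block"
    disabled = True  # disabled blocks are skipped during registration


def all_subclasses(clz: type) -> list[type]:
    # Recursively collect every subclass, not just direct children,
    # since __subclasses__() only returns one level.
    subclasses = clz.__subclasses__()
    for sub in clz.__subclasses__():
        subclasses += all_subclasses(sub)
    return subclasses


# Map each block's id to its *class* (not an instance), mirroring the
# PR's change from `= block` to `= cls`.
AVAILABLE_BLOCKS: dict[str, Type[Block]] = {}
for cls in all_subclasses(Block):
    block = cls()  # instantiate once just to read id/disabled
    if block.disabled:
        continue
    AVAILABLE_BLOCKS[block.id] = cls
```

Storing the class rather than an instance lets callers construct a fresh block per execution instead of sharing one mutable instance across runs.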