Different models per camera #148

Merged
26 commits merged on Apr 13, 2021
23da089
First endpoints tests (Routers: App and Config)
JFer11 Dec 20, 2020
919363c
First endpoints tests (Routers: App and Config) II
JFer11 Dec 21, 2020
6681c8d
Changes based on comments
JFer11 Dec 23, 2020
dd4155a
First Tests, Refactored
JFer11 Jan 14, 2021
a67a7b5
First endpoints tests (Routers: App and Config) III
JFer11 Jan 21, 2021
da1c868
Merge branch 'tests_api_processor_routers'
JFer11 Jan 25, 2021
bcd8bfd
Merge branch 'master' of github.com:neuralet/smart-social-distancing
JFer11 Jan 29, 2021
a355d86
Merge branch 'master' of github.com:xmartlabs/smart-social-distancing
JFer11 Feb 15, 2021
2bc1b56
Merge branch 'master' of github.com:xmartlabs/smart-social-distancing
JFer11 Mar 2, 2021
06da81d
Merge branch 'master' of github.com:xmartlabs/smart-social-distancing
JFer11 Mar 10, 2021
313baaf
Merge branch 'master' of github.com:xmartlabs/smart-social-distancing
JFer11 Mar 15, 2021
0114e8f
Merge branch 'master' of github.com:xmartlabs/smart-social-distancing
JFer11 Mar 18, 2021
d398543
POC Support different models
JFer11 Mar 19, 2021
c77bcf4
Changes based on comments
JFer11 Mar 23, 2021
02dc9c5
Progress in modify model endpoint
JFer11 Mar 24, 2021
09ca19d
progress in validation
JFer11 Mar 25, 2021
f39e512
progress whole endpoint
JFer11 Mar 29, 2021
b5fa4c3
tests for the endpoint and a couple of fixes
JFer11 Mar 30, 2021
08647cc
README updated
JFer11 Mar 30, 2021
6f419dd
progress based on comments
JFer11 Mar 31, 2021
9250a45
progress based on comments II
JFer11 Mar 31, 2021
8292c38
progress based on comments III
JFer11 Apr 1, 2021
cbf01e7
progress based on comments IV
JFer11 Apr 2, 2021
934b5c2
progress based on comments V
JFer11 Apr 5, 2021
d364a69
little change
JFer11 Apr 7, 2021
3004dbc
mathi comments
JFer11 Apr 12, 2021
9 changes: 9 additions & 0 deletions README.md
@@ -564,6 +564,14 @@ All the configurations are grouped in *sections* and some of them can vary depen
- `BackupInterval`: Expressed in minutes. Defines the time interval desired to back up the raw data.
- `BackupS3Bucket`: Configures the S3 Bucket used to store the backups.


#### Use different models per camera
By default, all video streams are processed using the same ML model.
When a processing thread starts running, it checks whether a configuration .json file exists at the path /repo/data/processor/config/sources/<camera_id>/ml_models/model_<device>.json.
If no custom configuration is detected, a file is generated using the default values from the `[Detector]` section, documented above.
These JSON files define which ML model is used to process the corresponding stream, and can be modified either manually or through the `/ml_model` endpoint documented below. Please note that models whose location or name differs from the ones used by the `./download_` scripts must specify their location in the field `file_path`.
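
For reference, a per-camera model file has the following shape (an illustrative example; the variable names follow the `MLModelDTO` introduced in this PR, and the values depend on your `[Detector]` settings):

```json
{
  "model_name": "mobilenet_ssd_v2",
  "variables": {
    "ImageSize": "300,300,3",
    "ModelPath": "",
    "ClassID": 1,
    "MinScore": 0.25
  }
}
```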


### API usage
After you run the processor on your node, you can use the exposed API to control the Processor's Core, where all the processing is done.

@@ -585,6 +593,7 @@ The available endpoints are grouped in the following subapis:
- `/export`: an endpoint to export (in zip format) all the data generated by the processor.
- `/slack`: a set of endpoints required to configure Slack correctly in the processor. We recommend using these endpoints from the [UI](https://app.lanthorn.ai) instead of calling them directly.
- `/auth`: a set of endpoints required to configure OAuth2 in the processors' endpoints.
- `/ml_model`: an endpoint to edit the ML model, and its parameters, used to process a given camera's video feed.

Additionally, the API exposes 2 endpoints to stop/start the video processing:
- `PUT PROCESSOR_IP:PROCESSOR_PORT/start-process-video`: Sends command `PROCESS_VIDEO_CFG` to core and returns the response. It starts processing the video addressed in the configuration file. If the response is `true`, the core is going to try to process the video (no guarantee that it will), and if the response is `false`, the process cannot be started now (e.g. another process was already requested and is running).
127 changes: 127 additions & 0 deletions api/models/ml_model.py
@@ -0,0 +1,127 @@
from pydantic import Field, validator, root_validator
from typing import Optional

from .base import SnakeModel


MODELS_DEVICES = {
    "Jetson": ["ssd_mobilenet_v2_coco", "ssd_mobilenet_v2_pedestrian_softbio", "openpifpaf_tensorrt"],
    "EdgeTPU": ["mobilenet_ssd_v2", "pedestrian_ssd_mobilenet_v2", "pedestrian_ssdlite_mobilenet_v2", "posenet"],
    "Dummy": ["openvino", "openpifpaf_tensorrt", "mobilenet_ssd_v2", "openpifpaf", "yolov3", "ssd_mobilenet_v2_coco",
              "ssd_mobilenet_v2_pedestrian_softbio", "posenet", "pedestrian_ssd_mobilenet_v2",
              "pedestrian_ssdlite_mobilenet_v2"],  # All available models.
    "x86": ["mobilenet_ssd_v2", "openvino", "openpifpaf", "openpifpaf_tensorrt", "yolov3"],
    "x86-gpu": ["mobilenet_ssd_v2", "openvino", "openpifpaf", "openpifpaf_tensorrt", "yolov3"]
}


class MLModelDTO(SnakeModel):
    device: str = Field(example="Jetson")
    name: str = Field(example="openvino")
    image_size: str = Field(example="300,300,3")
    model_path: str = Field("")
    class_id: int = Field(1)
    min_score: float = Field(0.25, example=0.30)
    tensorrt_precision: Optional[int] = Field(None, example=32)

    @validator("device")
    def validate_device(cls, device):
        if device not in MODELS_DEVICES:
            raise ValueError('Invalid device. Try one of the following: "x86", "x86-gpu", "Jetson", "EdgeTPU", '
                             '"Dummy".')
        return device

    @validator("tensorrt_precision")
    def validate_tensorrt_precision(cls, tensorrt_precision):
        if tensorrt_precision is not None and tensorrt_precision not in [16, 32]:
            raise ValueError('Invalid tensorrt_precision. Accepted values: 16, 32.')
        return tensorrt_precision

    @validator("image_size")
    def validate_image_size(cls, image_size):
        values = image_size.split(",")
        if len(values) != 3:
            raise ValueError('ImageSize must be a string with 3 numbers separated by commas. Ex: "30,30,40".')
        try:
            [int(x) for x in values]
        except ValueError:
            raise ValueError('ImageSize must be a string with 3 numbers separated by commas. Ex: "30,30,40".')
        return image_size

    # Root validators run after all field validators succeed.

    @root_validator(skip_on_failure=True)
    def check_models_and_device(cls, values):
        device = values.get("device")
        if values.get("name") not in MODELS_DEVICES[device]:
            raise ValueError(f'The device {device} only supports the following models: {MODELS_DEVICES[device]}.')
        return values

    @root_validator(skip_on_failure=True)
    def check_variables_for_models(cls, values):
        name = values.get("name")
        width, height, channels = (int(x) for x in values.get("image_size").split(","))

        if name == "openpifpaf_tensorrt":
            if values.get("tensorrt_precision") is None:
                raise ValueError('The model "openpifpaf_tensorrt" requires the parameter "tensorrt_precision", '
                                 'which can be either 16 or 32.')
            if width % 16 != 1 or height % 16 != 1:
                raise ValueError('The first two values of ImageSize must be multiples of 16 plus 1 for '
                                 '"openpifpaf_tensorrt". Ex: "641,369,3".')

        elif name == "posenet":
            if (width, height, channels) not in [(1281, 721, 3), (641, 481, 3), (481, 353, 3)]:
                raise ValueError('ImageSize must be one of the following options: "1281,721,3", "641,481,3" or '
                                 '"481,353,3".')

        elif name == "openpifpaf":
            if (width, height, channels) != (1281, 721, 3):
                raise ValueError('ImageSize must be "1281,721,3" for the model "openpifpaf".')

        elif name == "yolov3":
            if width % 32 != 0 or height % 32 != 0:
                raise ValueError('For the "yolov3" model, the ImageSize width and height must be multiples of 32. '
                                 'Ex: "416,416,3".')

        # The remaining SSD/OpenVINO models accept any well-formed ImageSize.
        return values
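
Taken together, the per-model size rules enforced by the validators above amount to a small predicate. A stand-alone sketch (plain Python, mirroring rather than replacing the pydantic validators in this PR):

```python
def check_image_size(model_name: str, image_size: str) -> bool:
    """Return True when a "W,H,C" string satisfies the per-model constraints."""
    width, height, channels = (int(x) for x in image_size.split(","))
    if model_name == "openpifpaf_tensorrt":
        # Width and height must be multiples of 16 plus 1, e.g. 641 = 16 * 40 + 1.
        return width % 16 == 1 and height % 16 == 1
    if model_name == "posenet":
        # Only three fixed input sizes are supported.
        return (width, height, channels) in [(1281, 721, 3), (641, 481, 3), (481, 353, 3)]
    if model_name == "openpifpaf":
        return (width, height, channels) == (1281, 721, 3)
    if model_name == "yolov3":
        # Width and height must be multiples of 32, e.g. 416 = 32 * 13.
        return width % 32 == 0 and height % 32 == 0
    return True  # The remaining models accept any well-formed size.
```

For example, `check_image_size("yolov3", "416,416,3")` passes while `"417,416,3"` does not.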
7 changes: 6 additions & 1 deletion api/processor_api.py
@@ -9,7 +9,8 @@
from fastapi.openapi.utils import get_openapi
from share.commands import Commands

from libs.utils.loggers import get_area_log_directory, get_source_log_directory, get_screenshots_directory
from libs.utils.loggers import get_area_log_directory, get_source_log_directory, get_screenshots_directory, \
    get_config_source_directory, get_config_areas_directory
from api.utils import bad_request_serializer

from .dependencies import validate_token
@@ -32,6 +33,7 @@
from .routers.source_post_processors import source_post_processors_router
from .routers.static import static_router
from .routers.tracker import tracker_router
from .routers.ml_models import ml_model_router
from .settings import Settings

logger = logging.getLogger(__name__)
@@ -57,7 +59,9 @@ def __init__(self):

    def create_fastapi_app(self):
        os.environ["SourceLogDirectory"] = get_source_log_directory(self.settings.config)
        os.environ["SourceConfigDirectory"] = get_config_source_directory(self.settings.config)
        os.environ["AreaLogDirectory"] = get_area_log_directory(self.settings.config)
        os.environ["AreaConfigDirectory"] = get_config_areas_directory(self.settings.config)
        os.environ["ScreenshotsDirectory"] = get_screenshots_directory(self.settings.config)

        os.environ["HeatmapResolution"] = self.settings.config.get_section_dict("App")["HeatmapResolution"]
@@ -89,6 +93,7 @@ def create_fastapi_app(self):
        app.include_router(slack_router, prefix="/slack", tags=["Slack"], dependencies=dependencies)
        app.include_router(auth_router, prefix="/auth", tags=["Auth"])
        app.include_router(static_router, prefix="/static", dependencies=dependencies)
        app.include_router(ml_model_router, prefix="/ml_model", tags=["ML Models"], dependencies=dependencies)

        @app.exception_handler(RequestValidationError)
        async def validation_exception_handler(request: Request, exc: RequestValidationError):
4 changes: 4 additions & 0 deletions api/routers/areas.py
@@ -70,6 +70,8 @@ async def create_area(new_area: AreaConfigDTO, reboot_processor: Optional[bool]

    area_directory = os.path.join(os.getenv("AreaLogDirectory"), new_area.id, "occupancy_log")
    Path(area_directory).mkdir(parents=True, exist_ok=True)
    area_config_directory = os.path.join(os.getenv("AreaConfigDirectory"), new_area.id)
    Path(area_config_directory).mkdir(parents=True, exist_ok=True)

    return next((area for area in get_areas() if area["id"] == area_dict["Id"]), None)

@@ -125,5 +127,7 @@ async def delete_area(area_id: str, reboot_processor: Optional[bool] = True):

    area_directory = os.path.join(os.getenv("AreaLogDirectory"), area_id)
    shutil.rmtree(area_directory)
    area_config_directory = os.path.join(os.getenv("AreaConfigDirectory"), area_id)
    shutil.rmtree(area_config_directory)

    return handle_response(None, success, status.HTTP_204_NO_CONTENT)
6 changes: 5 additions & 1 deletion api/routers/cameras.py
@@ -209,6 +209,8 @@ async def create_camera(new_camera: CameraDTO, reboot_processor: Optional[bool]
    Path(camera_screenshot_directory).mkdir(parents=True, exist_ok=True)
    heatmap_directory = os.path.join(os.getenv("SourceLogDirectory"), new_camera.id, "objects_log")
    Path(heatmap_directory).mkdir(parents=True, exist_ok=True)
    source_config_directory = os.path.join(os.getenv("SourceConfigDirectory"), new_camera.id)
    Path(source_config_directory).mkdir(parents=True, exist_ok=True)

    return next((camera for camera in get_cameras() if camera["id"] == camera_dict["Id"]), None)

@@ -241,11 +243,13 @@ async def delete_camera(camera_id: str, reboot_processor: Optional[bool] = True)
    config_dict = reestructure_cameras(config_dict)
    success = update_config(config_dict, reboot_processor)

    # Delete all directories related to a given camera.
    camera_screenshot_directory = os.path.join(os.environ.get("ScreenshotsDirectory"), camera_id)
    shutil.rmtree(camera_screenshot_directory)
    source_directory = os.path.join(os.getenv("SourceLogDirectory"), camera_id)
    shutil.rmtree(source_directory)
    source_config_directory = os.path.join(os.getenv("SourceConfigDirectory"), camera_id)
    shutil.rmtree(source_config_directory)

    return handle_response(None, success, status.HTTP_204_NO_CONTENT)

70 changes: 70 additions & 0 deletions api/routers/ml_models.py
@@ -0,0 +1,70 @@
import json
import os
from pathlib import Path

from fastapi import APIRouter
from typing import Optional

from api.models.ml_model import MLModelDTO
from api.routers.cameras import validate_camera_existence
from api.settings import Settings
from api.utils import restart_processor, handle_response
from libs.utils.config import get_source_config_directory


ml_model_router = APIRouter()
settings = Settings()


def snake_case_to_pascal_case(parameters):
    # Convert the DTO's snake_case field names into the PascalCase keys used by the config files.
    result = {}
    for key, value in parameters:
        result[''.join(word.title() for word in key.split('_'))] = value

    # Special case: "class_id" must map to "ClassID" (not "ClassId").
    if "ClassId" in result:
        result["ClassID"] = result.pop("ClassId")

    return result


@ml_model_router.post("/{camera_id}")
async def modify_ml_model(camera_id: str, model_parameters: MLModelDTO, reboot_processor: Optional[bool] = True):
    validate_camera_existence(camera_id)

    parameters = snake_case_to_pascal_case(model_parameters)

    base_path = os.path.join(get_source_config_directory(settings.config), camera_id)
    models_directory_path = os.path.join(base_path, "ml_models")
    json_file_path = os.path.join(models_directory_path, f"model_{parameters['Device']}.json")

    model_name = parameters.pop("Name")
    del parameters["Device"]

    # Create or overwrite the .json file.
    # Hypothesis: the source config directory (base_path) should always exist.
    json_content = {
        "model_name": model_name,
        "variables": parameters
    }
    Path(models_directory_path).mkdir(parents=True, exist_ok=True)
    with open(json_file_path, 'w') as outfile:
        json.dump(json_content, outfile)

    # Reboot the processor if requested.
    success = True
    if reboot_processor:
        success = restart_processor()

    return handle_response(json_content, success, decamelize=False)
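
The key conversion performed by the helper above can be illustrated in isolation (a minimal stand-alone sketch; the function name and field values here are examples, not part of the PR):

```python
def to_pascal_case_keys(parameters):
    # Convert snake_case keys to PascalCase, special-casing "class_id" -> "ClassID".
    result = {}
    for key, value in parameters:
        result[''.join(word.title() for word in key.split('_'))] = value
    if "ClassId" in result:
        result["ClassID"] = result.pop("ClassId")
    return result


fields = [("image_size", "300,300,3"), ("min_score", 0.25), ("class_id", 1)]
print(to_pascal_case_keys(fields))
# -> {'ImageSize': '300,300,3', 'MinScore': 0.25, 'ClassID': 1}
```

Iterating over a pydantic model yields such (field_name, value) pairs, which is why the endpoint can pass the DTO straight into the helper.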