Commit

add custom dataset upload example (#464)
Signed-off-by: Igor Davidyuk <igor.davidyuk@intel.com>
igor-davidyuk authored Jul 16, 2024
1 parent 3c340f0 commit 68fd223
Showing 5 changed files with 247 additions and 6 deletions.
2 changes: 1 addition & 1 deletion geti_sdk/annotation_readers/base_annotation_reader.py
@@ -25,7 +25,7 @@
class AnnotationReader:
"""
Base class for annotation reading, to handle loading and converting annotations
to Sonoma Creek format
to Intel Geti format
"""

def __init__(
@@ -52,7 +52,7 @@ def prepare_dataset(
self, task_type: TaskType, previous_task_type: Optional[TaskType] = None
) -> Dataset:
"""
Prepare the dataset for uploading to Sonoma Creek.
Prepare the dataset for uploading to Intel Geti.
:param task_type: TaskType to prepare the dataset for
:param previous_task_type: Optional type of the (trainable) task preceding
2 changes: 1 addition & 1 deletion geti_sdk/rest_clients/media_client/media_client.py
@@ -312,7 +312,7 @@ def _upload_folder(
) -> MediaList[MediaTypeVar]:
"""
Upload all media in a folder to the project. Returns the mapping of filenames
to the unique IDs assigned by Sonoma Creek.
to the unique IDs assigned by Intel Geti.
:param path_to_folder: Folder with media items to upload
:param n_media: Number of media to upload from folder
2 changes: 1 addition & 1 deletion geti_sdk/rest_clients/media_client/video_client.py
@@ -124,7 +124,7 @@ def upload_folder(
) -> MediaList[Video]:
"""
Upload all videos in a folder to the project. Returns the mapping of video
filename to the unique ID assigned by Sonoma Creek.
filename to the unique ID assigned by Intel Geti.
:param path_to_folder: Folder with videos to upload
:param n_videos: Number of videos to upload from folder
245 changes: 243 additions & 2 deletions notebooks/002_create_project_from_dataset.ipynb
@@ -51,6 +51,9 @@
"id": "b4c88de5-7719-424c-aede-83361646602a",
"metadata": {},
"source": [
"## 1. Automated Project Creation\n",
"The Intel Geti SDK package provides a method to create a project from an existing dataset. It creates the project, uploads the images and annotations, and sets up the necessary labels. This approach is useful when your dataset is already annotated in one of the supported formats (COCO, Pascal VOC, YOLO, etc.).\n",
"\n",
"### Getting the COCO dataset\n",
"In the next cell, we get the path to the MS COCO dataset. \n",
"\n",
@@ -182,7 +185,245 @@
"id": "56fc3b9d-0325-46c2-8912-906882c4a337",
"metadata": {},
"source": [
"As you might have noticed, there is one additional label in the project, the `No Object` label. This is added by the system automatically to represent the absence of any 'horse', 'cat' or 'dog' in an image."
"As you might have noticed, there is one additional label in the project, the `No Object` label. This is added by the system automatically to represent the absence of any 'horse', 'cat' or 'dog' in an image.\n",
"\n",
"## 2. Manual Project Creation\n",
"If your dataset does not comply with one of the supported formats, there are several ways to work around this.\n",
"- You can convert your dataset to one of the supported formats and return to the automated approach, for example by writing a conversion script. The drawback is that you may end up maintaining multiple copies of the same dataset.\n",
"- You can implement an [AnnotationReader](https://openvinotoolkit.github.io/geti-sdk/geti_sdk.annotation_readers.html#) of your own by following the implementation examples already present in the Intel Geti SDK package - [DirectoryTreeAnnotationReader](https://github.com/openvinotoolkit/geti-sdk/blob/main/geti_sdk/annotation_readers/directory_tree_annotation_reader.py) and [DatumAnnotationReader](https://github.com/openvinotoolkit/geti-sdk/blob/main/geti_sdk/annotation_readers/datumaro_annotation_reader/datumaro_annotation_reader.py). This approach is especially useful if you have an established home-grown annotation format and the data you gather in the future will be kept in this format as well.\n",
"- You can create a project manually and upload the data and annotations to it. This is the most straightforward approach, but it requires a bit more work with the Geti SDK entities.\n",
"\n",
"In this section we will go with the last approach and create a project manually. We will read the dataset annotations from a `csv` file and use the `geti-sdk` package to create a detection project, upload images and annotations to it.\\\n",
"First, let's read a few lines from the dataset annotation file to see what it looks like."
]
},
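The first bullet above (converting to a supported format) can be sketched in a few lines. The snippet below is illustrative and not part of this commit: it converts rows shaped like the notebook's `custom_dataset.csv` into a minimal COCO-style dictionary, noting that COCO boxes are `[x, y, width, height]` rather than corner coordinates.

```python
import csv
import io

# Hypothetical rows in the same shape as the notebook's custom_dataset.csv:
# image path, xmin, ymin, xmax, ymax, label_name
CSV_TEXT = """image,xmin,ymin,xmax,ymax,label_name
/images/val2017/000000001675.jpg,0,16,640,308,cat
/images/val2017/000000004795.jpg,157,131,532,480,cat"""


def csv_to_coco(csv_text: str) -> dict:
    """Convert flat CSV annotations into a minimal COCO-style dictionary.

    COCO bounding boxes are [x, y, width, height], so the corner
    coordinates from the CSV are converted accordingly.
    """
    images, annotations, categories = [], [], {}
    image_ids = {}  # file name -> image id
    reader = csv.DictReader(io.StringIO(csv_text))
    for ann_id, row in enumerate(reader, start=1):
        # Register the image on first sight
        if row["image"] not in image_ids:
            image_ids[row["image"]] = len(image_ids) + 1
            images.append({"id": image_ids[row["image"]], "file_name": row["image"]})
        # Register the category on first sight
        if row["label_name"] not in categories:
            categories[row["label_name"]] = len(categories) + 1
        x, y = int(row["xmin"]), int(row["ymin"])
        annotations.append(
            {
                "id": ann_id,
                "image_id": image_ids[row["image"]],
                "category_id": categories[row["label_name"]],
                "bbox": [x, y, int(row["xmax"]) - x, int(row["ymax"]) - y],
            }
        )
    return {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": i, "name": n} for n, i in categories.items()],
    }


coco = csv_to_coco(CSV_TEXT)
print(coco["annotations"][0])
```

A real conversion would also write the dictionary to a JSON file next to the images, which Datumaro-based readers can then pick up directly.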
{
"cell_type": "code",
"execution_count": null,
"id": "5ff6658a",
"metadata": {},
"outputs": [],
"source": [
"import csv\n",
"\n",
"ANNOTATION_FILE_PATH = \"./custom_dataset.csv\"\n",
"annotation_file_contents = r\"\"\"image,xmin,ymin,xmax,ymax,label_name\n",
"/images/val2017/000000001675.jpg,0,16,640,308,cat\n",
"/images/val2017/000000004795.jpg,157,131,532,480,cat\"\"\"\n",
"with open(ANNOTATION_FILE_PATH, \"w\") as csv_file:\n",
" csv_file.write(annotation_file_contents)\n",
"\n",
"with open(ANNOTATION_FILE_PATH, newline=\"\") as csv_file:\n",
" reader = csv.reader(csv_file)\n",
" header_line = next(reader)\n",
" first_data_line = next(reader)\n",
"print(header_line)\n",
"print(first_data_line)"
]
},
{
"cell_type": "markdown",
"id": "3f4ae004",
"metadata": {},
"source": [
"We can see that in our example the dataset annotation `csv` file contains six columns: `image` holds the sample path, `xmin`, `ymin`, `xmax` and `ymax` contain the bounding box coordinates, and `label_name` contains the object class label. The annotation file structure may vary, and the processing code must be adjusted accordingly. It is also important to take into account everything known about the dataset, such as the computer vision task(s) it is labeled for, the number of classes, and the number of images, in order to process it optimally.\\\n",
"For example, if you do not know the number of classes in the dataset, you may have to find them by reading the full annotation file into memory and extracting the unique values from the `label_name` column.\\\n",
"In other cases, you may know the number of classes and their names, but the sample files are so big that you would prefer to read and process the annotations line by line.\n",
"\n",
"To create a project, we need to initialize a `ProjectClient` and call the `create_project` method, which is explained in detail in the previous notebook [001 create project](./001_create_project.ipynb). Our dataset is labeled for `detection`, so we will create a project of the corresponding type. It will have a single trainable task (detection), so we pass a single list of labels to the `create_project` method. We will use our prior knowledge of the dataset: it was labeled for one-class detection, so we use only one label."
]
},
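As a concrete illustration of the unknown-classes case mentioned above, the unique label names can be collected in a single streaming pass over the file. This sketch is not part of the commit; it reuses the two example rows from the notebook's CSV instead of an open file handle:

```python
import csv
import io

# The same two example rows the notebook writes to custom_dataset.csv;
# with a real file you would pass open(ANNOTATION_FILE_PATH) instead.
CSV_TEXT = """image,xmin,ymin,xmax,ymax,label_name
/images/val2017/000000001675.jpg,0,16,640,308,cat
/images/val2017/000000004795.jpg,157,131,532,480,cat"""


def collect_label_names(csv_file) -> set:
    """Scan the annotation file once, returning the set of unique labels.

    Iterating over the reader keeps memory usage flat even for very
    large annotation files, since rows are processed one at a time.
    """
    reader = csv.reader(csv_file)
    next(reader)  # skip the header row
    return {row[5] for row in reader}


labels = collect_label_names(io.StringIO(CSV_TEXT))
print(labels)
```

In this tiny example only one class appears, which matches the one-class detection setup used below.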
{
"cell_type": "code",
"execution_count": null,
"id": "e8f38ba6",
"metadata": {},
"outputs": [],
"source": [
"from geti_sdk.rest_clients.project_client.project_client import ProjectClient\n",
"\n",
"project_client = ProjectClient(session=geti.session, workspace_id=geti.workspace_id)\n",
"\n",
"# Label names for the first (and only) trainable task in our Project.\n",
"CLASS_NAMES = [\n",
" \"cat\",\n",
"]\n",
"\n",
"project = project_client.create_project(\n",
" project_name=\"Manually Created Detection Project\",\n",
" project_type=\"detection\",\n",
" labels=[\n",
" CLASS_NAMES,\n",
" ],\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8a61707b",
"metadata": {},
"source": [
"We can examine the list of labels present in our newly created project. The `get_all_labels` method of the project returns a list of Geti SDK objects representing the labels in the project. We will compile a dictionary that will help us map label names to label objects later."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9866ef40",
"metadata": {},
"outputs": [],
"source": [
"all_labels = project.get_all_labels()\n",
"label_dict = {label.name: label for label in all_labels}\n",
"print(all_labels)"
]
},
{
"cell_type": "markdown",
"id": "c55fc1c5",
"metadata": {},
"source": [
"To upload the images and annotations to the project, we will need an `ImageClient` and an `AnnotationClient` correspondingly."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba00d7d0",
"metadata": {},
"outputs": [],
"source": [
"from geti_sdk.rest_clients.annotation_clients.annotation_client import AnnotationClient\n",
"from geti_sdk.rest_clients.media_client.image_client import ImageClient\n",
"\n",
"image_client = ImageClient(\n",
" session=geti.session, workspace_id=geti.workspace_id, project=project\n",
")\n",
"annotation_client = AnnotationClient(\n",
" session=geti.session, workspace_id=geti.workspace_id, project=project\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8877c66e",
"metadata": {},
"source": [
"Now we have everything we need to populate our project's dataset manually. We will break the process into two steps for the first entry in the dataset:\n",
"1. Upload the image to the project.\n",
"2. Prepare and upload the annotation to the project.\n",
"\n",
"The first step is straightforward: we use the `upload_image` method of the `ImageClient` to upload the image to the project. The method loads an image from disk and sends it to the server, returning an `Image` object that we will use to upload the annotation in the next step."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88f92dc3",
"metadata": {},
"outputs": [],
"source": [
"image_path = first_data_line[0]\n",
"image_object = image_client.upload_image(image=COCO_PATH + image_path)\n",
"image_object"
]
},
{
"cell_type": "markdown",
"id": "8377f8a5",
"metadata": {},
"source": [
"To upload the annotation we will use the `upload_annotation` method of the `AnnotationClient`. The method requires the `Image` object and an `AnnotationScene` object, which we need to create from the annotation data. The `AnnotationScene` object is a container for the annotations of a single data sample; it consists of several `Annotation` instances, each representing a single object in the image. An `Annotation` requires a bounding shape and a list of labels for that shape.\\\n",
"Now let's build these objects from the bottom up."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c85bed24",
"metadata": {},
"outputs": [],
"source": [
"from geti_sdk.data_models.annotation_scene import AnnotationScene\n",
"from geti_sdk.data_models.annotations import Annotation\n",
"from geti_sdk.data_models.shapes import Rectangle\n",
"\n",
"# From the CSV file entry we can get the coordinates of the rectangle\n",
"x_min, y_min, x_max, y_max = first_data_line[1:5]\n",
"\n",
"# We need to create a Rectangle object to represent the shape of the annotation\n",
"# Note: the Rectangle object requires the x, y, width and height of the rectangle,\n",
"# so we need to calculate the width and height from the x_min, y_min, x_max and y_max\n",
"rectangle = Rectangle(\n",
" x=int(x_min),\n",
" y=int(y_min),\n",
" width=int(x_max) - int(x_min),\n",
" height=int(y_max) - int(y_min),\n",
")\n",
"\n",
"# We can now create the Annotation object,\n",
"# We can get a Label object from the label_dict we created earlier\n",
"# using the label name from the CSV file entry as a key\n",
"label = label_dict[first_data_line[5]]\n",
"annotation = Annotation(\n",
" labels=[\n",
" label,\n",
" ],\n",
" shape=rectangle,\n",
")\n",
"\n",
"# We can now create the AnnotationScene object and upload the annotation\n",
"annotation_scene = AnnotationScene(\n",
" [\n",
" annotation,\n",
" ]\n",
")\n",
"annotation_client.upload_annotation(image_object, annotation_scene)"
]
},
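A side note on the parsing itself: indexing rows by position (`first_data_line[5]`) silently breaks if the column order in the annotation file ever changes. A name-based `csv.DictReader`, sketched below with the notebook's example row (this snippet is not part of the commit), is a more defensive alternative:

```python
import csv
import io

# One row in the same shape as the notebook's custom_dataset.csv.
CSV_TEXT = """image,xmin,ymin,xmax,ymax,label_name
/images/val2017/000000001675.jpg,0,16,640,308,cat"""

# DictReader keys each row by header name, so the parsing below keeps
# working even if the column order in the annotation file changes.
reader = csv.DictReader(io.StringIO(CSV_TEXT))
row = next(reader)

x_min, y_min = int(row["xmin"]), int(row["ymin"])
width = int(row["xmax"]) - x_min
height = int(row["ymax"]) - y_min
print(row["image"], row["label_name"], (x_min, y_min, width, height))
```

The computed `(x, y, width, height)` tuple maps directly onto the `Rectangle` constructor arguments used in the cell above.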
{
"cell_type": "markdown",
"id": "1a67b46c",
"metadata": {},
"source": [
"Now we can gather all the steps in one method and iteratively apply it to the rest of the dataset.\n",
"\n",
"```python\n",
"from typing import List\n",
"\n",
"def upload_and_annotate_image(dataset_line: List[str]) -> None:\n",
" \"\"\"\n",
" Uploads an image and its annotation to the project\n",
"\n",
" :param dataset_line: The line from the dataset that contains the image path and annotation\n",
" in format ['image_path', 'xmin', 'ymin', 'xmax', 'ymax', 'label_name']\n",
" \"\"\"\n",
" image_path = dataset_line[0]\n",
" image_object = image_client.upload_image(image=image_path)\n",
"\n",
" x_min, y_min, x_max, y_max = map(int, dataset_line[1:5])\n",
" rectangle = Rectangle(\n",
" x=x_min,\n",
" y=y_min,\n",
" width=x_max - x_min,\n",
" height=y_max - y_min,\n",
" )\n",
" annotation = Annotation(\n",
" labels=[label_dict[dataset_line[5]]],\n",
" shape=rectangle,\n",
" )\n",
" annotation_scene = AnnotationScene([annotation])\n",
" annotation_client.upload_annotation(image_object, annotation_scene)\n",
" print(f\"Uploaded and annotated {image_path}\")\n",
"\n",
"# We can now iterate over the rest of the lines in the CSV file and upload and annotate the images\n",
"with open(ANNOTATION_FILE_PATH, newline='') as csv_file:\n",
" reader = csv.reader(csv_file)\n",
" header_line = next(reader)\n",
" for line in reader:\n",
" upload_and_annotate_image(line)\n",
"```\n",
"\n"
]
}
],
@@ -202,7 +443,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
"version": "3.10.12"
}
},
"nbformat": 4,
