From db2f488248fd6770b99b2b172e83bfdf18f1bc73 Mon Sep 17 00:00:00 2001
From: Eric Kerfoot
Date: Fri, 14 Jul 2023 15:31:41 +0100
Subject: [PATCH 01/26] Starting on introductory bundle tutorials

---
 bundle/bundle_intro.ipynb | 627 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 627 insertions(+)
 create mode 100644 bundle/bundle_intro.ipynb

diff --git a/bundle/bundle_intro.ipynb b/bundle/bundle_intro.ipynb
new file mode 100644
index 0000000000..81aa3ecee2
--- /dev/null
+++ b/bundle/bundle_intro.ipynb
@@ -0,0 +1,627 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e473187c-65db-40f2-b27a-236b3e8f2ad2",
+ "metadata": {},
+ "source": [
+ "# MONAI Bundles\n",
+ "\n",
+ "Bundles are _self-descriptive networks_ in their essential form. They combine a network definition with the metadata about what they are meant to do, what they are used for, the nature of their inputs and outputs, and scripts (possibly with associated data) to train and infer using them. \n",
+ "\n",
+ "The key objective with bundles is to provide a structured format for using and distributing your network along with all the added information needed to understand the network in context. This makes it easier for you and others to use the network, adapt it to different applications, reproduce your experiments and results, and simply document your work.\n",
+ "\n",
+ "The bundle documentation and specification can be found here: https://docs.monai.io/en/stable/bundle_intro.html\n",
+ "\n",
+ "## Bundle Structure\n",
+ "\n",
+ "A bundle consists of a named directory containing specific subdirectories for different parts. From the specification we have a basic outline of directories in this form:\n",
+ "\n",
+ "```\n",
+ "ModelName\n",
+ "┣━ configs\n",
+ "┃ ┗━ metadata.json\n",
+ "┣━ models\n",
+ "┃ ┣━ model.pt\n",
+ "┃ ┣━ *model.ts\n",
+ "┃ ┗━ *model.onnx\n",
+ "┗━ docs\n",
+ " ┣━ *README.md\n",
+ " ┗━ *license.txt\n",
+ "```\n",
+ "\n",
+ "Here the `metadata.json` file will contain the name of the bundle, plain language description of what it does and intended purpose, a description of what the input and output values are for the network's forward pass, copyright information, and otherwise anything else you want to add. Further configuration files go into `configs` which will be JSON or YAML documents representing scripts in the form of Python object instantiations.\n",
+ "\n",
+ "The `models` directory contains the stored weights for your network which can be in multiple forms. The weight dictionary `model.pt` must be present but the Torchscript `model.ts` and ONNX `model.onnx` files are optional. \n",
+ "\n",
+ "The `docs` directory will contain the readme file and any other documentation you want to include. Notebooks and images are good things to include for demonstrating using the bundle\n",
+ "\n",
+ "A further `scripts` directory can be included which would contain Python definitions of any sort to be used in the JSON/YAML script files. This directory should be a valid Python module if present, ie. contains a `__init__.py` file.\n",
+ "\n",
+ "## Instantiating a new bundle\n",
+ "\n",
+ "This notebook will introduce the concepts of the bundle and how to define your own. MONAI provides a number of bundle-related programs through the `monai.bundle` module using the Fire library. 
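\n",
+ "\n",
+ "These are invoked on the command line in a common general form, sketched below (`COMMAND` here is a placeholder for subcommands such as `init_bundle` and `run` used in this notebook, and the argument names are illustrative only):\n",
+ "\n",
+ "```bash\n",
+ "# general pattern of bundle commands (a sketch, not a runnable command as-is)\n",
+ "python -m monai.bundle COMMAND --arg1 value1 --arg2 value2\n",
+ "```\n",
+ "\n",
+ "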
We can use `init_bundle` to start creating a bundle from scratch:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "a1d9d107-58d6-4ed8-9cf1-6e9103e78a92", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[01;34mTestBundle\u001b[00m\n", + "├── \u001b[01;34mconfigs\u001b[00m\n", + "│   ├── inference.json\n", + "│   └── metadata.json\n", + "├── \u001b[01;34mdocs\u001b[00m\n", + "│   └── README.md\n", + "├── LICENSE\n", + "└── \u001b[01;34mmodels\u001b[00m\n", + "\n", + "3 directories, 4 files\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "python -m monai.bundle init_bundle mednist_classify\n", + "tree TestBundle" + ] + }, + { + "cell_type": "markdown", + "id": "99c6a04e-4859-4123-9433-6632bbd6ff0d", + "metadata": {}, + "source": [ + "Our new blandly-named bundle, `TestBundle`, doesn't have much in it currently. It has the directory structure so we can start putting definitions in the right places. The first thing we should do is fill in relevant information to the `metadata.json` file so that anyone who has our bundle knows what it is. The default is a template of common fields:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "3e19c030-4e03-4a96-a127-ee0daa604052", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " \"version\": \"0.0.1\",\n", + " \"changelog\": {\n", + " \"0.0.1\": \"Initial version\"\n", + " },\n", + " \"monai_version\": \"1.2.0\",\n", + " \"pytorch_version\": \"2.0.0\",\n", + " \"numpy_version\": \"1.23.5\",\n", + " \"optional_packages_version\": {},\n", + " \"task\": \"Describe what the network predicts\",\n", + " \"description\": \"A longer description of what the network does, use context, inputs, outputs, etc.\",\n", + " \"authors\": \"Your Name Here\",\n", + " \"copyright\": \"Copyright (c) Your Name Here\",\n", + " \"network_data_format\": {\n", + " \"inputs\": {},\n", + " \"outputs\": {}\n", + " }\n", + "}" + ] + } + ], + "source": [ + "!cat TestBundle/configs/metadata.json" + ] + }, + { + "cell_type": "markdown", + "id": "827c759b-9ae6-4ec1-a83d-9077bf23bafd", + "metadata": {}, + "source": [ + "We'll replace this with some more information that reflects our bundle being a demo:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "a56e4833-171c-432c-8145-f325fad3bfcb", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting TestBundle/configs/metadata.json\n" + ] + } + ], + "source": [ + "%%writefile TestBundle/configs/metadata.json\n", + "\n", + "{\n", + " \"version\": \"0.0.1\",\n", + " \"changelog\": {\n", + " \"0.0.1\": \"Initial version\"\n", + " },\n", + " \"monai_version\": \"1.2.0\",\n", + " \"pytorch_version\": \"2.0.0\",\n", + " \"numpy_version\": \"1.23.5\",\n", + " \"optional_packages_version\": {},\n", + " \"task\": \"Demonstration Bundle Network\",\n", + " \"description\": \"This is a demonstration bundle meant to showcase features of the MONAI bundle system only and does nothing useful\",\n", + " \"authors\": \"Your Name Here\",\n", + " \"copyright\": \"Copyright (c) Your Name Here\",\n", + " \"network_data_format\": {\n", + " \"inputs\": {},\n", + " \"outputs\": {}\n", + " }\n", + "}" + ] + }, + { + "cell_type": "markdown", + "id": "da6aa796-d4ae-423c-9215-957ad968b845", + "metadata": {}, + "source": [ + "## Configuration Files\n", + "\n", + "Configuration files define how to instantiate a number of Python objects and 
run simple routines. These files, whether JSON or YAML, are Python dictionaries containing expression lists or the arguments to be passed to a named constructor.\n", + "\n", + "The provided `inference.json` file is a demo of applying a network to a series of JPG images. This illustrates some of the concepts around typical bundles, specifically how to declare MONAI objects to put a workflow together, but we're going to ignore that for now and create some YAML configuration files instead which do some very basic things. \n", + "\n", + "Whether you're working with JSON or YAML the config files are doing the same thing which is define a series of object instantiations with the expectation that this constitutes a workflow. Typically for training or inference with a network this would be defining data sources, loaders, transform sequences, and finally a subclass of the [Ignite Engine](https://docs.monai.io/en/stable/engines.html#workflow). A class like `SupervisedTrainer` is the driving program for training a network, so creating an instance of this along with its associated arguments then calling its `run()` method constitutes a workflow or \"program\". \n", + "\n", + "You don't have to use any specific objects types though so you're totally free to design your workflows to be whatever you like, but typically as demonstrated in the model zoo they'll be Ignite-based workflows doing training or inference. We'll start with a very simple workflow which actually just imports Pytorch and MONAI then prints diagnostic information:" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "63322909-1a24-426e-a744-39452cdff14f", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting TestBundle/configs/testconfig.yaml\n" + ] + } + ], + "source": [ + "%%writefile TestBundle/configs/testconfig.yaml\n", + "\n", + "imports: \n", + "- $import torch\n", + "- $import monai\n", + "\n", + "device: $torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n", + "\n", + "shape: [4, 4]\n", + "\n", + "test_tensor: '$torch.rand(*@shape).to(@device)'\n", + "\n", + "test_config:\n", + "- '$monai.config.print_config()'\n", + "- '$print(\"Test tensor:\", @test_tensor)'" + ] + }, + { + "cell_type": "markdown", + "id": "c6c3d978-10d1-47ce-9171-2e4a4f7dbac1", + "metadata": {}, + "source": [ + "This file demonstrates a number of key concepts:\n", + "\n", + "* `imports` is a sequence of strings starting with `$` which indicate the string should be interpreted as a Python expression. These will be interpreted at the start of the execution so that modules can be imported into the running namespace. `imports` should be a sequence of such expressions.\n", + "* `device` is an object definition created by evaluating the given expression, in this case creating a Pytorch device object.\n", + "* `shape` is a list of literal values in YAML format we'll use elsewhere.\n", + "* `test_tensor` is another object created by evaluating an expression, this one uses references to `shape` and `device` with the `@` syntax.\n", + "* `test_config` is a list of expressions which are evaluated in order to act as the \"main\" or entry point for the program, in this case printing config information and then our created tensor.\n", + "\n", + "As mentioned `$` and `@` are sigils with special meaning. A string starting with `$` is treated as a Python expression and is evaluated as such when needed, these need to be enclosed in quotes only when JSON/YAML need that to parse correctly. 
A variable starting with `@` is treated as reference to something we've defined in the script, eg `@shape`, and will only work for such definitions. Accessing a member of a definition before being interpreted can be done with `#`, so something like `@foo#bar` will access the `bar` member. \n", + "\n", + "We can run this \"program\" on the command line now using the bundle submodule and a few arguments to specify the metadata file and configuration file:" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "7968ceb4-89ef-40a9-ac9b-f048c6cca73b", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-07-14 14:41:32,168 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-07-14 14:41:32,168 - INFO - > run_id: 'test_config'\n", + "2023-07-14 14:41:32,168 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", + "2023-07-14 14:41:32,168 - INFO - > config_file: './TestBundle/configs/testconfig.yaml'\n", + "2023-07-14 14:41:32,168 - INFO - ---\n", + "\n", + "\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", + " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "MONAI version: 1.2.0\n", + "Numpy version: 1.23.5\n", + "Pytorch version: 2.0.0\n", + "MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False\n", + "MONAI rev id: c33f1ba588ee00229a309000e888f9817b4f1934\n", + "MONAI __file__: /home/localek10/workspace/monai/MONAI_mine/monai/__init__.py\n", + "\n", + "Optional dependencies:\n", + "Pytorch Ignite version: 0.4.12\n", + "ITK version: NOT INSTALLED or UNKNOWN VERSION.\n", + "Nibabel version: 5.0.1\n", + "scikit-image version: NOT INSTALLED or UNKNOWN VERSION.\n", + "Pillow version: 9.4.0\n", + "Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.\n", + "gdown version: NOT INSTALLED or UNKNOWN VERSION.\n", + "TorchVision version: 0.15.0\n", + "tqdm version: 4.65.0\n", + "lmdb version: NOT INSTALLED or UNKNOWN VERSION.\n", + "psutil version: 5.9.0\n", + "pandas version: 1.5.3\n", + "einops version: 0.6.1\n", + "transformers version: NOT INSTALLED or UNKNOWN VERSION.\n", + "mlflow version: NOT INSTALLED or UNKNOWN VERSION.\n", + "pynrrd version: NOT INSTALLED or UNKNOWN VERSION.\n", + "\n", + "For details about installing the optional dependencies, please visit:\n", + " https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\n", + "\n", + "Test tensor: tensor([[0.0822, 0.0191, 0.5755, 0.5022],\n", + " [0.4899, 0.2152, 0.1622, 0.8672],\n", + " [0.0518, 0.8283, 0.1431, 0.8582],\n", + " [0.9721, 0.3803, 0.2759, 0.8017]], device='cuda:0')\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "# convenient to define the bundle's root in a variable\n", + "BUNDLE=\"./TestBundle\"\n", + "\n", + "python -m monai.bundle run test_config \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"$BUNDLE/configs/testconfig.yaml\"" + ] + }, + { + "cell_type": "markdown", + "id": "4a28777d-b44a-4c78-b81b-a946b7f4ec30", + "metadata": {}, + "source": [ + "Here the `run` routine is invoked and the name of the \"main\" sequence of expressions is given (`test_config`). 
MONAI will then load and interpret the config then evaluate the expressions of `test_config` in order. Definitions in the configuration which aren't needed to do this are ignored, so you can provide multiple expression lists that run different parts of your script without having to create everything. "
 ]
 },
 {
 "cell_type": "markdown",
 "id": "d1c00118-a695-4629-a454-3fda51c57232",
 "metadata": {},
 "source": [
 "## Object Instantiation\n",
 "\n",
 "Creating objects is a key concept in config files which would be cumbersome if done only through expressions like what's been demonstrated here. Instead, an object can be defined by a dictionary of values naming first the type with `_target_` and then providing the constructor arguments as named values. The following is a simple example creat a `Dataset` class with a very simple set of values:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 30,
 "id": "cb762695-8c5d-4f42-9c29-eb6260990b0c",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Overwriting TestBundle/configs/test_object.yaml\n"
 ]
 }
 ],
 "source": [
 "%%writefile TestBundle/configs/test_object.yaml\n",
 "\n",
 "datadicts: '$[{i: (i * i)} for i in range(10)]' # create a fake dataset as a list of dicts\n",
 "\n",
 "test_dataset: # creates an instance of an object because _target_ is present\n",
 " _target_: Dataset # name of type to create is monai.data.Dataset (loaded implicitly from MONAI)\n",
 " data: '@datadicts' # argument data provided by a definition\n",
 " transform: '$None' # argument transform provided by a Python expression\n",
 "\n",
 "test:\n",
 "- '$print(\"Dataset\", @test_dataset)'\n",
 "- '$print(\"Size\", len(@test_dataset))'\n",
 "- '$print(\"Transform member\", @test_dataset.transform)'\n",
 "- '$print(\"Values\", list(@test_dataset))'"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 31,
 "id": "2cd1287c-f287-4831-bfc7-4cbdc394a3a1",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "2023-07-14 15:28:36,063 - INFO - --- input summary of monai.bundle.scripts.run ---\n",
 "2023-07-14 15:28:36,063 - INFO - > run_id: 'test'\n",
 "2023-07-14 15:28:36,063 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n",
 "2023-07-14 15:28:36,063 - INFO - > config_file: './TestBundle/configs/test_object.yaml'\n",
 "2023-07-14 15:28:36,063 - INFO - ---\n",
 "\n",
 "\n",
 "Dataset \n",
 "Size 10\n",
 "Transform member None\n",
 "Values [{0: 0}, {1: 1}, {2: 4}, {3: 9}, {4: 16}, {5: 25}, {6: 36}, {7: 49}, {8: 64}, {9: 81}]\n"
 ]
 }
 ],
 "source": [
 "%%bash\n",
 "\n",
 "BUNDLE=\"./TestBundle\"\n",
 "\n",
 "# prints normal values\n",
 "python -W ignore -m monai.bundle run test \\\n",
 " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
 " --config_file \"$BUNDLE/configs/test_object.yaml\""
 ]
 },
 {
 "cell_type": "markdown",
 "id": "6326d601-23f0-444b-821c-9596bd8c8296",
 "metadata": {},
 "source": [
 "This definition is roughly equivalent to the expression `Dataset(data=datadicts, transform=None)`. Like regular Python we don't need to provide values for arguments having defaults, but we can only give argument values by name and not by position. 
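\n",
 "\n",
 "As an illustrative sketch (a hypothetical variation, not a cell from this bundle), the same object could also be declared using the fully qualified type name, which resolves to the same class:\n",
 "\n",
 "```yaml\n",
 "test_dataset:\n",
 "  _target_: monai.data.Dataset  # fully qualified name, equivalent to Dataset above\n",
 "  data: '@datadicts'\n",
 "  transform: '$None'\n",
 "```\n",
 "\n",
 "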
" + ] + }, + { + "cell_type": "markdown", + "id": "93c091b0-6140-4539-bb1e-36bf78445365", + "metadata": {}, + "source": [ + "## Command Line Definitions\n", + "\n", + "Command line arguments can be provided to add or modify definitions in the script you're running. Using `--` before the name of the variable allows you to set their value with the next argument, but this must be a valid Python expression. You can also set individual members of definitions with `#` but we sure to put quotes around the argument in Bash. \n", + "\n", + "We can demo this with an even simpler script:" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "391ec82b-43a2-4b6f-8307-e3c853986719", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing TestBundle/configs/test_cmdline.yaml\n" + ] + } + ], + "source": [ + "%%writefile TestBundle/configs/test_cmdline.yaml\n", + "\n", + "shape: [8, 8]\n", + "area: '$@shape[0]*@shape[1]'\n", + "\n", + "test:\n", + "- '$print(\"Height\", @shape[0])'\n", + "- '$print(\"Width\", @shape[1])'\n", + "- '$print(\"Area\", @area)'" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "229617a0-1120-4054-9232-1991cfa21ae9", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-07-14 15:22:37,435 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-07-14 15:22:37,435 - INFO - > run_id: 'test'\n", + "2023-07-14 15:22:37,436 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", + "2023-07-14 15:22:37,436 - INFO - > config_file: './TestBundle/configs/test_cmdline.yaml'\n", + "2023-07-14 15:22:37,436 - INFO - ---\n", + "\n", + "\n", + "Height 8\n", + "Width 8\n", + "Area 64\n", + "2023-07-14 15:22:40,876 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-07-14 15:22:40,876 - INFO - > run_id: 'test'\n", + "2023-07-14 15:22:40,876 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", + "2023-07-14 15:22:40,876 - INFO - > config_file: './TestBundle/configs/test_cmdline.yaml'\n", + "2023-07-14 15:22:40,876 - INFO - > shape#0: 4\n", + "2023-07-14 15:22:40,876 - INFO - ---\n", + "\n", + "\n", + "Height 4\n", + "Width 8\n", + "Area 32\n", + "2023-07-14 15:22:44,279 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-07-14 15:22:44,279 - INFO - > run_id: 'test'\n", + "2023-07-14 15:22:44,279 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", + "2023-07-14 15:22:44,279 - INFO - > config_file: './TestBundle/configs/test_cmdline.yaml'\n", + "2023-07-14 15:22:44,279 - INFO - > area: 32\n", + "2023-07-14 15:22:44,279 - INFO - ---\n", + "\n", + "\n", + "Height 8\n", + "Width 8\n", + "Area 32\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "BUNDLE=\"./TestBundle\"\n", + "\n", + "# prints normal values\n", + "python -W ignore -m monai.bundle run test \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"$BUNDLE/configs/test_cmdline.yaml\"\n", + "\n", + "# half the height\n", + "python -W ignore -m monai.bundle run test \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"$BUNDLE/configs/test_cmdline.yaml\" \\\n", + " '--shape#0' 4\n", + "\n", + "# area definition replaces existing expression with a lie\n", + "python -W ignore -m monai.bundle run test \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"$BUNDLE/configs/test_cmdline.yaml\" \\\n", + " --area 32" + ] + }, + { + "cell_type": 
"markdown", + "id": "87683aa7-0322-48cb-9919-f3b3b2546763", + "metadata": {}, + "source": [ + "## Multiple Files\n", + "\n", + "Multiple config files can be specified which will create a final script composed of definitions in the first file added to or updated with those in subsequent files. Remember that the files are essentially creating Python dictionaries of definitions that are interpreted later, so later files are just updating that dictionary when loaded. Definitions in one file can be referenced in others:" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "55c034c5-b03f-4ac1-8aa0-a7b768bbbb7e", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing TestBundle/configs/multifile1.yaml\n" + ] + } + ], + "source": [ + "%%writefile TestBundle/configs/multifile1.yaml\n", + "\n", + "width: 8\n", + "height: 8" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "2511798a-cd44-4aec-954c-c766b29f0a43", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing TestBundle/configs/multifile2.yaml\n" + ] + } + ], + "source": [ + "%%writefile TestBundle/configs/multifile2.yaml\n", + "\n", + "area: '$@width*@height'\n", + "\n", + "test:\n", + "- '$print(\"Area\", @area)'" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "dc6adf63-c4b5-4f97-805a-2321dc1e8d2c", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-07-14 15:09:59,663 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-07-14 15:09:59,663 - INFO - > run_id: 'test'\n", + "2023-07-14 15:09:59,663 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", + "2023-07-14 15:09:59,663 - INFO - > config_file: ['./TestBundle/configs/multifile1.yaml', './TestBundle/configs/multifile2.yaml']\n", + "2023-07-14 15:09:59,663 - INFO - ---\n", + "\n", + "\n", + "Area 64\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "BUNDLE=\"./TestBundle\"\n", + "\n", + "# area definition replaces existing expression with a lie\n", + "python -W ignore -m monai.bundle run test \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"['$BUNDLE/configs/multifile1.yaml','$BUNDLE/configs/multifile2.yaml']\"" + ] + }, + { + "cell_type": "markdown", + "id": "1afcbac7-1e65-4078-8465-24d5c8e08102", + "metadata": {}, + "source": [ + "The value for `config_file` in this example is a Python list containing 2 strings. It takes a bit of care to get the Bash syntax right so that this expression isn't mangled (eg. avoid spaces to prevent tokenisation and use \"\" quotes so that other quotes aren't interpreted), but is otherwise a simple mechanism.\n", + "\n", + "This mechanism, and the ability to add/modify definitions on the command line, is important for a number of reasons:\n", + "\n", + "* It lets you write a \"common\" configuration file containing definitions to be used with other config files and so reduce duplication.\n", + "* It lets different expressions or setups to be defined with different combinations of files, again avoiding having to duplicate then modify scripts for different experiments.\n", + "* Adding/changing definitions also allows quick minor changes or batching of different operations on the command line or in shell scripts, eg. doing a parameter sweep by looping through possible values and passing them as arguments. 
" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python [conda env:monai]", + "language": "python", + "name": "conda-env-monai-py" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 04d581f96527e2b4e8cbaa7c0fea0ad1e0e4114e Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Fri, 14 Jul 2023 15:43:12 +0100 Subject: [PATCH 02/26] Update --- bundle/bundle_intro.ipynb | 42 +++++++++++++++++++++++++-------------- 1 file changed, 27 insertions(+), 15 deletions(-) diff --git a/bundle/bundle_intro.ipynb b/bundle/bundle_intro.ipynb index 81aa3ecee2..abb9e80fa4 100644 --- a/bundle/bundle_intro.ipynb +++ b/bundle/bundle_intro.ipynb @@ -179,7 +179,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 32, "id": "63322909-1a24-426e-a744-39452cdff14f", "metadata": {}, "outputs": [ @@ -187,12 +187,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting TestBundle/configs/testconfig.yaml\n" + "Writing TestBundle/configs/test_config.yaml\n" ] } ], "source": [ - "%%writefile TestBundle/configs/testconfig.yaml\n", + "%%writefile TestBundle/configs/test_config.yaml\n", "\n", "imports: \n", "- $import torch\n", @@ -229,7 +229,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 33, "id": "7968ceb4-89ef-40a9-ac9b-f048c6cca73b", "metadata": {}, "outputs": [ @@ -237,11 +237,11 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-07-14 14:41:32,168 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-07-14 14:41:32,168 - INFO - > run_id: 'test_config'\n", - "2023-07-14 14:41:32,168 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", - "2023-07-14 14:41:32,168 - INFO - > config_file: './TestBundle/configs/testconfig.yaml'\n", - "2023-07-14 14:41:32,168 - INFO - ---\n", + "2023-07-14 15:34:52,646 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-07-14 15:34:52,647 - INFO - > run_id: 'test_config'\n", + "2023-07-14 15:34:52,647 - INFO - > meta_file: './TestBundle/configs/metadata.json'\n", + "2023-07-14 15:34:52,647 - INFO - > config_file: './TestBundle/configs/test_config.yaml'\n", + "2023-07-14 15:34:52,647 - INFO - ---\n", "\n", "\n" ] @@ -286,10 +286,10 @@ "For details about installing the optional dependencies, please visit:\n", " https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\n", "\n", - "Test tensor: tensor([[0.0822, 0.0191, 0.5755, 0.5022],\n", - " [0.4899, 0.2152, 0.1622, 0.8672],\n", - " [0.0518, 0.8283, 0.1431, 0.8582],\n", - " [0.9721, 0.3803, 0.2759, 0.8017]], device='cuda:0')\n" + "Test tensor: tensor([[0.5281, 0.1114, 0.5124, 0.2523],\n", + " [0.6561, 0.0298, 0.6393, 0.8636],\n", + " [0.3730, 0.8315, 0.1390, 0.6233],\n", + " [0.2646, 0.8929, 0.5250, 0.0472]], device='cuda:0')\n" ] } ], @@ -299,9 +299,10 @@ "# convenient to define the bundle's root in a variable\n", "BUNDLE=\"./TestBundle\"\n", "\n", + "# loads the test_config.yaml file and runs the test_config program it defines\n", "python -m monai.bundle run test_config \\\n", " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", - " --config_file \"$BUNDLE/configs/testconfig.yaml\"" + " --config_file \"$BUNDLE/configs/test_config.yaml\"" ] }, { @@ -599,7 +600,18 @@ "\n", "* It lets you write a \"common\" 
configuration file containing definitions to be used with other config files and so reduce duplication.\n", "* It lets different expressions or setups to be defined with different combinations of files, again avoiding having to duplicate then modify scripts for different experiments.\n", - "* Adding/changing definitions also allows quick minor changes or batching of different operations on the command line or in shell scripts, eg. doing a parameter sweep by looping through possible values and passing them as arguments. " + "* Adding/changing definitions also allows quick minor changes or batching of different operations on the command line or in shell scripts, eg. doing a parameter sweep by looping through possible values and passing them as arguments. \n", + "\n", + "## Summary and Next\n", + "\n", + "We have here described the basics of bundles:\n", + "\n", + "* Directory structure\n", + "* Metadata file\n", + "* Configuration files\n", + "* Command line usage\n", + "\n", + "In the next tutorial we will actually create a bundle for a real network that does something and demonstrate features for working with networks." ] } ], From ba1959645bd1a5b7af3925f2aab6430a05123b9f Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Thu, 24 Aug 2023 15:41:13 +0100 Subject: [PATCH 03/26] Adding classification notebook tutorial --- bundle/bundle_intro.ipynb | 33 +- bundle/mednist_classification.ipynb | 571 ++++++++++++++++++++++++++++ 2 files changed, 589 insertions(+), 15 deletions(-) create mode 100644 bundle/mednist_classification.ipynb diff --git a/bundle/bundle_intro.ipynb b/bundle/bundle_intro.ipynb index abb9e80fa4..d3ad582e87 100644 --- a/bundle/bundle_intro.ipynb +++ b/bundle/bundle_intro.ipynb @@ -7,7 +7,7 @@ "source": [ "# MONAI Bundles\n", "\n", - "Bundles are _self-descriptive networks_ in their essential form. They combine a network definition with the metadata about what they are meant to do, what they are used for, the nature of their inputs and outputs, and scripts (possibly with associated) to train and infer using them. \n", + "Bundles are essentially _self-descriptive networks_. They combine a network definition with the metadata about what they are meant to do, what they are used for, the nature of their inputs and outputs, and scripts (possibly with associated data) to train and infer using them. \n", "\n", "The key objective with bundles is to provide a structured format for using and distributing your network along with all the added information needed to understand the network in context. This makes it easier for you and others to use the network, adapt it to different applications, reproduce your experiments and results, and simply document your work.\n", "\n", @@ -15,7 +15,7 @@ "\n", "## Bundle Structure\n", "\n", - "A bundle consists of a named directory containing specific subdirectories for different parts. From the specification we have a basic outline of directories in this form:\n", + "A bundle consists of a named directory containing specific subdirectories for different parts. 
From the specification we have a basic outline of directories in this form (* means optional file):\n",
 "\n",
 "```\n",
 "ModelName\n",
 "┣━ configs\n",
 "┃ ┗━ metadata.json\n",
 "┣━ models\n",
 "┃ ┣━ model.pt\n",
 "┃ ┣━ *model.ts\n",
 "┃ ┗━ *model.onnx\n",
 "┣━ docs\n",
 "┃ ┣━ *README.md\n",
 "┃ ┗━ *license.txt\n",
 "┗━ *scripts\n",
 "```\n",
 "\n",
 "Here the `metadata.json` file will contain the name of the bundle, plain language description of what it does and intended purpose, a description of what the input and output values are for the network's forward pass, copyright information, and otherwise anything else you want to add. Further configuration files go into `configs` which will be JSON or YAML documents representing scripts in the form of Python object instantiations.\n",
 "\n",
 "The `models` directory contains the stored weights for your network which can be in multiple forms. The weight dictionary `model.pt` must be present but the Torchscript `model.ts` and ONNX `model.onnx` files representing the same network are optional. \n",
 "\n",
 "The `docs` directory will contain the readme file and any other documentation you want to include. Notebooks and images are good things to include for demonstrating use of the bundle.\n",
 "\n",
 "A further `scripts` directory can be included which would contain Python definitions of any sort to be used in the JSON/YAML script files. This directory should be a valid Python module if present, ie. contains a `__init__.py` file.\n",
 "\n",
 "%%bash\n",
 "\n",
- "python -m monai.bundle init_bundle mednist_classify\n",
+ "python -m monai.bundle init_bundle TestBundle\n",
 "tree TestBundle"
 ]
 },
 {
 "cell_type": "code",
- "execution_count": 3,
+ "execution_count": 34,
 "id": "a56e4833-171c-432c-8145-f325fad3bfcb",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Overwriting TestBundle/configs/metadata.json\n"
 ]
 }
 ],
 "source": [
 "%%writefile TestBundle/configs/metadata.json\n",
 "\n",
 "{\n",
 " \"version\": \"0.0.1\",\n",
 " \"changelog\": {\n",
 " \"0.0.1\": \"Initial version\"\n",
 " },\n",
 " \"monai_version\": \"1.2.0\",\n",
 " \"pytorch_version\": \"2.0.0\",\n",
 " \"numpy_version\": \"1.23.5\",\n",
 " \"optional_packages_version\": {},\n",
+ " \"name\": \"TestBundle\",\n",
 " \"task\": \"Demonstration Bundle Network\",\n",
 " \"description\": \"This is a demonstration bundle meant to showcase features of the MONAI bundle system only and does nothing useful\",\n",
 " \"authors\": \"Your Name Here\",\n",
 " \"copyright\": \"Copyright (c) Your Name Here\",\n",
 " \"network_data_format\": {\n",
 " \"inputs\": {},\n",
 " \"outputs\": {}\n",
- " }\n",
+ " },\n",
+ " \"intended_use\": \"This is suitable for demonstration only\"\n",
 "}"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "da6aa796-d4ae-423c-9215-957ad968b845",
 "metadata": {},
 "source": [
 "## Configuration Files\n",
 "\n",
 "Configuration files define how to instantiate a number of Python objects and run simple routines. These files, whether JSON or YAML, are Python dictionaries containing expression lists or the arguments to be passed to a named constructor.\n",
 "\n",
- "The provided `inference.json` file is a demo of applying a network to a series of JPG images. This illustrates some of the concepts around typical bundles, specifically how to declare MONAI objects to put a workflow together, but we're going to ignore that for now and create some YAML configuration files instead which do some very basic things. 
\n", + "The provided `inference.json` file is a demo of applying a network to a series of JPEG images. This illustrates some of the concepts around typical bundles, specifically how to declare MONAI objects to put a workflow together, but we're going to ignore that for now and create some YAML configuration files instead which do some very basic things. \n", "\n", "Whether you're working with JSON or YAML the config files are doing the same thing which is define a series of object instantiations with the expectation that this constitutes a workflow. Typically for training or inference with a network this would be defining data sources, loaders, transform sequences, and finally a subclass of the [Ignite Engine](https://docs.monai.io/en/stable/engines.html#workflow). A class like `SupervisedTrainer` is the driving program for training a network, so creating an instance of this along with its associated arguments then calling its `run()` method constitutes a workflow or \"program\". \n", "\n", - "You don't have to use any specific objects types though so you're totally free to design your workflows to be whatever you like, but typically as demonstrated in the model zoo they'll be Ignite-based workflows doing training or inference. We'll start with a very simple workflow which actually just imports Pytorch and MONAI then prints diagnostic information:" + "You don't have to use any specific objects types though so you're totally free to design your workflows to be whatever you like, but typically as demonstrated in the MONAI Model Zoo they'll be Ignite-based workflows doing training or inference. We'll start with a very simple workflow which actually just imports Pytorch and MONAI then prints diagnostic information:" ] }, { @@ -222,7 +225,7 @@ "* `test_tensor` is another object created by evaluating an expression, this one uses references to `shape` and `device` with the `@` syntax.\n", "* `test_config` is a list of expressions which are evaluated in order to act as the \"main\" or entry point for the program, in this case printing config information and then our created tensor.\n", "\n", - "As mentioned `$` and `@` are sigils with special meaning. A string starting with `$` is treated as a Python expression and is evaluated as such when needed, these need to be enclosed in quotes only when JSON/YAML need that to parse correctly. A variable starting with `@` is treated as reference to something we've defined in the script, eg `@shape`, and will only work for such definitions. Accessing a member of a definition before being interpreted can be done with `#`, so something like `@foo#bar` will access the `bar` member. \n", + "As mentioned `$` and `@` are sigils with special meaning. A string starting with `$` is treated as a Python expression and is evaluated as such when needed, these need to be enclosed in quotes only when JSON/YAML need that to parse correctly. A variable starting with `@` is treated as reference to something we've defined in the script, eg `@shape`, and will only work for such definitions. Accessing a member of a definition before being interpreted can be done with `#`, so something like `@foo#bar` will access the `bar` member of a definition `foo`. 
\n", "\n", "We can run this \"program\" on the command line now using the bundle submodule and a few arguments to specify the metadata file and configuration file:" ] @@ -320,7 +323,7 @@ "source": [ "## Object Instantiation\n", "\n", - "Creating objects is a key concept in config files which would be cumbersome if done only through expressions like what's been demonstrated here. Instead, an object can be defined by a dictionary of values naming first the type with `_target_` and then providing the constructor arguments as named values. The following is a simple example creat a `Dataset` class with a very simple set of values:" + "Creating objects is a key concept in config files which would be cumbersome if done only through expressions as has been demonstrated here. Instead, an object can be defined by a dictionary of values naming first the type with `_target_` and then providing the constructor arguments as named values. The following is a simple example creat a `Dataset` class with a very simple set of values:" ] }, { @@ -394,7 +397,7 @@ "id": "6326d601-23f0-444b-821c-9596bd8c8296", "metadata": {}, "source": [ - "This definition is roughly equivalent to the expression `Dataset(data=datadicts, transform=None)`. Like regular Python we don't need to provide values for arguments having defaults, but we can only give argument values by name and not by position. " + "The `test_dataset` definition is roughly equivalent to the expression `Dataset(data=datadicts, transform=None)`. Like regular Python we don't need to provide values for arguments having defaults, but we can only give argument values by name and not by position. " ] }, { @@ -404,7 +407,7 @@ "source": [ "## Command Line Definitions\n", "\n", - "Command line arguments can be provided to add or modify definitions in the script you're running. Using `--` before the name of the variable allows you to set their value with the next argument, but this must be a valid Python expression. You can also set individual members of definitions with `#` but we sure to put quotes around the argument in Bash. \n", + "Command line arguments can be provided to add or modify definitions in the script you're running. Using `--` before the name of the variable allows you to set their value with the next argument, but this must be a valid Python expression. You can also set individual members of definitions with `#` but be sure to put quotes around the argument in Bash. \n", "\n", "We can demo this with an even simpler script:" ] diff --git a/bundle/mednist_classification.ipynb b/bundle/mednist_classification.ipynb new file mode 100644 index 0000000000..a2abd7d220 --- /dev/null +++ b/bundle/mednist_classification.ipynb @@ -0,0 +1,571 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "64bd2d8c-4799-4073-bc28-c3632589c525", + "metadata": {}, + "source": [ + "# MedNIST Classification Bundle\n", + "\n", + "In this tutorial we'll describe how to create a bundle for a classification network. This will include how to train and apply the network on the command line. 
MedNIST will be used as the dataset with the bundle based off the [MONAI 101 notebook](https://github.com/Project-MONAI/tutorials/blob/main/2d_classification/monai_101.ipynb).\n", + "\n", + "First we'll consider a condensed version of the code from that notebook and go step-by-step how best to represent this as a bundle:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fd13031d-f67d-4eb3-a98d-4d0c9884e21e", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "\n", + "import monai.transforms as mt\n", + "import torch\n", + "from monai.apps import MedNISTDataset\n", + "from monai.data import DataLoader\n", + "from monai.engines import SupervisedTrainer\n", + "from monai.inferers import SimpleInferer\n", + "from monai.networks import eval_mode\n", + "from monai.networks.nets import densenet121\n", + "\n", + "root_dir = os.environ.get(\"ROOTDIR\", \".\")\n", + "\n", + "max_epochs = 25\n", + "device = torch.device(\"cuda:0\")\n", + "net = densenet121(spatial_dims=2, in_channels=1, out_channels=6).to(device)\n", + "\n", + "transform = mt.Compose(\n", + " [\n", + " mt.LoadImaged(keys=\"image\", image_only=True),\n", + " mt.EnsureChannelFirstd(keys=\"image\"),\n", + " mt.ScaleIntensityd(keys=\"image\"),\n", + " ]\n", + ")\n", + "\n", + "dataset = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"training\", download=True)\n", + "\n", + "train_dl = DataLoader(dataset, batch_size=512, shuffle=True, num_workers=4)\n", + "\n", + "trainer = SupervisedTrainer(\n", + " device=device,\n", + " max_epochs=max_epochs,\n", + " train_data_loader=train_dl,\n", + " network=net,\n", + " optimizer=torch.optim.Adam(net.parameters(), lr=1e-5),\n", + " loss_function=torch.nn.CrossEntropyLoss(),\n", + " inferer=SimpleInferer(),\n", + ")\n", + "\n", + "trainer.run()\n", + "\n", + "torch.jit.script(net).save(\"mednist.ts\")\n", + "\n", + "class_names = (\"AbdomenCT\", \"BreastMRI\", \"CXR\", \"ChestCT\", \"Hand\", \"HeadCT\")\n", + "testdata = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"test\", download=False, runtime_cache=True)\n", + "\n", + "max_items_to_print = 10\n", + "eval_dl = DataLoader(testdata[:max_items_to_print], batch_size=1, num_workers=0)\n", + "with eval_mode(net):\n", + " for item in eval_dl:\n", + " result = net(item[\"image\"].to(device))\n", + " prob = result.detach().to(\"cpu\")[0]\n", + " pred = class_names[prob.argmax()]\n", + " gt = item[\"class_name\"][0]\n", + " print(f\"Prediction: {pred}. Ground-truth: {gt}\")" + ] + }, + { + "cell_type": "markdown", + "id": "1a18d5cd-6338-4b41-87fd-4e119723bfee", + "metadata": {}, + "source": [ + "You can run this cell or save it to a file and run it on the command line. A `DenseNet` based network will be trained to classify MedNIST images into one of six categories. Mostly this script uses Ignite-based classes such as `SupervisedTrainer` which is great for converting into a bundle. 
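\n",
 "\n",
 "Note that the script reads an optional `ROOTDIR` environment variable (via `os.environ.get(\"ROOTDIR\", \".\")` above) to choose where MedNIST is downloaded, so if running it as a standalone file you could do something like the following (here `mednist_script.py` is a hypothetical name for the saved cell):\n",
 "\n",
 "```bash\n",
 "# download/cache the dataset under /path/to/data instead of the working directory\n",
 "ROOTDIR=/path/to/data python mednist_script.py\n",
 "```\n",
 "\n",
 "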
Let's start by initialising a bundle directory structure:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "eb9dc6ec-13da-4a37-8afa-28e2766b9343", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[01;34mMedNISTClassifier\u001b[00m\n", + "├── \u001b[01;34mconfigs\u001b[00m\n", + "│   ├── inference.json\n", + "│   └── metadata.json\n", + "├── \u001b[01;34mdocs\u001b[00m\n", + "│   └── README.md\n", + "├── LICENSE\n", + "└── \u001b[01;34mmodels\u001b[00m\n", + "\n", + "3 directories, 4 files\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "python -m monai.bundle init_bundle MedNISTClassifier\n", + "tree MedNISTClassifier" + ] + }, + { + "cell_type": "markdown", + "id": "5888c9bd-5022-40b5-9dec-84d9f737f868", + "metadata": {}, + "source": [ + "## Metadata\n", + "\n", + "We'll first replace the `metadata.json` file with our description of what the network will do:" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting MedNISTClassifier/configs/metadata.json\n" + ] + } + ], + "source": [ + "%%writefile MedNISTClassifier/configs/metadata.json\n", + "\n", + "{\n", + " \"version\": \"0.0.1\",\n", + " \"changelog\": {\n", + " \"0.0.1\": \"Initial version\"\n", + " },\n", + " \"monai_version\": \"1.2.0\",\n", + " \"pytorch_version\": \"2.0.0\",\n", + " \"numpy_version\": \"1.23.5\",\n", + " \"optional_packages_version\": {},\n", + " \"name\": \"MedNISTClassifier\",\n", + " \"task\": \"MedNIST Classification Network\",\n", + " \"description\": \"This is a demo network for classifying MedNIST images by type/modality\",\n", + " \"authors\": \"Your Name Here\",\n", + " \"copyright\": \"Copyright (c) Your Name Here\",\n", + " \"data_source\": \"MedNIST dataset kindly made available by Dr. Bradley J. Erickson M.D., Ph.D. (Department of Radiology, Mayo Clinic)\",\n", + " \"data_type\": \"jpeg\",\n", + " \"intended_use\": \"This is suitable for demonstration only\",\n", + " \"network_data_format\": {\n", + " \"inputs\": {\n", + " \"image\": {\n", + " \"type\": \"image\",\n", + " \"format\": \"magnitude\",\n", + " \"modality\": \"any\",\n", + " \"num_channels\": 1,\n", + " \"spatial_shape\": [64, 64],\n", + " \"dtype\": \"float32\",\n", + " \"value_range\": [0, 1],\n", + " \"is_patch_data\": false,\n", + " \"channel_def\": {\n", + " \"0\": \"image\"\n", + " }\n", + " }\n", + " },\n", + " \"outputs\": {\n", + " \"pred\": {\n", + " \"type\": \"probabilities\",\n", + " \"format\": \"classes\",\n", + " \"num_channels\": 6,\n", + " \"spatial_shape\": [6],\n", + " \"dtype\": \"float32\",\n", + " \"value_range\": [0, 1],\n", + " \"is_patch_data\": false,\n", + " \"channel_def\": {\n", + " \"0\": \"AbdomenCT\",\n", + " \"1\": \"BreastMRI\",\n", + " \"2\": \"CXR\",\n", + " \"3\": \"ChestCT\",\n", + " \"4\": \"Hand\",\n", + " \"5\": \"HeadCT\"\n", + " }\n", + " }\n", + " }\n", + " }\n", + "}" + ] + }, + { + "cell_type": "markdown", + "id": "3f208bf8-0c3a-4def-ab0f-6091cebcd532", + "metadata": {}, + "source": [ + "This contains more information compared to the previous tutorial's file. For inputs the network, a tensor \"image\" is given as a 64x64 sized single-channel image. This is one of the MedNIST images whose modality varies but will have a value range of `[0, 1]` after rescaling in the transform pipeline. 
The channel definition states the meaning of each channel, this input has only one which is the greyscale image itself. For network outputs there is only one, \"pred\", representing the prediction of the network as a tensor of size 6. Each of the six values is a prediction of that class which is described in `channel_def`.\n", + "\n", + "## Common Definitions\n", + "\n", + "What we'll now do is construct the bundle configuration scripts to implement training, testing, and inference based off the original script file given above. Common definitions should be placed in a common file used with other scripts to reduce duplication. In our original script, the network definition and transform sequence will be used in multiple places so should go in this common file:" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "d11681af-3210-4b2b-b7bd-8ad8dedfe230", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting MedNISTClassifier/configs/common.yaml\n" + ] + } + ], + "source": [ + "%%writefile MedNISTClassifier/configs/common.yaml\n", + "# only need to import torch right now\n", + "imports: \n", + "- $import torch\n", + "\n", + "# define a default root directory value, this can overridden on the command line\n", + "root_dir: \".\"\n", + "\n", + "# define a device for the network\n", + "device: '$torch.device(''cuda:0'')'\n", + "\n", + "# store the class names for inference later\n", + "class_names: [AbdomenCT, BreastMRI, CXR, ChestCT, Hand, HeadCT]\n", + "\n", + "# define the network separately, don't need to refer to MONAI types by name or import MONAI\n", + "network_def:\n", + " _target_: densenet121\n", + " spatial_dims: 2\n", + " in_channels: 1\n", + " out_channels: 6\n", + "\n", + "# define the network to be the given definition moved to the device\n", + "net: '$@network_def.to(@device)'\n", + "\n", + "# define a transform sequence by instantiating a Compose instance with a transform sequence\n", + "transform:\n", + " _target_: Compose\n", + " transforms:\n", + " - _target_: LoadImaged\n", + " keys: 'image'\n", + " image_only: true\n", + " - _target_: EnsureChannelFirstd\n", + " keys: 'image'\n", + " - _target_: ScaleIntensityd\n", + " keys: 'image'\n", + " " + ] + }, + { + "cell_type": "markdown", + "id": "eaf81ea7-9ea3-4548-a32e-992f0b9bc0ab", + "metadata": {}, + "source": [ + "Although this YAML is very different from the Python code it's defining essentially the same objects. Whether in YAML or JSON a bundle script defines an object instantiation as a dictionary containing the key `_target_` declaring the type to create, with other keys treated as arguments. A Python statement like `obj = ObjType(arg1=val1, arg2=val2)` is thus equivalent to \n", + "\n", + "```yaml\n", + "obj:\n", + " _target_: ObjType\n", + " arg1: val1\n", + " arg2: val2\n", + "```\n", + "\n", + "Note here that MONAI will import all its own symbols such that an explicit import statement is not needed nor is referring to types by fully qualified name, ie. `Compose` is adequate instead of `monai.transforms.Compose`. Definitions found in other packages or those in scripts associated with the bundle need to be referred to by the name they are imported as, eg. 
`torch.device` as shown above.\n",
 "\n",
 "## Training\n",
 "\n",
 "For training we need a dataset, dataloader, and trainer object which will be used in the running \"program\":"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "id": "4dfd052e-abe7-473a-bbf4-25674a3b20ea",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Overwriting MedNISTClassifier/configs/train.yaml\n"
 ]
 }
 ],
 "source": [
 "%%writefile MedNISTClassifier/configs/train.yaml\n",
 "\n",
 "max_epochs: 25\n",
 "\n",
 "dataset:\n",
 " _target_: MedNISTDataset\n",
 " root_dir: '@root_dir'\n",
 " transform: '@transform'\n",
 " section: training\n",
 " download: true\n",
 "\n",
 "train_dl:\n",
 " _target_: DataLoader\n",
 " dataset: '@dataset'\n",
 " batch_size: 512\n",
 " shuffle: true\n",
 " num_workers: 4\n",
 "\n",
 "trainer:\n",
 " _target_: SupervisedTrainer\n",
 " device: '@device'\n",
 " max_epochs: '@max_epochs'\n",
 " train_data_loader: '@train_dl'\n",
 " network: '@net'\n",
 " optimizer: \n",
 " _target_: torch.optim.Adam\n",
 " params: '$@net.parameters()'\n",
 " lr: 0.00001\n",
 " loss_function: \n",
 " _target_: torch.nn.CrossEntropyLoss\n",
 " inferer: \n",
 " _target_: SimpleInferer\n",
 "\n",
 "train:\n",
 "- '$@trainer.run()'\n",
 "- '$torch.jit.script(@net).save(''model.ts'')'"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "de752181-80b1-4221-9e4a-315e5f7f22a6",
 "metadata": {},
 "source": [
 "There is a lot going on here but hopefully you see how this replicates the object definitions in the original source file. A few specific points:\n",
 "* References are made to objects defined in `common.yaml` such as `@root_dir`, so this file needs to be used in conjunction with that one.\n",
 "* A `max_epochs` hyperparameter is provided whose value you can change on the command line, eg. `--max_epochs 5`.\n",
 "* Definitions for the `optimizer`, `loss_function`, and `inferer` arguments of `trainer` are provided inline but it would be better practice to define these separately.\n",
 "* The learning rate is hard-coded as `1e-5`, it would again be better practice to define a separate `lr` hyperparameter, although it can be changed on the command line with `'--trainer#optimizer#lr' 0.001`.\n",
 "* The trained network is saved using Pytorch's `jit` module directly, better practice would be to provide a handler, such as `CheckpointSaver`, to the trainer or to an evaluator object, see other tutorial examples on how to do this. 
This was kept here to match the original example.\n",
 "\n",
 "Now the network can be trained by running the bundle:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "id": "8357670d-fe69-4789-9b9a-77c0d8144b10",
 "metadata": {},
 "outputs": [],
 "source": [
 "%%bash\n",
 "\n",
 "BUNDLE=\"./MedNISTClassifier\"\n",
 "\n",
 "python -m monai.bundle run train \\\n",
 " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
 " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/train.yaml']\"\n",
 "\n",
 "# we'll use the trained network as the model object for this bundle\n",
 "mv model.ts $BUNDLE/models/model.ts\n",
 "\n",
 "# generate the saved dictionary file as well\n",
 "cd \"$BUNDLE/models\"\n",
 "python -c 'import torch; obj = torch.jit.load(\"model.ts\"); torch.save(obj.state_dict(),\"model.pt\")'"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "bbf58fac-b6d5-424d-9e98-1a30937f2116",
 "metadata": {},
 "source": [
 "As shown here the Torchscript object produced by the training is moved into the `models` directory of the bundle. The saved weight file is also produced by loading that file again and saving the state. Once again best practice would be to instead use `CheckpointSaver` to save weights in an output location before the final file is chosen for the bundle. \n",
 "\n",
 "## Evaluation\n",
 "\n",
 "To replicate the original example's code we'll need to put the evaluation loop code into a separate function and call it. The best practice would be to use an `Evaluator` class to do this with metric classes for assessing performance. Instead we'll stick close to the original code and demonstrate how to integrate your own code into a bundle.\n",
 "\n",
 "The first thing to do is put the evaluation loop into a function and store it in the `scripts` module within the bundle:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 9,
 "id": "fbad1a21-4dda-4b80-8e81-7d7e75307f9c",
 "metadata": {},
 "outputs": [],
 "source": [
 "!mkdir MedNISTClassifier/scripts"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 10,
 "id": "0c8725f7-f1cd-48f5-81a5-3f5a9ee03e9c",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Writing MedNISTClassifier/scripts/__init__.py\n"
 ]
 }
 ],
 "source": [
 "%%writefile MedNISTClassifier/scripts/__init__.py\n",
 "\n",
 "from monai.networks.utils import eval_mode\n",
 "\n",
 "def evaluate(net, dataloader, class_names, device):\n",
 "    with eval_mode(net):\n",
 "        for item in dataloader:\n",
 "            result = net(item[\"image\"].to(device))\n",
 "            prob = result.detach().to(\"cpu\")[0]\n",
 "            pred = class_names[prob.argmax()]\n",
 "            gt = item[\"class_name\"][0]\n",
 "            print(f\"Prediction: {pred}. Ground-truth: {gt}\")\n",
 "    "
 ]
 },
 {
 "cell_type": "markdown",
 "id": "abf40c4f-3349-4c40-9eef-811388ffd704",
 "metadata": {},
 "source": [
 "The `scripts` directory has to be a valid Python module so needs a `__init__.py` file; you can include other files and import them separately or import their members into this file. Here we defined `evaluate` to enclose the loop from the original script. 
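\n",
 "\n",
 "For example (a hypothetical alternative layout, not the one used in this bundle), helper code could live in its own file and be re-exported from the package so it remains accessible as `scripts.evaluate`:\n",
 "\n",
 "```python\n",
 "# scripts/util.py, holding the actual definition\n",
 "def evaluate(net, dataloader, class_names, device):\n",
 "    ...\n",
 "\n",
 "# scripts/__init__.py, re-exporting the member\n",
 "from .util import evaluate\n",
 "```\n",
 "\n",
 "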
This can then be called as part of an expression sequence \"program\":"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 15,
 "id": "b4e1f99a-a68b-4aeb-bcf2-842f26609b52",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Overwriting MedNISTClassifier/configs/evaluate.yaml\n"
 ]
 }
 ],
 "source": [
 "%%writefile MedNISTClassifier/configs/evaluate.yaml\n",
 "\n",
 "imports: \n",
 "- $import scripts\n",
 "\n",
 "max_items_to_print: 10\n",
 "\n",
 "ckpt_file: \"\"\n",
 "\n",
 "testdata:\n",
 " _target_: MedNISTDataset\n",
 " root_dir: '@root_dir'\n",
 " transform: '@transform'\n",
 " section: test\n",
 " download: false\n",
 " runtime_cache: true\n",
 "\n",
 "eval_dl:\n",
 " _target_: DataLoader\n",
 " dataset: '$@testdata[:@max_items_to_print]'\n",
 " batch_size: 1\n",
 " num_workers: 0\n",
 "\n",
 "# loads the weights from the given file (which needs to be set on the command line) then calls \"evaluate\"\n",
 "evaluate:\n",
 "- '$@net.load_state_dict(torch.load(@ckpt_file))'\n",
 "- '$scripts.evaluate(@net, @eval_dl, @class_names, @device)'\n"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "64bb2286-3107-49e9-8dbe-66fe1a2ae08c",
 "metadata": {},
 "source": [
 "Evaluation is then run on the command line, using \"evaluate\" as the program to run and providing a path to the model weights with the `ckpt_file` variable:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": 17,
 "id": "3c5fa39f-8798-4e41-8e2a-3a70a6be3906",
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "2023-08-24 14:14:09,479 - INFO - --- input summary of monai.bundle.scripts.run ---\n",
 "2023-08-24 14:14:09,479 - INFO - > run_id: 'evaluate'\n",
 "2023-08-24 14:14:09,479 - INFO - > meta_file: './MedNISTClassifier/configs/metadata.json'\n",
 "2023-08-24 14:14:09,479 - INFO - > config_file: ['./MedNISTClassifier/configs/common.yaml',\n",
 " './MedNISTClassifier/configs/evaluate.yaml']\n",
 "2023-08-24 14:14:09,479 - INFO - > ckpt_file: './MedNISTClassifier/models/model.pt'\n",
 "2023-08-24 14:14:09,479 - INFO - ---\n",
 "\n",
 "\n"
 ]
 },
 {
 "name": "stderr",
 "output_type": "stream",
 "text": [
 "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n",
 " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n"
 ]
 },
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Prediction: AbdomenCT. Ground-truth: AbdomenCT\n",
 "Prediction: BreastMRI. Ground-truth: BreastMRI\n",
 "Prediction: ChestCT. Ground-truth: ChestCT\n",
 "Prediction: CXR. Ground-truth: CXR\n",
 "Prediction: Hand. Ground-truth: Hand\n",
 "Prediction: HeadCT. Ground-truth: HeadCT\n",
 "Prediction: HeadCT. Ground-truth: HeadCT\n",
 "Prediction: CXR. Ground-truth: CXR\n",
 "Prediction: ChestCT. Ground-truth: ChestCT\n",
 "Prediction: BreastMRI. 
Ground-truth: BreastMRI\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "BUNDLE=\"./MedNISTClassifier\"\n", + "export PYTHONPATH=\"$BUNDLE\"\n", + "\n", + "python -m monai.bundle run evaluate \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/evaluate.yaml']\" \\\n", + " --ckpt_file \"$BUNDLE/models/model.pt\"" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python [conda env:monai]", + "language": "python", + "name": "conda-env-monai-py" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 54583baa91c61228724b3b16a7821ae1c77eec74 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Fri, 25 Aug 2023 00:45:17 +0100 Subject: [PATCH 04/26] Updated organisation of files --- ...ndle_intro.ipynb => 01_bundle_intro.ipynb} | 2 +- ....ipynb => 02_mednist_classification.ipynb} | 0 bundle/README.md | 37 ++++++++++++++----- .../{get_started.md => further_features.md} | 2 +- 4 files changed, 29 insertions(+), 12 deletions(-) rename bundle/{bundle_intro.ipynb => 01_bundle_intro.ipynb} (99%) rename bundle/{mednist_classification.ipynb => 02_mednist_classification.ipynb} (100%) rename bundle/{get_started.md => further_features.md} (99%) diff --git a/bundle/bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb similarity index 99% rename from bundle/bundle_intro.ipynb rename to bundle/01_bundle_intro.ipynb index d3ad582e87..6a1e1f0c7a 100644 --- a/bundle/bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -225,7 +225,7 @@ "* `test_tensor` is another object created by evaluating an expression, this one uses references to `shape` and `device` with the `@` syntax.\n", "* `test_config` is a list of expressions which are evaluated in order to act as the \"main\" or entry point for the program, in this case printing config information and then our created tensor.\n", "\n", - "As mentioned `$` and `@` are sigils with special meaning. A string starting with `$` is treated as a Python expression and is evaluated as such when needed, these need to be enclosed in quotes only when JSON/YAML need that to parse correctly. A variable starting with `@` is treated as reference to something we've defined in the script, eg `@shape`, and will only work for such definitions. Accessing a member of a definition before being interpreted can be done with `#`, so something like `@foo#bar` will access the `bar` member of a definition `foo`. \n", + "As mentioned `$` and `@` are sigils with special meaning. A string starting with `$` is treated as a Python expression and is evaluated as such when needed, these need to be enclosed in quotes only when JSON/YAML need that to parse correctly. A variable starting with `@` is treated as reference to something we've defined in the script, eg `@shape`, and will only work for such definitions. Accessing a member of a definition before being interpreted can be done with `#`, so something like `@foo#bar` will access the `bar` member of a definition `foo`. 
More information on the usage of these can be found at https://docs.monai.io/en/latest/config_syntax.html.\n", "\n", "We can run this \"program\" on the command line now using the bundle submodule and a few arguments to specify the metadata file and configuration file:" ] diff --git a/bundle/mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb similarity index 100% rename from bundle/mednist_classification.ipynb rename to bundle/02_mednist_classification.ipynb diff --git a/bundle/README.md b/bundle/README.md index 4175ce470c..f0cdf09dac 100644 --- a/bundle/README.md +++ b/bundle/README.md @@ -1,14 +1,31 @@ -# MONAI bundle -This folder contains the `getting started` tutorial and below code examples of training / inference for MONAI bundle. +# MONAI Bundle -### [introducing_config](./introducing_config) -A simple example to introduce the MONAI bundle config and parsing. +This directory contains the tutorials and materials for MONAI Bundles. A bundle is a self-describing network which +packages network weights, training/validation/testing scripts, Python code, and ancillary files into a defined +directory structure. Bundles can be downloaded from the model zoo and other sources using MONAI's inbuilt API. +These tutorials start with an introduction on how to construct bundles from scratch, and then go into more depth +on specific features. -### [customize component](./custom_component) -Example shows the use cases of bringing customized python components, such as transform, network, and metrics, in a configuration-based workflow. +All other bundle documentation can be found at https://docs.monai.io/en/latest/bundle_intro.html. -### [hybrid programming](./hybrid_programming) -Example shows how to parse the config files in your own python program, instantiate necessary components with python program and execute the inference. +Start the tutorial notebooks on constructing bundles: -### [python bundle workflow](./python_bundle_workflow) -Step by step tutorial examples show how to develop a bundle training or inference workflow in Python instead of JSON / YAML configs. +1. [Bundle Introduction](./01_bundle_intro.ipynb): create a very simple bundle from scratch. +2. [MedNIST Classification](./02_mednist_classification.ipynb): train a network using the bundle for doing a real task. + +More advanced topics are covered in this directory: + +* [Further Features](./further_features.md): covers more advanced features and uses of configs, command usage, and +programmatic use of bundles. + +* [introducing_config](./introducing_config): a simple example to introduce the MONAI bundle config and parsing +implementing a standalone program. + +* [customize component](./custom_component): illustrates bringing customized python components, such as transform, +network, and metrics, into a configuration-based workflow. + +* [hybrid programming](./hybrid_programming): shows how to parse the config files in your own python program, +instantiate necessary components with python program and execute the inference. + +* [python bundle workflow](./python_bundle_workflow): step-by-step tutorial examples show how to develop a bundle +training or inference workflow in Python instead of JSON / YAML configs. 
diff --git a/bundle/get_started.md b/bundle/further_features.md similarity index 99% rename from bundle/get_started.md rename to bundle/further_features.md index 51d034a9aa..b1c1480c9b 100644 --- a/bundle/get_started.md +++ b/bundle/further_features.md @@ -1,5 +1,5 @@ -# Get started to MONAI bundle +# Further Features of MONAI Bundles A MONAI bundle usually includes the stored weights of a model, TorchScript model, JSON files which include configs and metadata about the model, information for constructing training, inference, and post-processing transform sequences, plain-text description, legal information, and other data the model creator wishes to include. From 2dec291d007b470fb674790c86af7e7e1445c252 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Thu, 24 Aug 2023 23:49:54 +0000 Subject: [PATCH 05/26] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- bundle/README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/bundle/README.md b/bundle/README.md index f0cdf09dac..0665fd962f 100644 --- a/bundle/README.md +++ b/bundle/README.md @@ -1,6 +1,6 @@ # MONAI Bundle -This directory contains the tutorials and materials for MONAI Bundles. A bundle is a self-describing network which +This directory contains the tutorials and materials for MONAI Bundles. A bundle is a self-describing network which packages network weights, training/validation/testing scripts, Python code, and ancillary files into a defined directory structure. Bundles can be downloaded from the model zoo and other sources using MONAI's inbuilt API. These tutorials start with an introduction on how to construct bundles from scratch, and then go into more depth @@ -15,17 +15,17 @@ Start the tutorial notebooks on constructing bundles: More advanced topics are covered in this directory: -* [Further Features](./further_features.md): covers more advanced features and uses of configs, command usage, and +* [Further Features](./further_features.md): covers more advanced features and uses of configs, command usage, and programmatic use of bundles. -* [introducing_config](./introducing_config): a simple example to introduce the MONAI bundle config and parsing +* [introducing_config](./introducing_config): a simple example to introduce the MONAI bundle config and parsing implementing a standalone program. -* [customize component](./custom_component): illustrates bringing customized python components, such as transform, +* [customize component](./custom_component): illustrates bringing customized python components, such as transform, network, and metrics, into a configuration-based workflow. -* [hybrid programming](./hybrid_programming): shows how to parse the config files in your own python program, +* [hybrid programming](./hybrid_programming): shows how to parse the config files in your own python program, instantiate necessary components with python program and execute the inference. -* [python bundle workflow](./python_bundle_workflow): step-by-step tutorial examples show how to develop a bundle +* [python bundle workflow](./python_bundle_workflow): step-by-step tutorial examples show how to develop a bundle training or inference workflow in Python instead of JSON / YAML configs. 
From fc4acdeea69a9673a858e8540b0e6fe18abb7c5a Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Fri, 25 Aug 2023 00:57:11 +0100 Subject: [PATCH 06/26] Notebook update DCO Remediation Commit for Eric Kerfoot I, Eric Kerfoot , hereby add my Signed-off-by to this commit: db2f488248fd6770b99b2b172e83bfdf18f1bc73 I, Eric Kerfoot , hereby add my Signed-off-by to this commit: 04d581f96527e2b4e8cbaa7c0fea0ad1e0e4114e I, Eric Kerfoot , hereby add my Signed-off-by to this commit: ba1959645bd1a5b7af3925f2aab6430a05123b9f I, Eric Kerfoot , hereby add my Signed-off-by to this commit: 54583baa91c61228724b3b16a7821ae1c77eec74 Signed-off-by: Eric Kerfoot --- bundle/02_mednist_classification.ipynb | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index a2abd7d220..d7b80eb239 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -545,6 +545,20 @@ " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/evaluate.yaml']\" \\\n", " --ckpt_file \"$BUNDLE/models/model.pt\"" ] + }, + { + "cell_type": "markdown", + "id": "6fd62905-4ea8-4f08-bcea-823074fc4ce4", + "metadata": {}, + "source": [ + "## Summary and Next\n", + "\n", + "This tutorial has covered:\n", + "* Creating full training scripts in bundles\n", + "* Training a network then evaluating it's performance with scripts\n", + "\n", + "That's it to creating a bundle to match an existing script. It was mentioned in a number of places that best practice wasn't followed to stick to the original script's structure, so further tutorials will cover this in greater detail. " + ] } ], "metadata": { From 0000e1e3ff81eb221c5e7774997975ef15dbe956 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Wed, 30 Aug 2023 17:02:56 +0100 Subject: [PATCH 07/26] Adding third notebook Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 11 + bundle/02_mednist_classification.ipynb | 17 +- bundle/03_mednist_classification_v2.ipynb | 1070 +++++++++++++++++++++ 3 files changed, 1096 insertions(+), 2 deletions(-) create mode 100644 bundle/03_mednist_classification_v2.ipynb diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index 6a1e1f0c7a..636d51a792 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -5,6 +5,17 @@ "id": "e473187c-65db-40f2-b27a-236b3e8f2ad2", "metadata": {}, "source": [ + "Copyright (c) MONAI Consortium \n", + "Licensed under the Apache License, Version 2.0 (the \"License\"); \n", + "you may not use this file except in compliance with the License. \n", + "You may obtain a copy of the License at \n", + "    http://www.apache.org/licenses/LICENSE-2.0 \n", + "Unless required by applicable law or agreed to in writing, software \n", + "distributed under the License is distributed on an \"AS IS\" BASIS, \n", + "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", + "See the License for the specific language governing permissions and \n", + "limitations under the License.\n", + "\n", "# MONAI Bundles\n", "\n", "Bundles are essentially _self-descriptive networks_. They combine a network definition with the metadata about what they are meant to do, what they are used for, the nature of their inputs and outputs, and scripts (possibly with associated data) to train and infer using them. 
\n", diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index d7b80eb239..752575d70b 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -5,6 +5,17 @@ "id": "64bd2d8c-4799-4073-bc28-c3632589c525", "metadata": {}, "source": [ + "Copyright (c) MONAI Consortium \n", + "Licensed under the Apache License, Version 2.0 (the \"License\"); \n", + "you may not use this file except in compliance with the License. \n", + "You may obtain a copy of the License at \n", + "    http://www.apache.org/licenses/LICENSE-2.0 \n", + "Unless required by applicable law or agreed to in writing, software \n", + "distributed under the License is distributed on an \"AS IS\" BASIS, \n", + "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", + "See the License for the specific language governing permissions and \n", + "limitations under the License.\n", + "\n", "# MedNIST Classification Bundle\n", "\n", "In this tutorial we'll describe how to create a bundle for a classification network. This will include how to train and apply the network on the command line. MedNIST will be used as the dataset with the bundle based off the [MONAI 101 notebook](https://github.com/Project-MONAI/tutorials/blob/main/2d_classification/monai_101.ipynb).\n", @@ -325,7 +336,7 @@ " optimizer: \n", " _target_: torch.optim.Adam\n", " params: '$@net.parameters()'\n", - " lr: 0.00001\n", + " lr: 0.00001 # learning rate set slow so that you can see network improvement over epochs\n", " loss_function: \n", " _target_: torch.nn.CrossEntropyLoss\n", " inferer: \n", @@ -362,9 +373,11 @@ "\n", "BUNDLE=\"./MedNISTClassifier\"\n", "\n", + "# run the bundle with epochs set to 2 for speed during testing, change this to get a better result\n", "python -m monai.bundle run train \\\n", " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", - " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/train.yaml']\"\n", + " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/train.yaml']\" \\\n", + " --max_epochs 2\n", "\n", "# we'll use the trained network as the model object for this bundle\n", "mv mednist.ts $BUNDLE/models/model.ts\n", diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb new file mode 100644 index 0000000000..342b3cde59 --- /dev/null +++ b/bundle/03_mednist_classification_v2.ipynb @@ -0,0 +1,1070 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "64bd2d8c-4799-4073-bc28-c3632589c525", + "metadata": {}, + "source": [ + "Copyright (c) MONAI Consortium \n", + "Licensed under the Apache License, Version 2.0 (the \"License\"); \n", + "you may not use this file except in compliance with the License. \n", + "You may obtain a copy of the License at \n", + "    http://www.apache.org/licenses/LICENSE-2.0 \n", + "Unless required by applicable law or agreed to in writing, software \n", + "distributed under the License is distributed on an \"AS IS\" BASIS, \n", + "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", + "See the License for the specific language governing permissions and \n", + "limitations under the License.\n", + "\n", + "# MedNIST Classification Bundle\n", + "\n", + "In this tutorial we'll revisit the bundle replicating [MONAI 101 notebook](https://github.com/Project-MONAI/tutorials/blob/main/2d_classification/monai_101.ipynb) and add more features representing best practice concepts. 
This will include evaluation and checkpoint saving techniques.\n", + "\n", + "We'll first create a bundle very much like in the previous tutorial with the same metadata and common script file:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "eb9dc6ec-13da-4a37-8afa-28e2766b9343", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[01;34mMedNISTClassifier_v2\u001b[00m\n", + "├── \u001b[01;34mconfigs\u001b[00m\n", + "│   ├── inference.json\n", + "│   └── metadata.json\n", + "├── \u001b[01;34mdocs\u001b[00m\n", + "│   └── README.md\n", + "├── LICENSE\n", + "└── \u001b[01;34mmodels\u001b[00m\n", + "\n", + "3 directories, 4 files\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "python -m monai.bundle init_bundle MedNISTClassifier_v2\n", + "tree MedNISTClassifier_v2" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting MedNISTClassifier_v2/configs/metadata.json\n" + ] + } + ], + "source": [ + "%%writefile MedNISTClassifier_v2/configs/metadata.json\n", + "\n", + "{\n", + " \"version\": \"0.0.1\",\n", + " \"changelog\": {\n", + " \"0.0.1\": \"Initial version\"\n", + " },\n", + " \"monai_version\": \"1.2.0\",\n", + " \"pytorch_version\": \"2.0.0\",\n", + " \"numpy_version\": \"1.23.5\",\n", + " \"optional_packages_version\": {},\n", + " \"name\": \"MedNISTClassifier\",\n", + " \"task\": \"MedNIST Classification Network\",\n", + " \"description\": \"This is a demo network for classifying MedNIST images by type/modality\",\n", + " \"authors\": \"Your Name Here\",\n", + " \"copyright\": \"Copyright (c) Your Name Here\",\n", + " \"data_source\": \"MedNIST dataset kindly made available by Dr. Bradley J. Erickson M.D., Ph.D. (Department of Radiology, Mayo Clinic)\",\n", + " \"data_type\": \"jpeg\",\n", + " \"intended_use\": \"This is suitable for demonstration only\",\n", + " \"network_data_format\": {\n", + " \"inputs\": {\n", + " \"image\": {\n", + " \"type\": \"image\",\n", + " \"format\": \"magnitude\",\n", + " \"modality\": \"any\",\n", + " \"num_channels\": 1,\n", + " \"spatial_shape\": [64, 64],\n", + " \"dtype\": \"float32\",\n", + " \"value_range\": [0, 1],\n", + " \"is_patch_data\": false,\n", + " \"channel_def\": {\n", + " \"0\": \"image\"\n", + " }\n", + " }\n", + " },\n", + " \"outputs\": {\n", + " \"pred\": {\n", + " \"type\": \"probabilities\",\n", + " \"format\": \"classes\",\n", + " \"num_channels\": 6,\n", + " \"spatial_shape\": [6],\n", + " \"dtype\": \"float32\",\n", + " \"value_range\": [0, 1],\n", + " \"is_patch_data\": false,\n", + " \"channel_def\": {\n", + " \"0\": \"AbdomenCT\",\n", + " \"1\": \"BreastMRI\",\n", + " \"2\": \"CXR\",\n", + " \"3\": \"ChestCT\",\n", + " \"4\": \"Hand\",\n", + " \"5\": \"HeadCT\"\n", + " }\n", + " }\n", + " }\n", + " }\n", + "}" + ] + }, + { + "cell_type": "markdown", + "id": "04826c73-7c26-4c5e-8d2a-8968c3954b5a", + "metadata": {}, + "source": [ + "As you've likely seen in outputs, there should be a `logging.conf` file in the `configs` directory to set up the Python logger appropriately. 
This will improve the output we get in the notebook:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "0cb1b023-d192-4ad7-b2eb-c4a2c6b42b84", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing MedNISTClassifier_v2/configs/logging.conf\n" + ] + } + ], + "source": [ + "%%writefile MedNISTClassifier_v2/configs/logging.conf\n", + "\n", + "[loggers]\n", + "keys=root\n", + "\n", + "[handlers]\n", + "keys=consoleHandler\n", + "\n", + "[formatters]\n", + "keys=fullFormatter\n", + "\n", + "[logger_root]\n", + "level=INFO\n", + "handlers=consoleHandler\n", + "\n", + "[handler_consoleHandler]\n", + "class=StreamHandler\n", + "level=INFO\n", + "formatter=fullFormatter\n", + "args=(sys.stdout,)\n", + "\n", + "[formatter_fullFormatter]\n", + "format=%(asctime)s - %(name)s - %(levelname)s - %(message)s\n" + ] + }, + { + "cell_type": "markdown", + "id": "b306ff33-c39b-4822-b6d4-346987cfe87b", + "metadata": {}, + "source": [ + "We'll change the common file slightly by adding some extra symbols, specifically `bundle_root` which should always be present in bundles. We'll keep `root_dir` since it's used to determine where MedNIST is downloaded to." + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "d11681af-3210-4b2b-b7bd-8ad8dedfe230", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting MedNISTClassifier_v2/configs/common.yaml\n" + ] + } + ], + "source": [ + "%%writefile MedNISTClassifier_v2/configs/common.yaml\n", + "\n", + "# added a few more imports\n", + "imports: \n", + "- $import torch\n", + "- $import datetime\n", + "- $import os\n", + "\n", + "root_dir: .\n", + "\n", + "# use constants from MONAI instead of hard-coding names\n", + "image: $monai.utils.CommonKeys.IMAGE\n", + "label: $monai.utils.CommonKeys.LABEL\n", + "pred: $monai.utils.CommonKeys.PRED\n", + "\n", + "# these are added definitions\n", + "bundle_root: .\n", + "ckpt_path: $@bundle_root + '/models/model.pt'\n", + "\n", + "# define a device for the network\n", + "device: '$torch.device(''cuda:0'')'\n", + "\n", + "# store the class names for inference later\n", + "class_names: [AbdomenCT, BreastMRI, CXR, ChestCT, Hand, HeadCT]\n", + "\n", + "# define the network separately, don't need to refer to MONAI types by name or import MONAI\n", + "network_def:\n", + " _target_: densenet121\n", + " spatial_dims: 2\n", + " in_channels: 1\n", + " out_channels: 6\n", + "\n", + "# define the network to be the given definition moved to the device\n", + "net: '$@network_def.to(@device)'\n", + "\n", + "# define a transform sequence as a list of transform objects instead of using Compose here\n", + "train_transforms:\n", + "- _target_: LoadImaged\n", + " keys: '@image'\n", + " image_only: true\n", + "- _target_: EnsureChannelFirstd\n", + " keys: '@image'\n", + "- _target_: ScaleIntensityd\n", + " keys: '@image'\n", + " " + ] + }, + { + "cell_type": "markdown", + "id": "eaf81ea7-9ea3-4548-a32e-992f0b9bc0ab", + "metadata": {}, + "source": [ + "\n", + "## Training\n", + "\n", + "For training we have the same elements again but we'll add a `SupervisedEvaluator` object to track model progress with handlers to save checkpoints. 
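As a rough standalone sketch of how these engine classes fit together (a toy network and in-memory data, illustrative values only, not part of this bundle), the same wiring can be written directly in Python:\n",
+    "\n",
+    "```python\n",
+    "import torch\n",
+    "from monai.data import DataLoader, Dataset\n",
+    "from monai.engines import SupervisedEvaluator, SupervisedTrainer\n",
+    "from monai.handlers import ValidationHandler\n",
+    "\n",
+    "# toy dictionary data standing in for MedNIST, just to make the sketch runnable\n",
+    "data = [{\"image\": torch.rand(1, 64, 64), \"label\": torch.tensor(0)} for _ in range(8)]\n",
+    "net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 6))\n",
+    "\n",
+    "evaluator = SupervisedEvaluator(\n",
+    "    device=torch.device(\"cpu\"),\n",
+    "    val_data_loader=DataLoader(Dataset(data), batch_size=4),\n",
+    "    network=net,\n",
+    ")\n",
+    "\n",
+    "trainer = SupervisedTrainer(\n",
+    "    device=torch.device(\"cpu\"),\n",
+    "    max_epochs=1,\n",
+    "    train_data_loader=DataLoader(Dataset(data), batch_size=4),\n",
+    "    network=net,\n",
+    "    optimizer=torch.optim.Adam(net.parameters(), 1e-3),\n",
+    "    loss_function=torch.nn.CrossEntropyLoss(),\n",
+    "    # the trainer invokes the evaluator every `interval` epochs through this handler\n",
+    "    train_handlers=[ValidationHandler(interval=1, validator=evaluator)],\n",
+    ")\n",
+    "trainer.run()\n",
+    "```\n",
+    "\n",
+    "The config below expresses the same wiring declaratively, adding checkpoint, logging, and metric handlers. 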
" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "4dfd052e-abe7-473a-bbf4-25674a3b20ea", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting MedNISTClassifier_v2/configs/train.yaml\n" + ] + } + ], + "source": [ + "%%writefile MedNISTClassifier_v2/configs/train.yaml\n", + "\n", + "max_epochs: 25\n", + "learning_rate: 0.00001 # learning rate, again artificially slow\n", + "val_interval: 1 # run validation every n'th epoch\n", + "save_interval: 1 # save the model weights every n'th epoch\n", + "\n", + "# choose a unique output subdirectory every time training is started, \n", + "output_dir: '$datetime.datetime.now().strftime(@root_dir+''/output/output_%y%m%d_%H%M%S'')'\n", + "\n", + "train_dataset:\n", + " _target_: MedNISTDataset\n", + " root_dir: '@root_dir'\n", + " transform: \n", + " _target_: Compose\n", + " transforms: '@train_transforms'\n", + " section: training\n", + " download: true\n", + "\n", + "train_dl:\n", + " _target_: DataLoader\n", + " dataset: '@train_dataset'\n", + " batch_size: 512\n", + " shuffle: true\n", + " num_workers: 4\n", + "\n", + "# separate dataset taking from the \"validation\" section\n", + "eval_dataset:\n", + " _target_: MedNISTDataset\n", + " root_dir: '@root_dir'\n", + " transform: \n", + " _target_: Compose\n", + " transforms: '$@train_transforms'\n", + " section: validation\n", + " download: true\n", + "\n", + "# separate dataloader for evaluation\n", + "eval_dl:\n", + " _target_: DataLoader\n", + " dataset: '@eval_dataset'\n", + " batch_size: 512\n", + " shuffle: false\n", + " num_workers: 4\n", + "\n", + "# transforms applied to network output, in this case applying activation, argmax, and one-hot-encoding\n", + "post_transform:\n", + " _target_: Compose\n", + " transforms:\n", + " - _target_: Activationsd\n", + " keys: '@pred'\n", + " softmax: true # apply softmax to the prediction to emphasize the most likely value\n", + " - _target_: AsDiscreted\n", + " keys: ['@label','@pred']\n", + " argmax: [false, true] # apply argmax to the prediction only to get a class index number\n", + " to_onehot: 6 # convert both prediction and label to one-hot format so that both have shape (6,)\n", + "\n", + "# separating out loss, inferer, and optimizer definitions\n", + "\n", + "loss_function:\n", + " _target_: torch.nn.CrossEntropyLoss\n", + "\n", + "inferer: \n", + " _target_: SimpleInferer\n", + "\n", + "optimizer: \n", + " _target_: torch.optim.Adam\n", + " params: '$@net.parameters()'\n", + " lr: '@learning_rate'\n", + "\n", + "# Handlers to load the checkpoint if present, run validation at the chosen interval, save the checkpoint\n", + "# at the chosen interval, log stats, and write the log to a file in the output directory.\n", + "handlers:\n", + "- _target_: CheckpointLoader\n", + " _disabled_: '$not os.path.exists(@ckpt_path)'\n", + " load_path: '@ckpt_path'\n", + " load_dict:\n", + " model: '@net'\n", + "- _target_: ValidationHandler\n", + " validator: '@evaluator'\n", + " epoch_level: true\n", + " interval: '@val_interval'\n", + "- _target_: CheckpointSaver\n", + " save_dir: '@output_dir'\n", + " save_dict:\n", + " model: '@net'\n", + " save_interval: '@save_interval'\n", + " save_final: true # save the final weights, either when the run finishes or is interrupted somehow\n", + "- _target_: StatsHandler\n", + " name: train_loss\n", + " tag_name: train_loss\n", + " output_transform: '$monai.handlers.from_engine([''loss''], first=True)' # print per-iteration loss\n", + "- 
_target_: LogfileHandler\n", + " output_dir: '@output_dir'\n", + "\n", + "trainer:\n", + " _target_: SupervisedTrainer\n", + " device: '@device'\n", + " max_epochs: '@max_epochs'\n", + " train_data_loader: '@train_dl'\n", + " network: '@net'\n", + " optimizer: '@optimizer'\n", + " loss_function: '@loss_function'\n", + " inferer: '@inferer'\n", + " train_handlers: '@handlers'\n", + "\n", + "# validation handlers which log stats and direct the log to a file\n", + "val_handlers:\n", + "- _target_: StatsHandler\n", + " name: val_stats\n", + " output_transform: '$lambda x: None'\n", + "- _target_: LogfileHandler\n", + " output_dir: '@output_dir'\n", + " \n", + "# Metrics to assess validation results, you can have more than one here but may \n", + "# need to adapt the format of pred and label.\n", + "metrics:\n", + " accuracy:\n", + " _target_: 'ignite.metrics.Accuracy'\n", + " output_transform: '$monai.handlers.from_engine([@pred, @label])'\n", + "\n", + "# runs the evaluation process, invoked by trainer via the ValidationHandler object\n", + "evaluator:\n", + " _target_: SupervisedEvaluator\n", + " device: '@device'\n", + " val_data_loader: '@eval_dl'\n", + " network: '@net'\n", + " inferer: '@inferer'\n", + " postprocessing: '@post_transform'\n", + " key_val_metric: '@metrics'\n", + " val_handlers: '@val_handlers'\n", + "\n", + "train:\n", + "- '$@trainer.run()'\n" + ] + }, + { + "cell_type": "markdown", + "id": "de752181-80b1-4221-9e4a-315e5f7f22a6", + "metadata": {}, + "source": [ + "We can now train as normal, specifying the logging config file and a maximum number of epochs you probably will want to set higher for a good result:" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "8357670d-fe69-4789-9b9a-77c0d8144b10", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-08-30 12:38:23,636 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-08-30 12:38:23,636 - INFO - > run_id: 'train'\n", + "2023-08-30 12:38:23,636 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", + "2023-08-30 12:38:23,636 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", + " './MedNISTClassifier_v2/configs/train.yaml']\n", + "2023-08-30 12:38:23,636 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", + "2023-08-30 12:38:23,636 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", + "2023-08-30 12:38:23,636 - INFO - > max_epochs: 2\n", + "2023-08-30 12:38:23,636 - INFO - ---\n", + "\n", + "\n", + "2023-08-30 12:38:23,636 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", + "2023-08-30 12:38:23,768 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", + "2023-08-30 12:38:23,768 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", + "2023-08-30 12:38:23,768 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Loading dataset: 100%|██████████| 47164/47164 [00:41<00:00, 1134.34it/s]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-08-30 12:39:05,994 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", + "2023-08-30 12:39:05,994 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", + "2023-08-30 12:39:05,994 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" + ] + }, + { + "name": "stderr", + "output_type": 
"stream", + "text": [ + "Loading dataset: 100%|██████████| 5895/5895 [00:05<00:00, 1135.59it/s]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-08-30 12:39:11,320 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run resuming from iteration 0, epoch 0 until 2 epochs\n", + "2023-08-30 12:39:12,457 - INFO - Epoch: 1/2, Iter: 1/93 -- train_loss: 1.8415 \n", + "2023-08-30 12:39:12,828 - INFO - Epoch: 1/2, Iter: 2/93 -- train_loss: 1.8107 \n", + "2023-08-30 12:39:13,194 - INFO - Epoch: 1/2, Iter: 3/93 -- train_loss: 1.7766 \n", + "2023-08-30 12:39:13,569 - INFO - Epoch: 1/2, Iter: 4/93 -- train_loss: 1.7330 \n", + "2023-08-30 12:39:13,951 - INFO - Epoch: 1/2, Iter: 5/93 -- train_loss: 1.7159 \n", + "2023-08-30 12:39:14,326 - INFO - Epoch: 1/2, Iter: 6/93 -- train_loss: 1.6599 \n", + "2023-08-30 12:39:14,698 - INFO - Epoch: 1/2, Iter: 7/93 -- train_loss: 1.6619 \n", + "2023-08-30 12:39:15,068 - INFO - Epoch: 1/2, Iter: 8/93 -- train_loss: 1.6289 \n", + "2023-08-30 12:39:15,442 - INFO - Epoch: 1/2, Iter: 9/93 -- train_loss: 1.5839 \n", + "2023-08-30 12:39:15,813 - INFO - Epoch: 1/2, Iter: 10/93 -- train_loss: 1.5505 \n", + "2023-08-30 12:39:16,184 - INFO - Epoch: 1/2, Iter: 11/93 -- train_loss: 1.5104 \n", + "2023-08-30 12:39:16,555 - INFO - Epoch: 1/2, Iter: 12/93 -- train_loss: 1.5082 \n", + "2023-08-30 12:39:16,928 - INFO - Epoch: 1/2, Iter: 13/93 -- train_loss: 1.4683 \n", + "2023-08-30 12:39:17,298 - INFO - Epoch: 1/2, Iter: 14/93 -- train_loss: 1.4428 \n", + "2023-08-30 12:39:17,669 - INFO - Epoch: 1/2, Iter: 15/93 -- train_loss: 1.4370 \n", + "2023-08-30 12:39:18,040 - INFO - Epoch: 1/2, Iter: 16/93 -- train_loss: 1.4218 \n", + "2023-08-30 12:39:18,413 - INFO - Epoch: 1/2, Iter: 17/93 -- train_loss: 1.3643 \n", + "2023-08-30 12:39:18,788 - INFO - Epoch: 1/2, Iter: 18/93 -- train_loss: 1.3395 \n", + "2023-08-30 12:39:19,156 - INFO - Epoch: 1/2, Iter: 19/93 -- train_loss: 1.3353 \n", + "2023-08-30 12:39:19,526 - INFO - Epoch: 1/2, Iter: 20/93 -- train_loss: 1.2964 \n", + "2023-08-30 12:39:19,899 - INFO - Epoch: 1/2, Iter: 21/93 -- train_loss: 1.2980 \n", + "2023-08-30 12:39:20,269 - INFO - Epoch: 1/2, Iter: 22/93 -- train_loss: 1.2524 \n", + "2023-08-30 12:39:20,637 - INFO - Epoch: 1/2, Iter: 23/93 -- train_loss: 1.2426 \n", + "2023-08-30 12:39:21,005 - INFO - Epoch: 1/2, Iter: 24/93 -- train_loss: 1.2124 \n", + "2023-08-30 12:39:21,384 - INFO - Epoch: 1/2, Iter: 25/93 -- train_loss: 1.2232 \n", + "2023-08-30 12:39:21,755 - INFO - Epoch: 1/2, Iter: 26/93 -- train_loss: 1.2067 \n", + "2023-08-30 12:39:22,145 - INFO - Epoch: 1/2, Iter: 27/93 -- train_loss: 1.1653 \n", + "2023-08-30 12:39:22,519 - INFO - Epoch: 1/2, Iter: 28/93 -- train_loss: 1.1216 \n", + "2023-08-30 12:39:22,899 - INFO - Epoch: 1/2, Iter: 29/93 -- train_loss: 1.1002 \n", + "2023-08-30 12:39:23,268 - INFO - Epoch: 1/2, Iter: 30/93 -- train_loss: 1.0889 \n", + "2023-08-30 12:39:23,635 - INFO - Epoch: 1/2, Iter: 31/93 -- train_loss: 1.0906 \n", + "2023-08-30 12:39:24,005 - INFO - Epoch: 1/2, Iter: 32/93 -- train_loss: 1.0542 \n", + "2023-08-30 12:39:24,379 - INFO - Epoch: 1/2, Iter: 33/93 -- train_loss: 1.0505 \n", + "2023-08-30 12:39:24,752 - INFO - Epoch: 1/2, Iter: 34/93 -- train_loss: 1.0479 \n", + "2023-08-30 12:39:25,121 - INFO - Epoch: 1/2, Iter: 35/93 -- train_loss: 0.9899 \n", + "2023-08-30 12:39:25,497 - INFO - Epoch: 1/2, Iter: 36/93 -- train_loss: 1.0060 \n", + "2023-08-30 12:39:25,877 - INFO - Epoch: 1/2, Iter: 37/93 -- train_loss: 0.9894 \n", + "2023-08-30 12:39:26,250 - 
INFO - Epoch: 1/2, Iter: 38/93 -- train_loss: 0.9567 \n", + "2023-08-30 12:39:26,618 - INFO - Epoch: 1/2, Iter: 39/93 -- train_loss: 0.9446 \n", + "2023-08-30 12:39:26,998 - INFO - Epoch: 1/2, Iter: 40/93 -- train_loss: 0.9262 \n", + "2023-08-30 12:39:27,374 - INFO - Epoch: 1/2, Iter: 41/93 -- train_loss: 0.9277 \n", + "2023-08-30 12:39:27,743 - INFO - Epoch: 1/2, Iter: 42/93 -- train_loss: 0.8966 \n", + "2023-08-30 12:39:28,112 - INFO - Epoch: 1/2, Iter: 43/93 -- train_loss: 0.8847 \n", + "2023-08-30 12:39:28,490 - INFO - Epoch: 1/2, Iter: 44/93 -- train_loss: 0.8708 \n", + "2023-08-30 12:39:28,865 - INFO - Epoch: 1/2, Iter: 45/93 -- train_loss: 0.8846 \n", + "2023-08-30 12:39:29,237 - INFO - Epoch: 1/2, Iter: 46/93 -- train_loss: 0.8167 \n", + "2023-08-30 12:39:29,611 - INFO - Epoch: 1/2, Iter: 47/93 -- train_loss: 0.8477 \n", + "2023-08-30 12:39:29,982 - INFO - Epoch: 1/2, Iter: 48/93 -- train_loss: 0.8050 \n", + "2023-08-30 12:39:30,358 - INFO - Epoch: 1/2, Iter: 49/93 -- train_loss: 0.7793 \n", + "2023-08-30 12:39:30,729 - INFO - Epoch: 1/2, Iter: 50/93 -- train_loss: 0.7661 \n", + "2023-08-30 12:39:31,101 - INFO - Epoch: 1/2, Iter: 51/93 -- train_loss: 0.7868 \n", + "2023-08-30 12:39:31,610 - INFO - Epoch: 1/2, Iter: 52/93 -- train_loss: 0.7492 \n", + "2023-08-30 12:39:31,984 - INFO - Epoch: 1/2, Iter: 53/93 -- train_loss: 0.7325 \n", + "2023-08-30 12:39:32,355 - INFO - Epoch: 1/2, Iter: 54/93 -- train_loss: 0.7154 \n", + "2023-08-30 12:39:32,723 - INFO - Epoch: 1/2, Iter: 55/93 -- train_loss: 0.7304 \n", + "2023-08-30 12:39:33,094 - INFO - Epoch: 1/2, Iter: 56/93 -- train_loss: 0.6743 \n", + "2023-08-30 12:39:33,478 - INFO - Epoch: 1/2, Iter: 57/93 -- train_loss: 0.6978 \n", + "2023-08-30 12:39:33,850 - INFO - Epoch: 1/2, Iter: 58/93 -- train_loss: 0.6747 \n", + "2023-08-30 12:39:34,220 - INFO - Epoch: 1/2, Iter: 59/93 -- train_loss: 0.7037 \n", + "2023-08-30 12:39:34,591 - INFO - Epoch: 1/2, Iter: 60/93 -- train_loss: 0.6550 \n", + "2023-08-30 12:39:34,968 - INFO - Epoch: 1/2, Iter: 61/93 -- train_loss: 0.6728 \n", + "2023-08-30 12:39:35,340 - INFO - Epoch: 1/2, Iter: 62/93 -- train_loss: 0.6274 \n", + "2023-08-30 12:39:35,709 - INFO - Epoch: 1/2, Iter: 63/93 -- train_loss: 0.6296 \n", + "2023-08-30 12:39:36,080 - INFO - Epoch: 1/2, Iter: 64/93 -- train_loss: 0.6272 \n", + "2023-08-30 12:39:36,456 - INFO - Epoch: 1/2, Iter: 65/93 -- train_loss: 0.6205 \n", + "2023-08-30 12:39:36,828 - INFO - Epoch: 1/2, Iter: 66/93 -- train_loss: 0.5981 \n", + "2023-08-30 12:39:37,197 - INFO - Epoch: 1/2, Iter: 67/93 -- train_loss: 0.5998 \n", + "2023-08-30 12:39:37,574 - INFO - Epoch: 1/2, Iter: 68/93 -- train_loss: 0.5809 \n", + "2023-08-30 12:39:37,951 - INFO - Epoch: 1/2, Iter: 69/93 -- train_loss: 0.5781 \n", + "2023-08-30 12:39:38,322 - INFO - Epoch: 1/2, Iter: 70/93 -- train_loss: 0.5665 \n", + "2023-08-30 12:39:38,691 - INFO - Epoch: 1/2, Iter: 71/93 -- train_loss: 0.5403 \n", + "2023-08-30 12:39:39,063 - INFO - Epoch: 1/2, Iter: 72/93 -- train_loss: 0.5393 \n", + "2023-08-30 12:39:39,443 - INFO - Epoch: 1/2, Iter: 73/93 -- train_loss: 0.5547 \n", + "2023-08-30 12:39:39,815 - INFO - Epoch: 1/2, Iter: 74/93 -- train_loss: 0.5080 \n", + "2023-08-30 12:39:40,185 - INFO - Epoch: 1/2, Iter: 75/93 -- train_loss: 0.5292 \n", + "2023-08-30 12:39:40,557 - INFO - Epoch: 1/2, Iter: 76/93 -- train_loss: 0.4856 \n", + "2023-08-30 12:39:40,932 - INFO - Epoch: 1/2, Iter: 77/93 -- train_loss: 0.4987 \n", + "2023-08-30 12:39:41,304 - INFO - Epoch: 1/2, Iter: 78/93 -- train_loss: 0.4931 \n", + "2023-08-30 
12:39:41,674 - INFO - Epoch: 1/2, Iter: 79/93 -- train_loss: 0.4819 \n", + "2023-08-30 12:39:42,047 - INFO - Epoch: 1/2, Iter: 80/93 -- train_loss: 0.4818 \n", + "2023-08-30 12:39:42,424 - INFO - Epoch: 1/2, Iter: 81/93 -- train_loss: 0.4978 \n", + "2023-08-30 12:39:42,804 - INFO - Epoch: 1/2, Iter: 82/93 -- train_loss: 0.4684 \n", + "2023-08-30 12:39:43,175 - INFO - Epoch: 1/2, Iter: 83/93 -- train_loss: 0.4431 \n", + "2023-08-30 12:39:43,555 - INFO - Epoch: 1/2, Iter: 84/93 -- train_loss: 0.4568 \n", + "2023-08-30 12:39:43,937 - INFO - Epoch: 1/2, Iter: 85/93 -- train_loss: 0.4712 \n", + "2023-08-30 12:39:44,313 - INFO - Epoch: 1/2, Iter: 86/93 -- train_loss: 0.4307 \n", + "2023-08-30 12:39:44,683 - INFO - Epoch: 1/2, Iter: 87/93 -- train_loss: 0.4360 \n", + "2023-08-30 12:39:45,235 - INFO - Epoch: 1/2, Iter: 88/93 -- train_loss: 0.4141 \n", + "2023-08-30 12:39:45,615 - INFO - Epoch: 1/2, Iter: 89/93 -- train_loss: 0.4159 \n", + "2023-08-30 12:39:45,990 - INFO - Epoch: 1/2, Iter: 90/93 -- train_loss: 0.4035 \n", + "2023-08-30 12:39:46,359 - INFO - Epoch: 1/2, Iter: 91/93 -- train_loss: 0.3963 \n", + "2023-08-30 12:39:46,731 - INFO - Epoch: 1/2, Iter: 92/93 -- train_loss: 0.4143 \n", + "2023-08-30 12:39:46,962 - INFO - Epoch: 1/2, Iter: 93/93 -- train_loss: 0.3548 \n", + "2023-08-30 12:39:46,963 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", + "2023-08-30 12:39:56,039 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9889737065309584\n", + "2023-08-30 12:39:56,040 - INFO - Epoch[1] Metrics -- accuracy: 0.9890 \n", + "2023-08-30 12:39:56,040 - INFO - Key metric: accuracy best value: 0.9889737065309584 at epoch: 1\n", + "2023-08-30 12:39:56,040 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:08.961\n", + "2023-08-30 12:39:56,040 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:09.077\n", + "2023-08-30 12:39:56,161 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 1\n", + "2023-08-30 12:39:56,161 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[1] Complete. 
Time taken: 00:00:44.714\n", + "2023-08-30 12:39:56,769 - INFO - Epoch: 2/2, Iter: 1/93 -- train_loss: 0.3996 \n", + "2023-08-30 12:39:57,157 - INFO - Epoch: 2/2, Iter: 2/93 -- train_loss: 0.3662 \n", + "2023-08-30 12:39:57,528 - INFO - Epoch: 2/2, Iter: 3/93 -- train_loss: 0.3753 \n", + "2023-08-30 12:39:57,902 - INFO - Epoch: 2/2, Iter: 4/93 -- train_loss: 0.3637 \n", + "2023-08-30 12:39:58,279 - INFO - Epoch: 2/2, Iter: 5/93 -- train_loss: 0.3660 \n", + "2023-08-30 12:39:58,655 - INFO - Epoch: 2/2, Iter: 6/93 -- train_loss: 0.3651 \n", + "2023-08-30 12:39:59,025 - INFO - Epoch: 2/2, Iter: 7/93 -- train_loss: 0.3792 \n", + "2023-08-30 12:39:59,400 - INFO - Epoch: 2/2, Iter: 8/93 -- train_loss: 0.3327 \n", + "2023-08-30 12:39:59,782 - INFO - Epoch: 2/2, Iter: 9/93 -- train_loss: 0.3364 \n", + "2023-08-30 12:40:00,154 - INFO - Epoch: 2/2, Iter: 10/93 -- train_loss: 0.3670 \n", + "2023-08-30 12:40:00,524 - INFO - Epoch: 2/2, Iter: 11/93 -- train_loss: 0.3640 \n", + "2023-08-30 12:40:00,898 - INFO - Epoch: 2/2, Iter: 12/93 -- train_loss: 0.3332 \n", + "2023-08-30 12:40:01,277 - INFO - Epoch: 2/2, Iter: 13/93 -- train_loss: 0.3037 \n", + "2023-08-30 12:40:01,649 - INFO - Epoch: 2/2, Iter: 14/93 -- train_loss: 0.3297 \n", + "2023-08-30 12:40:02,018 - INFO - Epoch: 2/2, Iter: 15/93 -- train_loss: 0.3120 \n", + "2023-08-30 12:40:02,390 - INFO - Epoch: 2/2, Iter: 16/93 -- train_loss: 0.3109 \n", + "2023-08-30 12:40:02,769 - INFO - Epoch: 2/2, Iter: 17/93 -- train_loss: 0.3292 \n", + "2023-08-30 12:40:03,141 - INFO - Epoch: 2/2, Iter: 18/93 -- train_loss: 0.3157 \n", + "2023-08-30 12:40:03,510 - INFO - Epoch: 2/2, Iter: 19/93 -- train_loss: 0.3049 \n", + "2023-08-30 12:40:03,882 - INFO - Epoch: 2/2, Iter: 20/93 -- train_loss: 0.2881 \n", + "2023-08-30 12:40:04,262 - INFO - Epoch: 2/2, Iter: 21/93 -- train_loss: 0.2818 \n", + "2023-08-30 12:40:04,634 - INFO - Epoch: 2/2, Iter: 22/93 -- train_loss: 0.2728 \n", + "2023-08-30 12:40:05,003 - INFO - Epoch: 2/2, Iter: 23/93 -- train_loss: 0.2728 \n", + "2023-08-30 12:40:05,375 - INFO - Epoch: 2/2, Iter: 24/93 -- train_loss: 0.2852 \n", + "2023-08-30 12:40:05,753 - INFO - Epoch: 2/2, Iter: 25/93 -- train_loss: 0.2658 \n", + "2023-08-30 12:40:06,126 - INFO - Epoch: 2/2, Iter: 26/93 -- train_loss: 0.2662 \n", + "2023-08-30 12:40:06,495 - INFO - Epoch: 2/2, Iter: 27/93 -- train_loss: 0.2818 \n", + "2023-08-30 12:40:06,868 - INFO - Epoch: 2/2, Iter: 28/93 -- train_loss: 0.2564 \n", + "2023-08-30 12:40:07,248 - INFO - Epoch: 2/2, Iter: 29/93 -- train_loss: 0.2550 \n", + "2023-08-30 12:40:07,622 - INFO - Epoch: 2/2, Iter: 30/93 -- train_loss: 0.2681 \n", + "2023-08-30 12:40:07,992 - INFO - Epoch: 2/2, Iter: 31/93 -- train_loss: 0.2559 \n", + "2023-08-30 12:40:08,365 - INFO - Epoch: 2/2, Iter: 32/93 -- train_loss: 0.2672 \n", + "2023-08-30 12:40:08,751 - INFO - Epoch: 2/2, Iter: 33/93 -- train_loss: 0.2685 \n", + "2023-08-30 12:40:09,124 - INFO - Epoch: 2/2, Iter: 34/93 -- train_loss: 0.2602 \n", + "2023-08-30 12:40:09,737 - INFO - Epoch: 2/2, Iter: 35/93 -- train_loss: 0.2622 \n", + "2023-08-30 12:40:10,111 - INFO - Epoch: 2/2, Iter: 36/93 -- train_loss: 0.2438 \n", + "2023-08-30 12:40:10,488 - INFO - Epoch: 2/2, Iter: 37/93 -- train_loss: 0.2609 \n", + "2023-08-30 12:40:10,863 - INFO - Epoch: 2/2, Iter: 38/93 -- train_loss: 0.2211 \n", + "2023-08-30 12:40:11,236 - INFO - Epoch: 2/2, Iter: 39/93 -- train_loss: 0.2437 \n", + "2023-08-30 12:40:11,609 - INFO - Epoch: 2/2, Iter: 40/93 -- train_loss: 0.2296 \n", + "2023-08-30 12:40:11,989 - INFO - Epoch: 2/2, Iter: 
41/93 -- train_loss: 0.2312 \n", + "2023-08-30 12:40:12,361 - INFO - Epoch: 2/2, Iter: 42/93 -- train_loss: 0.2214 \n", + "2023-08-30 12:40:12,733 - INFO - Epoch: 2/2, Iter: 43/93 -- train_loss: 0.2339 \n", + "2023-08-30 12:40:13,112 - INFO - Epoch: 2/2, Iter: 44/93 -- train_loss: 0.2359 \n", + "2023-08-30 12:40:13,492 - INFO - Epoch: 2/2, Iter: 45/93 -- train_loss: 0.2351 \n", + "2023-08-30 12:40:13,868 - INFO - Epoch: 2/2, Iter: 46/93 -- train_loss: 0.2161 \n", + "2023-08-30 12:40:14,238 - INFO - Epoch: 2/2, Iter: 47/93 -- train_loss: 0.2140 \n", + "2023-08-30 12:40:14,617 - INFO - Epoch: 2/2, Iter: 48/93 -- train_loss: 0.2275 \n", + "2023-08-30 12:40:14,999 - INFO - Epoch: 2/2, Iter: 49/93 -- train_loss: 0.2160 \n", + "2023-08-30 12:40:15,373 - INFO - Epoch: 2/2, Iter: 50/93 -- train_loss: 0.1924 \n", + "2023-08-30 12:40:15,751 - INFO - Epoch: 2/2, Iter: 51/93 -- train_loss: 0.2017 \n", + "2023-08-30 12:40:16,135 - INFO - Epoch: 2/2, Iter: 52/93 -- train_loss: 0.1886 \n", + "2023-08-30 12:40:16,516 - INFO - Epoch: 2/2, Iter: 53/93 -- train_loss: 0.2080 \n", + "2023-08-30 12:40:16,890 - INFO - Epoch: 2/2, Iter: 54/93 -- train_loss: 0.1862 \n", + "2023-08-30 12:40:17,264 - INFO - Epoch: 2/2, Iter: 55/93 -- train_loss: 0.2107 \n", + "2023-08-30 12:40:17,636 - INFO - Epoch: 2/2, Iter: 56/93 -- train_loss: 0.1911 \n", + "2023-08-30 12:40:18,012 - INFO - Epoch: 2/2, Iter: 57/93 -- train_loss: 0.1933 \n", + "2023-08-30 12:40:18,389 - INFO - Epoch: 2/2, Iter: 58/93 -- train_loss: 0.1964 \n", + "2023-08-30 12:40:18,759 - INFO - Epoch: 2/2, Iter: 59/93 -- train_loss: 0.1780 \n", + "2023-08-30 12:40:19,134 - INFO - Epoch: 2/2, Iter: 60/93 -- train_loss: 0.1969 \n", + "2023-08-30 12:40:19,510 - INFO - Epoch: 2/2, Iter: 61/93 -- train_loss: 0.2030 \n", + "2023-08-30 12:40:19,890 - INFO - Epoch: 2/2, Iter: 62/93 -- train_loss: 0.1805 \n", + "2023-08-30 12:40:20,262 - INFO - Epoch: 2/2, Iter: 63/93 -- train_loss: 0.1901 \n", + "2023-08-30 12:40:20,635 - INFO - Epoch: 2/2, Iter: 64/93 -- train_loss: 0.1830 \n", + "2023-08-30 12:40:21,012 - INFO - Epoch: 2/2, Iter: 65/93 -- train_loss: 0.1713 \n", + "2023-08-30 12:40:21,385 - INFO - Epoch: 2/2, Iter: 66/93 -- train_loss: 0.1820 \n", + "2023-08-30 12:40:21,756 - INFO - Epoch: 2/2, Iter: 67/93 -- train_loss: 0.1912 \n", + "2023-08-30 12:40:22,154 - INFO - Epoch: 2/2, Iter: 68/93 -- train_loss: 0.1689 \n", + "2023-08-30 12:40:22,529 - INFO - Epoch: 2/2, Iter: 69/93 -- train_loss: 0.1651 \n", + "2023-08-30 12:40:22,900 - INFO - Epoch: 2/2, Iter: 70/93 -- train_loss: 0.1832 \n", + "2023-08-30 12:40:23,270 - INFO - Epoch: 2/2, Iter: 71/93 -- train_loss: 0.1659 \n", + "2023-08-30 12:40:23,643 - INFO - Epoch: 2/2, Iter: 72/93 -- train_loss: 0.1636 \n", + "2023-08-30 12:40:24,017 - INFO - Epoch: 2/2, Iter: 73/93 -- train_loss: 0.1625 \n", + "2023-08-30 12:40:24,389 - INFO - Epoch: 2/2, Iter: 74/93 -- train_loss: 0.1583 \n", + "2023-08-30 12:40:24,759 - INFO - Epoch: 2/2, Iter: 75/93 -- train_loss: 0.1654 \n", + "2023-08-30 12:40:25,131 - INFO - Epoch: 2/2, Iter: 76/93 -- train_loss: 0.1575 \n", + "2023-08-30 12:40:25,506 - INFO - Epoch: 2/2, Iter: 77/93 -- train_loss: 0.1678 \n", + "2023-08-30 12:40:25,879 - INFO - Epoch: 2/2, Iter: 78/93 -- train_loss: 0.1731 \n", + "2023-08-30 12:40:26,249 - INFO - Epoch: 2/2, Iter: 79/93 -- train_loss: 0.1732 \n", + "2023-08-30 12:40:26,620 - INFO - Epoch: 2/2, Iter: 80/93 -- train_loss: 0.1535 \n", + "2023-08-30 12:40:26,995 - INFO - Epoch: 2/2, Iter: 81/93 -- train_loss: 0.1750 \n", + "2023-08-30 12:40:27,367 - INFO - 
Epoch: 2/2, Iter: 82/93 -- train_loss: 0.1701 \n", + "2023-08-30 12:40:27,737 - INFO - Epoch: 2/2, Iter: 83/93 -- train_loss: 0.1671 \n", + "2023-08-30 12:40:28,109 - INFO - Epoch: 2/2, Iter: 84/93 -- train_loss: 0.1661 \n", + "2023-08-30 12:40:28,487 - INFO - Epoch: 2/2, Iter: 85/93 -- train_loss: 0.1436 \n", + "2023-08-30 12:40:28,858 - INFO - Epoch: 2/2, Iter: 86/93 -- train_loss: 0.1486 \n", + "2023-08-30 12:40:29,229 - INFO - Epoch: 2/2, Iter: 87/93 -- train_loss: 0.1446 \n", + "2023-08-30 12:40:29,601 - INFO - Epoch: 2/2, Iter: 88/93 -- train_loss: 0.1411 \n", + "2023-08-30 12:40:29,976 - INFO - Epoch: 2/2, Iter: 89/93 -- train_loss: 0.1547 \n", + "2023-08-30 12:40:30,346 - INFO - Epoch: 2/2, Iter: 90/93 -- train_loss: 0.1410 \n", + "2023-08-30 12:40:30,718 - INFO - Epoch: 2/2, Iter: 91/93 -- train_loss: 0.1753 \n", + "2023-08-30 12:40:31,088 - INFO - Epoch: 2/2, Iter: 92/93 -- train_loss: 0.1475 \n", + "2023-08-30 12:40:31,202 - INFO - Epoch: 2/2, Iter: 93/93 -- train_loss: 0.1644 \n", + "2023-08-30 12:40:31,202 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 1 until 2 epochs\n", + "2023-08-30 12:40:40,084 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9945151258128357\n", + "2023-08-30 12:40:40,084 - INFO - Epoch[2] Metrics -- accuracy: 0.9945 \n", + "2023-08-30 12:40:40,084 - INFO - Key metric: accuracy best value: 0.9945151258128357 at epoch: 2\n", + "2023-08-30 12:40:40,084 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[2] Complete. Time taken: 00:00:08.764\n", + "2023-08-30 12:40:40,084 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:08.882\n", + "2023-08-30 12:40:40,233 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 2\n", + "2023-08-30 12:40:40,233 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[2] Complete. Time taken: 00:00:44.072\n", + "2023-08-30 12:40:40,318 - ignite.engine.engine.SupervisedTrainer - INFO - Train completed, saved final checkpoint: output/output_230830_123911/model_final_iteration=186.pt\n", + "2023-08-30 12:40:40,318 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run complete. 
Time taken: 00:01:28.998\n"
+     ]
+    }
+   ],
+   "source": [
+    "%%bash\n",
+    "\n",
+    "BUNDLE=\"./MedNISTClassifier_v2\"\n",
+    "\n",
+    "python -m monai.bundle run train \\\n",
+    "    --bundle_root \"$BUNDLE\" \\\n",
+    "    --logging_file \"$BUNDLE/configs/logging.conf\" \\\n",
+    "    --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
+    "    --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/train.yaml']\" \\\n",
+    "    --max_epochs 2"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "627bf8a5-1524-425f-93f8-28e217f2adec",
+   "metadata": {},
+   "source": [
+    "Results and logs get put into unique timestamped directories:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "id": "00c84e2c-1709-4136-8612-87142026ac2e",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "\u001b[01;34moutput/output_230830_123911/\u001b[00m\n",
+      "├── log.txt\n",
+      "├── model_epoch=1.pt\n",
+      "├── model_epoch=2.pt\n",
+      "└── model_final_iteration=186.pt\n",
+      "\n",
+      "0 directories, 4 files\n"
+     ]
+    }
+   ],
+   "source": [
+    "!tree output/output_230830_123911/"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "5705ff79-fe58-410a-bb93-80b4f3fa2ea2",
+   "metadata": {},
+   "source": [
+    "## Inference\n",
+    "\n",
+    "What is also needed is an inference script which will apply a loaded network to every image in a given directory and write a result to a file or to the log output. For segmentation networks this should save generated segmentations to known locations, but for this classification network we'll stick to just printing results to the log. \n",
+    "\n",
+    "The first thing to do is create a test directory with only a few test images so we can demonstrate inference quickly:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 38,
+   "id": "3a957503-39e4-4f73-a989-ce6e4e2d3e9e",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "Loading dataset: 100%|██████████| 5895/5895 [00:03<00:00, 1771.10it/s]\n"
+     ]
+    },
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "MedNIST/AbdomenCT/001990.jpeg Label: 0\n",
+      "MedNIST/BreastMRI/007676.jpeg Label: 1\n",
+      "MedNIST/ChestCT/006763.jpeg Label: 3\n",
+      "MedNIST/CXR/001214.jpeg Label: 2\n",
+      "MedNIST/Hand/004427.jpeg Label: 4\n",
+      "MedNIST/HeadCT/003806.jpeg Label: 5\n",
+      "MedNIST/HeadCT/004638.jpeg Label: 5\n",
+      "MedNIST/CXR/005013.jpeg Label: 2\n",
+      "MedNIST/ChestCT/008275.jpeg Label: 3\n",
+      "MedNIST/BreastMRI/000630.jpeg Label: 1\n",
+      "MedNIST/BreastMRI/007547.jpeg Label: 1\n",
+      "MedNIST/BreastMRI/008425.jpeg Label: 1\n",
+      "MedNIST/AbdomenCT/003981.jpeg Label: 0\n",
+      "MedNIST/Hand/001130.jpeg Label: 4\n",
+      "MedNIST/BreastMRI/005118.jpeg Label: 1\n",
+      "MedNIST/CXR/006505.jpeg Label: 2\n",
+      "MedNIST/ChestCT/008218.jpeg Label: 3\n",
+      "MedNIST/HeadCT/005305.jpeg Label: 5\n",
+      "MedNIST/AbdomenCT/007871.jpeg Label: 0\n",
+      "MedNIST/Hand/007065.jpeg Label: 4\n"
+     ]
+    }
+   ],
+   "source": [
+    "from monai.apps import MedNISTDataset\n",
+    "\n",
+    "root_dir = \".\"  # assuming MedNIST was downloaded to the current directory\n",
+    "num_images = 20\n",
+    "dataset = MedNISTDataset(root_dir=root_dir, section=\"test\", download=False)\n",
+    "\n",
+    "!mkdir -p test_images\n",
+    "\n",
+    "for i in range(num_images):\n",
+    "    filename = dataset[i][\"image_meta_dict\"][\"filename_or_obj\"]\n",
+    "    print(filename, \"Label:\", dataset[i][\"label\"])\n",
+    "    !cp {root_dir}/{filename} test_images"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "ef85014c-d1eb-4a93-911b-f405eac74094", 
"metadata": {},
+   "source": [
+    "Next we'll create the inference script which will apply the network to all the files in the given directory (thus assuming all are images) and save the results to a csv file:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 90,
+   "id": "3c5556db-2e63-484c-9358-977b4c35d60f",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Overwriting MedNISTClassifier_v2/configs/inference.yaml\n"
+     ]
+    }
+   ],
+   "source": [
+    "%%writefile MedNISTClassifier_v2/configs/inference.yaml\n",
+    "\n",
+    "imports:\n",
+    "- $import glob\n",
+    "\n",
+    "input_dir: 'input'\n",
+    "# dataset is a list of dictionaries to work with dictionary transforms\n",
+    "input_files: '$[{@image: f} for f in sorted(glob.glob(@input_dir+''/*.*''))]'\n",
+    "\n",
+    "infer_dataset:\n",
+    "  _target_: Dataset\n",
+    "  data: '@input_files'\n",
+    "  transform: \n",
+    "    _target_: Compose\n",
+    "    transforms: '@train_transforms'\n",
+    "\n",
+    "infer_dl:\n",
+    "  _target_: DataLoader\n",
+    "  dataset: '@infer_dataset'\n",
+    "  batch_size: 1\n",
+    "  shuffle: false\n",
+    "  num_workers: 0\n",
+    "\n",
+    "# transforms applied to network output, same as those in training except \"label\" isn't present\n",
+    "post_transform:\n",
+    "  _target_: Compose\n",
+    "  transforms:\n",
+    "  - _target_: Activationsd\n",
+    "    keys: '@pred'\n",
+    "    softmax: true \n",
+    "  - _target_: AsDiscreted\n",
+    "    keys: ['@pred']\n",
+    "    argmax: true \n",
+    "\n",
+    "# handlers to load the checkpoint file (and fail if a file isn't found), and save classification results to a csv file\n",
+    "handlers:\n",
+    "- _target_: CheckpointLoader\n",
+    "  load_path: '@ckpt_path'\n",
+    "  load_dict:\n",
+    "    model: '@net'\n",
+    "- _target_: ClassificationSaver\n",
+    "  batch_transform: '$lambda batch: batch[0][@image].meta'\n",
+    "  output_transform: '$monai.handlers.from_engine([''pred''])'\n",
+    "\n",
+    "inferer: \n",
+    "  _target_: SimpleInferer\n",
+    "\n",
+    "evaluator:\n",
+    "  _target_: SupervisedEvaluator\n",
+    "  device: '@device'\n",
+    "  val_data_loader: '@infer_dl'\n",
+    "  network: '@net'\n",
+    "  inferer: '@inferer'\n",
+    "  postprocessing: '@post_transform'\n",
+    "  val_handlers: '@handlers'\n",
+    "\n",
+    "inference:\n",
+    "- '$@evaluator.run()'"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "5e9a706a-b135-4943-8245-0da8d5dad415",
+   "metadata": {},
+   "source": [
+    "Inference can now be run, specifying the checkpoint file to load as being one from our training run and the input directory as \"test_images\" which was created above:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 89,
+   "id": "acdcc111-f259-4701-8b1d-31fcf74398bc",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "2023-08-30 16:22:23,441 - INFO - --- input summary of monai.bundle.scripts.run ---\n",
+      "2023-08-30 16:22:23,442 - INFO - > run_id: 'inference'\n",
+      "2023-08-30 16:22:23,442 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n",
+      "2023-08-30 16:22:23,442 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n",
+      " './MedNISTClassifier_v2/configs/inference.yaml']\n",
+      "2023-08-30 16:22:23,442 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n",
+      "2023-08-30 16:22:23,442 - INFO - > bundle_root: './MedNISTClassifier_v2'\n",
+      "2023-08-30 16:22:23,442 - INFO - > 
input_dir: 'test_images'\n",
+      "2023-08-30 16:22:23,442 - INFO - > ckpt_path: 'output/output_230830_123911/model_final_iteration=186.pt'\n",
+      "2023-08-30 16:22:23,442 - INFO - > handlers#1#filename: 'pred.csv'\n",
+      "2023-08-30 16:22:23,442 - INFO - ---\n",
+      "\n",
+      "\n",
+      "2023-08-30 16:22:23,442 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n",
+      "2023-08-30 16:22:23,812 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n",
+      "2023-08-30 16:22:23,924 - ignite.engine.engine.SupervisedEvaluator - INFO - Restored all variables from output/output_230830_123911/model_final_iteration=186.pt\n",
+      "2023-08-30 16:22:24,801 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:00.876\n",
+      "2023-08-30 16:22:24,802 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:00.990\n"
+     ]
+    }
+   ],
+   "source": [
+    "%%bash\n",
+    "\n",
+    "BUNDLE=\"./MedNISTClassifier_v2\"\n",
+    "\n",
+    "python -m monai.bundle run inference \\\n",
+    "    --bundle_root \"$BUNDLE\" \\\n",
+    "    --logging_file \"$BUNDLE/configs/logging.conf\" \\\n",
+    "    --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
+    "    --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/inference.yaml']\" \\\n",
+    "    --ckpt_path 'output/output_230830_123911/model_final_iteration=186.pt' \\\n",
+    "    --input_dir test_images "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "955faa08-0552-4bff-ba84-238e9a404f62",
+   "metadata": {},
+   "source": [
+    "This will save the results of the inference to \"predictions.csv\" by default. You can change the output filename with an argument like `'--handlers#1#filename' pred.csv`, which directly changes the `filename` parameter of the appropriate handler. 
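For example, a complete invocation writing to `pred.csv` might look like the following (a sketch reusing the same checkpoint path as the run above):\n",
+    "\n",
+    "```bash\n",
+    "python -m monai.bundle run inference \\\n",
+    "    --bundle_root \"$BUNDLE\" \\\n",
+    "    --logging_file \"$BUNDLE/configs/logging.conf\" \\\n",
+    "    --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
+    "    --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/inference.yaml']\" \\\n",
+    "    --ckpt_path 'output/output_230830_123911/model_final_iteration=186.pt' \\\n",
+    "    --input_dir test_images \\\n",
+    "    '--handlers#1#filename' pred.csv\n",
+    "```\n",
+    "\n",
+    "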
Note the single quotes around the argument name, since Bash would otherwise interpret the hash sigil as the start of a comment.\n",
+ "\n",
+ "Looking at the output, the results aren't terribly legible:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 88,
+ "id": "4a695039-7a53-4f9a-9754-769a9f8ebac8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "test_images/000630.jpeg,1.0\n",
+ "test_images/001130.jpeg,4.0\n",
+ "test_images/001214.jpeg,2.0\n",
+ "test_images/001990.jpeg,0.0\n",
+ "test_images/003806.jpeg,5.0\n",
+ "test_images/003981.jpeg,0.0\n",
+ "test_images/004427.jpeg,4.0\n",
+ "test_images/004638.jpeg,5.0\n",
+ "test_images/005013.jpeg,2.0\n",
+ "test_images/005118.jpeg,1.0\n",
+ "test_images/005305.jpeg,5.0\n",
+ "test_images/006505.jpeg,2.0\n",
+ "test_images/006763.jpeg,3.0\n",
+ "test_images/007065.jpeg,4.0\n",
+ "test_images/007547.jpeg,1.0\n",
+ "test_images/007676.jpeg,1.0\n",
+ "test_images/007871.jpeg,0.0\n",
+ "test_images/008218.jpeg,3.0\n",
+ "test_images/008275.jpeg,3.0\n",
+ "test_images/008425.jpeg,1.0\n"
+ ]
+ }
+ ],
+ "source": [
+ "!cat predictions.csv"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a231c937-9ced-4a6d-b01c-3bc9a128fd62",
+ "metadata": {},
+ "source": [
+ "The second column is the predicted class, which we can use as an index into our list of class names to get something more readable:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 121,
+ "id": "1065f928-3f66-47af-aed4-be2f0443cf2f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "test_images/000630.jpeg BreastMRI\n",
+ "test_images/001130.jpeg Hand\n",
+ "test_images/001214.jpeg CXR\n",
+ "test_images/001990.jpeg AbdomenCT\n",
+ "test_images/003806.jpeg HeadCT\n",
+ "test_images/003981.jpeg AbdomenCT\n",
+ "test_images/004427.jpeg Hand\n",
+ "test_images/004638.jpeg HeadCT\n",
+ "test_images/005013.jpeg CXR\n",
+ "test_images/005118.jpeg BreastMRI\n",
+ "test_images/005305.jpeg HeadCT\n",
+ "test_images/006505.jpeg CXR\n",
+ "test_images/006763.jpeg ChestCT\n",
+ "test_images/007065.jpeg Hand\n",
+ "test_images/007547.jpeg BreastMRI\n",
+ "test_images/007676.jpeg BreastMRI\n",
+ "test_images/007871.jpeg AbdomenCT\n",
+ "test_images/008218.jpeg ChestCT\n",
+ "test_images/008275.jpeg ChestCT\n",
+ "test_images/008425.jpeg BreastMRI\n"
+ ]
+ }
+ ],
+ "source": [
+ "import numpy as np\n",
+ "\n",
+ "class_names = [\"AbdomenCT\", \"BreastMRI\", \"CXR\", \"ChestCT\", \"Hand\", \"HeadCT\"]\n",
+ "\n",
+ "# each row of the CSV is a filename and a float class index, so convert the index and look up the name\n",
+ "for fn, idx in np.loadtxt(\"predictions.csv\", delimiter=\",\", dtype=str):\n",
+ "    print(fn, class_names[int(float(idx))])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18a62139-8a21-4bb9-96d4-e86d61298c40",
+ "metadata": {},
+ "source": [
+ "## Summary and Next\n",
+ "\n",
+ "This tutorial has covered MONAI Bundle best practices:\n",
+ " * Separating common definitions into config files which are combined with application-specific config files\n",
+ " * Separating out definitions in config files for easier reading and changes\n",
+ " * Using Engine-based classes for training and validation\n",
+ " * Simple training run management with uniquely-created results directories\n",
+ " * Inference script to generate a results CSV file containing predictions\n",
+ " \n",
+ "The next tutorial will discuss creating bundles to wrap pre-existing Pytorch code so that you can get code into the bundle ecosystem without rewriting the world."
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python [conda env:monai]", + "language": "python", + "name": "conda-env-monai-py" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From b82a5f77adf721b8868d388679befd1ea13be687 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Thu, 31 Aug 2023 11:08:27 +0100 Subject: [PATCH 08/26] Last notebook started --- bundle/02_mednist_classification.ipynb | 4 + bundle/04_integrating_code.ipynb | 165 +++++++++++++++++++++++++ 2 files changed, 169 insertions(+) create mode 100644 bundle/04_integrating_code.ipynb diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index 752575d70b..88b0ac2f9a 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -20,6 +20,10 @@ "\n", "In this tutorial we'll describe how to create a bundle for a classification network. This will include how to train and apply the network on the command line. MedNIST will be used as the dataset with the bundle based off the [MONAI 101 notebook](https://github.com/Project-MONAI/tutorials/blob/main/2d_classification/monai_101.ipynb).\n", "\n", + "The dataset is kindly made available by Dr. Bradley J. Erickson M.D., Ph.D. (Department of Radiology, Mayo Clinic) under the Creative Commons CC BY-SA 4.0 license. If you use the MedNIST dataset, please acknowledge the source of the MedNIST dataset: the repository https://github.com/Project-MONAI/MONAI/ or the MedNIST tutorial for image classification https://github.com/Project-MONAI/MONAI/blob/master/examples/notebooks/mednist_tutorial.ipynb.\n", + "\n", + "This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.\n", + "\n", "First we'll consider a condensed version of the code from that notebook and go step-by-step how best to represent this as a bundle:" ] }, diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb new file mode 100644 index 0000000000..87e369bfb8 --- /dev/null +++ b/bundle/04_integrating_code.ipynb @@ -0,0 +1,165 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "64bd2d8c-4799-4073-bc28-c3632589c525", + "metadata": {}, + "source": [ + "Copyright (c) MONAI Consortium \n", + "Licensed under the Apache License, Version 2.0 (the \"License\"); \n", + "you may not use this file except in compliance with the License. \n", + "You may obtain a copy of the License at \n", + "    http://www.apache.org/licenses/LICENSE-2.0 \n", + "Unless required by applicable law or agreed to in writing, software \n", + "distributed under the License is distributed on an \"AS IS\" BASIS, \n", + "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", + "See the License for the specific language governing permissions and \n", + "limitations under the License.\n", + "\n", + "# Integrating Non-MONAI Code Into a Bundle\n", + "\n", + "This notebook will discuss strategies for integrating non-MONAI deep learning code into a bundle. 
This allows existing Pytorch workflows to be integrated into the bundle ecosystem, for example as a distributable bundle for the model zoo or some other repository like Hugging Face, or to integrate with MONAI Label. The assumption taken here is that you already have the components for preprocessing, inference, validation, and other parts of a workflow, and so the task is how to integrate these parts into MONAI types which can be embedded in config files.\n", + "\n", + "In the following cells we'll construct a bundle which follows the [CIFAR10 tutorial](https://github.com/pytorch/tutorials/blob/32d834139b8627eeacb5fb2862be9f095fcb0b52/beginner_source/blitz/cifar10_tutorial.py) in Pytorch's tutorials repo. A number of code components will be copied into the `scripts` directory of the bundle and linked into config files suitable to be used on the command line.\n", + "\n", + "We'll start with an initialised bundle and provide an appropriate metadata file describing the CIFAR10 classification network we'll provide:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "eb9dc6ec-13da-4a37-8afa-28e2766b9343", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\u001b[01;34mIntegrationBundle\u001b[00m\n", + "├── \u001b[01;34mconfigs\u001b[00m\n", + "│   └── metadata.json\n", + "├── \u001b[01;34mdocs\u001b[00m\n", + "│   └── README.md\n", + "├── LICENSE\n", + "└── \u001b[01;34mmodels\u001b[00m\n", + "\n", + "3 directories, 3 files\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "python -m monai.bundle init_bundle IntegrationBundle\n", + "rm IntegrationBundle/configs/inference.json\n", + "tree IntegrationBundle" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting IntegrationBundle/configs/metadata.json\n" + ] + } + ], + "source": [ + "%%writefile IntegrationBundle/configs/metadata.json\n", + "\n", + "{\n", + " \"version\": \"0.0.1\",\n", + " \"changelog\": {\n", + " \"0.0.1\": \"Initial version\"\n", + " },\n", + " \"monai_version\": \"1.2.0\",\n", + " \"pytorch_version\": \"2.0.0\",\n", + " \"numpy_version\": \"1.23.5\",\n", + " \"optional_packages_version\": {},\n", + " \"name\": \"IntegrationBundle\",\n", + " \"task\": \"Example Bundle\",\n", + " \"description\": \"This illustrates integrating non-MONAI code (CIFAR10 classification) into a bundle\",\n", + " \"authors\": \"Your Name Here\",\n", + " \"copyright\": \"Copyright (c) Your Name Here\",\n", + " \"data_source\": \"CIFAR10\",\n", + " \"data_type\": \"float32\",\n", + " \"intended_use\": \"This is suitable for demonstration only\",\n", + " \"network_data_format\": {\n", + " \"inputs\": {\n", + " \"image\": {\n", + " \"type\": \"image\",\n", + " \"format\": \"magnitude\",\n", + " \"modality\": \"natural\",\n", + " \"num_channels\": 3,\n", + " \"spatial_shape\": [32, 32],\n", + " \"dtype\": \"float32\",\n", + " \"value_range\": [-1, 1],\n", + " \"is_patch_data\": false,\n", + " \"channel_def\": {\n", + " \"0\": \"red\",\n", + " \"1\": \"green\",\n", + " \"2\": \"blue\"\n", + " }\n", + " }\n", + " },\n", + " \"outputs\": {\n", + " \"pred\": {\n", + " \"type\": \"probabilities\",\n", + " \"format\": \"classes\",\n", + " \"num_channels\": 10,\n", + " \"spatial_shape\": [10],\n", + " \"dtype\": \"float32\",\n", + " \"value_range\": [0, 1],\n", + " \"is_patch_data\": false,\n", + " \"channel_def\": {\n", + " \"0\": 
\"plane\",\n", + " \"1\": \"car\",\n", + " \"2\": \"bird\",\n", + " \"3\": \"cat\",\n", + " \"4\": \"deer\",\n", + " \"5\": \"dog\",\n", + " \"6\": \"frog\",\n", + " \"7\": \"horse\",\n", + " \"8\": \"ship\",\n", + " \"9\": \"truck\"\n", + " }\n", + " }\n", + " }\n", + " }\n", + "}" + ] + }, + { + "cell_type": "markdown", + "id": "f9eac927-052d-4632-966f-a87f06311b9b", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python [conda env:monai]", + "language": "python", + "name": "conda-env-monai-py" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From a2b12eb1a720cf3caee8ad6a7185c7927195c1b6 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Wed, 6 Sep 2023 17:25:39 +0100 Subject: [PATCH 09/26] Updates, need to finalise content of notebook 4 to be useful --- bundle/02_mednist_classification.ipynb | 12 +- bundle/04_integrating_code.ipynb | 578 ++++++++++++++++++++++++- 2 files changed, 587 insertions(+), 3 deletions(-) diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index 88b0ac2f9a..317098a4a9 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -59,7 +59,9 @@ " ]\n", ")\n", "\n", - "dataset = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"training\", download=True)\n", + "dataset = MedNISTDataset(\n", + " root_dir=root_dir, transform=transform, section=\"training\", download=True\n", + ")\n", "\n", "train_dl = DataLoader(dataset, batch_size=512, shuffle=True, num_workers=4)\n", "\n", @@ -78,7 +80,13 @@ "torch.jit.script(net).save(\"mednist.ts\")\n", "\n", "class_names = (\"AbdomenCT\", \"BreastMRI\", \"CXR\", \"ChestCT\", \"Hand\", \"HeadCT\")\n", - "testdata = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"test\", download=False, runtime_cache=True)\n", + "testdata = MedNISTDataset(\n", + " root_dir=root_dir,\n", + " transform=transform,\n", + " section=\"test\",\n", + " download=False,\n", + " runtime_cache=True,\n", + ")\n", "\n", "max_items_to_print = 10\n", "eval_dl = DataLoader(testdata[:max_items_to_print], batch_size=1, num_workers=0)\n", diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb index 87e369bfb8..38861ca322 100644 --- a/bundle/04_integrating_code.ipynb +++ b/bundle/04_integrating_code.ipynb @@ -22,7 +22,7 @@ "\n", "In the following cells we'll construct a bundle which follows the [CIFAR10 tutorial](https://github.com/pytorch/tutorials/blob/32d834139b8627eeacb5fb2862be9f095fcb0b52/beginner_source/blitz/cifar10_tutorial.py) in Pytorch's tutorials repo. 
A number of code components will be copied into the `scripts` directory of the bundle and linked into config files suitable to be used on the command line.\n", "\n", - "We'll start with an initialised bundle and provide an appropriate metadata file describing the CIFAR10 classification network we'll provide:" + "We'll start with an initialised bundle with a \"scripts\" directory and provide an appropriate metadata file describing the CIFAR10 classification network we'll provide:" ] }, { @@ -52,6 +52,9 @@ "\n", "python -m monai.bundle init_bundle IntegrationBundle\n", "rm IntegrationBundle/configs/inference.json\n", + "mkdir IntegrationBundle/scripts\n", + "echo \"\" > IntegrationBundle/scripts/__init__.py\n", + "\n", "tree IntegrationBundle" ] }, @@ -138,6 +141,579 @@ "cell_type": "markdown", "id": "f9eac927-052d-4632-966f-a87f06311b9b", "metadata": {}, + "source": [ + "## Scripts\n", + "\n", + "Taking the CIFAR10 tutorial as the \"codebase\" we're using currently, which we want to convert into a bundle, we want to copy components into `scripts` from that codebase. We'll start with the network given in the tutorial:" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "dcdbe1ae-ea13-49cb-b5a3-3c2c78f91f2b", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting IntegrationBundle/scripts/net.py\n" + ] + } + ], + "source": [ + "%%writefile IntegrationBundle/scripts/net.py\n", + "\n", + "import torch\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F\n", + "\n", + "\n", + "class Net(nn.Module):\n", + " def __init__(self):\n", + " super().__init__()\n", + " self.conv1 = nn.Conv2d(3, 6, 5)\n", + " self.pool = nn.MaxPool2d(2, 2)\n", + " self.conv2 = nn.Conv2d(6, 16, 5)\n", + " self.fc1 = nn.Linear(16 * 5 * 5, 120)\n", + " self.fc2 = nn.Linear(120, 84)\n", + " self.fc3 = nn.Linear(84, 10)\n", + "\n", + " def forward(self, x):\n", + " x = self.pool(F.relu(self.conv1(x)))\n", + " x = self.pool(F.relu(self.conv2(x)))\n", + " x = torch.flatten(x, 1)\n", + " x = F.relu(self.fc1(x))\n", + " x = F.relu(self.fc2(x))\n", + " x = self.fc3(x)\n", + " return x" + ] + }, + { + "cell_type": "markdown", + "id": "e6d11fac-ad12-4f47-a0cb-5c78263e1142", + "metadata": {}, + "source": [ + "Data transforms and data loaders are provided using definitions from `torchvision`. If we assume that these aren't easily converted into MONAI types, we instead need a function to return data loaders which will be used in config files. 
We could adapt the existing code by simply copying it into a function returning these definitions for use in the bundle:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "189d71c5-6556-4891-a382-0adbc8f80d30", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile IntegrationBundle/scripts/transforms.py\n", + "\n", + "import torchvision.transforms as transforms\n", + "\n", + "transform = transforms.Compose(\n", + " [transforms.ToTensor(),\n", + " transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "3d8f233e-495c-450c-a445-46d295ba7461", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing IntegrationBundle/scripts/dataloaders.py\n" + ] + } + ], + "source": [ + "%%writefile IntegrationBundle/scripts/dataloaders.py\n", + "\n", + "\n", + "import torch\n", + "import torchvision\n", + "\n", + "batch_size = 4\n", + "\n", + "def get_dataloader(is_training, transform):\n", + " \n", + " if is_training:\n", + " trainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n", + " download=True, transform=transform)\n", + " trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,\n", + " shuffle=True, num_workers=2)\n", + " return trainloader\n", + " else:\n", + " testset = torchvision.datasets.CIFAR10(root='./data', train=False,\n", + " download=True, transform=transform)\n", + " testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,\n", + " shuffle=False, num_workers=2)\n", + " return testloader " + ] + }, + { + "cell_type": "markdown", + "id": "317e2abf-673d-4a84-9afb-187bf01da278", + "metadata": {}, + "source": [ + "The training process in the tutorial is just a loop going through the dataset twice. The simplest adaptation for this is to wrap it in a function taking only the network and dataloader as arguments:" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "1a836b1b-06da-4866-82a2-47d1efed5d7c", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting IntegrationBundle/scripts/train.py\n" + ] + } + ], + "source": [ + "%%writefile IntegrationBundle/scripts/train.py\n", + "\n", + "import torch.nn as nn\n", + "import torch.optim as optim\n", + "\n", + "\n", + "def train(net,trainloader):\n", + " criterion = nn.CrossEntropyLoss()\n", + " optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n", + "\n", + " for epoch in range(2): \n", + "\n", + " running_loss = 0.0\n", + " for i, data in enumerate(trainloader, 0):\n", + " inputs, labels = data\n", + "\n", + " optimizer.zero_grad()\n", + "\n", + " outputs = net(inputs)\n", + " loss = criterion(outputs, labels)\n", + " loss.backward()\n", + " optimizer.step()\n", + "\n", + " running_loss += loss.item()\n", + " if i % 2000 == 1999: \n", + " print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')\n", + " running_loss = 0.0\n", + "\n", + " print('Finished Training')\n" + ] + }, + { + "cell_type": "markdown", + "id": "3baf799c-8f3d-4a84-aa0d-6acbe1a0d96b", + "metadata": {}, + "source": [ + "This function will hard code all sorts of parameters like loss function, learning rate, epoch count, etc. For this example it will work but of course if you're adapting other code it would make sense to include more parameterisation to your wrapper components. 
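As a rough sketch (illustrative only, with the defaults mirroring the hard-coded values above), a parameterised wrapper might look like this:\n",
+ "\n",
+ "```python\n",
+ "import torch.nn as nn\n",
+ "import torch.optim as optim\n",
+ "\n",
+ "def train(net, trainloader, epochs=2, lr=0.001, momentum=0.9):\n",
+ "    # the same loop as train.py above, but with the fixed values exposed as arguments\n",
+ "    criterion = nn.CrossEntropyLoss()\n",
+ "    optimizer = optim.SGD(net.parameters(), lr=lr, momentum=momentum)\n",
+ "    for epoch in range(epochs):\n",
+ "        for inputs, labels in trainloader:\n",
+ "            optimizer.zero_grad()\n",
+ "            loss = criterion(net(inputs), labels)\n",
+ "            loss.backward()\n",
+ "            optimizer.step()\n",
+ "```\n",
+ "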
\n", + "\n", + "## Training\n", + "\n", + "We can now define a training config file:" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "0b9764a8-674c-42ae-ad4b-f2dea027bdbf", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Overwriting IntegrationBundle/configs/train.yaml\n" + ] + } + ], + "source": [ + "%%writefile IntegrationBundle/configs/train.yaml\n", + "\n", + "\n", + "imports:\n", + "- $import torch\n", + "- $import scripts\n", + "- $import scripts.net\n", + "- $import scripts.train\n", + "- $import scripts.transforms\n", + "- $import scripts.dataloaders\n", + "\n", + "net:\n", + " _target_: scripts.net.Net\n", + "\n", + "transforms: '$scripts.transforms.transform'\n", + "\n", + "dataloader: '$scripts.dataloaders.get_dataloader(True, @transforms)'\n", + "\n", + "train:\n", + "- $scripts.train.train(@net, @dataloader)\n", + "- $torch.save(@net.state_dict(), './cifar_net.pth')\n" + ] + }, + { + "cell_type": "markdown", + "id": "e6c88aea-8182-44f1-853c-7d728bdae45b", + "metadata": {}, + "source": [ + "The key concept demonstrated here is how to refer to definitions in the `scripts` directory within a config file and tie them together into a program. These definitions can be existing types or wrapper functions around existing code to make them easier to refer to here. A lot of good practice is ignored here but it shows how to adapt code into a bundle with minimal changes.\n", + "\n", + "Let's train something simple with this setup:" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "65149911-3771-4a49-ade6-378305a4b946", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-09-04 15:19:03,804 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-04 15:19:03,804 - INFO - > run_id: 'train'\n", + "2023-09-04 15:19:03,804 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", + "2023-09-04 15:19:03,804 - INFO - > config_file: './IntegrationBundle/configs/train.yaml'\n", + "2023-09-04 15:19:03,804 - INFO - > bundle_root: './IntegrationBundle'\n", + "2023-09-04 15:19:03,804 - INFO - ---\n", + "\n", + "\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", + " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Files already downloaded and verified\n", + "Files already downloaded and verified\n", + "[1, 2000] loss: 2.226\n", + "[1, 4000] loss: 1.913\n", + "[1, 6000] loss: 1.700\n", + "[1, 8000] loss: 1.593\n", + "[1, 10000] loss: 1.524\n", + "[1, 12000] loss: 1.476\n", + "[2, 2000] loss: 1.397\n", + "[2, 4000] loss: 1.384\n", + "[2, 6000] loss: 1.372\n", + "[2, 8000] loss: 1.333\n", + "[2, 10000] loss: 1.312\n", + "[2, 12000] loss: 1.303\n", + "Finished Training\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "BUNDLE=\"./IntegrationBundle\"\n", + "\n", + "export PYTHONPATH=$BUNDLE\n", + "\n", + "python -m monai.bundle run train \\\n", + " --bundle_root \"$BUNDLE\" \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"$BUNDLE/configs/train.yaml\" " + ] + }, + { + "cell_type": "markdown", + "id": "1c27ba04-3271-4119-a57a-698aa7a83409", + 
"metadata": {}, + "source": [ + "## Testing \n", + "\n", + "The second part of the tutorial script is testing the network with the test data which can again be put into a simple routine called from a config file: " + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "fc35814e-625d-4871-ac1c-200a0cc562d9", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing IntegrationBundle/scripts/test.py\n" + ] + } + ], + "source": [ + "%%writefile IntegrationBundle/scripts/test.py\n", + "\n", + "import torch\n", + "\n", + "def test(net, testloader):\n", + " correct = 0\n", + " total = 0\n", + " \n", + " with torch.no_grad():\n", + " for data in testloader:\n", + " images, labels = data\n", + " outputs = net(images)\n", + " _, predicted = torch.max(outputs.data, 1)\n", + " total += labels.size(0)\n", + " correct += (predicted == labels).sum().item()\n", + "\n", + " print(f'Accuracy of the network on the 10000 test images: {100 * correct // total} %')\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fb49aef2-9fb5-4e74-83d2-9da935e07648", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile IntegrationBundle/configs/test.yaml\n", + "\n", + "imports:\n", + "- $import torch\n", + "- $import scripts\n", + "- $import scripts.test\n", + "- $import scripts.transforms\n", + "- $import scripts.dataloaders\n", + "\n", + "net:\n", + " _target_: scripts.net.Net\n", + "\n", + "transforms: '$scripts.transforms.transform'\n", + "\n", + "dataloader: '$scripts.dataloaders.get_dataloader(False, @transforms)'\n", + "\n", + "test:\n", + "- $@net.load_state_dict(torch.load('./cifar_net.pth'))\n", + "- $scripts.test.test(@net, @dataloader)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "ab171286-045c-4067-a2ea-be359168869d", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-09-05 12:42:29,561 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-05 12:42:29,561 - INFO - > run_id: 'test'\n", + "2023-09-05 12:42:29,561 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", + "2023-09-05 12:42:29,561 - INFO - > config_file: './IntegrationBundle/configs/test.yaml'\n", + "2023-09-05 12:42:29,561 - INFO - > bundle_root: './IntegrationBundle'\n", + "2023-09-05 12:42:29,561 - INFO - ---\n", + "\n", + "\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", + " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Files already downloaded and verified\n", + "Accuracy of the network on the 10000 test images: 54 %\n" + ] + } + ], + "source": [ + "%%bash\n", + "\n", + "BUNDLE=\"./IntegrationBundle\"\n", + "\n", + "export PYTHONPATH=$BUNDLE\n", + "\n", + "python -m monai.bundle run test \\\n", + " --bundle_root \"$BUNDLE\" \\\n", + " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", + " --config_file \"$BUNDLE/configs/test.yaml\" " + ] + }, + { + "cell_type": "markdown", + "id": "4f218b72-734b-4b6e-93e5-990b8c647e8a", + "metadata": {}, + "source": [ + "## Inference\n", + "\n", + "The original script lacked a section on inference with the network, however this is 
rather straightforward, and so a script and config file can easily implement it:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "1f510a23-aa3a-4e34-81e2-b4c719d87939",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Overwriting IntegrationBundle/scripts/inference.py\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%writefile IntegrationBundle/scripts/inference.py\n",
+ "\n",
+ "import torch\n",
+ "from PIL import Image\n",
+ "\n",
+ "def inference(net, transforms, filenames):\n",
+ "    # load each image, apply the transforms, and print the predicted class index\n",
+ "    for fn in filenames:\n",
+ "        with Image.open(fn) as im:\n",
+ "            tim = transforms(im)\n",
+ "            outputs = net(tim[None])\n",
+ "            _, predictions = torch.max(outputs, 1)\n",
+ "            print(fn, predictions[0].item())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "7f1251be-f0dd-4cbf-8903-3f3769c8049c",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Overwriting IntegrationBundle/configs/inference.yaml\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%writefile IntegrationBundle/configs/inference.yaml\n",
+ "\n",
+ "imports:\n",
+ "- $import glob\n",
+ "- $import torch\n",
+ "- $import scripts\n",
+ "- $import scripts.inference\n",
+ "- $import scripts.transforms\n",
+ "\n",
+ "ckpt_path: './cifar_net.pth'\n",
+ "\n",
+ "input_dir: 'test_cifar10'\n",
+ "input_files: '$sorted(glob.glob(@input_dir+''/*.*''))'\n",
+ "\n",
+ "net:\n",
+ "  _target_: scripts.net.Net\n",
+ "\n",
+ "transforms: '$scripts.transforms.transform'\n",
+ "\n",
+ "inference:\n",
+ "- $@net.load_state_dict(torch.load(@ckpt_path))\n",
+ "- $scripts.inference.inference(@net, @transforms, @input_files)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "28d1230e-1d3a-4929-a266-e5f763dfde7f",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2023-09-05 12:44:47,247 - INFO - --- input summary of monai.bundle.scripts.run ---\n",
+ "2023-09-05 12:44:47,247 - INFO - > run_id: 'inference'\n",
+ "2023-09-05 12:44:47,247 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n",
+ "2023-09-05 12:44:47,247 - INFO - > config_file: './IntegrationBundle/configs/inference.yaml'\n",
+ "2023-09-05 12:44:47,247 - INFO - > bundle_root: './IntegrationBundle'\n",
+ "2023-09-05 12:44:47,247 - INFO - ---\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n",
+ " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "test_cifar10/img_0_3.png 3\n",
+ "test_cifar10/img_1_8.png 1\n",
+ "test_cifar10/img_2_8.png 8\n",
+ "test_cifar10/img_3_0.png 0\n",
+ "test_cifar10/img_4_6.png 6\n",
+ "test_cifar10/img_5_6.png 6\n",
+ "test_cifar10/img_6_1.png 5\n",
+ "test_cifar10/img_7_6.png 6\n",
+ "test_cifar10/img_8_3.png 3\n",
+ "test_cifar10/img_9_1.png 1\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%bash\n",
+ "\n",
+ "BUNDLE=\"./IntegrationBundle\"\n",
+ "\n",
+ "export PYTHONPATH=$BUNDLE\n",
+ "\n",
+ "python -m monai.bundle run inference \\\n",
+ "    --bundle_root \"$BUNDLE\" \\\n",
+ "    --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
+ "    --config_file \"$BUNDLE/configs/inference.yaml\" "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null, + "id": "527b3326-0e80-4b24-b001-c2b6cb63db82", + "metadata": {}, + "outputs": [], "source": [] } ], From 4ff059bdfb9440117465103a9fdc0dae1249e19c Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Wed, 6 Sep 2023 16:26:41 +0000 Subject: [PATCH 10/26] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- bundle/02_mednist_classification.ipynb | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index 317098a4a9..2e609b3b4e 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -59,9 +59,7 @@ " ]\n", ")\n", "\n", - "dataset = MedNISTDataset(\n", - " root_dir=root_dir, transform=transform, section=\"training\", download=True\n", - ")\n", + "dataset = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"training\", download=True)\n", "\n", "train_dl = DataLoader(dataset, batch_size=512, shuffle=True, num_workers=4)\n", "\n", From f9b1257605a4c7d0f4e32279774434733757e10d Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Wed, 6 Sep 2023 17:28:26 +0100 Subject: [PATCH 11/26] Remediation DCO Remediation Commit for Eric Kerfoot I, Eric Kerfoot , hereby add my Signed-off-by to this commit: b82a5f77adf721b8868d388679befd1ea13be687 I, Eric Kerfoot , hereby add my Signed-off-by to this commit: a2b12eb1a720cf3caee8ad6a7185c7927195c1b6 Signed-off-by: Eric Kerfoot --- bundle/04_integrating_code.ipynb | 8 -------- 1 file changed, 8 deletions(-) diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb index 38861ca322..8a051e2d61 100644 --- a/bundle/04_integrating_code.ipynb +++ b/bundle/04_integrating_code.ipynb @@ -707,14 +707,6 @@ " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", " --config_file \"$BUNDLE/configs/inference.yaml\" " ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "527b3326-0e80-4b24-b001-c2b6cb63db82", - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { From decd1b18fc82eab3ed61d1d3df5defd4dd7d66b5 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Thu, 7 Sep 2023 17:47:44 +0100 Subject: [PATCH 12/26] Updates to last notebook for now Signed-off-by: Eric Kerfoot --- bundle/03_mednist_classification_v2.ipynb | 144 ++++++++++++++++++---- bundle/04_integrating_code.ipynb | 48 ++++++-- 2 files changed, 164 insertions(+), 28 deletions(-) diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb index 342b3cde59..621baee8d6 100644 --- a/bundle/03_mednist_classification_v2.ipynb +++ b/bundle/03_mednist_classification_v2.ipynb @@ -789,18 +789,32 @@ }, { "cell_type": "markdown", - "id": "ef85014c-d1eb-4a93-911b-f405eac74094", - "metadata": {}, + "id": "0044efdc-6c5e-479c-880b-acd9e7ab4fea", + "metadata": { + "tags": [] + }, "source": [ - "Next we'll create the inference script which will apply the network to all the files in the given directory (thus assuming all are images) and save the results to a csv file:" + "Next remove the existing example inference script:" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "7f800520-f29f-4b80-9af4-5e069f97824b", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "!rm \"MedNISTClassifier_v2/configs/inference.json\"" ] }, { "cell_type": "markdown", - "id": "0044efdc-6c5e-479c-880b-acd9e7ab4fea", + "id": 
"ef85014c-d1eb-4a93-911b-f405eac74094", "metadata": {}, "source": [ - "Next the inference script:" + "Next we'll create the inference script which will apply the network to all the files in the given directory (thus assuming all are images) and save the results to a csv file:" ] }, { @@ -888,7 +902,7 @@ }, { "cell_type": "code", - "execution_count": 89, + "execution_count": 6, "id": "acdcc111-f259-4701-8b1d-31fcf74398bc", "metadata": {}, "outputs": [ @@ -896,24 +910,23 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-08-30 16:22:23,441 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-08-30 16:22:23,442 - INFO - > run_id: 'inference'\n", - "2023-08-30 16:22:23,442 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", - "2023-08-30 16:22:23,442 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", + "2023-09-07 16:20:16,087 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-07 16:20:16,087 - INFO - > run_id: 'inference'\n", + "2023-09-07 16:20:16,087 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", + "2023-09-07 16:20:16,087 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", " './MedNISTClassifier_v2/configs/inference.yaml']\n", - "2023-08-30 16:22:23,442 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", - "2023-08-30 16:22:23,442 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", - "2023-08-30 16:22:23,442 - INFO - > input_dir: 'test_images'\n", - "2023-08-30 16:22:23,442 - INFO - > ckpt_path: 'output/output_230830_123911/model_final_iteration=186.pt'\n", - "2023-08-30 16:22:23,442 - INFO - > handlers#1#filename: 'pred.csv'\n", - "2023-08-30 16:22:23,442 - INFO - ---\n", + "2023-09-07 16:20:16,087 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", + "2023-09-07 16:20:16,087 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", + "2023-09-07 16:20:16,087 - INFO - > ckpt_path: 'output/output_230830_123911/model_final_iteration=186.pt'\n", + "2023-09-07 16:20:16,087 - INFO - > input_dir: 'test_images'\n", + "2023-09-07 16:20:16,087 - INFO - ---\n", "\n", "\n", - "2023-08-30 16:22:23,442 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", - "2023-08-30 16:22:23,812 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", - "2023-08-30 16:22:23,924 - ignite.engine.engine.SupervisedEvaluator - INFO - Restored all variables from output/output_230830_123911/model_final_iteration=186.pt\n", - "2023-08-30 16:22:24,801 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:00.876\n", - "2023-08-30 16:22:24,802 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:00.990\n" + "2023-09-07 16:20:16,088 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", + "2023-09-07 16:20:16,487 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", + "2023-09-07 16:20:16,598 - ignite.engine.engine.SupervisedEvaluator - INFO - Restored all variables from output/output_230830_123911/model_final_iteration=186.pt\n", + "2023-09-07 16:20:17,836 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:01.237\n", + "2023-09-07 16:20:17,837 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. 
Time taken: 00:00:01.350\n"
 ]
 }
 ],
@@ -1028,6 +1041,95 @@
 "    print(fn, class_names[int(float(idx))])"
 ]
 },
+ {
+ "cell_type": "markdown",
+ "id": "235e90b9-9209-4a58-885d-042ab55c9c18",
+ "metadata": {},
+ "source": [
+ "## Putting the Bundle Together\n",
+ "\n",
+ "We now have a checkpoint for our network which produces good results, and we can make this the \"official\" shared weights for the bundle. We need to copy the checkpoint into the `models` directory and optionally produce a Torchscript version of the network. \n",
+ "\n",
+ "For the Torchscript conversion, MONAI provides the `ckpt_export` program in the `monai.bundle` submodule:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "c6672caa-fd51-4dde-a31d-5c4de8c3cc1d",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2023-09-07 16:20:25,463 - INFO - --- input summary of monai.bundle.scripts.ckpt_export ---\n",
+ "2023-09-07 16:20:25,463 - INFO - > net_id: 'network_def'\n",
+ "2023-09-07 16:20:25,463 - INFO - > filepath: './MedNISTClassifier_v2/models/model.ts'\n",
+ "2023-09-07 16:20:25,463 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n",
+ "2023-09-07 16:20:25,463 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n",
+ " './MedNISTClassifier_v2/configs/inference.yaml']\n",
+ "2023-09-07 16:20:25,463 - INFO - > ckpt_file: './MedNISTClassifier_v2/models/model.pt'\n",
+ "2023-09-07 16:20:25,463 - INFO - > key_in_ckpt: 'model'\n",
+ "2023-09-07 16:20:25,463 - INFO - > bundle_root: './MedNISTClassifier_v2'\n",
+ "2023-09-07 16:20:25,463 - INFO - ---\n",
+ "\n",
+ "\n",
+ "2023-09-07 16:20:28,048 - INFO - exported to file: ./MedNISTClassifier_v2/models/model.ts.\n",
+ "\u001b[01;34m./MedNISTClassifier_v2\u001b[00m\n",
+ "├── \u001b[01;34mconfigs\u001b[00m\n",
+ "│   ├── common.yaml\n",
+ "│   ├── inference.yaml\n",
+ "│   ├── logging.conf\n",
+ "│   ├── metadata.json\n",
+ "│   └── train.yaml\n",
+ "├── \u001b[01;34mdocs\u001b[00m\n",
+ "│   └── README.md\n",
+ "├── LICENSE\n",
+ "└── \u001b[01;34mmodels\u001b[00m\n",
+ "    ├── model.pt\n",
+ "    └── model.ts\n",
+ "\n",
+ "3 directories, 9 files\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%bash\n",
+ "\n",
+ "BUNDLE=\"./MedNISTClassifier_v2\"\n",
+ "\n",
+ "cp \"output/output_230830_123911/model_final_iteration=186.pt\" \"$BUNDLE/models/model.pt\"\n",
+ "\n",
+ "python -m monai.bundle ckpt_export \\\n",
+ "    --bundle_root \"$BUNDLE\" \\\n",
+ "    --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
+ "    --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/inference.yaml']\" \\\n",
+ "    --net_id network_def \\\n",
+ "    --key_in_ckpt model \\\n",
+ "    --ckpt_file \"$BUNDLE/models/model.pt\" \\\n",
+ "    --filepath \"$BUNDLE/models/model.ts\" \n",
+ "\n",
+ "tree \"$BUNDLE\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8def15f8-d0dc-4ed0-8bf7-669e0720ac81",
+ "metadata": {},
+ "source": [
+ "This produces the `model.ts` file in `models`, as shown here, which can be loaded in Python like any other Torchscript object without needing the bundle config scripts.\n",
+ "\n",
+ "The arguments for the `ckpt_export` command specify the components to use in the config files and the checkpoint:\n",
+ "* `bundle_root`, `meta_file`, and `config_file` are as in previous usages.\n",
+ "* `net_id` specifies the object in the config files which represents the network definition, i.e. 
the instantiated network object.\n",
+ "* `key_in_ckpt` names the key under which the model weights are found in the checkpoint. This assumes the checkpoint is a dictionary, which is what `CheckpointSaver` produces; if the file isn't a dictionary, omit this argument.\n",
+ "* `ckpt_file`: the name of the checkpoint file itself.\n",
+ "* `filepath`: the output filename to store the Torchscript object to."
+ ]
+ },
 {
 "cell_type": "markdown",
 "id": "18a62139-8a21-4bb9-96d4-e86d61298c40",
diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb
index 8a051e2d61..fc6b936d05 100644
--- a/bundle/04_integrating_code.ipynb
+++ b/bundle/04_integrating_code.ipynb
@@ -27,7 +27,7 @@
 },
 {
 "cell_type": "code",
- "execution_count": 1,
+ "execution_count": 8,
 "id": "eb9dc6ec-13da-4a37-8afa-28e2766b9343",
 "metadata": {},
 "outputs": [
@@ -37,13 +37,14 @@
 "text": [
 "\u001b[01;34mIntegrationBundle\u001b[00m\n",
 "├── \u001b[01;34mconfigs\u001b[00m\n",
- "│   └── metadata.json\n",
+ "│   ├── metadata.json\n",
 "├── \u001b[01;34mdocs\u001b[00m\n",
 "│   └── README.md\n",
 "├── LICENSE\n",
- "└── \u001b[01;34mmodels\u001b[00m\n",
+ "└── \u001b[01;34mscripts\u001b[00m\n",
+ "    └── __init__.py\n",
 "\n",
- "3 directories, 3 files\n"
+ "5 directories, 20 files\n"
 ]
 }
 ],
@@ -60,9 +61,11 @@
 },
 {
 "cell_type": "code",
- "execution_count": 15,
+ "execution_count": 9,
 "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
 "outputs": [
 {
 "name": "stdout",
@@ -83,7 +86,9 @@
 " \"monai_version\": \"1.2.0\",\n",
 " \"pytorch_version\": \"2.0.0\",\n",
 " \"numpy_version\": \"1.23.5\",\n",
- " \"optional_packages_version\": {},\n",
+ " \"optional_packages_version\": {\n",
+ "    \"torchvision\": \"0.15.0\"\n",
+ "  },\n",
 " \"name\": \"IntegrationBundle\",\n",
 " \"task\": \"Example Bundle\",\n",
 " \"description\": \"This illustrates integrating non-MONAI code (CIFAR10 classification) into a bundle\",\n",
@@ -142,6 +147,8 @@
 "id": "f9eac927-052d-4632-966f-a87f06311b9b",
 "metadata": {},
 "source": [
+ "Note that `torchvision` was added as an optional package but will be required to run the bundle. \n",
+ "\n",
 "## Scripts\n",
@@ -707,6 +714,33 @@
 " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n",
 " --config_file \"$BUNDLE/configs/inference.yaml\" "
 ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a1a06d82-1a8a-4607-8620-474e89061027",
+ "metadata": {},
+ "source": [
+ "## Adaptation Strategies\n",
+ "\n",
+ "This notebook has demonstrated one strategy for integrating existing code into a bundle. Code from an existing project, in this case an example script, was copied into the `scripts` directory of a bundle, with functions added to make definitions easy to reference in config files. What shows up in the config files is a thin adapter layer which interfaces between what bundles expect and the codebase. \n",
+ "\n",
+ "It's clear that a mixed approach, where old components are replaced with MONAI types, would also work well given the simplicity of the code here. Substituting the Torchvision transforms with those from MONAI, using a `Trainer` class instead of the `train` function, and similarly testing and inference using an `Evaluator` class, would produce essentially the same results. 
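For example (an illustrative sketch only, assuming the CIFAR10 images are handled as channel-first arrays with values in the 0-255 range), the Torchvision preprocessing could be expressed in config with standard MONAI transforms:\n",
+ "\n",
+ "```yaml\n",
+ "transforms:\n",
+ "  _target_: Compose\n",
+ "  transforms:\n",
+ "  - _target_: EnsureChannelFirst\n",
+ "  - _target_: ScaleIntensityRange\n",
+ "    a_min: 0.0\n",
+ "    a_max: 255.0\n",
+ "    b_min: -1.0\n",
+ "    b_max: 1.0\n",
+ "```\n",
+ "\n",
+ "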
It is up to you to determine how much rewriting into the config scripts is justified for your codebase, rather than adapting existing code in some way. \n",
+ "\n",
+ "A third approach involves a codebase which is installed as a package. If an external network with its training components is installed with `pip`, for example, perhaps no adaptation code is needed at all, and you can simply write config scripts which import this package and reference its definitions. Some adapter code may still be needed in `scripts`, but this can be like the examples demonstrated here: simple wrapper functions returning objects which are assigned to keys in config files through evaluated Python expressions. \n",
+ "\n",
+ "Creating a bundle compatible with other tools requires you to define specific items in the config files. For example, MONAI Label states its requirements [here](https://github.com/Project-MONAI/MONAILabel/blob/c90f42c0730554e3a05af93645ae84ccdcb5e14b/monailabel/tasks/infer/bundle.py#L33) as names that must be present in `inference.json/yaml` to work with the label server. You would have to provide `network_def`, `preprocessing`, `postprocessing`, and others. This means the code from your existing codebase would have to be divided up into these components if it isn't already, and its inputs and outputs would have to match what is expected of the MONAI types typically used for these definitions. \n",
+ "\n",
+ "How integration works when adapting your own code to a bundle will be very specific to your situation. Using config files as adapter layers is shown here to work, but by understanding how bundles are structured and what the moving pieces of a bundle \"program\" are, you can devise your own strategy.\n",
+ "\n",
+ "## Summary and Next\n",
+ "\n",
+ "In this tutorial we have looked at how to adapt code to a MONAI bundle:\n",
+ "* Wrapping code in thin adaptation layers\n",
+ "* Using these components in config files\n",
+ "* Discussion of the architectural concepts around the process of adaptation\n",
+ "\n",
+ "In future tutorials we shall delve into other details and strategies with MONAI bundles."
+ ]
 }
 ],
 "metadata": {

From a8ce90a5334f6eeb5debf71bfe052b766999e10e Mon Sep 17 00:00:00 2001
From: Eric Kerfoot
Date: Thu, 7 Sep 2023 17:52:44 +0100
Subject: [PATCH 13/26] README Update

Signed-off-by: Eric Kerfoot
---
 bundle/README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/bundle/README.md b/bundle/README.md
index 0665fd962f..12370eadc8 100644
--- a/bundle/README.md
+++ b/bundle/README.md
@@ -12,7 +12,8 @@ Start the tutorial notebooks on constructing bundles:
 
 1. [Bundle Introduction](./01_bundle_intro.ipynb): create a very simple bundle from scratch.
 2. [MedNIST Classification](./02_mednist_classification.ipynb): train a network using the bundle for doing a real task.
-
+3. [MedNIST Classification With Best Practices](./03_mednist_classification_v2.ipynb): do the same again but better.
+4. [Integrating Existing Code](./04_integrating_code.ipynb): discussion on how to integrate existing, possibly non-MONAI, code into a bundle. 
More advanced topics are covered in this directory: * [Further Features](./further_features.md): covers more advanced features and uses of configs, command usage, and From b00fa49d599bda8f27e0057061e5b77320fb160f Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Thu, 7 Sep 2023 18:15:59 +0100 Subject: [PATCH 14/26] Formatting Signed-off-by: Eric Kerfoot --- bundle/03_mednist_classification_v2.ipynb | 27 +++++++++++++++++------ bundle/04_integrating_code.ipynb | 4 ++-- 2 files changed, 22 insertions(+), 9 deletions(-) diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb index 621baee8d6..4405d98aa5 100644 --- a/bundle/03_mednist_classification_v2.ipynb +++ b/bundle/03_mednist_classification_v2.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "64bd2d8c-4799-4073-bc28-c3632589c525", + "id": "9b2bc6c7-f54c-436f-ab66-86a631fb75d8", "metadata": {}, "source": [ "Copyright (c) MONAI Consortium \n", @@ -14,8 +14,25 @@ "distributed under the License is distributed on an \"AS IS\" BASIS, \n", "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", "See the License for the specific language governing permissions and \n", - "limitations under the License.\n", - "\n", + "limitations under the License." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2f51a451-566f-4501-aeb8-f3cd5d1f7bf9", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from monai.apps import MedNISTDataset" + ] + }, + { + "cell_type": "markdown", + "id": "2682936a-09ed-4703-af06-c59f755395ee", + "metadata": {}, + "source": [ "# MedNIST Classification Bundle\n", "\n", "In this tutorial we'll revisit the bundle replicating [MONAI 101 notebook](https://github.com/Project-MONAI/tutorials/blob/main/2d_classification/monai_101.ipynb) and add more features representing best practice concepts. 
This will include evaluation and checkpoint saving techniques.\n", @@ -773,8 +790,6 @@ } ], "source": [ - "from monai.apps import MedNISTDataset\n", - "\n", "root_dir = \".\" # assuming MedNIST was downloaded to the current directory\n", "num_images = 20\n", "dataset = MedNISTDataset(root_dir=root_dir, section=\"test\", download=False)\n", @@ -1033,8 +1048,6 @@ } ], "source": [ - "import numpy as np\n", - "\n", "class_names = [\"AbdomenCT\", \"BreastMRI\", \"CXR\", \"ChestCT\", \"Hand\", \"HeadCT\"]\n", "\n", "for fn, idx in np.loadtxt(\"predictions.csv\", delimiter=\",\", dtype=str):\n", diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb index fc6b936d05..ef4658e2ea 100644 --- a/bundle/04_integrating_code.ipynb +++ b/bundle/04_integrating_code.ipynb @@ -241,7 +241,6 @@ "source": [ "%%writefile IntegrationBundle/scripts/dataloaders.py\n", "\n", - "\n", "import torch\n", "import torchvision\n", "\n", @@ -350,7 +349,6 @@ "source": [ "%%writefile IntegrationBundle/configs/train.yaml\n", "\n", - "\n", "imports:\n", "- $import torch\n", "- $import scripts\n", @@ -477,6 +475,7 @@ "\n", "import torch\n", "\n", + "\n", "def test(net, testloader):\n", " correct = 0\n", " total = 0\n", @@ -605,6 +604,7 @@ "import torch\n", "from PIL import Image\n", "\n", + "\n", "def inference(net, transforms, filenames):\n", " for fn in filenames:\n", " with Image.open(fn) as im:\n", From ace252a829b4f0aaa963dbb93ec75f64f869ada0 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Fri, 8 Sep 2023 13:21:48 +0100 Subject: [PATCH 15/26] Move Code Into Markdown Signed-off-by: Eric Kerfoot --- bundle/02_mednist_classification.ipynb | 30 +++++++------------------- bundle/README.md | 1 + 2 files changed, 9 insertions(+), 22 deletions(-) diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index 2e609b3b4e..f04f96d258 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -24,16 +24,9 @@ "\n", "This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. 
To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.\n", "\n", - "First we'll consider a condensed version of the code from that notebook and go step-by-step how best to represent this as a bundle:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "fd13031d-f67d-4eb3-a98d-4d0c9884e21e", - "metadata": {}, - "outputs": [], - "source": [ + "First we'll consider a condensed version of the code from that notebook and go step-by-step how best to represent this as a bundle:\n", + "\n", + "```python\n", "import os\n", "\n", "import monai.transforms as mt\n", @@ -51,13 +44,11 @@ "device = torch.device(\"cuda:0\")\n", "net = densenet121(spatial_dims=2, in_channels=1, out_channels=6).to(device)\n", "\n", - "transform = mt.Compose(\n", - " [\n", + "transform = mt.Compose([\n", " mt.LoadImaged(keys=\"image\", image_only=True),\n", " mt.EnsureChannelFirstd(keys=\"image\"),\n", " mt.ScaleIntensityd(keys=\"image\"),\n", - " ]\n", - ")\n", + "])\n", "\n", "dataset = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"training\", download=True)\n", "\n", @@ -78,13 +69,7 @@ "torch.jit.script(net).save(\"mednist.ts\")\n", "\n", "class_names = (\"AbdomenCT\", \"BreastMRI\", \"CXR\", \"ChestCT\", \"Hand\", \"HeadCT\")\n", - "testdata = MedNISTDataset(\n", - " root_dir=root_dir,\n", - " transform=transform,\n", - " section=\"test\",\n", - " download=False,\n", - " runtime_cache=True,\n", - ")\n", + "testdata = MedNISTDataset(root_dir=root_dir, transform=transform, section=\"test\", runtime_cache=True)\n", "\n", "max_items_to_print = 10\n", "eval_dl = DataLoader(testdata[:max_items_to_print], batch_size=1, num_workers=0)\n", @@ -94,7 +79,8 @@ " prob = result.detach().to(\"cpu\")[0]\n", " pred = class_names[prob.argmax()]\n", " gt = item[\"class_name\"][0]\n", - " print(f\"Prediction: {pred}. Ground-truth: {gt}\")" + " print(f\"Prediction: {pred}. Ground-truth: {gt}\")\n", + "```" ] }, { diff --git a/bundle/README.md b/bundle/README.md index 12370eadc8..2b54ba9d0b 100644 --- a/bundle/README.md +++ b/bundle/README.md @@ -14,6 +14,7 @@ Start the tutorial notebooks on constructing bundles: 2. [MedNIST Classification](./02_mednist_classification.ipynb): train a network using the bundle for doing a real task. 3. [MedNIST Classification With Best Practices](./03_mednist_classification_v2.ipynb): do the same again but better. 4. [Integrating Existing Code](./04_integrating_code.ipynb): discussion on how to integrate existing, possible non-MONAI, code into a bundle. 
+ More advanced topics are covered in this directory: * [Further Features](./further_features.md): covers more advanced features and uses of configs, command usage, and From 6721823fa9d77bac8070c465a4a8a031bad3293e Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Fri, 8 Sep 2023 15:26:16 +0100 Subject: [PATCH 16/26] Trying notebook fix Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index 636d51a792..e7f87c1f92 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -82,7 +82,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle TestBundle\n", - "tree TestBundle" + "#tree TestBundle" ] }, { From 2607f65e3486eba103c8e2136d2e41ee9f573c6d Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Fri, 8 Sep 2023 16:03:55 +0100 Subject: [PATCH 17/26] Fix for notebook Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 8 ++++---- runner.sh | 4 ++++ 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index e7f87c1f92..f80120c0fe 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -82,7 +82,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle TestBundle\n", - "#tree TestBundle" + "tree TestBundle" ] }, { @@ -138,7 +138,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 2, "id": "a56e4833-171c-432c-8145-f325fad3bfcb", "metadata": {}, "outputs": [ @@ -162,7 +162,7 @@ " \"pytorch_version\": \"2.0.0\",\n", " \"numpy_version\": \"1.23.5\",\n", " \"optional_packages_version\": {},\n", - " \"name\": \"TestBundle\"\n", + " \"name\": \"TestBundle\",\n", " \"task\": \"Demonstration Bundle Network\",\n", " \"description\": \"This is a demonstration bundle meant to showcase features of the MONAI bundle system only and does nothing useful\",\n", " \"authors\": \"Your Name Here\",\n", @@ -170,7 +170,7 @@ " \"network_data_format\": {\n", " \"inputs\": {},\n", " \"outputs\": {}\n", - " }\n", + " },\n", " \"intended_use\": \"This is suitable for demonstration only\"\n", "}" ] diff --git a/runner.sh b/runner.sh index f1c827d8a0..11c34fd03c 100755 --- a/runner.sh +++ b/runner.sh @@ -72,6 +72,10 @@ doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" TensorRT_inference_ doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" lazy_resampling_benchmark.ipynb) doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" modular_patch_inferer.ipynb) doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" GDS_dataset.ipynb) +doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" 01_bundle_intro.ipynb) +doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" 02_mednist_classification.ipynb) +doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" 03_mednist_classification_v2.ipynb) +doesnt_contain_max_epochs=("${doesnt_contain_max_epochs[@]}" 04_integrating_code.ipynb) # Execution of the notebook in these folders / with the filename cannot be automated skip_run_papermill=() From 83ccc0e1d8cb324756e96a9d0942ae10d781ad8a Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Mon, 11 Sep 2023 14:54:50 +0100 Subject: [PATCH 18/26] Attempting to Fix Papermill Problem Signed-off-by: Eric Kerfoot I had to set MKL_SERVICE_FORCE_INTEL=1 to get the tests to run, maybe this is an issue on the CI system as well. 
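
For reference, the workaround amounts to exporting the variable before
invoking the test runner, e.g.:

    export MKL_SERVICE_FORCE_INTEL=1

(the exact runner invocation may differ on the CI system)
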
---
 bundle/03_mednist_classification_v2.ipynb |  4 ++--
 bundle/04_integrating_code.ipynb          | 18 ++++++++++++++++++
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb
index 4405d98aa5..a5917ad994 100644
--- a/bundle/03_mednist_classification_v2.ipynb
+++ b/bundle/03_mednist_classification_v2.ipynb
@@ -201,7 +201,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 31,
+    "execution_count": 9,
     "id": "d11681af-3210-4b2b-b7bd-8ad8dedfe230",
     "metadata": {},
     "outputs": [
@@ -274,7 +274,7 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 24,
+    "execution_count": 8,
     "id": "4dfd052e-abe7-473a-bbf4-25674a3b20ea",
     "metadata": {},
     "outputs": [
diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb
index ef4658e2ea..5cb03a19ca 100644
--- a/bundle/04_integrating_code.ipynb
+++ b/bundle/04_integrating_code.ipynb
@@ -732,6 +732,24 @@
    "\n",
    "If you need to adapt your code to a bundle, how the integration will work is going to be very specific to your situation. Using config files as adapter layers is shown here to work, but by understanding how bundles are structured and what the moving pieces of a bundle \"program\" are, you can figure out your own strategy.\n",
    "\n",
+    "### Adapting Data Processing\n",
+    "\n",
+    "One common component to integrate is data processing, whether pre- or post-processing at various stages. MONAI transforms assume that their inputs and outputs are Numpy arrays or PyTorch tensors, or dictionaries thereof. You can integrate existing processing code using `Lambda`/`Lambdad` to wrap a callable object within a MONAI transform rather than defining your own `Transform` subclass. This does require that the data have the correct type and shape. For example, if you have a function in `scripts` simply called `preprocess` which accepts a single image input as a Numpy array, it can be adapted into a transform sequence as follows:\n",
+    "\n",
+    "```yaml\n",
+    "train_transforms:\n",
+    "- _target_: LoadImage\n",
+    "  image_only: true\n",
+    "- _target_: EnsureChannelFirst\n",
+    "- _target_: ToNumpy\n",
+    "- _target_: Lambda\n",
+    "  func: '$@scripts.preprocess'\n",
+    "- _target_: ToTensor\n",
+    "```\n",
+    "\n",
+    "Minimising conversions to and from different formats will improve performance, but otherwise this approach avoids complex rewriting of code to fit MONAI transforms. A preprocess function which takes multiple inputs and produces multiple outputs would be better suited to a dictionary-based transform sequence, but would also require adaptor code or a `MapTransform` subclass.\n",
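+    "\n",
+    "As a rough Python equivalent of the config above (a sketch only, assuming a hypothetical `preprocess` function in `scripts` that takes and returns a single Numpy array):\n",
+    "\n",
+    "```python\n",
+    "import monai.transforms as mt\n",
+    "\n",
+    "from scripts import preprocess  # hypothetical user function: ndarray -> ndarray\n",
+    "\n",
+    "train_transforms = mt.Compose([\n",
+    "    mt.LoadImage(image_only=True),\n",
+    "    mt.EnsureChannelFirst(),\n",
+    "    mt.ToNumpy(),  # ensure the array type preprocess expects\n",
+    "    mt.Lambda(func=preprocess),  # wrap the existing callable as a transform\n",
+    "    mt.ToTensor(),  # convert back to tensors for the network\n",
+    "])\n",
+    "```\n",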
\n", + "\n", + "\n", "## Summary and Next\n", "\n", "In this tutorial we have looked at how to adapt code to a MONAI bundle:\n", From 05440ecb0be9560223dffff379831936db91e102 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Mon, 11 Sep 2023 15:21:10 +0100 Subject: [PATCH 19/26] Test Change Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 29 +++++++++++++++++++++++++++-- 1 file changed, 27 insertions(+), 2 deletions(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index f80120c0fe..2d18ad03df 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -81,8 +81,33 @@ "source": [ "%%bash\n", "\n", - "python -m monai.bundle init_bundle TestBundle\n", - "tree TestBundle" + "python -m monai.bundle init_bundle TestBundle &>out.txt\n", + "#tree TestBundle" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "deb05591-a71d-44d9-86ab-eb22a0a82070", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "ename": "Exception", + "evalue": "Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.\n\tTry to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.\nworkflow_name None\nconfig_file ['./MedNISTClassifier_v2/configs/common.yaml', './MedNISTClassifier_v2/configs/train.yaml']\nmeta_file ./MedNISTClassifier_v2/configs/metadata.json\nlogging_file ./MedNISTClassifier_v2/configs/logging.conf\ninit_id None\nrun_id train\nfinal_id None\ntracking None\nbundle_root ./MedNISTClassifier_v2\nmax_epochs 2\n2023-09-11 14:48:22,104 - INFO - --- input summary of monai.bundle.scripts.run ---\n2023-09-11 14:48:22,104 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n './MedNISTClassifier_v2/configs/train.yaml']\n2023-09-11 14:48:22,104 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n2023-09-11 14:48:22,104 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n2023-09-11 14:48:22,104 - INFO - > run_id: 'train'\n2023-09-11 14:48:22,104 - INFO - > bundle_root: './MedNISTClassifier_v2'\n2023-09-11 14:48:22,105 - INFO - > max_epochs: 2\n2023-09-11 14:48:22,105 - INFO - ---\n\n\n2023-09-11 14:48:22,105 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n2023-09-11 14:48:22,235 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n2023-09-11 14:48:22,235 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n2023-09-11 14:48:22,235 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n\nLoading dataset: 0%| | 0/47164 [00:00 2\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m(o\u001b[38;5;241m.\u001b[39mread())\n", + "\u001b[0;31mException\u001b[0m: Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.\n\tTry to import numpy first or set the threading layer accordingly. 
Set MKL_SERVICE_FORCE_INTEL to force it.\nworkflow_name None\nconfig_file ['./MedNISTClassifier_v2/configs/common.yaml', './MedNISTClassifier_v2/configs/train.yaml']\nmeta_file ./MedNISTClassifier_v2/configs/metadata.json\nlogging_file ./MedNISTClassifier_v2/configs/logging.conf\ninit_id None\nrun_id train\nfinal_id None\ntracking None\nbundle_root ./MedNISTClassifier_v2\nmax_epochs 2\n2023-09-11 14:48:22,104 - INFO - --- input summary of monai.bundle.scripts.run ---\n2023-09-11 14:48:22,104 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n './MedNISTClassifier_v2/configs/train.yaml']\n2023-09-11 14:48:22,104 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n2023-09-11 14:48:22,104 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n2023-09-11 14:48:22,104 - INFO - > run_id: 'train'\n2023-09-11 14:48:22,104 - INFO - > bundle_root: './MedNISTClassifier_v2'\n2023-09-11 14:48:22,105 - INFO - > max_epochs: 2\n2023-09-11 14:48:22,105 - INFO - ---\n\n\n2023-09-11 14:48:22,105 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n2023-09-11 14:48:22,235 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n2023-09-11 14:48:22,235 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n2023-09-11 14:48:22,235 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n\nLoading dataset: 0%| | 0/47164 [00:00 Date: Mon, 11 Sep 2023 15:30:55 +0100 Subject: [PATCH 20/26] `tree` Not Present in CI Environment? Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 29 ++--------------------- bundle/02_mednist_classification.ipynb | 2 +- bundle/03_mednist_classification_v2.ipynb | 4 ++-- bundle/04_integrating_code.ipynb | 2 +- 4 files changed, 6 insertions(+), 31 deletions(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index 2d18ad03df..1cffba3f23 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -81,33 +81,8 @@ "source": [ "%%bash\n", "\n", - "python -m monai.bundle init_bundle TestBundle &>out.txt\n", - "#tree TestBundle" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "id": "deb05591-a71d-44d9-86ab-eb22a0a82070", - "metadata": { - "tags": [] - }, - "outputs": [ - { - "ename": "Exception", - "evalue": "Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.\n\tTry to import numpy first or set the threading layer accordingly. 
Set MKL_SERVICE_FORCE_INTEL to force it.\nworkflow_name None\nconfig_file ['./MedNISTClassifier_v2/configs/common.yaml', './MedNISTClassifier_v2/configs/train.yaml']\nmeta_file ./MedNISTClassifier_v2/configs/metadata.json\nlogging_file ./MedNISTClassifier_v2/configs/logging.conf\ninit_id None\nrun_id train\nfinal_id None\ntracking None\nbundle_root ./MedNISTClassifier_v2\nmax_epochs 2\n2023-09-11 14:48:22,104 - INFO - --- input summary of monai.bundle.scripts.run ---\n2023-09-11 14:48:22,104 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n './MedNISTClassifier_v2/configs/train.yaml']\n2023-09-11 14:48:22,104 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n2023-09-11 14:48:22,104 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n2023-09-11 14:48:22,104 - INFO - > run_id: 'train'\n2023-09-11 14:48:22,104 - INFO - > bundle_root: './MedNISTClassifier_v2'\n2023-09-11 14:48:22,105 - INFO - > max_epochs: 2\n2023-09-11 14:48:22,105 - INFO - ---\n\n\n2023-09-11 14:48:22,105 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n2023-09-11 14:48:22,235 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n2023-09-11 14:48:22,235 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n2023-09-11 14:48:22,235 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n\nLoading dataset: 0%| | 0/47164 [00:00 2\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m(o\u001b[38;5;241m.\u001b[39mread())\n", - "\u001b[0;31mException\u001b[0m: Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.\n\tTry to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.\nworkflow_name None\nconfig_file ['./MedNISTClassifier_v2/configs/common.yaml', './MedNISTClassifier_v2/configs/train.yaml']\nmeta_file ./MedNISTClassifier_v2/configs/metadata.json\nlogging_file ./MedNISTClassifier_v2/configs/logging.conf\ninit_id None\nrun_id train\nfinal_id None\ntracking None\nbundle_root ./MedNISTClassifier_v2\nmax_epochs 2\n2023-09-11 14:48:22,104 - INFO - --- input summary of monai.bundle.scripts.run ---\n2023-09-11 14:48:22,104 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n './MedNISTClassifier_v2/configs/train.yaml']\n2023-09-11 14:48:22,104 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n2023-09-11 14:48:22,104 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n2023-09-11 14:48:22,104 - INFO - > run_id: 'train'\n2023-09-11 14:48:22,104 - INFO - > bundle_root: './MedNISTClassifier_v2'\n2023-09-11 14:48:22,105 - INFO - > max_epochs: 2\n2023-09-11 14:48:22,105 - INFO - ---\n\n\n2023-09-11 14:48:22,105 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n2023-09-11 14:48:22,235 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n2023-09-11 14:48:22,235 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n2023-09-11 14:48:22,235 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n\nLoading dataset: 0%| | 0/47164 [00:00 IntegrationBundle/scripts/__init__.py\n", "\n", - "tree IntegrationBundle" + "which tree && tree IntegrationBundle" ] }, { From 63f078c79e6ddac0ec16fca036f0ac07164d73cb Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Mon, 11 Sep 2023 15:45:28 +0100 Subject: [PATCH 21/26] Without tree? 
Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index 1cffba3f23..43eb2c16f9 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -82,7 +82,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle TestBundle\n", - "which tree && tree TestBundle" + "#which tree && tree TestBundle" ] }, { From 68f42e07a46ba229a1e5ff8d092f451c91449ed0 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Mon, 11 Sep 2023 15:53:20 +0100 Subject: [PATCH 22/26] Final Try Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 2 +- bundle/02_mednist_classification.ipynb | 2 +- bundle/03_mednist_classification_v2.ipynb | 6 +++--- bundle/04_integrating_code.ipynb | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index 43eb2c16f9..951589e204 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -82,7 +82,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle TestBundle\n", - "#which tree && tree TestBundle" + "which tree && tree TestBundle || true" ] }, { diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index 660a2ddcde..bf5f120105 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -118,7 +118,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle MedNISTClassifier\n", - "which tree && tree MedNISTClassifier" + "which tree && tree MedNISTClassifier || true" ] }, { diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb index 1bdde34824..04b8f7cec1 100644 --- a/bundle/03_mednist_classification_v2.ipynb +++ b/bundle/03_mednist_classification_v2.ipynb @@ -67,7 +67,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle MedNISTClassifier_v2\n", - "which tree && tree MedNISTClassifier_v2" + "which tree && tree MedNISTClassifier_v2 || true" ] }, { @@ -734,7 +734,7 @@ } ], "source": [ - "!tree output/output_230830_123911/" + "!which tree && tree output/output_230830_123911/ || true" ] }, { @@ -1125,7 +1125,7 @@ " --ckpt_file \"$BUNDLE/models/model.pt\" \\\n", " --filepath \"$BUNDLE/models/model.ts\" \n", "\n", - "which tree && tree \"$BUNDLE\"" + "which tree && tree \"$BUNDLE\" || true" ] }, { diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb index 57c1a3f2c3..0ef55bc56d 100644 --- a/bundle/04_integrating_code.ipynb +++ b/bundle/04_integrating_code.ipynb @@ -56,7 +56,7 @@ "mkdir IntegrationBundle/scripts\n", "echo \"\" > IntegrationBundle/scripts/__init__.py\n", "\n", - "which tree && tree IntegrationBundle" + "which tree && tree IntegrationBundle || true" ] }, { From 92e7594ef5f475cb4f7ff5dc1c59a0ccad310bc3 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Mon, 11 Sep 2023 18:18:15 +0100 Subject: [PATCH 23/26] Further fixes Signed-off-by: Eric Kerfoot --- bundle/02_mednist_classification.ipynb | 103 +++- bundle/03_mednist_classification_v2.ipynb | 548 +++++++++++----------- bundle/04_integrating_code.ipynb | 243 +++++++--- 3 files changed, 528 insertions(+), 366 deletions(-) diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index bf5f120105..1a620090e6 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -101,6 +101,7 @@ "name": "stdout", "output_type": "stream", "text": [ + 
"/usr/bin/tree\n", "\u001b[01;34mMedNISTClassifier\u001b[00m\n", "├── \u001b[01;34mconfigs\u001b[00m\n", "│   ├── inference.json\n", @@ -133,7 +134,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 2, "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872", "metadata": {}, "outputs": [ @@ -218,7 +219,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 3, "id": "d11681af-3210-4b2b-b7bd-8ad8dedfe230", "metadata": {}, "outputs": [ @@ -226,7 +227,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting MedNISTClassifier/configs/common.yaml\n" + "Writing MedNISTClassifier/configs/common.yaml\n" ] } ], @@ -292,7 +293,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "4dfd052e-abe7-473a-bbf4-25674a3b20ea", "metadata": {}, "outputs": [ @@ -300,7 +301,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting MedNISTClassifier/configs/train.yaml\n" + "Writing MedNISTClassifier/configs/train.yaml\n" ] } ], @@ -360,10 +361,59 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "8357670d-fe69-4789-9b9a-77c0d8144b10", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "workflow_name None\n", + "config_file ['./MedNISTClassifier/configs/common.yaml', './MedNISTClassifier/configs/train.yaml']\n", + "meta_file ./MedNISTClassifier/configs/metadata.json\n", + "logging_file None\n", + "init_id None\n", + "run_id train\n", + "final_id None\n", + "tracking None\n", + "max_epochs 2\n", + "2023-09-11 16:19:49,915 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 16:19:49,915 - INFO - > config_file: ['./MedNISTClassifier/configs/common.yaml',\n", + " './MedNISTClassifier/configs/train.yaml']\n", + "2023-09-11 16:19:49,915 - INFO - > meta_file: './MedNISTClassifier/configs/metadata.json'\n", + "2023-09-11 16:19:49,915 - INFO - > run_id: 'train'\n", + "2023-09-11 16:19:49,915 - INFO - > max_epochs: 2\n", + "2023-09-11 16:19:49,915 - INFO - ---\n", + "\n", + "\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:257: UserWarning: Default logging file in MedNISTClassifier/configs/logging.conf does not exist, skipping logging.\n", + " warnings.warn(f\"Default logging file in {logging_file} does not exist, skipping logging.\")\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "2023-09-11 16:19:50,055 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", + "2023-09-11 16:19:50,055 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", + "2023-09-11 16:19:50,055 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Loading dataset: 100%|██████████| 47164/47164 [00:41<00:00, 1145.05it/s]\n" + ] + } + ], "source": [ "%%bash\n", "\n", @@ -376,11 +426,11 @@ " --max_epochs 2\n", "\n", "# we'll use the trained network as the model object for this bundle\n", - "mv mednist.ts $BUNDLE/models/model.ts\n", + "mv model.ts $BUNDLE/models/model.ts\n", "\n", "# generate the saved dictionary file as well\n", "cd \"$BUNDLE/models\"\n", - "python -c 'import torch; obj = torch.jit.load(\"model.ts\"); torch.save(obj.state_dict(),\"model.pt\")'" + "python -c 'import torch; obj = torch.jit.load(\"model.ts\"); torch.save(obj.state_dict(), \"model.pt\")'" ] }, { @@ -399,7 +449,7 
@@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 7, "id": "fbad1a21-4dda-4b80-8e81-7d7e75307f9c", "metadata": {}, "outputs": [], @@ -409,7 +459,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 8, "id": "0c8725f7-f1cd-48f5-81a5-3f5a9ee03e9c", "metadata": {}, "outputs": [ @@ -447,7 +497,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 9, "id": "b4e1f99a-a68b-4aeb-bcf2-842f26609b52", "metadata": {}, "outputs": [ @@ -455,7 +505,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting MedNISTClassifier/configs/evaluate.yaml\n" + "Writing MedNISTClassifier/configs/evaluate.yaml\n" ] } ], @@ -499,7 +549,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 10, "id": "3c5fa39f-8798-4e41-8e2a-3a70a6be3906", "metadata": {}, "outputs": [ @@ -507,13 +557,22 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-08-24 14:14:09,479 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-08-24 14:14:09,479 - INFO - > run_id: 'evaluate'\n", - "2023-08-24 14:14:09,479 - INFO - > meta_file: './MedNISTClassifier/configs/metadata.json'\n", - "2023-08-24 14:14:09,479 - INFO - > config_file: ['./MedNISTClassifier/configs/common.yaml',\n", + "workflow_name None\n", + "config_file ['./MedNISTClassifier/configs/common.yaml', './MedNISTClassifier/configs/evaluate.yaml']\n", + "meta_file ./MedNISTClassifier/configs/metadata.json\n", + "logging_file None\n", + "init_id None\n", + "run_id evaluate\n", + "final_id None\n", + "tracking None\n", + "ckpt_file ./MedNISTClassifier/models/model.pt\n", + "2023-09-11 16:22:56,379 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 16:22:56,379 - INFO - > config_file: ['./MedNISTClassifier/configs/common.yaml',\n", " './MedNISTClassifier/configs/evaluate.yaml']\n", - "2023-08-24 14:14:09,479 - INFO - > ckpt_file: './MedNISTClassifier/models/model.pt'\n", - "2023-08-24 14:14:09,479 - INFO - ---\n", + "2023-09-11 16:22:56,379 - INFO - > meta_file: './MedNISTClassifier/configs/metadata.json'\n", + "2023-09-11 16:22:56,379 - INFO - > run_id: 'evaluate'\n", + "2023-09-11 16:22:56,379 - INFO - > ckpt_file: './MedNISTClassifier/models/model.pt'\n", + "2023-09-11 16:22:56,379 - INFO - ---\n", "\n", "\n" ] @@ -522,8 +581,8 @@ "name": "stderr", "output_type": "stream", "text": [ - "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", - " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:257: UserWarning: Default logging file in MedNISTClassifier/configs/logging.conf does not exist, skipping logging.\n", + " warnings.warn(f\"Default logging file in {logging_file} does not exist, skipping logging.\")\n" ] }, { diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb index 04b8f7cec1..3eb5f09085 100644 --- a/bundle/03_mednist_classification_v2.ipynb +++ b/bundle/03_mednist_classification_v2.ipynb @@ -19,7 +19,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "2f51a451-566f-4501-aeb8-f3cd5d1f7bf9", "metadata": {}, "outputs": [], @@ -42,7 +42,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 2, "id": "eb9dc6ec-13da-4a37-8afa-28e2766b9343", "metadata": {}, "outputs": [ @@ -50,6 +50,7 
@@ "name": "stdout", "output_type": "stream", "text": [ + "/usr/bin/tree\n", "\u001b[01;34mMedNISTClassifier_v2\u001b[00m\n", "├── \u001b[01;34mconfigs\u001b[00m\n", "│   ├── inference.json\n", @@ -72,7 +73,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 3, "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872", "metadata": {}, "outputs": [ @@ -201,7 +202,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 5, "id": "d11681af-3210-4b2b-b7bd-8ad8dedfe230", "metadata": {}, "outputs": [ @@ -209,7 +210,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting MedNISTClassifier_v2/configs/common.yaml\n" + "Writing MedNISTClassifier_v2/configs/common.yaml\n" ] } ], @@ -274,7 +275,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 6, "id": "4dfd052e-abe7-473a-bbf4-25674a3b20ea", "metadata": {}, "outputs": [ @@ -282,7 +283,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting MedNISTClassifier_v2/configs/train.yaml\n" + "Writing MedNISTClassifier_v2/configs/train.yaml\n" ] } ], @@ -432,7 +433,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 7, "id": "8357670d-fe69-4789-9b9a-77c0d8144b10", "metadata": {}, "outputs": [ @@ -440,255 +441,255 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-08-30 12:38:23,636 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-08-30 12:38:23,636 - INFO - > run_id: 'train'\n", - "2023-08-30 12:38:23,636 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", - "2023-08-30 12:38:23,636 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", + "2023-09-11 16:44:56,163 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 16:44:56,163 - INFO - > run_id: 'train'\n", + "2023-09-11 16:44:56,163 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", + "2023-09-11 16:44:56,164 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", " './MedNISTClassifier_v2/configs/train.yaml']\n", - "2023-08-30 12:38:23,636 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", - "2023-08-30 12:38:23,636 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", - "2023-08-30 12:38:23,636 - INFO - > max_epochs: 2\n", - "2023-08-30 12:38:23,636 - INFO - ---\n", + "2023-09-11 16:44:56,164 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", + "2023-09-11 16:44:56,164 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", + "2023-09-11 16:44:56,164 - INFO - > max_epochs: 2\n", + "2023-09-11 16:44:56,164 - INFO - ---\n", "\n", "\n", - "2023-08-30 12:38:23,636 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", - "2023-08-30 12:38:23,768 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", - "2023-08-30 12:38:23,768 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", - "2023-08-30 12:38:23,768 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" + "2023-09-11 16:44:56,164 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", + "2023-09-11 16:44:56,297 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", + "2023-09-11 16:44:56,297 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", + "2023-09-11 16:44:56,297 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" ] }, { "name": "stderr", "output_type": "stream", 
"text": [ - "Loading dataset: 100%|██████████| 47164/47164 [00:41<00:00, 1134.34it/s]\n" + "Loading dataset: 100%|██████████| 47164/47164 [00:43<00:00, 1085.57it/s]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ - "2023-08-30 12:39:05,994 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", - "2023-08-30 12:39:05,994 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", - "2023-08-30 12:39:05,994 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" + "2023-09-11 16:45:40,487 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", + "2023-09-11 16:45:40,487 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", + "2023-09-11 16:45:40,487 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ - "Loading dataset: 100%|██████████| 5895/5895 [00:05<00:00, 1135.59it/s]\n" + "Loading dataset: 100%|██████████| 5895/5895 [00:06<00:00, 894.97it/s] \n" ] }, { "name": "stdout", "output_type": "stream", "text": [ - "2023-08-30 12:39:11,320 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run resuming from iteration 0, epoch 0 until 2 epochs\n", - "2023-08-30 12:39:12,457 - INFO - Epoch: 1/2, Iter: 1/93 -- train_loss: 1.8415 \n", - "2023-08-30 12:39:12,828 - INFO - Epoch: 1/2, Iter: 2/93 -- train_loss: 1.8107 \n", - "2023-08-30 12:39:13,194 - INFO - Epoch: 1/2, Iter: 3/93 -- train_loss: 1.7766 \n", - "2023-08-30 12:39:13,569 - INFO - Epoch: 1/2, Iter: 4/93 -- train_loss: 1.7330 \n", - "2023-08-30 12:39:13,951 - INFO - Epoch: 1/2, Iter: 5/93 -- train_loss: 1.7159 \n", - "2023-08-30 12:39:14,326 - INFO - Epoch: 1/2, Iter: 6/93 -- train_loss: 1.6599 \n", - "2023-08-30 12:39:14,698 - INFO - Epoch: 1/2, Iter: 7/93 -- train_loss: 1.6619 \n", - "2023-08-30 12:39:15,068 - INFO - Epoch: 1/2, Iter: 8/93 -- train_loss: 1.6289 \n", - "2023-08-30 12:39:15,442 - INFO - Epoch: 1/2, Iter: 9/93 -- train_loss: 1.5839 \n", - "2023-08-30 12:39:15,813 - INFO - Epoch: 1/2, Iter: 10/93 -- train_loss: 1.5505 \n", - "2023-08-30 12:39:16,184 - INFO - Epoch: 1/2, Iter: 11/93 -- train_loss: 1.5104 \n", - "2023-08-30 12:39:16,555 - INFO - Epoch: 1/2, Iter: 12/93 -- train_loss: 1.5082 \n", - "2023-08-30 12:39:16,928 - INFO - Epoch: 1/2, Iter: 13/93 -- train_loss: 1.4683 \n", - "2023-08-30 12:39:17,298 - INFO - Epoch: 1/2, Iter: 14/93 -- train_loss: 1.4428 \n", - "2023-08-30 12:39:17,669 - INFO - Epoch: 1/2, Iter: 15/93 -- train_loss: 1.4370 \n", - "2023-08-30 12:39:18,040 - INFO - Epoch: 1/2, Iter: 16/93 -- train_loss: 1.4218 \n", - "2023-08-30 12:39:18,413 - INFO - Epoch: 1/2, Iter: 17/93 -- train_loss: 1.3643 \n", - "2023-08-30 12:39:18,788 - INFO - Epoch: 1/2, Iter: 18/93 -- train_loss: 1.3395 \n", - "2023-08-30 12:39:19,156 - INFO - Epoch: 1/2, Iter: 19/93 -- train_loss: 1.3353 \n", - "2023-08-30 12:39:19,526 - INFO - Epoch: 1/2, Iter: 20/93 -- train_loss: 1.2964 \n", - "2023-08-30 12:39:19,899 - INFO - Epoch: 1/2, Iter: 21/93 -- train_loss: 1.2980 \n", - "2023-08-30 12:39:20,269 - INFO - Epoch: 1/2, Iter: 22/93 -- train_loss: 1.2524 \n", - "2023-08-30 12:39:20,637 - INFO - Epoch: 1/2, Iter: 23/93 -- train_loss: 1.2426 \n", - "2023-08-30 12:39:21,005 - INFO - Epoch: 1/2, Iter: 24/93 -- train_loss: 1.2124 \n", - "2023-08-30 12:39:21,384 - INFO - Epoch: 1/2, Iter: 25/93 -- train_loss: 1.2232 \n", - "2023-08-30 12:39:21,755 - INFO - Epoch: 1/2, Iter: 26/93 -- train_loss: 1.2067 \n", - "2023-08-30 12:39:22,145 - INFO - Epoch: 1/2, Iter: 27/93 -- train_loss: 
1.1653 \n", - "2023-08-30 12:39:22,519 - INFO - Epoch: 1/2, Iter: 28/93 -- train_loss: 1.1216 \n", - "2023-08-30 12:39:22,899 - INFO - Epoch: 1/2, Iter: 29/93 -- train_loss: 1.1002 \n", - "2023-08-30 12:39:23,268 - INFO - Epoch: 1/2, Iter: 30/93 -- train_loss: 1.0889 \n", - "2023-08-30 12:39:23,635 - INFO - Epoch: 1/2, Iter: 31/93 -- train_loss: 1.0906 \n", - "2023-08-30 12:39:24,005 - INFO - Epoch: 1/2, Iter: 32/93 -- train_loss: 1.0542 \n", - "2023-08-30 12:39:24,379 - INFO - Epoch: 1/2, Iter: 33/93 -- train_loss: 1.0505 \n", - "2023-08-30 12:39:24,752 - INFO - Epoch: 1/2, Iter: 34/93 -- train_loss: 1.0479 \n", - "2023-08-30 12:39:25,121 - INFO - Epoch: 1/2, Iter: 35/93 -- train_loss: 0.9899 \n", - "2023-08-30 12:39:25,497 - INFO - Epoch: 1/2, Iter: 36/93 -- train_loss: 1.0060 \n", - "2023-08-30 12:39:25,877 - INFO - Epoch: 1/2, Iter: 37/93 -- train_loss: 0.9894 \n", - "2023-08-30 12:39:26,250 - INFO - Epoch: 1/2, Iter: 38/93 -- train_loss: 0.9567 \n", - "2023-08-30 12:39:26,618 - INFO - Epoch: 1/2, Iter: 39/93 -- train_loss: 0.9446 \n", - "2023-08-30 12:39:26,998 - INFO - Epoch: 1/2, Iter: 40/93 -- train_loss: 0.9262 \n", - "2023-08-30 12:39:27,374 - INFO - Epoch: 1/2, Iter: 41/93 -- train_loss: 0.9277 \n", - "2023-08-30 12:39:27,743 - INFO - Epoch: 1/2, Iter: 42/93 -- train_loss: 0.8966 \n", - "2023-08-30 12:39:28,112 - INFO - Epoch: 1/2, Iter: 43/93 -- train_loss: 0.8847 \n", - "2023-08-30 12:39:28,490 - INFO - Epoch: 1/2, Iter: 44/93 -- train_loss: 0.8708 \n", - "2023-08-30 12:39:28,865 - INFO - Epoch: 1/2, Iter: 45/93 -- train_loss: 0.8846 \n", - "2023-08-30 12:39:29,237 - INFO - Epoch: 1/2, Iter: 46/93 -- train_loss: 0.8167 \n", - "2023-08-30 12:39:29,611 - INFO - Epoch: 1/2, Iter: 47/93 -- train_loss: 0.8477 \n", - "2023-08-30 12:39:29,982 - INFO - Epoch: 1/2, Iter: 48/93 -- train_loss: 0.8050 \n", - "2023-08-30 12:39:30,358 - INFO - Epoch: 1/2, Iter: 49/93 -- train_loss: 0.7793 \n", - "2023-08-30 12:39:30,729 - INFO - Epoch: 1/2, Iter: 50/93 -- train_loss: 0.7661 \n", - "2023-08-30 12:39:31,101 - INFO - Epoch: 1/2, Iter: 51/93 -- train_loss: 0.7868 \n", - "2023-08-30 12:39:31,610 - INFO - Epoch: 1/2, Iter: 52/93 -- train_loss: 0.7492 \n", - "2023-08-30 12:39:31,984 - INFO - Epoch: 1/2, Iter: 53/93 -- train_loss: 0.7325 \n", - "2023-08-30 12:39:32,355 - INFO - Epoch: 1/2, Iter: 54/93 -- train_loss: 0.7154 \n", - "2023-08-30 12:39:32,723 - INFO - Epoch: 1/2, Iter: 55/93 -- train_loss: 0.7304 \n", - "2023-08-30 12:39:33,094 - INFO - Epoch: 1/2, Iter: 56/93 -- train_loss: 0.6743 \n", - "2023-08-30 12:39:33,478 - INFO - Epoch: 1/2, Iter: 57/93 -- train_loss: 0.6978 \n", - "2023-08-30 12:39:33,850 - INFO - Epoch: 1/2, Iter: 58/93 -- train_loss: 0.6747 \n", - "2023-08-30 12:39:34,220 - INFO - Epoch: 1/2, Iter: 59/93 -- train_loss: 0.7037 \n", - "2023-08-30 12:39:34,591 - INFO - Epoch: 1/2, Iter: 60/93 -- train_loss: 0.6550 \n", - "2023-08-30 12:39:34,968 - INFO - Epoch: 1/2, Iter: 61/93 -- train_loss: 0.6728 \n", - "2023-08-30 12:39:35,340 - INFO - Epoch: 1/2, Iter: 62/93 -- train_loss: 0.6274 \n", - "2023-08-30 12:39:35,709 - INFO - Epoch: 1/2, Iter: 63/93 -- train_loss: 0.6296 \n", - "2023-08-30 12:39:36,080 - INFO - Epoch: 1/2, Iter: 64/93 -- train_loss: 0.6272 \n", - "2023-08-30 12:39:36,456 - INFO - Epoch: 1/2, Iter: 65/93 -- train_loss: 0.6205 \n", - "2023-08-30 12:39:36,828 - INFO - Epoch: 1/2, Iter: 66/93 -- train_loss: 0.5981 \n", - "2023-08-30 12:39:37,197 - INFO - Epoch: 1/2, Iter: 67/93 -- train_loss: 0.5998 \n", - "2023-08-30 12:39:37,574 - INFO - Epoch: 1/2, Iter: 68/93 -- 
train_loss: 0.5809 \n", - "2023-08-30 12:39:37,951 - INFO - Epoch: 1/2, Iter: 69/93 -- train_loss: 0.5781 \n", - "2023-08-30 12:39:38,322 - INFO - Epoch: 1/2, Iter: 70/93 -- train_loss: 0.5665 \n", - "2023-08-30 12:39:38,691 - INFO - Epoch: 1/2, Iter: 71/93 -- train_loss: 0.5403 \n", - "2023-08-30 12:39:39,063 - INFO - Epoch: 1/2, Iter: 72/93 -- train_loss: 0.5393 \n", - "2023-08-30 12:39:39,443 - INFO - Epoch: 1/2, Iter: 73/93 -- train_loss: 0.5547 \n", - "2023-08-30 12:39:39,815 - INFO - Epoch: 1/2, Iter: 74/93 -- train_loss: 0.5080 \n", - "2023-08-30 12:39:40,185 - INFO - Epoch: 1/2, Iter: 75/93 -- train_loss: 0.5292 \n", - "2023-08-30 12:39:40,557 - INFO - Epoch: 1/2, Iter: 76/93 -- train_loss: 0.4856 \n", - "2023-08-30 12:39:40,932 - INFO - Epoch: 1/2, Iter: 77/93 -- train_loss: 0.4987 \n", - "2023-08-30 12:39:41,304 - INFO - Epoch: 1/2, Iter: 78/93 -- train_loss: 0.4931 \n", - "2023-08-30 12:39:41,674 - INFO - Epoch: 1/2, Iter: 79/93 -- train_loss: 0.4819 \n", - "2023-08-30 12:39:42,047 - INFO - Epoch: 1/2, Iter: 80/93 -- train_loss: 0.4818 \n", - "2023-08-30 12:39:42,424 - INFO - Epoch: 1/2, Iter: 81/93 -- train_loss: 0.4978 \n", - "2023-08-30 12:39:42,804 - INFO - Epoch: 1/2, Iter: 82/93 -- train_loss: 0.4684 \n", - "2023-08-30 12:39:43,175 - INFO - Epoch: 1/2, Iter: 83/93 -- train_loss: 0.4431 \n", - "2023-08-30 12:39:43,555 - INFO - Epoch: 1/2, Iter: 84/93 -- train_loss: 0.4568 \n", - "2023-08-30 12:39:43,937 - INFO - Epoch: 1/2, Iter: 85/93 -- train_loss: 0.4712 \n", - "2023-08-30 12:39:44,313 - INFO - Epoch: 1/2, Iter: 86/93 -- train_loss: 0.4307 \n", - "2023-08-30 12:39:44,683 - INFO - Epoch: 1/2, Iter: 87/93 -- train_loss: 0.4360 \n", - "2023-08-30 12:39:45,235 - INFO - Epoch: 1/2, Iter: 88/93 -- train_loss: 0.4141 \n", - "2023-08-30 12:39:45,615 - INFO - Epoch: 1/2, Iter: 89/93 -- train_loss: 0.4159 \n", - "2023-08-30 12:39:45,990 - INFO - Epoch: 1/2, Iter: 90/93 -- train_loss: 0.4035 \n", - "2023-08-30 12:39:46,359 - INFO - Epoch: 1/2, Iter: 91/93 -- train_loss: 0.3963 \n", - "2023-08-30 12:39:46,731 - INFO - Epoch: 1/2, Iter: 92/93 -- train_loss: 0.4143 \n", - "2023-08-30 12:39:46,962 - INFO - Epoch: 1/2, Iter: 93/93 -- train_loss: 0.3548 \n", - "2023-08-30 12:39:46,963 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", - "2023-08-30 12:39:56,039 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9889737065309584\n", - "2023-08-30 12:39:56,040 - INFO - Epoch[1] Metrics -- accuracy: 0.9890 \n", - "2023-08-30 12:39:56,040 - INFO - Key metric: accuracy best value: 0.9889737065309584 at epoch: 1\n", - "2023-08-30 12:39:56,040 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:08.961\n", - "2023-08-30 12:39:56,040 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:09.077\n", - "2023-08-30 12:39:56,161 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 1\n", - "2023-08-30 12:39:56,161 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[1] Complete. 
Time taken: 00:00:44.714\n", - "2023-08-30 12:39:56,769 - INFO - Epoch: 2/2, Iter: 1/93 -- train_loss: 0.3996 \n", - "2023-08-30 12:39:57,157 - INFO - Epoch: 2/2, Iter: 2/93 -- train_loss: 0.3662 \n", - "2023-08-30 12:39:57,528 - INFO - Epoch: 2/2, Iter: 3/93 -- train_loss: 0.3753 \n", - "2023-08-30 12:39:57,902 - INFO - Epoch: 2/2, Iter: 4/93 -- train_loss: 0.3637 \n", - "2023-08-30 12:39:58,279 - INFO - Epoch: 2/2, Iter: 5/93 -- train_loss: 0.3660 \n", - "2023-08-30 12:39:58,655 - INFO - Epoch: 2/2, Iter: 6/93 -- train_loss: 0.3651 \n", - "2023-08-30 12:39:59,025 - INFO - Epoch: 2/2, Iter: 7/93 -- train_loss: 0.3792 \n", - "2023-08-30 12:39:59,400 - INFO - Epoch: 2/2, Iter: 8/93 -- train_loss: 0.3327 \n", - "2023-08-30 12:39:59,782 - INFO - Epoch: 2/2, Iter: 9/93 -- train_loss: 0.3364 \n", - "2023-08-30 12:40:00,154 - INFO - Epoch: 2/2, Iter: 10/93 -- train_loss: 0.3670 \n", - "2023-08-30 12:40:00,524 - INFO - Epoch: 2/2, Iter: 11/93 -- train_loss: 0.3640 \n", - "2023-08-30 12:40:00,898 - INFO - Epoch: 2/2, Iter: 12/93 -- train_loss: 0.3332 \n", - "2023-08-30 12:40:01,277 - INFO - Epoch: 2/2, Iter: 13/93 -- train_loss: 0.3037 \n", - "2023-08-30 12:40:01,649 - INFO - Epoch: 2/2, Iter: 14/93 -- train_loss: 0.3297 \n", - "2023-08-30 12:40:02,018 - INFO - Epoch: 2/2, Iter: 15/93 -- train_loss: 0.3120 \n", - "2023-08-30 12:40:02,390 - INFO - Epoch: 2/2, Iter: 16/93 -- train_loss: 0.3109 \n", - "2023-08-30 12:40:02,769 - INFO - Epoch: 2/2, Iter: 17/93 -- train_loss: 0.3292 \n", - "2023-08-30 12:40:03,141 - INFO - Epoch: 2/2, Iter: 18/93 -- train_loss: 0.3157 \n", - "2023-08-30 12:40:03,510 - INFO - Epoch: 2/2, Iter: 19/93 -- train_loss: 0.3049 \n", - "2023-08-30 12:40:03,882 - INFO - Epoch: 2/2, Iter: 20/93 -- train_loss: 0.2881 \n", - "2023-08-30 12:40:04,262 - INFO - Epoch: 2/2, Iter: 21/93 -- train_loss: 0.2818 \n", - "2023-08-30 12:40:04,634 - INFO - Epoch: 2/2, Iter: 22/93 -- train_loss: 0.2728 \n", - "2023-08-30 12:40:05,003 - INFO - Epoch: 2/2, Iter: 23/93 -- train_loss: 0.2728 \n", - "2023-08-30 12:40:05,375 - INFO - Epoch: 2/2, Iter: 24/93 -- train_loss: 0.2852 \n", - "2023-08-30 12:40:05,753 - INFO - Epoch: 2/2, Iter: 25/93 -- train_loss: 0.2658 \n", - "2023-08-30 12:40:06,126 - INFO - Epoch: 2/2, Iter: 26/93 -- train_loss: 0.2662 \n", - "2023-08-30 12:40:06,495 - INFO - Epoch: 2/2, Iter: 27/93 -- train_loss: 0.2818 \n", - "2023-08-30 12:40:06,868 - INFO - Epoch: 2/2, Iter: 28/93 -- train_loss: 0.2564 \n", - "2023-08-30 12:40:07,248 - INFO - Epoch: 2/2, Iter: 29/93 -- train_loss: 0.2550 \n", - "2023-08-30 12:40:07,622 - INFO - Epoch: 2/2, Iter: 30/93 -- train_loss: 0.2681 \n", - "2023-08-30 12:40:07,992 - INFO - Epoch: 2/2, Iter: 31/93 -- train_loss: 0.2559 \n", - "2023-08-30 12:40:08,365 - INFO - Epoch: 2/2, Iter: 32/93 -- train_loss: 0.2672 \n", - "2023-08-30 12:40:08,751 - INFO - Epoch: 2/2, Iter: 33/93 -- train_loss: 0.2685 \n", - "2023-08-30 12:40:09,124 - INFO - Epoch: 2/2, Iter: 34/93 -- train_loss: 0.2602 \n", - "2023-08-30 12:40:09,737 - INFO - Epoch: 2/2, Iter: 35/93 -- train_loss: 0.2622 \n", - "2023-08-30 12:40:10,111 - INFO - Epoch: 2/2, Iter: 36/93 -- train_loss: 0.2438 \n", - "2023-08-30 12:40:10,488 - INFO - Epoch: 2/2, Iter: 37/93 -- train_loss: 0.2609 \n", - "2023-08-30 12:40:10,863 - INFO - Epoch: 2/2, Iter: 38/93 -- train_loss: 0.2211 \n", - "2023-08-30 12:40:11,236 - INFO - Epoch: 2/2, Iter: 39/93 -- train_loss: 0.2437 \n", - "2023-08-30 12:40:11,609 - INFO - Epoch: 2/2, Iter: 40/93 -- train_loss: 0.2296 \n", - "2023-08-30 12:40:11,989 - INFO - Epoch: 2/2, Iter: 
41/93 -- train_loss: 0.2312 \n", - "2023-08-30 12:40:12,361 - INFO - Epoch: 2/2, Iter: 42/93 -- train_loss: 0.2214 \n", - "2023-08-30 12:40:12,733 - INFO - Epoch: 2/2, Iter: 43/93 -- train_loss: 0.2339 \n", - "2023-08-30 12:40:13,112 - INFO - Epoch: 2/2, Iter: 44/93 -- train_loss: 0.2359 \n", - "2023-08-30 12:40:13,492 - INFO - Epoch: 2/2, Iter: 45/93 -- train_loss: 0.2351 \n", - "2023-08-30 12:40:13,868 - INFO - Epoch: 2/2, Iter: 46/93 -- train_loss: 0.2161 \n", - "2023-08-30 12:40:14,238 - INFO - Epoch: 2/2, Iter: 47/93 -- train_loss: 0.2140 \n", - "2023-08-30 12:40:14,617 - INFO - Epoch: 2/2, Iter: 48/93 -- train_loss: 0.2275 \n", - "2023-08-30 12:40:14,999 - INFO - Epoch: 2/2, Iter: 49/93 -- train_loss: 0.2160 \n", - "2023-08-30 12:40:15,373 - INFO - Epoch: 2/2, Iter: 50/93 -- train_loss: 0.1924 \n", - "2023-08-30 12:40:15,751 - INFO - Epoch: 2/2, Iter: 51/93 -- train_loss: 0.2017 \n", - "2023-08-30 12:40:16,135 - INFO - Epoch: 2/2, Iter: 52/93 -- train_loss: 0.1886 \n", - "2023-08-30 12:40:16,516 - INFO - Epoch: 2/2, Iter: 53/93 -- train_loss: 0.2080 \n", - "2023-08-30 12:40:16,890 - INFO - Epoch: 2/2, Iter: 54/93 -- train_loss: 0.1862 \n", - "2023-08-30 12:40:17,264 - INFO - Epoch: 2/2, Iter: 55/93 -- train_loss: 0.2107 \n", - "2023-08-30 12:40:17,636 - INFO - Epoch: 2/2, Iter: 56/93 -- train_loss: 0.1911 \n", - "2023-08-30 12:40:18,012 - INFO - Epoch: 2/2, Iter: 57/93 -- train_loss: 0.1933 \n", - "2023-08-30 12:40:18,389 - INFO - Epoch: 2/2, Iter: 58/93 -- train_loss: 0.1964 \n", - "2023-08-30 12:40:18,759 - INFO - Epoch: 2/2, Iter: 59/93 -- train_loss: 0.1780 \n", - "2023-08-30 12:40:19,134 - INFO - Epoch: 2/2, Iter: 60/93 -- train_loss: 0.1969 \n", - "2023-08-30 12:40:19,510 - INFO - Epoch: 2/2, Iter: 61/93 -- train_loss: 0.2030 \n", - "2023-08-30 12:40:19,890 - INFO - Epoch: 2/2, Iter: 62/93 -- train_loss: 0.1805 \n", - "2023-08-30 12:40:20,262 - INFO - Epoch: 2/2, Iter: 63/93 -- train_loss: 0.1901 \n", - "2023-08-30 12:40:20,635 - INFO - Epoch: 2/2, Iter: 64/93 -- train_loss: 0.1830 \n", - "2023-08-30 12:40:21,012 - INFO - Epoch: 2/2, Iter: 65/93 -- train_loss: 0.1713 \n", - "2023-08-30 12:40:21,385 - INFO - Epoch: 2/2, Iter: 66/93 -- train_loss: 0.1820 \n", - "2023-08-30 12:40:21,756 - INFO - Epoch: 2/2, Iter: 67/93 -- train_loss: 0.1912 \n", - "2023-08-30 12:40:22,154 - INFO - Epoch: 2/2, Iter: 68/93 -- train_loss: 0.1689 \n", - "2023-08-30 12:40:22,529 - INFO - Epoch: 2/2, Iter: 69/93 -- train_loss: 0.1651 \n", - "2023-08-30 12:40:22,900 - INFO - Epoch: 2/2, Iter: 70/93 -- train_loss: 0.1832 \n", - "2023-08-30 12:40:23,270 - INFO - Epoch: 2/2, Iter: 71/93 -- train_loss: 0.1659 \n", - "2023-08-30 12:40:23,643 - INFO - Epoch: 2/2, Iter: 72/93 -- train_loss: 0.1636 \n", - "2023-08-30 12:40:24,017 - INFO - Epoch: 2/2, Iter: 73/93 -- train_loss: 0.1625 \n", - "2023-08-30 12:40:24,389 - INFO - Epoch: 2/2, Iter: 74/93 -- train_loss: 0.1583 \n", - "2023-08-30 12:40:24,759 - INFO - Epoch: 2/2, Iter: 75/93 -- train_loss: 0.1654 \n", - "2023-08-30 12:40:25,131 - INFO - Epoch: 2/2, Iter: 76/93 -- train_loss: 0.1575 \n", - "2023-08-30 12:40:25,506 - INFO - Epoch: 2/2, Iter: 77/93 -- train_loss: 0.1678 \n", - "2023-08-30 12:40:25,879 - INFO - Epoch: 2/2, Iter: 78/93 -- train_loss: 0.1731 \n", - "2023-08-30 12:40:26,249 - INFO - Epoch: 2/2, Iter: 79/93 -- train_loss: 0.1732 \n", - "2023-08-30 12:40:26,620 - INFO - Epoch: 2/2, Iter: 80/93 -- train_loss: 0.1535 \n", - "2023-08-30 12:40:26,995 - INFO - Epoch: 2/2, Iter: 81/93 -- train_loss: 0.1750 \n", - "2023-08-30 12:40:27,367 - INFO - 
Epoch: 2/2, Iter: 82/93 -- train_loss: 0.1701 \n", - "2023-08-30 12:40:27,737 - INFO - Epoch: 2/2, Iter: 83/93 -- train_loss: 0.1671 \n", - "2023-08-30 12:40:28,109 - INFO - Epoch: 2/2, Iter: 84/93 -- train_loss: 0.1661 \n", - "2023-08-30 12:40:28,487 - INFO - Epoch: 2/2, Iter: 85/93 -- train_loss: 0.1436 \n", - "2023-08-30 12:40:28,858 - INFO - Epoch: 2/2, Iter: 86/93 -- train_loss: 0.1486 \n", - "2023-08-30 12:40:29,229 - INFO - Epoch: 2/2, Iter: 87/93 -- train_loss: 0.1446 \n", - "2023-08-30 12:40:29,601 - INFO - Epoch: 2/2, Iter: 88/93 -- train_loss: 0.1411 \n", - "2023-08-30 12:40:29,976 - INFO - Epoch: 2/2, Iter: 89/93 -- train_loss: 0.1547 \n", - "2023-08-30 12:40:30,346 - INFO - Epoch: 2/2, Iter: 90/93 -- train_loss: 0.1410 \n", - "2023-08-30 12:40:30,718 - INFO - Epoch: 2/2, Iter: 91/93 -- train_loss: 0.1753 \n", - "2023-08-30 12:40:31,088 - INFO - Epoch: 2/2, Iter: 92/93 -- train_loss: 0.1475 \n", - "2023-08-30 12:40:31,202 - INFO - Epoch: 2/2, Iter: 93/93 -- train_loss: 0.1644 \n", - "2023-08-30 12:40:31,202 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 1 until 2 epochs\n", - "2023-08-30 12:40:40,084 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9945151258128357\n", - "2023-08-30 12:40:40,084 - INFO - Epoch[2] Metrics -- accuracy: 0.9945 \n", - "2023-08-30 12:40:40,084 - INFO - Key metric: accuracy best value: 0.9945151258128357 at epoch: 2\n", - "2023-08-30 12:40:40,084 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[2] Complete. Time taken: 00:00:08.764\n", - "2023-08-30 12:40:40,084 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:08.882\n", - "2023-08-30 12:40:40,233 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 2\n", - "2023-08-30 12:40:40,233 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[2] Complete. Time taken: 00:00:44.072\n", - "2023-08-30 12:40:40,318 - ignite.engine.engine.SupervisedTrainer - INFO - Train completed, saved final checkpoint: output/output_230830_123911/model_final_iteration=186.pt\n", - "2023-08-30 12:40:40,318 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run complete. 
Time taken: 00:01:28.998\n" + "2023-09-11 16:45:47,217 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run resuming from iteration 0, epoch 0 until 2 epochs\n", + "2023-09-11 16:45:48,671 - INFO - Epoch: 1/2, Iter: 1/93 -- train_loss: 1.8460 \n", + "2023-09-11 16:45:49,046 - INFO - Epoch: 1/2, Iter: 2/93 -- train_loss: 1.8131 \n", + "2023-09-11 16:45:49,436 - INFO - Epoch: 1/2, Iter: 3/93 -- train_loss: 1.7636 \n", + "2023-09-11 16:45:49,803 - INFO - Epoch: 1/2, Iter: 4/93 -- train_loss: 1.7511 \n", + "2023-09-11 16:45:50,174 - INFO - Epoch: 1/2, Iter: 5/93 -- train_loss: 1.7146 \n", + "2023-09-11 16:45:50,544 - INFO - Epoch: 1/2, Iter: 6/93 -- train_loss: 1.6879 \n", + "2023-09-11 16:45:50,919 - INFO - Epoch: 1/2, Iter: 7/93 -- train_loss: 1.6394 \n", + "2023-09-11 16:45:51,288 - INFO - Epoch: 1/2, Iter: 8/93 -- train_loss: 1.6172 \n", + "2023-09-11 16:45:51,665 - INFO - Epoch: 1/2, Iter: 9/93 -- train_loss: 1.5972 \n", + "2023-09-11 16:45:52,034 - INFO - Epoch: 1/2, Iter: 10/93 -- train_loss: 1.5663 \n", + "2023-09-11 16:45:52,413 - INFO - Epoch: 1/2, Iter: 11/93 -- train_loss: 1.5483 \n", + "2023-09-11 16:45:52,787 - INFO - Epoch: 1/2, Iter: 12/93 -- train_loss: 1.4914 \n", + "2023-09-11 16:45:53,162 - INFO - Epoch: 1/2, Iter: 13/93 -- train_loss: 1.4504 \n", + "2023-09-11 16:45:53,530 - INFO - Epoch: 1/2, Iter: 14/93 -- train_loss: 1.4477 \n", + "2023-09-11 16:45:53,904 - INFO - Epoch: 1/2, Iter: 15/93 -- train_loss: 1.4099 \n", + "2023-09-11 16:45:54,275 - INFO - Epoch: 1/2, Iter: 16/93 -- train_loss: 1.3985 \n", + "2023-09-11 16:45:54,648 - INFO - Epoch: 1/2, Iter: 17/93 -- train_loss: 1.3849 \n", + "2023-09-11 16:45:55,015 - INFO - Epoch: 1/2, Iter: 18/93 -- train_loss: 1.3735 \n", + "2023-09-11 16:45:55,394 - INFO - Epoch: 1/2, Iter: 19/93 -- train_loss: 1.3040 \n", + "2023-09-11 16:45:55,762 - INFO - Epoch: 1/2, Iter: 20/93 -- train_loss: 1.3018 \n", + "2023-09-11 16:45:56,137 - INFO - Epoch: 1/2, Iter: 21/93 -- train_loss: 1.2656 \n", + "2023-09-11 16:45:56,509 - INFO - Epoch: 1/2, Iter: 22/93 -- train_loss: 1.2451 \n", + "2023-09-11 16:45:56,883 - INFO - Epoch: 1/2, Iter: 23/93 -- train_loss: 1.2429 \n", + "2023-09-11 16:45:57,252 - INFO - Epoch: 1/2, Iter: 24/93 -- train_loss: 1.2009 \n", + "2023-09-11 16:45:57,631 - INFO - Epoch: 1/2, Iter: 25/93 -- train_loss: 1.1890 \n", + "2023-09-11 16:45:58,000 - INFO - Epoch: 1/2, Iter: 26/93 -- train_loss: 1.1832 \n", + "2023-09-11 16:45:58,373 - INFO - Epoch: 1/2, Iter: 27/93 -- train_loss: 1.1359 \n", + "2023-09-11 16:45:58,745 - INFO - Epoch: 1/2, Iter: 28/93 -- train_loss: 1.1588 \n", + "2023-09-11 16:45:59,122 - INFO - Epoch: 1/2, Iter: 29/93 -- train_loss: 1.1134 \n", + "2023-09-11 16:45:59,489 - INFO - Epoch: 1/2, Iter: 30/93 -- train_loss: 1.0843 \n", + "2023-09-11 16:45:59,863 - INFO - Epoch: 1/2, Iter: 31/93 -- train_loss: 1.0956 \n", + "2023-09-11 16:46:00,235 - INFO - Epoch: 1/2, Iter: 32/93 -- train_loss: 1.0651 \n", + "2023-09-11 16:46:00,611 - INFO - Epoch: 1/2, Iter: 33/93 -- train_loss: 1.0697 \n", + "2023-09-11 16:46:00,978 - INFO - Epoch: 1/2, Iter: 34/93 -- train_loss: 1.0189 \n", + "2023-09-11 16:46:01,631 - INFO - Epoch: 1/2, Iter: 35/93 -- train_loss: 0.9943 \n", + "2023-09-11 16:46:01,998 - INFO - Epoch: 1/2, Iter: 36/93 -- train_loss: 1.0024 \n", + "2023-09-11 16:46:02,372 - INFO - Epoch: 1/2, Iter: 37/93 -- train_loss: 0.9881 \n", + "2023-09-11 16:46:02,739 - INFO - Epoch: 1/2, Iter: 38/93 -- train_loss: 1.0021 \n", + "2023-09-11 16:46:03,114 - INFO - Epoch: 1/2, Iter: 39/93 -- train_loss: 0.9297 \n", 
+ "2023-09-11 16:46:03,482 - INFO - Epoch: 1/2, Iter: 40/93 -- train_loss: 0.9498 \n", + "2023-09-11 16:46:03,868 - INFO - Epoch: 1/2, Iter: 41/93 -- train_loss: 0.9560 \n", + "2023-09-11 16:46:04,239 - INFO - Epoch: 1/2, Iter: 42/93 -- train_loss: 0.9241 \n", + "2023-09-11 16:46:04,621 - INFO - Epoch: 1/2, Iter: 43/93 -- train_loss: 0.8911 \n", + "2023-09-11 16:46:04,990 - INFO - Epoch: 1/2, Iter: 44/93 -- train_loss: 0.8677 \n", + "2023-09-11 16:46:05,370 - INFO - Epoch: 1/2, Iter: 45/93 -- train_loss: 0.8857 \n", + "2023-09-11 16:46:05,738 - INFO - Epoch: 1/2, Iter: 46/93 -- train_loss: 0.8587 \n", + "2023-09-11 16:46:06,114 - INFO - Epoch: 1/2, Iter: 47/93 -- train_loss: 0.8366 \n", + "2023-09-11 16:46:06,481 - INFO - Epoch: 1/2, Iter: 48/93 -- train_loss: 0.8365 \n", + "2023-09-11 16:46:06,858 - INFO - Epoch: 1/2, Iter: 49/93 -- train_loss: 0.8071 \n", + "2023-09-11 16:46:07,228 - INFO - Epoch: 1/2, Iter: 50/93 -- train_loss: 0.7914 \n", + "2023-09-11 16:46:07,603 - INFO - Epoch: 1/2, Iter: 51/93 -- train_loss: 0.7689 \n", + "2023-09-11 16:46:07,971 - INFO - Epoch: 1/2, Iter: 52/93 -- train_loss: 0.7649 \n", + "2023-09-11 16:46:08,351 - INFO - Epoch: 1/2, Iter: 53/93 -- train_loss: 0.7562 \n", + "2023-09-11 16:46:08,721 - INFO - Epoch: 1/2, Iter: 54/93 -- train_loss: 0.7854 \n", + "2023-09-11 16:46:09,098 - INFO - Epoch: 1/2, Iter: 55/93 -- train_loss: 0.7297 \n", + "2023-09-11 16:46:09,466 - INFO - Epoch: 1/2, Iter: 56/93 -- train_loss: 0.7237 \n", + "2023-09-11 16:46:09,841 - INFO - Epoch: 1/2, Iter: 57/93 -- train_loss: 0.7184 \n", + "2023-09-11 16:46:10,209 - INFO - Epoch: 1/2, Iter: 58/93 -- train_loss: 0.7446 \n", + "2023-09-11 16:46:10,585 - INFO - Epoch: 1/2, Iter: 59/93 -- train_loss: 0.7179 \n", + "2023-09-11 16:46:10,954 - INFO - Epoch: 1/2, Iter: 60/93 -- train_loss: 0.6467 \n", + "2023-09-11 16:46:11,332 - INFO - Epoch: 1/2, Iter: 61/93 -- train_loss: 0.6886 \n", + "2023-09-11 16:46:11,701 - INFO - Epoch: 1/2, Iter: 62/93 -- train_loss: 0.6816 \n", + "2023-09-11 16:46:12,082 - INFO - Epoch: 1/2, Iter: 63/93 -- train_loss: 0.6509 \n", + "2023-09-11 16:46:12,451 - INFO - Epoch: 1/2, Iter: 64/93 -- train_loss: 0.6453 \n", + "2023-09-11 16:46:12,833 - INFO - Epoch: 1/2, Iter: 65/93 -- train_loss: 0.6316 \n", + "2023-09-11 16:46:13,203 - INFO - Epoch: 1/2, Iter: 66/93 -- train_loss: 0.6317 \n", + "2023-09-11 16:46:13,581 - INFO - Epoch: 1/2, Iter: 67/93 -- train_loss: 0.5938 \n", + "2023-09-11 16:46:13,957 - INFO - Epoch: 1/2, Iter: 68/93 -- train_loss: 0.6120 \n", + "2023-09-11 16:46:14,335 - INFO - Epoch: 1/2, Iter: 69/93 -- train_loss: 0.5958 \n", + "2023-09-11 16:46:14,704 - INFO - Epoch: 1/2, Iter: 70/93 -- train_loss: 0.5930 \n", + "2023-09-11 16:46:15,079 - INFO - Epoch: 1/2, Iter: 71/93 -- train_loss: 0.5662 \n", + "2023-09-11 16:46:15,448 - INFO - Epoch: 1/2, Iter: 72/93 -- train_loss: 0.5763 \n", + "2023-09-11 16:46:16,041 - INFO - Epoch: 1/2, Iter: 73/93 -- train_loss: 0.5695 \n", + "2023-09-11 16:46:16,410 - INFO - Epoch: 1/2, Iter: 74/93 -- train_loss: 0.5743 \n", + "2023-09-11 16:46:16,789 - INFO - Epoch: 1/2, Iter: 75/93 -- train_loss: 0.5466 \n", + "2023-09-11 16:46:17,157 - INFO - Epoch: 1/2, Iter: 76/93 -- train_loss: 0.5320 \n", + "2023-09-11 16:46:17,540 - INFO - Epoch: 1/2, Iter: 77/93 -- train_loss: 0.5176 \n", + "2023-09-11 16:46:17,911 - INFO - Epoch: 1/2, Iter: 78/93 -- train_loss: 0.5000 \n", + "2023-09-11 16:46:18,287 - INFO - Epoch: 1/2, Iter: 79/93 -- train_loss: 0.5113 \n", + "2023-09-11 16:46:18,658 - INFO - Epoch: 1/2, Iter: 80/93 -- train_loss: 
0.4966 \n", + "2023-09-11 16:46:19,035 - INFO - Epoch: 1/2, Iter: 81/93 -- train_loss: 0.5185 \n", + "2023-09-11 16:46:19,404 - INFO - Epoch: 1/2, Iter: 82/93 -- train_loss: 0.4719 \n", + "2023-09-11 16:46:19,783 - INFO - Epoch: 1/2, Iter: 83/93 -- train_loss: 0.4695 \n", + "2023-09-11 16:46:20,154 - INFO - Epoch: 1/2, Iter: 84/93 -- train_loss: 0.4637 \n", + "2023-09-11 16:46:20,535 - INFO - Epoch: 1/2, Iter: 85/93 -- train_loss: 0.4910 \n", + "2023-09-11 16:46:20,906 - INFO - Epoch: 1/2, Iter: 86/93 -- train_loss: 0.4873 \n", + "2023-09-11 16:46:21,284 - INFO - Epoch: 1/2, Iter: 87/93 -- train_loss: 0.4566 \n", + "2023-09-11 16:46:21,654 - INFO - Epoch: 1/2, Iter: 88/93 -- train_loss: 0.4357 \n", + "2023-09-11 16:46:22,047 - INFO - Epoch: 1/2, Iter: 89/93 -- train_loss: 0.4304 \n", + "2023-09-11 16:46:22,419 - INFO - Epoch: 1/2, Iter: 90/93 -- train_loss: 0.4286 \n", + "2023-09-11 16:46:22,796 - INFO - Epoch: 1/2, Iter: 91/93 -- train_loss: 0.4116 \n", + "2023-09-11 16:46:23,165 - INFO - Epoch: 1/2, Iter: 92/93 -- train_loss: 0.4424 \n", + "2023-09-11 16:46:23,422 - INFO - Epoch: 1/2, Iter: 93/93 -- train_loss: 0.5651 \n", + "2023-09-11 16:46:23,423 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", + "2023-09-11 16:46:32,635 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9867684478371501\n", + "2023-09-11 16:46:32,635 - INFO - Epoch[1] Metrics -- accuracy: 0.9868 \n", + "2023-09-11 16:46:32,635 - INFO - Key metric: accuracy best value: 0.9867684478371501 at epoch: 1\n", + "2023-09-11 16:46:32,635 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:09.072\n", + "2023-09-11 16:46:32,635 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:09.213\n", + "2023-09-11 16:46:32,765 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 1\n", + "2023-09-11 16:46:32,765 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[1] Complete. 
Time taken: 00:00:45.405\n", + "2023-09-11 16:46:33,378 - INFO - Epoch: 2/2, Iter: 1/93 -- train_loss: 0.4218 \n", + "2023-09-11 16:46:33,760 - INFO - Epoch: 2/2, Iter: 2/93 -- train_loss: 0.4012 \n", + "2023-09-11 16:46:34,130 - INFO - Epoch: 2/2, Iter: 3/93 -- train_loss: 0.3729 \n", + "2023-09-11 16:46:34,507 - INFO - Epoch: 2/2, Iter: 4/93 -- train_loss: 0.3895 \n", + "2023-09-11 16:46:34,889 - INFO - Epoch: 2/2, Iter: 5/93 -- train_loss: 0.3915 \n", + "2023-09-11 16:46:35,260 - INFO - Epoch: 2/2, Iter: 6/93 -- train_loss: 0.4068 \n", + "2023-09-11 16:46:35,630 - INFO - Epoch: 2/2, Iter: 7/93 -- train_loss: 0.3784 \n", + "2023-09-11 16:46:36,002 - INFO - Epoch: 2/2, Iter: 8/93 -- train_loss: 0.3559 \n", + "2023-09-11 16:46:36,379 - INFO - Epoch: 2/2, Iter: 9/93 -- train_loss: 0.3693 \n", + "2023-09-11 16:46:36,749 - INFO - Epoch: 2/2, Iter: 10/93 -- train_loss: 0.3890 \n", + "2023-09-11 16:46:37,118 - INFO - Epoch: 2/2, Iter: 11/93 -- train_loss: 0.3663 \n", + "2023-09-11 16:46:37,491 - INFO - Epoch: 2/2, Iter: 12/93 -- train_loss: 0.3512 \n", + "2023-09-11 16:46:37,863 - INFO - Epoch: 2/2, Iter: 13/93 -- train_loss: 0.3410 \n", + "2023-09-11 16:46:38,236 - INFO - Epoch: 2/2, Iter: 14/93 -- train_loss: 0.3644 \n", + "2023-09-11 16:46:38,608 - INFO - Epoch: 2/2, Iter: 15/93 -- train_loss: 0.3316 \n", + "2023-09-11 16:46:38,982 - INFO - Epoch: 2/2, Iter: 16/93 -- train_loss: 0.3547 \n", + "2023-09-11 16:46:39,353 - INFO - Epoch: 2/2, Iter: 17/93 -- train_loss: 0.3406 \n", + "2023-09-11 16:46:39,729 - INFO - Epoch: 2/2, Iter: 18/93 -- train_loss: 0.3200 \n", + "2023-09-11 16:46:40,101 - INFO - Epoch: 2/2, Iter: 19/93 -- train_loss: 0.3069 \n", + "2023-09-11 16:46:40,475 - INFO - Epoch: 2/2, Iter: 20/93 -- train_loss: 0.3044 \n", + "2023-09-11 16:46:40,850 - INFO - Epoch: 2/2, Iter: 21/93 -- train_loss: 0.2921 \n", + "2023-09-11 16:46:41,502 - INFO - Epoch: 2/2, Iter: 22/93 -- train_loss: 0.2953 \n", + "2023-09-11 16:46:41,875 - INFO - Epoch: 2/2, Iter: 23/93 -- train_loss: 0.3098 \n", + "2023-09-11 16:46:42,248 - INFO - Epoch: 2/2, Iter: 24/93 -- train_loss: 0.3126 \n", + "2023-09-11 16:46:42,622 - INFO - Epoch: 2/2, Iter: 25/93 -- train_loss: 0.2839 \n", + "2023-09-11 16:46:42,995 - INFO - Epoch: 2/2, Iter: 26/93 -- train_loss: 0.2934 \n", + "2023-09-11 16:46:43,373 - INFO - Epoch: 2/2, Iter: 27/93 -- train_loss: 0.2862 \n", + "2023-09-11 16:46:43,753 - INFO - Epoch: 2/2, Iter: 28/93 -- train_loss: 0.2911 \n", + "2023-09-11 16:46:44,126 - INFO - Epoch: 2/2, Iter: 29/93 -- train_loss: 0.2814 \n", + "2023-09-11 16:46:44,500 - INFO - Epoch: 2/2, Iter: 30/93 -- train_loss: 0.2819 \n", + "2023-09-11 16:46:44,873 - INFO - Epoch: 2/2, Iter: 31/93 -- train_loss: 0.2679 \n", + "2023-09-11 16:46:45,246 - INFO - Epoch: 2/2, Iter: 32/93 -- train_loss: 0.2932 \n", + "2023-09-11 16:46:45,617 - INFO - Epoch: 2/2, Iter: 33/93 -- train_loss: 0.2752 \n", + "2023-09-11 16:46:45,994 - INFO - Epoch: 2/2, Iter: 34/93 -- train_loss: 0.2591 \n", + "2023-09-11 16:46:46,371 - INFO - Epoch: 2/2, Iter: 35/93 -- train_loss: 0.2724 \n", + "2023-09-11 16:46:46,748 - INFO - Epoch: 2/2, Iter: 36/93 -- train_loss: 0.2638 \n", + "2023-09-11 16:46:47,120 - INFO - Epoch: 2/2, Iter: 37/93 -- train_loss: 0.2707 \n", + "2023-09-11 16:46:47,495 - INFO - Epoch: 2/2, Iter: 38/93 -- train_loss: 0.2540 \n", + "2023-09-11 16:46:47,867 - INFO - Epoch: 2/2, Iter: 39/93 -- train_loss: 0.2716 \n", + "2023-09-11 16:46:48,241 - INFO - Epoch: 2/2, Iter: 40/93 -- train_loss: 0.2449 \n", + "2023-09-11 16:46:48,613 - INFO - Epoch: 2/2, Iter: 
41/93 -- train_loss: 0.2530 \n", + "2023-09-11 16:46:48,987 - INFO - Epoch: 2/2, Iter: 42/93 -- train_loss: 0.2429 \n", + "2023-09-11 16:46:49,364 - INFO - Epoch: 2/2, Iter: 43/93 -- train_loss: 0.2279 \n", + "2023-09-11 16:46:49,740 - INFO - Epoch: 2/2, Iter: 44/93 -- train_loss: 0.2243 \n", + "2023-09-11 16:46:50,113 - INFO - Epoch: 2/2, Iter: 45/93 -- train_loss: 0.2431 \n", + "2023-09-11 16:46:50,492 - INFO - Epoch: 2/2, Iter: 46/93 -- train_loss: 0.2439 \n", + "2023-09-11 16:46:50,864 - INFO - Epoch: 2/2, Iter: 47/93 -- train_loss: 0.2279 \n", + "2023-09-11 16:46:51,238 - INFO - Epoch: 2/2, Iter: 48/93 -- train_loss: 0.2097 \n", + "2023-09-11 16:46:51,616 - INFO - Epoch: 2/2, Iter: 49/93 -- train_loss: 0.2345 \n", + "2023-09-11 16:46:51,992 - INFO - Epoch: 2/2, Iter: 50/93 -- train_loss: 0.2191 \n", + "2023-09-11 16:46:52,447 - INFO - Epoch: 2/2, Iter: 51/93 -- train_loss: 0.2042 \n", + "2023-09-11 16:46:52,821 - INFO - Epoch: 2/2, Iter: 52/93 -- train_loss: 0.2438 \n", + "2023-09-11 16:46:53,193 - INFO - Epoch: 2/2, Iter: 53/93 -- train_loss: 0.2154 \n", + "2023-09-11 16:46:53,566 - INFO - Epoch: 2/2, Iter: 54/93 -- train_loss: 0.2276 \n", + "2023-09-11 16:46:53,939 - INFO - Epoch: 2/2, Iter: 55/93 -- train_loss: 0.2033 \n", + "2023-09-11 16:46:54,313 - INFO - Epoch: 2/2, Iter: 56/93 -- train_loss: 0.2054 \n", + "2023-09-11 16:46:54,692 - INFO - Epoch: 2/2, Iter: 57/93 -- train_loss: 0.2188 \n", + "2023-09-11 16:46:55,065 - INFO - Epoch: 2/2, Iter: 58/93 -- train_loss: 0.1989 \n", + "2023-09-11 16:46:55,438 - INFO - Epoch: 2/2, Iter: 59/93 -- train_loss: 0.1964 \n", + "2023-09-11 16:46:55,815 - INFO - Epoch: 2/2, Iter: 60/93 -- train_loss: 0.2212 \n", + "2023-09-11 16:46:56,200 - INFO - Epoch: 2/2, Iter: 61/93 -- train_loss: 0.2041 \n", + "2023-09-11 16:46:56,577 - INFO - Epoch: 2/2, Iter: 62/93 -- train_loss: 0.1918 \n", + "2023-09-11 16:46:56,958 - INFO - Epoch: 2/2, Iter: 63/93 -- train_loss: 0.2110 \n", + "2023-09-11 16:46:57,333 - INFO - Epoch: 2/2, Iter: 64/93 -- train_loss: 0.1816 \n", + "2023-09-11 16:46:57,706 - INFO - Epoch: 2/2, Iter: 65/93 -- train_loss: 0.1850 \n", + "2023-09-11 16:46:58,079 - INFO - Epoch: 2/2, Iter: 66/93 -- train_loss: 0.2006 \n", + "2023-09-11 16:46:58,459 - INFO - Epoch: 2/2, Iter: 67/93 -- train_loss: 0.1794 \n", + "2023-09-11 16:46:58,835 - INFO - Epoch: 2/2, Iter: 68/93 -- train_loss: 0.1977 \n", + "2023-09-11 16:46:59,208 - INFO - Epoch: 2/2, Iter: 69/93 -- train_loss: 0.2084 \n", + "2023-09-11 16:46:59,582 - INFO - Epoch: 2/2, Iter: 70/93 -- train_loss: 0.1948 \n", + "2023-09-11 16:46:59,955 - INFO - Epoch: 2/2, Iter: 71/93 -- train_loss: 0.1848 \n", + "2023-09-11 16:47:00,328 - INFO - Epoch: 2/2, Iter: 72/93 -- train_loss: 0.1792 \n", + "2023-09-11 16:47:00,701 - INFO - Epoch: 2/2, Iter: 73/93 -- train_loss: 0.1613 \n", + "2023-09-11 16:47:01,076 - INFO - Epoch: 2/2, Iter: 74/93 -- train_loss: 0.1810 \n", + "2023-09-11 16:47:01,451 - INFO - Epoch: 2/2, Iter: 75/93 -- train_loss: 0.1802 \n", + "2023-09-11 16:47:01,830 - INFO - Epoch: 2/2, Iter: 76/93 -- train_loss: 0.1606 \n", + "2023-09-11 16:47:02,205 - INFO - Epoch: 2/2, Iter: 77/93 -- train_loss: 0.1644 \n", + "2023-09-11 16:47:02,586 - INFO - Epoch: 2/2, Iter: 78/93 -- train_loss: 0.1597 \n", + "2023-09-11 16:47:02,961 - INFO - Epoch: 2/2, Iter: 79/93 -- train_loss: 0.1742 \n", + "2023-09-11 16:47:03,336 - INFO - Epoch: 2/2, Iter: 80/93 -- train_loss: 0.1581 \n", + "2023-09-11 16:47:03,718 - INFO - Epoch: 2/2, Iter: 81/93 -- train_loss: 0.1650 \n", + "2023-09-11 16:47:04,098 - INFO - 
Epoch: 2/2, Iter: 82/93 -- train_loss: 0.1644 \n", + "2023-09-11 16:47:04,473 - INFO - Epoch: 2/2, Iter: 83/93 -- train_loss: 0.1667 \n", + "2023-09-11 16:47:04,849 - INFO - Epoch: 2/2, Iter: 84/93 -- train_loss: 0.1704 \n", + "2023-09-11 16:47:05,228 - INFO - Epoch: 2/2, Iter: 85/93 -- train_loss: 0.1650 \n", + "2023-09-11 16:47:05,602 - INFO - Epoch: 2/2, Iter: 86/93 -- train_loss: 0.1483 \n", + "2023-09-11 16:47:05,975 - INFO - Epoch: 2/2, Iter: 87/93 -- train_loss: 0.1452 \n", + "2023-09-11 16:47:06,353 - INFO - Epoch: 2/2, Iter: 88/93 -- train_loss: 0.1462 \n", + "2023-09-11 16:47:06,727 - INFO - Epoch: 2/2, Iter: 89/93 -- train_loss: 0.1543 \n", + "2023-09-11 16:47:07,101 - INFO - Epoch: 2/2, Iter: 90/93 -- train_loss: 0.1516 \n", + "2023-09-11 16:47:07,486 - INFO - Epoch: 2/2, Iter: 91/93 -- train_loss: 0.1564 \n", + "2023-09-11 16:47:07,879 - INFO - Epoch: 2/2, Iter: 92/93 -- train_loss: 0.1535 \n", + "2023-09-11 16:47:07,995 - INFO - Epoch: 2/2, Iter: 93/93 -- train_loss: 0.2525 \n", + "2023-09-11 16:47:07,995 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 1 until 2 epochs\n", + "2023-09-11 16:47:17,163 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9939496748657054\n", + "2023-09-11 16:47:17,163 - INFO - Epoch[2] Metrics -- accuracy: 0.9939 \n", + "2023-09-11 16:47:17,163 - INFO - Key metric: accuracy best value: 0.9939496748657054 at epoch: 2\n", + "2023-09-11 16:47:17,163 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[2] Complete. Time taken: 00:00:09.038\n", + "2023-09-11 16:47:17,163 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:09.168\n", + "2023-09-11 16:47:17,301 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 2\n", + "2023-09-11 16:47:17,302 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[2] Complete. Time taken: 00:00:44.536\n", + "2023-09-11 16:47:17,387 - ignite.engine.engine.SupervisedTrainer - INFO - Train completed, saved final checkpoint: output/output_230911_164547/model_final_iteration=186.pt\n", + "2023-09-11 16:47:17,387 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run complete. 
Time taken: 00:01:30.170\n" ] } ], @@ -715,7 +716,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 10, "id": "00c84e2c-1709-4136-8612-87142026ac2e", "metadata": {}, "outputs": [ @@ -723,7 +724,8 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001b[01;34moutput/output_230830_123911/\u001b[00m\n", + "/usr/bin/tree\n", + "\u001b[01;34moutput/output_230911_164547\u001b[00m\n", "├── log.txt\n", "├── model_epoch=1.pt\n", "├── model_epoch=2.pt\n", @@ -734,7 +736,7 @@ } ], "source": [ - "!which tree && tree output/output_230830_123911/ || true" + "!which tree && tree output/* || true" ] }, { @@ -751,7 +753,7 @@ }, { "cell_type": "code", - "execution_count": 38, + "execution_count": 11, "id": "3a957503-39e4-4f73-a989-ce6e4e2d3e9e", "metadata": {}, "outputs": [ @@ -759,7 +761,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "Loading dataset: 100%|██████████| 5895/5895 [00:03<00:00, 1771.10it/s]\n" + "Loading dataset: 100%|██████████| 5895/5895 [00:03<00:00, 1671.21it/s]\n" ] }, { @@ -814,7 +816,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 12, "id": "7f800520-f29f-4b80-9af4-5e069f97824b", "metadata": { "tags": [] @@ -834,7 +836,7 @@ }, { "cell_type": "code", - "execution_count": 90, + "execution_count": 13, "id": "3c5556db-2e63-484c-9358-977b4c35d60f", "metadata": {}, "outputs": [ @@ -842,7 +844,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting MedNISTClassifier_v2/configs/inference.yaml\n" + "Writing MedNISTClassifier_v2/configs/inference.yaml\n" ] } ], @@ -917,7 +919,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 22, "id": "acdcc111-f259-4701-8b1d-31fcf74398bc", "metadata": {}, "outputs": [ @@ -925,23 +927,23 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-09-07 16:20:16,087 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-09-07 16:20:16,087 - INFO - > run_id: 'inference'\n", - "2023-09-07 16:20:16,087 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", - "2023-09-07 16:20:16,087 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", + "2023-09-11 16:54:49,564 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 16:54:49,564 - INFO - > run_id: 'inference'\n", + "2023-09-11 16:54:49,564 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", + "2023-09-11 16:54:49,564 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", " './MedNISTClassifier_v2/configs/inference.yaml']\n", - "2023-09-07 16:20:16,087 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", - "2023-09-07 16:20:16,087 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", - "2023-09-07 16:20:16,087 - INFO - > ckpt_path: 'output/output_230830_123911/model_final_iteration=186.pt'\n", - "2023-09-07 16:20:16,087 - INFO - > input_dir: 'test_images'\n", - "2023-09-07 16:20:16,087 - INFO - ---\n", + "2023-09-11 16:54:49,564 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", + "2023-09-11 16:54:49,565 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", + "2023-09-11 16:54:49,565 - INFO - > ckpt_path: 'output/output_230911_164547/model_final_iteration=186.pt'\n", + "2023-09-11 16:54:49,565 - INFO - > input_dir: 'test_images'\n", + "2023-09-11 16:54:49,565 - INFO - ---\n", "\n", "\n", - "2023-09-07 16:20:16,088 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", - "2023-09-07 
16:20:16,487 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", - "2023-09-07 16:20:16,598 - ignite.engine.engine.SupervisedEvaluator - INFO - Restored all variables from output/output_230830_123911/model_final_iteration=186.pt\n", - "2023-09-07 16:20:17,836 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:01.237\n", - "2023-09-07 16:20:17,837 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:01.350\n" + "2023-09-11 16:54:49,565 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", + "2023-09-11 16:54:49,924 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", + "2023-09-11 16:54:50,035 - ignite.engine.engine.SupervisedEvaluator - INFO - Restored all variables from output/output_230911_164547/model_final_iteration=186.pt\n", + "2023-09-11 16:54:50,936 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:00.901\n", + "2023-09-11 16:54:50,936 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:01.012\n" ] } ], @@ -949,13 +951,15 @@ "%%bash\n", "\n", "BUNDLE=\"./MedNISTClassifier_v2\"\n", + "# need to capture name since it'll be different for you\n", + "ckpt=$(find output -name 'model_final_iteration=186.pt'|sort|tail -1)\n", "\n", "python -m monai.bundle run inference \\\n", " --bundle_root \"$BUNDLE\" \\\n", " --logging_file \"$BUNDLE/configs/logging.conf\" \\\n", " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/inference.yaml']\" \\\n", - " --ckpt_path 'output/output_230830_123911/model_final_iteration=186.pt' \\\n", + " --ckpt_path \"$ckpt\" \\\n", " --input_dir test_images " ] }, @@ -971,7 +975,7 @@ }, { "cell_type": "code", - "execution_count": 88, + "execution_count": 23, "id": "4a695039-7a53-4f9a-9754-769a9f8ebac8", "metadata": {}, "outputs": [ @@ -1016,7 +1020,7 @@ }, { "cell_type": "code", - "execution_count": 121, + "execution_count": 24, "id": "1065f928-3f66-47af-aed4-be2f0443cf2f", "metadata": {}, "outputs": [ @@ -1068,7 +1072,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 25, "id": "c6672caa-fd51-4dde-a31d-5c4de8c3cc1d", "metadata": { "tags": [] @@ -1078,19 +1082,20 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-09-07 16:20:25,463 - INFO - --- input summary of monai.bundle.scripts.ckpt_export ---\n", - "2023-09-07 16:20:25,463 - INFO - > net_id: 'network_def'\n", - "2023-09-07 16:20:25,463 - INFO - > filepath: './MedNISTClassifier_v2/models/model.ts'\n", - "2023-09-07 16:20:25,463 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", - "2023-09-07 16:20:25,463 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", + "2023-09-11 16:57:08,807 - INFO - --- input summary of monai.bundle.scripts.ckpt_export ---\n", + "2023-09-11 16:57:08,807 - INFO - > net_id: 'network_def'\n", + "2023-09-11 16:57:08,807 - INFO - > filepath: './MedNISTClassifier_v2/models/model.ts'\n", + "2023-09-11 16:57:08,807 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", + "2023-09-11 16:57:08,807 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", " './MedNISTClassifier_v2/configs/inference.yaml']\n", - "2023-09-07 16:20:25,463 - INFO - > ckpt_file: 
'./MedNISTClassifier_v2/models/model.pt'\n", - "2023-09-07 16:20:25,463 - INFO - > key_in_ckpt: 'model'\n", - "2023-09-07 16:20:25,463 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", - "2023-09-07 16:20:25,463 - INFO - ---\n", + "2023-09-11 16:57:08,807 - INFO - > ckpt_file: './MedNISTClassifier_v2/models/model.pt'\n", + "2023-09-11 16:57:08,807 - INFO - > key_in_ckpt: 'model'\n", + "2023-09-11 16:57:08,807 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", + "2023-09-11 16:57:08,807 - INFO - ---\n", "\n", "\n", - "2023-09-07 16:20:28,048 - INFO - exported to file: ./MedNISTClassifier_v2/models/model.ts.\n", + "2023-09-11 16:57:12,519 - INFO - exported to file: ./MedNISTClassifier_v2/models/model.ts.\n", + "/usr/bin/tree\n", "\u001b[01;34m./MedNISTClassifier_v2\u001b[00m\n", "├── \u001b[01;34mconfigs\u001b[00m\n", "│   ├── common.yaml\n", @@ -1114,7 +1119,8 @@ "\n", "BUNDLE=\"./MedNISTClassifier_v2\"\n", "\n", - "cp \"output/output_230830_123911/model_final_iteration=186.pt\" \"$BUNDLE/models/model.pt\"\n", + "ckpt=$(find output -name 'model_final_iteration=186.pt'|sort|tail -1)\n", + "cp \"$ckpt\" \"$BUNDLE/models/model.pt\"\n", "\n", "python -m monai.bundle ckpt_export \\\n", " --bundle_root \"$BUNDLE\" \\\n", @@ -1163,9 +1169,9 @@ ], "metadata": { "kernelspec": { - "display_name": "Python [conda env:monai]", + "display_name": "Python [conda env:monai1]", "language": "python", - "name": "conda-env-monai-py" + "name": "conda-env-monai1-py" }, "language_info": { "codemirror_mode": { @@ -1177,7 +1183,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.10" + "version": "3.9.18" } }, "nbformat": 4, diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb index 0ef55bc56d..62391bc92a 100644 --- a/bundle/04_integrating_code.ipynb +++ b/bundle/04_integrating_code.ipynb @@ -27,7 +27,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 1, "id": "eb9dc6ec-13da-4a37-8afa-28e2766b9343", "metadata": {}, "outputs": [ @@ -35,16 +35,18 @@ "name": "stdout", "output_type": "stream", "text": [ + "/usr/bin/tree\n", "\u001b[01;34mIntegrationBundle\u001b[00m\n", "├── \u001b[01;34mconfigs\u001b[00m\n", - "│   ├── metadata.json\n", + "│   └── metadata.json\n", "├── \u001b[01;34mdocs\u001b[00m\n", "│   └── README.md\n", "├── LICENSE\n", + "├── \u001b[01;34mmodels\u001b[00m\n", "└── \u001b[01;34mscripts\u001b[00m\n", " └── __init__.py\n", "\n", - "5 directories, 20 files\n" + "4 directories, 4 files\n" ] } ], @@ -61,7 +63,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 2, "id": "b29f053b-cf16-4ffc-bbe7-d9433fdfa872", "metadata": { "tags": [] @@ -156,7 +158,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 3, "id": "dcdbe1ae-ea13-49cb-b5a3-3c2c78f91f2b", "metadata": { "tags": [] @@ -166,7 +168,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting IntegrationBundle/scripts/net.py\n" + "Writing IntegrationBundle/scripts/net.py\n" ] } ], @@ -208,10 +210,18 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "189d71c5-6556-4891-a382-0adbc8f80d30", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing IntegrationBundle/scripts/transforms.py\n" + ] + } + ], "source": [ "%%writefile IntegrationBundle/scripts/transforms.py\n", "\n", @@ -224,7 +234,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 5, "id": 
"3d8f233e-495c-450c-a445-46d295ba7461", "metadata": { "tags": [] @@ -272,7 +282,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 6, "id": "1a836b1b-06da-4866-82a2-47d1efed5d7c", "metadata": { "tags": [] @@ -282,7 +292,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting IntegrationBundle/scripts/train.py\n" + "Writing IntegrationBundle/scripts/train.py\n" ] } ], @@ -332,7 +342,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 7, "id": "0b9764a8-674c-42ae-ad4b-f2dea027bdbf", "metadata": { "tags": [] @@ -342,7 +352,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Overwriting IntegrationBundle/configs/train.yaml\n" + "Writing IntegrationBundle/configs/train.yaml\n" ] } ], @@ -381,7 +391,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 1, "id": "65149911-3771-4a49-ade6-378305a4b946", "metadata": { "tags": [] @@ -391,12 +401,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-09-04 15:19:03,804 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-09-04 15:19:03,804 - INFO - > run_id: 'train'\n", - "2023-09-04 15:19:03,804 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", - "2023-09-04 15:19:03,804 - INFO - > config_file: './IntegrationBundle/configs/train.yaml'\n", - "2023-09-04 15:19:03,804 - INFO - > bundle_root: './IntegrationBundle'\n", - "2023-09-04 15:19:03,804 - INFO - ---\n", + "2023-09-11 17:28:16,125 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 17:28:16,125 - INFO - > run_id: 'train'\n", + "2023-09-11 17:28:16,125 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", + "2023-09-11 17:28:16,125 - INFO - > config_file: './IntegrationBundle/configs/train.yaml'\n", + "2023-09-11 17:28:16,125 - INFO - > bundle_root: './IntegrationBundle'\n", + "2023-09-11 17:28:16,125 - INFO - ---\n", "\n", "\n" ] @@ -405,28 +415,40 @@ "name": "stderr", "output_type": "stream", "text": [ - "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", - " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + "Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ - "Files already downloaded and verified\n", - "Files already downloaded and verified\n", - "[1, 2000] loss: 2.226\n", - "[1, 4000] loss: 1.913\n", - "[1, 6000] loss: 1.700\n", - "[1, 8000] loss: 1.593\n", - "[1, 10000] loss: 1.524\n", - "[1, 12000] loss: 1.476\n", - "[2, 2000] loss: 1.397\n", - "[2, 4000] loss: 1.384\n", - "[2, 6000] loss: 1.372\n", - "[2, 8000] loss: 1.333\n", - "[2, 10000] loss: 1.312\n", - "[2, 12000] loss: 1.303\n", + "Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "100%|██████████| 170498071/170498071 [00:56<00:00, 3010200.83it/s]\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Extracting ./data/cifar-10-python.tar.gz to ./data\n", + "[1, 2000] loss: 2.162\n", + "[1, 4000] loss: 1.888\n", + "[1, 6000] loss: 1.688\n", + "[1, 8000] loss: 1.580\n", + "[1, 10000] loss: 1.487\n", + "[1, 12000] loss: 1.446\n", + "[2, 2000] loss: 1.402\n", + "[2, 4000] loss: 1.392\n", + "[2, 6000] loss: 1.339\n", + "[2, 8000] loss: 
1.317\n", + "[2, 10000] loss: 1.276\n", + "[2, 12000] loss: 1.275\n", "Finished Training\n" ] } @@ -456,7 +478,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 2, "id": "fc35814e-625d-4871-ac1c-200a0cc562d9", "metadata": { "tags": [] @@ -494,10 +516,18 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "fb49aef2-9fb5-4e74-83d2-9da935e07648", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Writing IntegrationBundle/configs/test.yaml\n" + ] + } + ], "source": [ "%%writefile IntegrationBundle/configs/test.yaml\n", "\n", @@ -522,7 +552,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 4, "id": "ab171286-045c-4067-a2ea-be359168869d", "metadata": { "tags": [] @@ -532,12 +562,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-09-05 12:42:29,561 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-09-05 12:42:29,561 - INFO - > run_id: 'test'\n", - "2023-09-05 12:42:29,561 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", - "2023-09-05 12:42:29,561 - INFO - > config_file: './IntegrationBundle/configs/test.yaml'\n", - "2023-09-05 12:42:29,561 - INFO - > bundle_root: './IntegrationBundle'\n", - "2023-09-05 12:42:29,561 - INFO - ---\n", + "2023-09-11 17:31:17,644 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 17:31:17,644 - INFO - > run_id: 'test'\n", + "2023-09-11 17:31:17,644 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", + "2023-09-11 17:31:17,644 - INFO - > config_file: './IntegrationBundle/configs/test.yaml'\n", + "2023-09-11 17:31:17,644 - INFO - > bundle_root: './IntegrationBundle'\n", + "2023-09-11 17:31:17,644 - INFO - ---\n", "\n", "\n" ] @@ -546,8 +576,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", - " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + "Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n" ] }, { @@ -579,12 +608,12 @@ "source": [ "## Inference\n", "\n", - "The original script lacked a section on inference with the network, however this is rather straight forward and so an script and config file can easily implement this:" + "The original script lacked a section on inference with the network, however this is rather straight forward and so a script and config file can easily implement this:" ] }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 8, "id": "1f510a23-aa3a-4e34-81e2-b4c719d87939", "metadata": { "tags": [] @@ -616,7 +645,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 9, "id": "7f1251be-f0dd-4cbf-8903-3f3769c8049c", "metadata": { "tags": [] @@ -655,9 +684,68 @@ "- $scripts.inference.inference(@net, @transforms, @input_files)" ] }, + { + "cell_type": "markdown", + "id": "e14c3ea9-5d0f-4c62-9cfe-c3c02c7fe6e1", + "metadata": {}, + "source": [ + "Here we'll create a test set of image files to load and predict on:" + ] + }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 23, + "id": "cc2f063b-43f4-403e-b963-cf42b7e08637", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "test_cifar10/img00.png Label: 3\n", + 
"test_cifar10/img01.png Label: 8\n", + "test_cifar10/img02.png Label: 8\n", + "test_cifar10/img03.png Label: 0\n", + "test_cifar10/img04.png Label: 6\n", + "test_cifar10/img05.png Label: 6\n", + "test_cifar10/img06.png Label: 1\n", + "test_cifar10/img07.png Label: 6\n", + "test_cifar10/img08.png Label: 3\n", + "test_cifar10/img09.png Label: 1\n", + "test_cifar10/img10.png Label: 0\n", + "test_cifar10/img11.png Label: 9\n", + "test_cifar10/img12.png Label: 5\n", + "test_cifar10/img13.png Label: 7\n", + "test_cifar10/img14.png Label: 9\n", + "test_cifar10/img15.png Label: 8\n", + "test_cifar10/img16.png Label: 5\n", + "test_cifar10/img17.png Label: 7\n", + "test_cifar10/img18.png Label: 8\n", + "test_cifar10/img19.png Label: 6\n" + ] + } + ], + "source": [ + "import torchvision\n", + "\n", + "root_dir = \".\" # assuming CIFAR10 was downloaded to the current directory\n", + "num_images = 20\n", + "dataset = torchvision.datasets.CIFAR10(root=f\"{root_dir}/data\", train=False)\n", + "\n", + "!mkdir -p test_cifar10\n", + "\n", + "for i in range(num_images):\n", + " pil, label = dataset[i]\n", + " filename = f\"test_cifar10/img{i:02}.png\"\n", + " print(filename, \"Label:\", label)\n", + " pil.save(filename)" + ] + }, + { + "cell_type": "code", + "execution_count": 24, "id": "28d1230e-1d3a-4929-a266-e5f763dfde7f", "metadata": { "tags": [] @@ -667,12 +755,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "2023-09-05 12:44:47,247 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-09-05 12:44:47,247 - INFO - > run_id: 'inference'\n", - "2023-09-05 12:44:47,247 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", - "2023-09-05 12:44:47,247 - INFO - > config_file: './IntegrationBundle/configs/inference.yaml'\n", - "2023-09-05 12:44:47,247 - INFO - > bundle_root: './IntegrationBundle'\n", - "2023-09-05 12:44:47,247 - INFO - ---\n", + "2023-09-11 17:54:11,793 - INFO - --- input summary of monai.bundle.scripts.run ---\n", + "2023-09-11 17:54:11,793 - INFO - > run_id: 'inference'\n", + "2023-09-11 17:54:11,793 - INFO - > meta_file: './IntegrationBundle/configs/metadata.json'\n", + "2023-09-11 17:54:11,793 - INFO - > config_file: './IntegrationBundle/configs/inference.yaml'\n", + "2023-09-11 17:54:11,793 - INFO - > bundle_root: './IntegrationBundle'\n", + "2023-09-11 17:54:11,793 - INFO - ---\n", "\n", "\n" ] @@ -681,24 +769,33 @@ "name": "stderr", "output_type": "stream", "text": [ - "/home/localek10/workspace/monai/MONAI_mine/monai/bundle/workflows.py:213: UserWarning: Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n", - " warnings.warn(\"Default logging file in 'configs/logging.conf' does not exist, skipping logging.\")\n" + "Default logging file in 'configs/logging.conf' does not exist, skipping logging.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ - "test_cifar10/img_0_3.png 3\n", - "test_cifar10/img_1_8.png 1\n", - "test_cifar10/img_2_8.png 8\n", - "test_cifar10/img_3_0.png 0\n", - "test_cifar10/img_4_6.png 6\n", - "test_cifar10/img_5_6.png 6\n", - "test_cifar10/img_6_1.png 5\n", - "test_cifar10/img_7_6.png 6\n", - "test_cifar10/img_8_3.png 3\n", - "test_cifar10/img_9_1.png 1\n" + "test_cifar10/img00.png 3\n", + "test_cifar10/img01.png 8\n", + "test_cifar10/img02.png 8\n", + "test_cifar10/img03.png 0\n", + "test_cifar10/img04.png 6\n", + "test_cifar10/img05.png 6\n", + "test_cifar10/img06.png 1\n", + "test_cifar10/img07.png 4\n", + "test_cifar10/img08.png 3\n", + "test_cifar10/img09.png 1\n", + 
"test_cifar10/img10.png 0\n", + "test_cifar10/img11.png 9\n", + "test_cifar10/img12.png 6\n", + "test_cifar10/img13.png 7\n", + "test_cifar10/img14.png 9\n", + "test_cifar10/img15.png 1\n", + "test_cifar10/img16.png 5\n", + "test_cifar10/img17.png 3\n", + "test_cifar10/img18.png 8\n", + "test_cifar10/img19.png 4\n" ] } ], @@ -763,9 +860,9 @@ ], "metadata": { "kernelspec": { - "display_name": "Python [conda env:monai]", + "display_name": "Python [conda env:monai1]", "language": "python", - "name": "conda-env-monai-py" + "name": "conda-env-monai1-py" }, "language_info": { "codemirror_mode": { @@ -777,7 +874,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.10" + "version": "3.9.18" } }, "nbformat": 4, From f94d594ba1d453d28aacc930549ec5f195d97298 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Tue, 12 Sep 2023 13:01:22 +0100 Subject: [PATCH 24/26] Need to modify notebook to capture output Signed-off-by: Eric Kerfoot --- bundle/03_mednist_classification_v2.ipynb | 274 ++-------------------- 1 file changed, 15 insertions(+), 259 deletions(-) diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb index 3eb5f09085..ef0e178a23 100644 --- a/bundle/03_mednist_classification_v2.ipynb +++ b/bundle/03_mednist_classification_v2.ipynb @@ -433,266 +433,10 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 1, "id": "8357670d-fe69-4789-9b9a-77c0d8144b10", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "2023-09-11 16:44:56,163 - INFO - --- input summary of monai.bundle.scripts.run ---\n", - "2023-09-11 16:44:56,163 - INFO - > run_id: 'train'\n", - "2023-09-11 16:44:56,163 - INFO - > meta_file: './MedNISTClassifier_v2/configs/metadata.json'\n", - "2023-09-11 16:44:56,164 - INFO - > config_file: ['./MedNISTClassifier_v2/configs/common.yaml',\n", - " './MedNISTClassifier_v2/configs/train.yaml']\n", - "2023-09-11 16:44:56,164 - INFO - > logging_file: './MedNISTClassifier_v2/configs/logging.conf'\n", - "2023-09-11 16:44:56,164 - INFO - > bundle_root: './MedNISTClassifier_v2'\n", - "2023-09-11 16:44:56,164 - INFO - > max_epochs: 2\n", - "2023-09-11 16:44:56,164 - INFO - ---\n", - "\n", - "\n", - "2023-09-11 16:44:56,164 - INFO - Setting logging properties based on config: ./MedNISTClassifier_v2/configs/logging.conf.\n", - "2023-09-11 16:44:56,297 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", - "2023-09-11 16:44:56,297 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", - "2023-09-11 16:44:56,297 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Loading dataset: 100%|██████████| 47164/47164 [00:43<00:00, 1085.57it/s]\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "2023-09-11 16:45:40,487 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.\n", - "2023-09-11 16:45:40,487 - INFO - File exists: MedNIST.tar.gz, skipped downloading.\n", - "2023-09-11 16:45:40,487 - INFO - Non-empty folder exists in MedNIST, skipped extracting.\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Loading dataset: 100%|██████████| 5895/5895 [00:06<00:00, 894.97it/s] \n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "2023-09-11 16:45:47,217 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run resuming from iteration 0, epoch 
0 until 2 epochs\n", - "2023-09-11 16:45:48,671 - INFO - Epoch: 1/2, Iter: 1/93 -- train_loss: 1.8460 \n", - "2023-09-11 16:45:49,046 - INFO - Epoch: 1/2, Iter: 2/93 -- train_loss: 1.8131 \n", - "2023-09-11 16:45:49,436 - INFO - Epoch: 1/2, Iter: 3/93 -- train_loss: 1.7636 \n", - "2023-09-11 16:45:49,803 - INFO - Epoch: 1/2, Iter: 4/93 -- train_loss: 1.7511 \n", - "2023-09-11 16:45:50,174 - INFO - Epoch: 1/2, Iter: 5/93 -- train_loss: 1.7146 \n", - "2023-09-11 16:45:50,544 - INFO - Epoch: 1/2, Iter: 6/93 -- train_loss: 1.6879 \n", - "2023-09-11 16:45:50,919 - INFO - Epoch: 1/2, Iter: 7/93 -- train_loss: 1.6394 \n", - "2023-09-11 16:45:51,288 - INFO - Epoch: 1/2, Iter: 8/93 -- train_loss: 1.6172 \n", - "2023-09-11 16:45:51,665 - INFO - Epoch: 1/2, Iter: 9/93 -- train_loss: 1.5972 \n", - "2023-09-11 16:45:52,034 - INFO - Epoch: 1/2, Iter: 10/93 -- train_loss: 1.5663 \n", - "2023-09-11 16:45:52,413 - INFO - Epoch: 1/2, Iter: 11/93 -- train_loss: 1.5483 \n", - "2023-09-11 16:45:52,787 - INFO - Epoch: 1/2, Iter: 12/93 -- train_loss: 1.4914 \n", - "2023-09-11 16:45:53,162 - INFO - Epoch: 1/2, Iter: 13/93 -- train_loss: 1.4504 \n", - "2023-09-11 16:45:53,530 - INFO - Epoch: 1/2, Iter: 14/93 -- train_loss: 1.4477 \n", - "2023-09-11 16:45:53,904 - INFO - Epoch: 1/2, Iter: 15/93 -- train_loss: 1.4099 \n", - "2023-09-11 16:45:54,275 - INFO - Epoch: 1/2, Iter: 16/93 -- train_loss: 1.3985 \n", - "2023-09-11 16:45:54,648 - INFO - Epoch: 1/2, Iter: 17/93 -- train_loss: 1.3849 \n", - "2023-09-11 16:45:55,015 - INFO - Epoch: 1/2, Iter: 18/93 -- train_loss: 1.3735 \n", - "2023-09-11 16:45:55,394 - INFO - Epoch: 1/2, Iter: 19/93 -- train_loss: 1.3040 \n", - "2023-09-11 16:45:55,762 - INFO - Epoch: 1/2, Iter: 20/93 -- train_loss: 1.3018 \n", - "2023-09-11 16:45:56,137 - INFO - Epoch: 1/2, Iter: 21/93 -- train_loss: 1.2656 \n", - "2023-09-11 16:45:56,509 - INFO - Epoch: 1/2, Iter: 22/93 -- train_loss: 1.2451 \n", - "2023-09-11 16:45:56,883 - INFO - Epoch: 1/2, Iter: 23/93 -- train_loss: 1.2429 \n", - "2023-09-11 16:45:57,252 - INFO - Epoch: 1/2, Iter: 24/93 -- train_loss: 1.2009 \n", - "2023-09-11 16:45:57,631 - INFO - Epoch: 1/2, Iter: 25/93 -- train_loss: 1.1890 \n", - "2023-09-11 16:45:58,000 - INFO - Epoch: 1/2, Iter: 26/93 -- train_loss: 1.1832 \n", - "2023-09-11 16:45:58,373 - INFO - Epoch: 1/2, Iter: 27/93 -- train_loss: 1.1359 \n", - "2023-09-11 16:45:58,745 - INFO - Epoch: 1/2, Iter: 28/93 -- train_loss: 1.1588 \n", - "2023-09-11 16:45:59,122 - INFO - Epoch: 1/2, Iter: 29/93 -- train_loss: 1.1134 \n", - "2023-09-11 16:45:59,489 - INFO - Epoch: 1/2, Iter: 30/93 -- train_loss: 1.0843 \n", - "2023-09-11 16:45:59,863 - INFO - Epoch: 1/2, Iter: 31/93 -- train_loss: 1.0956 \n", - "2023-09-11 16:46:00,235 - INFO - Epoch: 1/2, Iter: 32/93 -- train_loss: 1.0651 \n", - "2023-09-11 16:46:00,611 - INFO - Epoch: 1/2, Iter: 33/93 -- train_loss: 1.0697 \n", - "2023-09-11 16:46:00,978 - INFO - Epoch: 1/2, Iter: 34/93 -- train_loss: 1.0189 \n", - "2023-09-11 16:46:01,631 - INFO - Epoch: 1/2, Iter: 35/93 -- train_loss: 0.9943 \n", - "2023-09-11 16:46:01,998 - INFO - Epoch: 1/2, Iter: 36/93 -- train_loss: 1.0024 \n", - "2023-09-11 16:46:02,372 - INFO - Epoch: 1/2, Iter: 37/93 -- train_loss: 0.9881 \n", - "2023-09-11 16:46:02,739 - INFO - Epoch: 1/2, Iter: 38/93 -- train_loss: 1.0021 \n", - "2023-09-11 16:46:03,114 - INFO - Epoch: 1/2, Iter: 39/93 -- train_loss: 0.9297 \n", - "2023-09-11 16:46:03,482 - INFO - Epoch: 1/2, Iter: 40/93 -- train_loss: 0.9498 \n", - "2023-09-11 16:46:03,868 - INFO - Epoch: 1/2, Iter: 41/93 -- 
train_loss: 0.9560 \n", - "2023-09-11 16:46:04,239 - INFO - Epoch: 1/2, Iter: 42/93 -- train_loss: 0.9241 \n", - "2023-09-11 16:46:04,621 - INFO - Epoch: 1/2, Iter: 43/93 -- train_loss: 0.8911 \n", - "2023-09-11 16:46:04,990 - INFO - Epoch: 1/2, Iter: 44/93 -- train_loss: 0.8677 \n", - "2023-09-11 16:46:05,370 - INFO - Epoch: 1/2, Iter: 45/93 -- train_loss: 0.8857 \n", - "2023-09-11 16:46:05,738 - INFO - Epoch: 1/2, Iter: 46/93 -- train_loss: 0.8587 \n", - "2023-09-11 16:46:06,114 - INFO - Epoch: 1/2, Iter: 47/93 -- train_loss: 0.8366 \n", - "2023-09-11 16:46:06,481 - INFO - Epoch: 1/2, Iter: 48/93 -- train_loss: 0.8365 \n", - "2023-09-11 16:46:06,858 - INFO - Epoch: 1/2, Iter: 49/93 -- train_loss: 0.8071 \n", - "2023-09-11 16:46:07,228 - INFO - Epoch: 1/2, Iter: 50/93 -- train_loss: 0.7914 \n", - "2023-09-11 16:46:07,603 - INFO - Epoch: 1/2, Iter: 51/93 -- train_loss: 0.7689 \n", - "2023-09-11 16:46:07,971 - INFO - Epoch: 1/2, Iter: 52/93 -- train_loss: 0.7649 \n", - "2023-09-11 16:46:08,351 - INFO - Epoch: 1/2, Iter: 53/93 -- train_loss: 0.7562 \n", - "2023-09-11 16:46:08,721 - INFO - Epoch: 1/2, Iter: 54/93 -- train_loss: 0.7854 \n", - "2023-09-11 16:46:09,098 - INFO - Epoch: 1/2, Iter: 55/93 -- train_loss: 0.7297 \n", - "2023-09-11 16:46:09,466 - INFO - Epoch: 1/2, Iter: 56/93 -- train_loss: 0.7237 \n", - "2023-09-11 16:46:09,841 - INFO - Epoch: 1/2, Iter: 57/93 -- train_loss: 0.7184 \n", - "2023-09-11 16:46:10,209 - INFO - Epoch: 1/2, Iter: 58/93 -- train_loss: 0.7446 \n", - "2023-09-11 16:46:10,585 - INFO - Epoch: 1/2, Iter: 59/93 -- train_loss: 0.7179 \n", - "2023-09-11 16:46:10,954 - INFO - Epoch: 1/2, Iter: 60/93 -- train_loss: 0.6467 \n", - "2023-09-11 16:46:11,332 - INFO - Epoch: 1/2, Iter: 61/93 -- train_loss: 0.6886 \n", - "2023-09-11 16:46:11,701 - INFO - Epoch: 1/2, Iter: 62/93 -- train_loss: 0.6816 \n", - "2023-09-11 16:46:12,082 - INFO - Epoch: 1/2, Iter: 63/93 -- train_loss: 0.6509 \n", - "2023-09-11 16:46:12,451 - INFO - Epoch: 1/2, Iter: 64/93 -- train_loss: 0.6453 \n", - "2023-09-11 16:46:12,833 - INFO - Epoch: 1/2, Iter: 65/93 -- train_loss: 0.6316 \n", - "2023-09-11 16:46:13,203 - INFO - Epoch: 1/2, Iter: 66/93 -- train_loss: 0.6317 \n", - "2023-09-11 16:46:13,581 - INFO - Epoch: 1/2, Iter: 67/93 -- train_loss: 0.5938 \n", - "2023-09-11 16:46:13,957 - INFO - Epoch: 1/2, Iter: 68/93 -- train_loss: 0.6120 \n", - "2023-09-11 16:46:14,335 - INFO - Epoch: 1/2, Iter: 69/93 -- train_loss: 0.5958 \n", - "2023-09-11 16:46:14,704 - INFO - Epoch: 1/2, Iter: 70/93 -- train_loss: 0.5930 \n", - "2023-09-11 16:46:15,079 - INFO - Epoch: 1/2, Iter: 71/93 -- train_loss: 0.5662 \n", - "2023-09-11 16:46:15,448 - INFO - Epoch: 1/2, Iter: 72/93 -- train_loss: 0.5763 \n", - "2023-09-11 16:46:16,041 - INFO - Epoch: 1/2, Iter: 73/93 -- train_loss: 0.5695 \n", - "2023-09-11 16:46:16,410 - INFO - Epoch: 1/2, Iter: 74/93 -- train_loss: 0.5743 \n", - "2023-09-11 16:46:16,789 - INFO - Epoch: 1/2, Iter: 75/93 -- train_loss: 0.5466 \n", - "2023-09-11 16:46:17,157 - INFO - Epoch: 1/2, Iter: 76/93 -- train_loss: 0.5320 \n", - "2023-09-11 16:46:17,540 - INFO - Epoch: 1/2, Iter: 77/93 -- train_loss: 0.5176 \n", - "2023-09-11 16:46:17,911 - INFO - Epoch: 1/2, Iter: 78/93 -- train_loss: 0.5000 \n", - "2023-09-11 16:46:18,287 - INFO - Epoch: 1/2, Iter: 79/93 -- train_loss: 0.5113 \n", - "2023-09-11 16:46:18,658 - INFO - Epoch: 1/2, Iter: 80/93 -- train_loss: 0.4966 \n", - "2023-09-11 16:46:19,035 - INFO - Epoch: 1/2, Iter: 81/93 -- train_loss: 0.5185 \n", - "2023-09-11 16:46:19,404 - INFO - Epoch: 1/2, 
Iter: 82/93 -- train_loss: 0.4719 \n", - "2023-09-11 16:46:19,783 - INFO - Epoch: 1/2, Iter: 83/93 -- train_loss: 0.4695 \n", - "2023-09-11 16:46:20,154 - INFO - Epoch: 1/2, Iter: 84/93 -- train_loss: 0.4637 \n", - "2023-09-11 16:46:20,535 - INFO - Epoch: 1/2, Iter: 85/93 -- train_loss: 0.4910 \n", - "2023-09-11 16:46:20,906 - INFO - Epoch: 1/2, Iter: 86/93 -- train_loss: 0.4873 \n", - "2023-09-11 16:46:21,284 - INFO - Epoch: 1/2, Iter: 87/93 -- train_loss: 0.4566 \n", - "2023-09-11 16:46:21,654 - INFO - Epoch: 1/2, Iter: 88/93 -- train_loss: 0.4357 \n", - "2023-09-11 16:46:22,047 - INFO - Epoch: 1/2, Iter: 89/93 -- train_loss: 0.4304 \n", - "2023-09-11 16:46:22,419 - INFO - Epoch: 1/2, Iter: 90/93 -- train_loss: 0.4286 \n", - "2023-09-11 16:46:22,796 - INFO - Epoch: 1/2, Iter: 91/93 -- train_loss: 0.4116 \n", - "2023-09-11 16:46:23,165 - INFO - Epoch: 1/2, Iter: 92/93 -- train_loss: 0.4424 \n", - "2023-09-11 16:46:23,422 - INFO - Epoch: 1/2, Iter: 93/93 -- train_loss: 0.5651 \n", - "2023-09-11 16:46:23,423 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 0 until 1 epochs\n", - "2023-09-11 16:46:32,635 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9867684478371501\n", - "2023-09-11 16:46:32,635 - INFO - Epoch[1] Metrics -- accuracy: 0.9868 \n", - "2023-09-11 16:46:32,635 - INFO - Key metric: accuracy best value: 0.9867684478371501 at epoch: 1\n", - "2023-09-11 16:46:32,635 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[1] Complete. Time taken: 00:00:09.072\n", - "2023-09-11 16:46:32,635 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:09.213\n", - "2023-09-11 16:46:32,765 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 1\n", - "2023-09-11 16:46:32,765 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[1] Complete. 
Time taken: 00:00:45.405\n", - "2023-09-11 16:46:33,378 - INFO - Epoch: 2/2, Iter: 1/93 -- train_loss: 0.4218 \n", - "2023-09-11 16:46:33,760 - INFO - Epoch: 2/2, Iter: 2/93 -- train_loss: 0.4012 \n", - "2023-09-11 16:46:34,130 - INFO - Epoch: 2/2, Iter: 3/93 -- train_loss: 0.3729 \n", - "2023-09-11 16:46:34,507 - INFO - Epoch: 2/2, Iter: 4/93 -- train_loss: 0.3895 \n", - "2023-09-11 16:46:34,889 - INFO - Epoch: 2/2, Iter: 5/93 -- train_loss: 0.3915 \n", - "2023-09-11 16:46:35,260 - INFO - Epoch: 2/2, Iter: 6/93 -- train_loss: 0.4068 \n", - "2023-09-11 16:46:35,630 - INFO - Epoch: 2/2, Iter: 7/93 -- train_loss: 0.3784 \n", - "2023-09-11 16:46:36,002 - INFO - Epoch: 2/2, Iter: 8/93 -- train_loss: 0.3559 \n", - "2023-09-11 16:46:36,379 - INFO - Epoch: 2/2, Iter: 9/93 -- train_loss: 0.3693 \n", - "2023-09-11 16:46:36,749 - INFO - Epoch: 2/2, Iter: 10/93 -- train_loss: 0.3890 \n", - "2023-09-11 16:46:37,118 - INFO - Epoch: 2/2, Iter: 11/93 -- train_loss: 0.3663 \n", - "2023-09-11 16:46:37,491 - INFO - Epoch: 2/2, Iter: 12/93 -- train_loss: 0.3512 \n", - "2023-09-11 16:46:37,863 - INFO - Epoch: 2/2, Iter: 13/93 -- train_loss: 0.3410 \n", - "2023-09-11 16:46:38,236 - INFO - Epoch: 2/2, Iter: 14/93 -- train_loss: 0.3644 \n", - "2023-09-11 16:46:38,608 - INFO - Epoch: 2/2, Iter: 15/93 -- train_loss: 0.3316 \n", - "2023-09-11 16:46:38,982 - INFO - Epoch: 2/2, Iter: 16/93 -- train_loss: 0.3547 \n", - "2023-09-11 16:46:39,353 - INFO - Epoch: 2/2, Iter: 17/93 -- train_loss: 0.3406 \n", - "2023-09-11 16:46:39,729 - INFO - Epoch: 2/2, Iter: 18/93 -- train_loss: 0.3200 \n", - "2023-09-11 16:46:40,101 - INFO - Epoch: 2/2, Iter: 19/93 -- train_loss: 0.3069 \n", - "2023-09-11 16:46:40,475 - INFO - Epoch: 2/2, Iter: 20/93 -- train_loss: 0.3044 \n", - "2023-09-11 16:46:40,850 - INFO - Epoch: 2/2, Iter: 21/93 -- train_loss: 0.2921 \n", - "2023-09-11 16:46:41,502 - INFO - Epoch: 2/2, Iter: 22/93 -- train_loss: 0.2953 \n", - "2023-09-11 16:46:41,875 - INFO - Epoch: 2/2, Iter: 23/93 -- train_loss: 0.3098 \n", - "2023-09-11 16:46:42,248 - INFO - Epoch: 2/2, Iter: 24/93 -- train_loss: 0.3126 \n", - "2023-09-11 16:46:42,622 - INFO - Epoch: 2/2, Iter: 25/93 -- train_loss: 0.2839 \n", - "2023-09-11 16:46:42,995 - INFO - Epoch: 2/2, Iter: 26/93 -- train_loss: 0.2934 \n", - "2023-09-11 16:46:43,373 - INFO - Epoch: 2/2, Iter: 27/93 -- train_loss: 0.2862 \n", - "2023-09-11 16:46:43,753 - INFO - Epoch: 2/2, Iter: 28/93 -- train_loss: 0.2911 \n", - "2023-09-11 16:46:44,126 - INFO - Epoch: 2/2, Iter: 29/93 -- train_loss: 0.2814 \n", - "2023-09-11 16:46:44,500 - INFO - Epoch: 2/2, Iter: 30/93 -- train_loss: 0.2819 \n", - "2023-09-11 16:46:44,873 - INFO - Epoch: 2/2, Iter: 31/93 -- train_loss: 0.2679 \n", - "2023-09-11 16:46:45,246 - INFO - Epoch: 2/2, Iter: 32/93 -- train_loss: 0.2932 \n", - "2023-09-11 16:46:45,617 - INFO - Epoch: 2/2, Iter: 33/93 -- train_loss: 0.2752 \n", - "2023-09-11 16:46:45,994 - INFO - Epoch: 2/2, Iter: 34/93 -- train_loss: 0.2591 \n", - "2023-09-11 16:46:46,371 - INFO - Epoch: 2/2, Iter: 35/93 -- train_loss: 0.2724 \n", - "2023-09-11 16:46:46,748 - INFO - Epoch: 2/2, Iter: 36/93 -- train_loss: 0.2638 \n", - "2023-09-11 16:46:47,120 - INFO - Epoch: 2/2, Iter: 37/93 -- train_loss: 0.2707 \n", - "2023-09-11 16:46:47,495 - INFO - Epoch: 2/2, Iter: 38/93 -- train_loss: 0.2540 \n", - "2023-09-11 16:46:47,867 - INFO - Epoch: 2/2, Iter: 39/93 -- train_loss: 0.2716 \n", - "2023-09-11 16:46:48,241 - INFO - Epoch: 2/2, Iter: 40/93 -- train_loss: 0.2449 \n", - "2023-09-11 16:46:48,613 - INFO - Epoch: 2/2, Iter: 
41/93 -- train_loss: 0.2530 \n", - "2023-09-11 16:46:48,987 - INFO - Epoch: 2/2, Iter: 42/93 -- train_loss: 0.2429 \n", - "2023-09-11 16:46:49,364 - INFO - Epoch: 2/2, Iter: 43/93 -- train_loss: 0.2279 \n", - "2023-09-11 16:46:49,740 - INFO - Epoch: 2/2, Iter: 44/93 -- train_loss: 0.2243 \n", - "2023-09-11 16:46:50,113 - INFO - Epoch: 2/2, Iter: 45/93 -- train_loss: 0.2431 \n", - "2023-09-11 16:46:50,492 - INFO - Epoch: 2/2, Iter: 46/93 -- train_loss: 0.2439 \n", - "2023-09-11 16:46:50,864 - INFO - Epoch: 2/2, Iter: 47/93 -- train_loss: 0.2279 \n", - "2023-09-11 16:46:51,238 - INFO - Epoch: 2/2, Iter: 48/93 -- train_loss: 0.2097 \n", - "2023-09-11 16:46:51,616 - INFO - Epoch: 2/2, Iter: 49/93 -- train_loss: 0.2345 \n", - "2023-09-11 16:46:51,992 - INFO - Epoch: 2/2, Iter: 50/93 -- train_loss: 0.2191 \n", - "2023-09-11 16:46:52,447 - INFO - Epoch: 2/2, Iter: 51/93 -- train_loss: 0.2042 \n", - "2023-09-11 16:46:52,821 - INFO - Epoch: 2/2, Iter: 52/93 -- train_loss: 0.2438 \n", - "2023-09-11 16:46:53,193 - INFO - Epoch: 2/2, Iter: 53/93 -- train_loss: 0.2154 \n", - "2023-09-11 16:46:53,566 - INFO - Epoch: 2/2, Iter: 54/93 -- train_loss: 0.2276 \n", - "2023-09-11 16:46:53,939 - INFO - Epoch: 2/2, Iter: 55/93 -- train_loss: 0.2033 \n", - "2023-09-11 16:46:54,313 - INFO - Epoch: 2/2, Iter: 56/93 -- train_loss: 0.2054 \n", - "2023-09-11 16:46:54,692 - INFO - Epoch: 2/2, Iter: 57/93 -- train_loss: 0.2188 \n", - "2023-09-11 16:46:55,065 - INFO - Epoch: 2/2, Iter: 58/93 -- train_loss: 0.1989 \n", - "2023-09-11 16:46:55,438 - INFO - Epoch: 2/2, Iter: 59/93 -- train_loss: 0.1964 \n", - "2023-09-11 16:46:55,815 - INFO - Epoch: 2/2, Iter: 60/93 -- train_loss: 0.2212 \n", - "2023-09-11 16:46:56,200 - INFO - Epoch: 2/2, Iter: 61/93 -- train_loss: 0.2041 \n", - "2023-09-11 16:46:56,577 - INFO - Epoch: 2/2, Iter: 62/93 -- train_loss: 0.1918 \n", - "2023-09-11 16:46:56,958 - INFO - Epoch: 2/2, Iter: 63/93 -- train_loss: 0.2110 \n", - "2023-09-11 16:46:57,333 - INFO - Epoch: 2/2, Iter: 64/93 -- train_loss: 0.1816 \n", - "2023-09-11 16:46:57,706 - INFO - Epoch: 2/2, Iter: 65/93 -- train_loss: 0.1850 \n", - "2023-09-11 16:46:58,079 - INFO - Epoch: 2/2, Iter: 66/93 -- train_loss: 0.2006 \n", - "2023-09-11 16:46:58,459 - INFO - Epoch: 2/2, Iter: 67/93 -- train_loss: 0.1794 \n", - "2023-09-11 16:46:58,835 - INFO - Epoch: 2/2, Iter: 68/93 -- train_loss: 0.1977 \n", - "2023-09-11 16:46:59,208 - INFO - Epoch: 2/2, Iter: 69/93 -- train_loss: 0.2084 \n", - "2023-09-11 16:46:59,582 - INFO - Epoch: 2/2, Iter: 70/93 -- train_loss: 0.1948 \n", - "2023-09-11 16:46:59,955 - INFO - Epoch: 2/2, Iter: 71/93 -- train_loss: 0.1848 \n", - "2023-09-11 16:47:00,328 - INFO - Epoch: 2/2, Iter: 72/93 -- train_loss: 0.1792 \n", - "2023-09-11 16:47:00,701 - INFO - Epoch: 2/2, Iter: 73/93 -- train_loss: 0.1613 \n", - "2023-09-11 16:47:01,076 - INFO - Epoch: 2/2, Iter: 74/93 -- train_loss: 0.1810 \n", - "2023-09-11 16:47:01,451 - INFO - Epoch: 2/2, Iter: 75/93 -- train_loss: 0.1802 \n", - "2023-09-11 16:47:01,830 - INFO - Epoch: 2/2, Iter: 76/93 -- train_loss: 0.1606 \n", - "2023-09-11 16:47:02,205 - INFO - Epoch: 2/2, Iter: 77/93 -- train_loss: 0.1644 \n", - "2023-09-11 16:47:02,586 - INFO - Epoch: 2/2, Iter: 78/93 -- train_loss: 0.1597 \n", - "2023-09-11 16:47:02,961 - INFO - Epoch: 2/2, Iter: 79/93 -- train_loss: 0.1742 \n", - "2023-09-11 16:47:03,336 - INFO - Epoch: 2/2, Iter: 80/93 -- train_loss: 0.1581 \n", - "2023-09-11 16:47:03,718 - INFO - Epoch: 2/2, Iter: 81/93 -- train_loss: 0.1650 \n", - "2023-09-11 16:47:04,098 - INFO - 
Epoch: 2/2, Iter: 82/93 -- train_loss: 0.1644 \n", - "2023-09-11 16:47:04,473 - INFO - Epoch: 2/2, Iter: 83/93 -- train_loss: 0.1667 \n", - "2023-09-11 16:47:04,849 - INFO - Epoch: 2/2, Iter: 84/93 -- train_loss: 0.1704 \n", - "2023-09-11 16:47:05,228 - INFO - Epoch: 2/2, Iter: 85/93 -- train_loss: 0.1650 \n", - "2023-09-11 16:47:05,602 - INFO - Epoch: 2/2, Iter: 86/93 -- train_loss: 0.1483 \n", - "2023-09-11 16:47:05,975 - INFO - Epoch: 2/2, Iter: 87/93 -- train_loss: 0.1452 \n", - "2023-09-11 16:47:06,353 - INFO - Epoch: 2/2, Iter: 88/93 -- train_loss: 0.1462 \n", - "2023-09-11 16:47:06,727 - INFO - Epoch: 2/2, Iter: 89/93 -- train_loss: 0.1543 \n", - "2023-09-11 16:47:07,101 - INFO - Epoch: 2/2, Iter: 90/93 -- train_loss: 0.1516 \n", - "2023-09-11 16:47:07,486 - INFO - Epoch: 2/2, Iter: 91/93 -- train_loss: 0.1564 \n", - "2023-09-11 16:47:07,879 - INFO - Epoch: 2/2, Iter: 92/93 -- train_loss: 0.1535 \n", - "2023-09-11 16:47:07,995 - INFO - Epoch: 2/2, Iter: 93/93 -- train_loss: 0.2525 \n", - "2023-09-11 16:47:07,995 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run resuming from iteration 0, epoch 1 until 2 epochs\n", - "2023-09-11 16:47:17,163 - ignite.engine.engine.SupervisedEvaluator - INFO - Got new best metric of accuracy: 0.9939496748657054\n", - "2023-09-11 16:47:17,163 - INFO - Epoch[2] Metrics -- accuracy: 0.9939 \n", - "2023-09-11 16:47:17,163 - INFO - Key metric: accuracy best value: 0.9939496748657054 at epoch: 2\n", - "2023-09-11 16:47:17,163 - ignite.engine.engine.SupervisedEvaluator - INFO - Epoch[2] Complete. Time taken: 00:00:09.038\n", - "2023-09-11 16:47:17,163 - ignite.engine.engine.SupervisedEvaluator - INFO - Engine run complete. Time taken: 00:00:09.168\n", - "2023-09-11 16:47:17,301 - ignite.engine.engine.SupervisedTrainer - INFO - Saved checkpoint at epoch: 2\n", - "2023-09-11 16:47:17,302 - ignite.engine.engine.SupervisedTrainer - INFO - Epoch[2] Complete. Time taken: 00:00:44.536\n", - "2023-09-11 16:47:17,387 - ignite.engine.engine.SupervisedTrainer - INFO - Train completed, saved final checkpoint: output/output_230911_164547/model_final_iteration=186.pt\n", - "2023-09-11 16:47:17,387 - ignite.engine.engine.SupervisedTrainer - INFO - Engine run complete. 
Time taken: 00:01:30.170\n" - ] - } - ], + "outputs": [], "source": [ "%%bash\n", "\n", @@ -703,7 +447,19 @@ " --logging_file \"$BUNDLE/configs/logging.conf\" \\\n", " --meta_file \"$BUNDLE/configs/metadata.json\" \\\n", " --config_file \"['$BUNDLE/configs/common.yaml','$BUNDLE/configs/train.yaml']\" \\\n", - " --max_epochs 2" + " --max_epochs 2 &> out.txt || true" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3d7e7e11-db67-47e3-a03d-0955feee1636", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "raise Exception(open(\"out.txt\").read())" ] }, { From 6e00fe9788f2b6d61b001a92643ceafa0e83d46f Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Tue, 19 Sep 2023 13:16:39 +0100 Subject: [PATCH 25/26] Adding expected 'setup' cells, they aren't strictly needed but won't hurt Signed-off-by: Eric Kerfoot --- bundle/01_bundle_intro.ipynb | 49 +++++++++++++++++++++- bundle/02_mednist_classification.ipynb | 48 ++++++++++++++++++++- bundle/03_mednist_classification_v2.ipynb | 35 ++++++++++++++-- bundle/04_integrating_code.ipynb | 51 +++++++++++++++++++++-- 4 files changed, 172 insertions(+), 11 deletions(-) diff --git a/bundle/01_bundle_intro.ipynb b/bundle/01_bundle_intro.ipynb index 951589e204..976e61e71e 100644 --- a/bundle/01_bundle_intro.ipynb +++ b/bundle/01_bundle_intro.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "e473187c-65db-40f2-b27a-236b3e8f2ad2", + "id": "a7318b28-758a-41f3-a5cb-2b634dfe0100", "metadata": {}, "source": [ "Copyright (c) MONAI Consortium \n", @@ -14,8 +14,52 @@ "distributed under the License is distributed on an \"AS IS\" BASIS, \n", "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", "See the License for the specific language governing permissions and \n", - "limitations under the License.\n", + "limitations under the License." + ] + }, + { + "cell_type": "markdown", + "id": "45839c34-faf2-4f14-b28a-fd6ff635db34", + "metadata": {}, + "source": [ + "## Setup environment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1bf88e03-1c87-4901-9cfb-9c626d454b98", + "metadata": {}, + "outputs": [], + "source": [ + "!python -c \"import monai\" || pip install -q \"monai-weekly[ignite,pyyaml]\"" + ] + }, + { + "cell_type": "markdown", + "id": "2814d671-6db5-4a89-9237-46ed4a950594", + "metadata": {}, + "source": [ + "## Setup imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "280efd0a-74dd-41c7-8a2b-0de382dc0657", + "metadata": {}, + "outputs": [], + "source": [ + "from monai.config import print_config\n", "\n", + "print_config()" + ] + }, + { + "cell_type": "markdown", + "id": "8e2cb6cb-8fc2-41cc-941b-ff2e37c4c043", + "metadata": {}, + "source": [ "# MONAI Bundles\n", "\n", "Bundles are essentially _self-descriptive networks_. They combine a network definition with the metadata about what they are meant to do, what they are used for, the nature of their inputs and outputs, and scripts (possibly with associated data) to train and infer using them. 
\n", @@ -82,6 +126,7 @@ "%%bash\n", "\n", "python -m monai.bundle init_bundle TestBundle\n", + "# you may need to install tree with \"sudo apt install tree\"\n", "which tree && tree TestBundle || true" ] }, diff --git a/bundle/02_mednist_classification.ipynb b/bundle/02_mednist_classification.ipynb index 1a620090e6..3dfea41f38 100644 --- a/bundle/02_mednist_classification.ipynb +++ b/bundle/02_mednist_classification.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "64bd2d8c-4799-4073-bc28-c3632589c525", + "id": "5e8ae3d7-3e2e-4755-a0b6-709ef4180719", "metadata": {}, "source": [ "Copyright (c) MONAI Consortium \n", @@ -14,8 +14,52 @@ "distributed under the License is distributed on an \"AS IS\" BASIS, \n", "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", "See the License for the specific language governing permissions and \n", - "limitations under the License.\n", + "limitations under the License." + ] + }, + { + "cell_type": "markdown", + "id": "191c5d77-8ae5-49ab-be22-45f5ba41641f", + "metadata": {}, + "source": [ + "## Setup environment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "886952c4-0be4-459d-9c53-b81b29199c76", + "metadata": {}, + "outputs": [], + "source": [ + "!python -c \"import monai\" || pip install -q \"monai-weekly[ignite,pyyaml]\"" + ] + }, + { + "cell_type": "markdown", + "id": "a20e1274-0a27-4e37-95d7-fb813243c34c", + "metadata": {}, + "source": [ + "## Setup imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b1144d87-ec2f-4b9b-907a-16ea2da279c4", + "metadata": {}, + "outputs": [], + "source": [ + "from monai.config import print_config\n", "\n", + "print_config()" + ] + }, + { + "cell_type": "markdown", + "id": "c572d8b6-3dca-4487-80ad-928090b3e8ab", + "metadata": {}, + "source": [ "# MedNIST Classification Bundle\n", "\n", "In this tutorial we'll describe how to create a bundle for a classification network. This will include how to train and apply the network on the command line. MedNIST will be used as the dataset with the bundle based off the [MONAI 101 notebook](https://github.com/Project-MONAI/tutorials/blob/main/2d_classification/monai_101.ipynb).\n", diff --git a/bundle/03_mednist_classification_v2.ipynb b/bundle/03_mednist_classification_v2.ipynb index ef0e178a23..6aff84cc80 100644 --- a/bundle/03_mednist_classification_v2.ipynb +++ b/bundle/03_mednist_classification_v2.ipynb @@ -17,15 +17,44 @@ "limitations under the License." 
] }, + { + "cell_type": "markdown", + "id": "ddfe7d95-3567-4cb2-9eb5-65f235113768", + "metadata": {}, + "source": [ + "## Setup environment" + ] + }, { "cell_type": "code", - "execution_count": 1, - "id": "2f51a451-566f-4501-aeb8-f3cd5d1f7bf9", + "execution_count": null, + "id": "fab1bcae-678b-4b19-a513-d0577d3d7e2b", + "metadata": {}, + "outputs": [], + "source": [ + "!python -c \"import monai\" || pip install -q \"monai-weekly[ignite,pyyaml]\"" + ] + }, + { + "cell_type": "markdown", + "id": "c8ae8b11-f5cf-4f91-ac60-8660f2ab2a4d", + "metadata": {}, + "source": [ + "## Setup imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f1492c89-b19f-4216-b3a0-9960397e72ca", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", - "from monai.apps import MedNISTDataset" + "from monai.apps import MedNISTDataset\n", + "from monai.config import print_config\n", + "\n", + "print_config()" ] }, { diff --git a/bundle/04_integrating_code.ipynb b/bundle/04_integrating_code.ipynb index 62391bc92a..ee1986a328 100644 --- a/bundle/04_integrating_code.ipynb +++ b/bundle/04_integrating_code.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "64bd2d8c-4799-4073-bc28-c3632589c525", + "id": "c0f57371-fbd0-4a3e-94fb-4c9c8aea956c", "metadata": {}, "source": [ "Copyright (c) MONAI Consortium \n", @@ -14,8 +14,53 @@ "distributed under the License is distributed on an \"AS IS\" BASIS, \n", "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \n", "See the License for the specific language governing permissions and \n", - "limitations under the License.\n", + "limitations under the License." + ] + }, + { + "cell_type": "markdown", + "id": "91b49f99-5a9f-4bbe-a034-fb8a5f3fc71d", + "metadata": {}, + "source": [ + "## Setup environment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cd80c262-cf94-48df-b78e-c54a88a7ffb5", + "metadata": {}, + "outputs": [], + "source": [ + "!python -c \"import monai\" || pip install -q \"monai-weekly[ignite,pyyaml]\"" + ] + }, + { + "cell_type": "markdown", + "id": "c36673a2-02cd-4eea-90ef-8226832c30d0", + "metadata": {}, + "source": [ + "## Setup imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eeeee791-025e-4b1d-9dec-ebc83a8be4eb", + "metadata": {}, + "outputs": [], + "source": [ + "import torchvision\n", + "from monai.config import print_config\n", "\n", + "print_config()" + ] + }, + { + "cell_type": "markdown", + "id": "0fdad73c-f1ab-4874-9e4e-af687f78801a", + "metadata": {}, + "source": [ "# Integrating Non-MONAI Code Into a Bundle\n", "\n", "This notebook will discuss strategies for integrating non-MONAI deep learning code into a bundle. This allows existing Pytorch workflows to be integrated into the bundle ecosystem, for example as a distributable bundle for the model zoo or some other repository like Hugging Face, or to integrate with MONAI Label. 
The assumption taken here is that you already have the components for preprocessing, inference, validation, and other parts of a workflow, and so the task is how to integrate these parts into MONAI types which can be embedded in config files.\n", @@ -728,8 +773,6 @@ } ], "source": [ - "import torchvision\n", - "\n", "root_dir = \".\" # assuming CIFAR10 was downloaded to the current directory\n", "num_images = 20\n", "dataset = torchvision.datasets.CIFAR10(root=f\"{root_dir}/data\", train=False)\n", From 6a3082bf1cee4bcd38743d0bc47ffc41103f7ab8 Mon Sep 17 00:00:00 2001 From: Eric Kerfoot Date: Tue, 19 Sep 2023 16:43:02 +0100 Subject: [PATCH 26/26] Let's just ignore these notebooks for now Signed-off-by: Eric Kerfoot --- runner.sh | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/runner.sh b/runner.sh index 38cd4458c8..373cc39599 100755 --- a/runner.sh +++ b/runner.sh @@ -115,6 +115,10 @@ skip_run_papermill=("${skip_run_papermill[@]}" .*mednist_classifier_ray*) # htt skip_run_papermill=("${skip_run_papermill[@]}" .*TorchIO_MONAI_PyTorch_Lightning*) # https://github.com/Project-MONAI/tutorials/issues/1324 skip_run_papermill=("${skip_run_papermill[@]}" .*GDS_dataset*) # https://github.com/Project-MONAI/tutorials/issues/1324 skip_run_papermill=("${skip_run_papermill[@]}" .*learn2reg_nlst_paired_lung_ct.ipynb*) # slow test +skip_run_papermill=("${skip_run_papermill[@]}" .*01_bundle_intro.ipynb*) +skip_run_papermill=("${skip_run_papermill[@]}" .*02_mednist_classification.ipynb*) +skip_run_papermill=("${skip_run_papermill[@]}" .*03_mednist_classification_v2.ipynb*) +skip_run_papermill=("${skip_run_papermill[@]}" .*04_integrating_code.ipynb*) # output formatting separator=""
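As context for the final runner.sh hunk above: `skip_run_papermill` is an array of patterns used to exclude notebooks from the automated papermill runs. The sketch below is a minimal illustration of how such a pattern list can be consulted; the `should_skip` helper is hypothetical and is not taken from runner.sh itself, which may match paths differently:

```bash
#!/bin/bash
# Hypothetical sketch of consulting a skip list like runner.sh's;
# the should_skip helper is illustrative, not part of the actual script.
skip_run_papermill=(".*01_bundle_intro.ipynb*" ".*04_integrating_code.ipynb*")

should_skip() {
    local notebook="$1"
    for pattern in "${skip_run_papermill[@]}"; do
        # interpret each entry as a regex, as the leading .* suggests
        if [[ "$notebook" =~ $pattern ]]; then
            return 0  # matched: skip executing this notebook
        fi
    done
    return 1  # no match: execute the notebook
}

should_skip "bundle/01_bundle_intro.ipynb" && echo "skipping 01_bundle_intro.ipynb"
```

Returning 0 on a match keeps the helper composable with the `&&`/`||` chaining used elsewhere in the script.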