This repository has been archived by the owner on Nov 16, 2023. It is now read-only.

Adding system requirements in README #74

Merged
merged 20 commits into microsoft:staging, Dec 3, 2019
Conversation

@vapaunic (Contributor) commented Dec 3, 2019

Addressing issues #38 and #52

maxkazmsft and others added 20 commits November 7, 2019 16:09
* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependencies

* Update

* Updates the environment files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* refactored prepare_data.py

* added scripts for section train test

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
- SectionLoader now swaps the H and W dims. When loading test data in patch mode, the line `h, w = img.shape[-2], img.shape[-1]  # height and width` can be removed (and tested) from test.py.
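
The required change described above can be sketched with illustrative stubs; only the `get_patch_loader`/`get_section_loader` names come from the PR text, and the class bodies are placeholders, not the repo's real loaders:

```python
# Illustrative stubs for the loader split described in PR 209.
# Only the function names come from the PR text; the classes here
# are placeholders, not the repo's actual loader implementations.
class TrainSectionLoader:
    kind = "section"

class TrainPatchLoader:
    kind = "patch"

def get_section_loader():
    # Used by section scripts; SectionLoader now swaps H and W itself.
    return TrainSectionLoader

def get_patch_loader():
    # Patch scripts should call this instead of the old get_train_loader().
    return TrainPatchLoader

print(get_patch_loader().kind)  # -> patch
```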

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

Add Horovod training script
Updates dependencies in Horovod docker file
Removes hard coding of path in data.py

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments.

The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for the test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
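
For illustration, a toy stand-in shows how the old and new attribute semantics differ (`OldTrainPatchLoader`/`NewTrainPatchLoader` are hypothetical stand-ins written for this example, not the repo's classes):

```python
# Toy stand-in illustrating the attribute change in PR 236.
# Before the fix, loaders stored every split keyed by split name;
# after it, each loader holds only its own split's patches.
class OldTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        # every split was stored, keyed by name
        self.patches = {"train": [1, 2, 3], "val": [4, 5]}

class NewTrainPatchLoader:
    def __init__(self, split):
        self.split = split
        all_patches = {"train": [1, 2, 3], "val": [4, 5]}
        # only the patches for this loader's split are kept
        self.patches = all_patches[split]

old = OldTrainPatchLoader("train")
patches_old = old.patches[old.split]   # old access pattern from the PR

new = NewTrainPatchLoader("train")
patches_new = new.patches              # indexing by split no longer needed

assert patches_old == patches_new
```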

* training testing for sections works

* minor changes

* reverting changes on dutchf3/local/default.py file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics
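
As a rough sketch of what confusion-matrix-based segmentation metrics compute, here is a from-scratch illustration, not pytorch-ignite's API (ignite.metrics provides ConfusionMatrix and derived metrics such as mIoU); the `ignore_index` parameter mirrors the change mentioned below:

```python
def confusion_matrix(num_classes, y_true, y_pred, ignore_index=None):
    """Accumulate a num_classes x num_classes confusion matrix,
    skipping pixels labelled with ignore_index (e.g. unlabelled areas)."""
    cm = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        if t == ignore_index:
            continue
        cm[t][p] += 1
    return cm

def mean_iou(cm):
    """Mean intersection-over-union derived from the confusion matrix."""
    ious = []
    n = len(cm)
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp
        fn = sum(cm[c]) - tp
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious) if ious else 0.0

# Perfect prediction on two classes, with one ignored pixel (label 255).
cm = confusion_matrix(2, [0, 1, 255], [0, 1, 0], ignore_index=255)
print(mean_iou(cm))  # -> 1.0
```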

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code, and that the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - Just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds the following for Penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.

Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files, like the line lengths or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this?

Thanks!

Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* removed untested model configs

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works when train.py is run without a config file

Related work items: #20694
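
The "runs without a config file" behaviour could be sketched as follows; this is a generic pattern with hypothetical defaults written for illustration, not the repo's actual default.py:

```python
import copy

# Hypothetical defaults standing in for an experiment's default.py;
# the section names and values are illustrative, not the repo's settings.
DEFAULTS = {"TRAIN": {"BATCH_SIZE": 16, "LR": 0.01}}

def load_config(overrides=None):
    """Return the default config, optionally merged with overrides.

    When train.py is run without a config file, overrides is None and
    the defaults are used as-is. A real implementation would parse the
    YAML config file into the overrides dict first.
    """
    cfg = copy.deepcopy(DEFAULTS)
    for section, values in (overrides or {}).items():
        cfg.setdefault(section, {}).update(values)
    return cfg

print(load_config()["TRAIN"]["BATCH_SIZE"])                              # -> 16
print(load_config({"TRAIN": {"BATCH_SIZE": 8}})["TRAIN"]["BATCH_SIZE"])  # -> 8
```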

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment
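
A helper like the one mentioned might look like the following minimal stdlib sketch; the `setup_logger` name and signature are hypothetical, and the repo's actual utility may differ:

```python
import logging
import os

def setup_logger(name, log_dir, filename="train.log", level=logging.INFO):
    """Hypothetical sketch of a per-experiment logging helper: writes the
    log file inside the experiment's own directory rather than a
    repo-level path."""
    os.makedirs(log_dir, exist_ok=True)
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:  # avoid duplicate handlers on repeat calls
        handler = logging.FileHandler(os.path.join(log_dir, filename))
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
    return logger
```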

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates notebook to use itkwidgets for interactive visualisation

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* Update it to include sections for imaging

* Update README.md

* Update README.md
@vapaunic (Contributor, Author) commented Dec 3, 2019

@msalvaris , @maxkazmsft , I added a bit about the system requirements (Linux and GPU) and Azure DSVM info to the README, to address issues #38 and #52 . Let me know if it's not sufficient.

@msalvaris (Contributor) left a comment

Looks good 👍 🚀 👯

@vapaunic vapaunic added this to the Second Bug Bash prep milestone Dec 3, 2019
@vapaunic vapaunic merged commit 65b5cf9 into microsoft:staging Dec 3, 2019
georgeAccnt-GH added a commit that referenced this pull request Dec 4, 2019
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* merge upstream into my fork (#1)

* MINOR: addressing broken F3 download link (#73)

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Log config file now experiment specific (#8)


* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* fixed link for F3 download

* MINOR: python version fix to 3.6.7 (#72)

* Adding system requirements in README (#74)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependecies

* Update

* Updates the environemnt files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* merged duplicate ifelse blocks

* refactored prepare_data.py

* refactored prepare_data.py

* added scripts for section train test

* added scripts for section train test

* section train/test works for single channel input

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
- SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py
h, w = img.shape[-2], img.shape[-1]  # height and width
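The dimension swap described above can be sketched with a toy example; the function name and array shapes here are illustrative assumptions, not the repo's actual SectionLoader implementation:

```python
import numpy as np

def load_section(volume):
    """Toy stand-in for SectionLoader: swap the H and W dims,
    as described in the PR, so downstream test code no longer
    needs to recover h, w from img.shape itself."""
    return np.swapaxes(volume, -2, -1)

vol = np.zeros((10, 4, 6))   # (n_sections, H, W)
swapped = load_section(vol)
print(swapped.shape)         # (10, 6, 4)
```

With the swap done inside the loader, the `h, w = img.shape[...]` line in test.py becomes redundant, which is why the PR suggests removing it.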

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

Add Horovod training script
Updates dependencies in Horovod docker file
Removes hard coding of path in data.py

* section train/test scripts

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Removes data.py and updates readme

* Updates requirements

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check whether this PR will affect your experiments.

The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
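To make the attribute change concrete, here is a minimal, hypothetical sketch of the corrected initialization; the class and attribute names mirror the PR description, but the data and internal structure are invented for illustration:

```python
# Hypothetical train/val split table; the real loaders read this
# from the dataset, not from a dict literal.
ALL_SPLITS = {
    "train": ["patch_0", "patch_1"],
    "val": ["patch_2"],
}

class TrainPatchLoader:
    def __init__(self, split="train"):
        self.split = split
        # Before this PR: self.patches held ALL splits, so callers
        # indexed it as train_set.patches[train_set.split].
        # After this PR: only the patches for this loader's own split
        # are assigned, so the extra indexing step goes away.
        self.patches = ALL_SPLITS[split]

train_set = TrainPatchLoader("train")
print(train_set.patches)  # ['patch_0', 'patch_1']
```

Code that still indexes `train_set.patches[train_set.split]` would break after this change, which is why the PR calls it out.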

* training testing for sections works

* training testing for sections works

* minor changes

* minor changes

* reverting changes on dutchf3/local/default.py file

* reverting changes on dutchf3/local/default.py file

* added config file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* sync with new experiment structure

* added a logging handler for array metrics

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics

* metrics now based on ignite.metrics

* modified patch train.py with new metrics

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index

* modified metrics with ignore_index

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds the following for penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
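For reference, a minimal .pre-commit-config.yaml wiring up black and flake8 could look roughly like this; the repository URLs and rev pins are illustrative, not necessarily the exact versions used in this repo:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
```

Each contributor then runs `pre-commit install` once per clone, after which the hooks run automatically on `git commit`; `pre-commit run --all-files` applies them to the whole tree.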

Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that.
- Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?

Thanks!

Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR depends on the tests created in the previous branch !333. That's why the PR merges the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* Merged PR 341: Tests for cv_lib/metrics

This PR depends on the tests created in the previous branch !333. That's why the PR merges the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* moved prepare data under scripts

* removed untested model configs

* removed untested model configs

* fixed weird bug in penobscot data loader

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works if train.py is run without a config file

Related work items: #20694

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* Removed redundant extract_metric_from

* formatting changes in metrics

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* added system requirements to readme

* sdk 1.0.76; tested conda env vs docker image; extended readme

* removed reference to imaging

* minor md formatting

* minor md formatting
georgeAccnt-GH added a commit that referenced this pull request Dec 11, 2019
…u GPU-enabled VM, preferably NC12 (#88)

* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* merge upstream into my fork (#1)

* MINOR: addressing broken F3 download link (#73)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependencies

* Update

* Updates the environment files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* merged duplicate ifelse blocks

* refactored prepare_data.py

* refactored prepare_data.py

* added scripts for section train test

* added scripts for section train test

* section train/test works for single channel input

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
- SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py
h, w = img.shape[-2], img.shape[-1]  # height and width

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

Add Horovod training script
Updates dependencies in Horovod docker file
Removes hard coding of path in data.py

* section train/test scripts

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Removes data.py and updates readme

* Updates requirements

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check whether this PR will affect your experiments.

The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training testing for sections works

* training testing for sections works

* minor changes

* minor changes

* reverting changes on dutchf3/local/default.py file

* reverting changes on dutchf3/local/default.py file

* added config file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* sync with new experiment structure

* added a logging handler for array metrics

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics

* metrics now based on ignite.metrics

* modified patch train.py with new metrics

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index

* modified metrics with ignore_index

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds the following for penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.

Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that.
- Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?

Thanks!

Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR depends on the tests created in the previous branch !333. That's why the PR merges the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* Merged PR 341: Tests for cv_lib/metrics

This PR depends on the tests created in the previous branch !333. That's why the PR merges the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* moved prepare data under scripts

* removed untested model configs

* removed untested model configs

* fixed weird bug in penobscot data loader

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works if train.py is run without a config file

Related work items: #20694

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* Removed redundant extract_metric_from

* formatting changes in metrics

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* fixed link for F3 download

* MINOR: python version fix to 3.6.7 (#72)

* Adding system requirements in README (#74)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependencies

* Update

* Updates the environment files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* refactored prepare_data.py

* added scripts for section train test

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
- SectionLoader now swaps the H and W dims. When loading test data in patch mode, this line can be removed (and tested) from test.py:
h, w = img.shape[-2], img.shape[-1]  # height and width
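A minimal sketch of the two changes above, with hypothetical stand-in loader classes (the real signatures in train.py/test.py may differ):

```python
import numpy as np

# First change: separate factory functions for patch and section loaders,
# replacing a single get_train_loader(). Loader classes are illustrative
# stand-ins, not the repo's real implementations.
class TrainSectionLoader:
    def __init__(self, split="train"):
        self.split = split

class TrainPatchLoader:
    def __init__(self, split="train", patch_size=99):
        self.split = split
        self.patch_size = patch_size

def get_section_loader(split="train"):
    return TrainSectionLoader(split=split)

def get_patch_loader(split="train", patch_size=99):
    return TrainPatchLoader(split=split, patch_size=patch_size)

# Second change: SectionLoader now hands back images with the H and W dims
# swapped, so test.py no longer needs to re-derive them.
img = np.zeros((255, 701))          # (H, W) as stored
swapped = np.swapaxes(img, -2, -1)  # (W, H) as now returned by the loader
```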

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

Add Horovod training script
Updates dependencies in Horovod docker file
Removes hard coding of path in data.py

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments.

The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
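After this change, a loader only holds the entries of its own split, so the per-split indexing shown above is no longer needed. A minimal sketch, with an illustrative constructor (the real loader signature differs):

```python
class TrainPatchLoader:
    """Illustrative loader: keeps only the patches of its own split."""
    def __init__(self, patches_by_split, split="train"):
        self.split = split
        # Previously every split's entries were assigned; now only this one's.
        self.patches = list(patches_by_split[split])

splits = {"train": ["patch_0", "patch_1"], "val": ["patch_2"]}
train_set = TrainPatchLoader(splits, split="train")
# train_set.patches already contains just the train entries,
# so indexing train_set.patches[train_set.split] is unnecessary.
```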

* training testing for sections works

* minor changes

* reverting changes on dutchf3/local/default.py file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - Just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds the following for penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.

Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files - like the line lengths, or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?

Thanks!
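For reference, a hook configuration of the kind described might look like the sketch below; the repo URLs and revisions are illustrative placeholders, not the exact ones added in this PR:

```yaml
# .pre-commit-config.yaml (sketch): black for formatting, flake8 for linting
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
```

With a file like this committed, running `pre-commit install` registers the hooks so both tools run on staged files before every commit.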

Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* removed untested model configs

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works if train.py is run without a config file

Related work items: #20694
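The default.py fallback mentioned above can be sketched as follows; the names (`get_cfg_defaults`, `load_config`) and default values are hypothetical stand-ins, not the repo's actual config API:

```python
from types import SimpleNamespace

def get_cfg_defaults():
    # Baseline hyperparameters used when no config file is supplied.
    return SimpleNamespace(lr=0.01, batch_size=16, max_epochs=60)

def load_config(path=None):
    cfg = get_cfg_defaults()
    if path is not None:
        # Here the real code would merge values from the YAML file at `path`
        # over the defaults; omitted in this sketch.
        pass
    return cfg

cfg = load_config()  # train.py run without a config file still gets valid settings
```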

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* added system requirements to readme

* sdk 1.0.76; tested conda env vs docker image; extended readme

* removed reference to imaging

* minor md formatting

* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83

* Add Troubleshooting section for DSVM warnings #89

* Add Troubleshooting section for DSVM warnings, plus typo #89

* tested both yml conda env and docker; updated conda yml to have docker sdk

* tested both yml conda env and docker; updated conda yml to have docker sdk; added

* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment

* Update README.md
georgeAccnt-GH added a commit that referenced this pull request Dec 13, 2019
* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* merge upstream into my fork (#1)

* MINOR: addressing broken F3 download link (#73)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependecies

* Update

* Updates the environemnt files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* merged duplicate ifelse blocks

* refactored prepare_data.py

* refactored prepare_data.py

* added scripts for section train test

* added scripts for section train test

* section train/test works for single channel input

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders.
- SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py
h, w = img.shape[-2], img.shape[-1]  # height and width

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

Add Horovod training script
Updates dependencies in Horovod docker file
Removes hard coding of path in data.py

* section train/test scripts

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Removes data.py and updates readme

* Updates requirements

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments.

The main change is with the initialization of sections/patches attributes of loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similar for test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training testing for sections works

* training testing for sections works

* minor changes

* minor changes

* reverting changes on dutchf3/local/default.py file

* reverting changes on dutchf3/local/default.py file

* added config file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* sync with new experiment structure

* added a logging handler for array metrics

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics

* metrics now based on ignite.metrics

* modified patch train.py with new metrics

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code and the rest of the functions did not work with the different versions of libraries which we have listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index

* modified metrics with ignore_index

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - Just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds for penobscot
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Iinline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting, we are using black, and style checking flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.

Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files - like the line lengths, error messages we exclude or include, or anything like that.
- Do we want to have a requirements-dev.txt file for contributors? This setup uses pre-commit package, I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, it will only affect the files you are committing in the future. A big chunk of our codebase does not conform to the formatting/style settings. We will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant looking PR :) Any thoughts on how we should approach this?

Thanks!

Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* Merged PR 341: Tests for cv_lib/metrics

This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into vapaunic/metrics branch (so the changed files below only include the diff between these two branches. However, I can change this once the vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* moved prepare data under scripts

* removed untested model configs

* removed untested model configs

* fixed weird bug in penobscot data loader

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it's working if train.py ran without a config file

Related work items: #20694

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* Removed redundant extract_metric_from

* formatting changes in metrics

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified penobscot experiment to use new local metrics

* modified section experimen to pass device to metrics

* modified section experimen to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking dislaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* fixed link for F3 download

* MINOR: python version fix to 3.6.7 (#72)

* Adding system requirements in README (#74)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependecies

* Update

* Updates the environemnt files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* merged duplicate ifelse blocks

* refactored prepare_data.py

* refactored prepare_data.py

* added scripts for section train test

* added scripts for section train test

* section train/test works for single channel input

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following changes are required in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate functions to load section and patch loaders.
- SectionLoader now swaps the H and W dims. When loading test data in patch mode, this line can be removed (and tested) in test.py:
h, w = img.shape[-2], img.shape[-1]  # height and width
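To make the two changes above concrete, here is a minimal, hypothetical sketch; the function names follow the PR text, but the bodies and shapes are illustrative stand-ins, not the repo's actual API:

```python
import numpy as np

# Hypothetical stand-ins for the loader helpers described above.
def get_section_loader(img):
    # SectionLoader now swaps the H and W dims itself...
    return np.swapaxes(img, -2, -1)

def get_patch_loader(img):
    # ...while patch loading keeps the original orientation, so the manual
    # "h, w = img.shape[-2], img.shape[-1]" bookkeeping in test.py goes away.
    return img

img = np.zeros((100, 200))      # (H, W)
section = get_section_loader(img)
patch = get_patch_loader(img)
print(section.shape)            # (200, 100): H and W swapped
print(patch.shape)              # (100, 200): unchanged
```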

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

Add Horovod training script
Updates dependencies in Horovod docker file
Removes hard coding of path in data.py

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check whether this PR will affect your experiments.

The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. Similarly for test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```

* training testing for sections works

* minor changes

* reverting changes on dutchf3/local/default.py file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code, and the rest of the functions did not work with the library versions listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds the following for penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotenv files and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.

Some questions to start the discussion:
- Do you want to change any of the default settings in the dotenv files - like the line lengths, or the error messages we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user to install it in the CONTRIBUTING.MD file.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?

Thanks!

Related work items: #18350
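As a rough illustration only (the actual hook pins live in this repo's .pre-commit-config.yaml; the revs below are assumptions, not taken from this PR), a black + flake8 pre-commit configuration generally looks like:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0          # assumed pin, not taken from this PR
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9            # assumed pin, not taken from this PR
    hooks:
      - id: flake8
```

After `pip install pre-commit` and `pre-commit install`, both hooks run on the staged files of every commit.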

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* removed untested model configs

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works if train.py is run without a config file

Related work items: #20694

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* added system requirements to readme

* sdk 1.0.76; tested conda env vs docker image; extended readme

* removed reference to imaging

* minor md formatting

* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83

* Add Troubleshooting section for DSVM warnings #89

* Add Troubleshooting section for DSVM warnings, plus typo #89

* tested both yml conda env and docker; updated conda yml to have docker sdk

* tested both yml conda env and docker; updated conda yml to have docker sdk; added

* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment

* Update README.md

* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12  (#88) (#2)

* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* merge upstream into my fork (#1)

* MINOR: addressing broken F3 download link (#73)

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* fixed link for F3 download

* MINOR: python version fix to 3.6.7 (#72)

* Adding system requirements in README (#74)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build status badges (#6)

* Adds dataloader for numpy datasets as well as demo pipeline for such a dataset (#7)

* Finished version of numpy data loader

* Working training script for demo

* Adds the new metrics

* Fixes docstrings and adds header

* Removing extra setup.py

* Log config file now experiment specific (#8)

* Merging work on salt dataset

* Adds computer vision to dependencies

* Updates dependecies

* Update

* Updates the environemnt files

* Updates readme and envs

* Initial running version of dutchf3

* INFRA: added structure templates.

* VOXEL: initial rough code push - need to clean up before PRing.

* Working version

* Working version before refactor

* quick minor fixes in README

* 3D SEG: first commit for PR.

* 3D SEG: removed data files to avoid redistribution.

* Updates

* 3D SEG: restyled batch file, moving onto others.

* Working HRNet

* 3D SEG: finished going through Waldeland code

* Updates test scripts and makes it take processing arguments

* minor update

* Fixing imports

* Refactoring the experiments

* Removing .vscode

* Updates gitignore

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* added instructions for running f3dutch experiments, and fixed some issues in prepare_data.py script

* minor wording fix

* minor wording fix

* enabled splitting dataset into sections, rather than only patches

* enabled splitting dataset into sections, rather than only patches

* merged duplicate ifelse blocks

* merged duplicate ifelse blocks

* refactored prepare_data.py

* refactored prepare_data.py

* added scripts for section train test

* added scripts for section train test

* section train/test works for single channel input

* section train/test works for single channel input

* Merged PR 174: F3 Dutch README, and fixed issues in prepare_data.py

This PR includes the following changes:
- added README instructions for running f3dutch experiments
- prepare_dataset.py didn't work for creating section-based splits, so I fixed a few issues. There are no changes to the patch-based splitting logic.
- ran black formatter on the file, which created all the formatting changes (sorry!)

* Merged PR 204: Adds loaders to deepseismic from cv_lib

* train and test script for section based training/testing

* train and test script for section based training/testing

* Merged PR 209: changes to section loaders in data.py

Changes in this PR will affect patch scripts as well. The following are required changes in patch scripts:
- get_train_loader() in train.py should be changed to get_patch_loader(). I created separate function to load section and patch loaders.
- SectionLoader now swaps H and W dims. When loading test data in patch, this line can be removed (and tested) from test.py
h, w = img.shape[-2], img.shape[-1]  # height and width

* Merged PR 210: BENCHMARKS: added placeholder for benchmarks.

BENCHMARKS: added placeholder for benchmarks.

* Merged PR 211: Fixes issues left over from changes to data.py

* removing experiments from deep_seismic, following the new struct

* Merged PR 220: Adds Horovod and fixes

- Add Horovod training script
- Updates dependencies in Horovod docker file
- Removes hard coding of path in data.py

* section train/test scripts

* Add cv_lib to repo and updates instructions

* Removes data.py and updates readme

* Updates requirements

* Merged PR 222: Moves cv_lib into repo and updates setup instructions

* renamed train/test scripts

* train test works on alaudah section experiments, a few minor bugs left

* cleaning up loaders

* Merged PR 236: Cleaned up dutchf3 data loaders

@<Mathew Salvaris> , @<Ilia Karmanov> , @<Max Kaznady> , please check out if this PR will affect your experiments.

The main change is in the initialization of the sections/patches attributes of the loaders. Previously, we were unnecessarily assigning all train/val splits to train loaders, rather than only those belonging to the given split for that loader. The same applies to test loaders.

This will affect your code if you access these attributes. E.g. if you have something like this in your experiments:
```
train_set = TrainPatchLoader(…)
patches = train_set.patches[train_set.split]
```

or
```
train_set = TrainSectionLoader(…)
sections = train_set.sections[train_set.split]
```
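
A minimal sketch of the initialization change described above (class and variable names are illustrative, not the actual repo code): each loader now keeps only the entries for its own split, so the `patches[train_set.split]` indexing goes away.

```python
# All splits, as produced by the data-preparation step (illustrative data).
ALL_SPLITS = {
    "train": ["patch_0", "patch_1"],
    "val": ["patch_2"],
}

class TrainPatchLoader:
    def __init__(self, split="train"):
        self.split = split
        # Before the fix: self.patches = ALL_SPLITS (every split).
        # After the fix: only the entries belonging to this loader's split.
        self.patches = ALL_SPLITS[split]

train_set = TrainPatchLoader("train")
patches = train_set.patches  # no longer train_set.patches[train_set.split]
```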

* training testing for sections works

* minor changes

* reverting changes on dutchf3/local/default.py file

* added config file

* Updates the repo with preliminary results for 2D segmentation

* Merged PR 248: Experiment: section-based Alaudah training/testing

This PR includes the section-based experiments on dutchf3 to replicate Alaudah's work. No changes were introduced to the code outside this experiment.

* Merged PR 253: Waldeland based voxel loaders and TextureNet model

Related work items: #16357

* Merged PR 290: A demo notebook on local train/eval on F3 data set

Notebook and associated files + minor change in a patch_deconvnet_skip.py model file.

Related work items: #17432

* Merged PR 312: moved dutchf3_section to experiments/interpretation

moved dutchf3_section to experiments/interpretation

Related work items: #17683

* Merged PR 309: minor change to README to reflect the changes in prepare_data script

minor change to README to reflect the changes in prepare_data script

Related work items: #17681

* Merged PR 315: Removing voxel exp

Related work items: #17702

* sync with new experiment structure

* added a logging handler for array metrics

* first draft of metrics based on the ignite confusion matrix

* metrics now based on ignite.metrics

* modified patch train.py with new metrics

* Merged PR 361: VOXEL: fixes to original voxel2pixel code to make it work with the rest of the repo.

Realized there was one bug in the code, and the rest of the functions did not work with the library versions we have listed in the conda yaml file. Also updated the download script.

Related work items: #18264

* modified metrics with ignore_index
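
For context, here is a minimal numpy sketch, not the repo's ignite-based implementation, of what a confusion-matrix metric with `ignore_index` computes; the function names and the ignore value 255 are illustrative:

```python
import numpy as np

def confusion_matrix(pred, target, num_classes, ignore_index=255):
    """Fill a confusion matrix, dropping pixels labelled ignore_index."""
    keep = target != ignore_index
    pred, target = pred[keep], target[keep]
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (target, pred), 1)  # rows: ground truth, cols: prediction
    return cm

def per_class_iou(cm):
    """IoU per class: TP / (TP + FP + FN)."""
    tp = np.diag(cm)
    return tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)

target = np.array([0, 0, 1, 255])  # last pixel carries the ignore label
pred = np.array([0, 1, 1, 1])
cm = confusion_matrix(pred, target, num_classes=2)
# cm == [[1, 1], [0, 1]]; per-class IoU == [0.5, 0.5]
```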

* Merged PR 405: minor mods to notebook, more documentation

A very small PR - Just a few more lines of documentation in the notebook, to improve clarity.

Related work items: #17432

* Merged PR 368: Adds penobscot

Adds for penobscot:
- Dataset reader
- Training script
- Testing script
- Section depth augmentation
- Patch depth augmentation
- Inline visualisation for Tensorboard

Related work items: #14560, #17697, #17699, #17700

* Merged PR 407: Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Azure ML SDK Version:  1.0.65; running devito in AzureML Estimators

Related work items: #16362

* Merged PR 452: decouple docker image creation from azureml

removed all azureml dependencies from 010_CreateExperimentationDockerImage_GeophysicsTutorial_FWI_Azure_devito.ipynb

All other changes are due to trivial reruns

Related work items: #18346

* Merged PR 512: Pre-commit hooks for formatting and style checking

Opening this PR to start the discussion -

I added the required dotfiles and instructions for setting up pre-commit hooks for formatting and style checking. For formatting we are using black, and for style checking, flake8. The following files are added:
- .pre-commit-config.yaml - defines git hooks to be installed
- .flake8 - settings for flake8 linter
- pyproject.toml - settings for black formatter

The last two files define the formatting and linting style we want to enforce on the repo.

All of us would set up the pre-commit hooks locally, so regardless of what formatting/linting settings we have in our local editors, the settings specified by the git hooks would still be enforced prior to the commit, to ensure consistency among contributors.
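
As a sketch of the setup described above (hook revisions are illustrative, not necessarily the ones pinned in this PR), the .pre-commit-config.yaml would look roughly like:

```yaml
# Illustrative .pre-commit-config.yaml: black for formatting, flake8 for linting.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0        # illustrative revision
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9          # illustrative revision
    hooks:
      - id: flake8
```

After `pip install pre-commit` and `pre-commit install`, the hooks run on the staged files before each commit.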

Some questions to start the discussion:
- Do you want to change any of the default settings in these files, such as the line length or the error codes we exclude or include?
- Do we want to have a requirements-dev.txt file for contributors? This setup uses the pre-commit package; I didn't include it in the environment.yaml file, but instead instructed the user in the CONTRIBUTING.MD file to install it.
- Once you have the hooks installed, they will only affect the files you commit in the future. A big chunk of our codebase does not conform to the formatting/style settings, so we will have to run the hooks on the codebase retrospectively. I'm happy to do that, but it will create many changes and a significant-looking PR :) Any thoughts on how we should approach this?

Thanks!

Related work items: #18350

* Merged PR 513: 3D training script for Waldeland's model with Ignite

Related work items: #16356

* Merged PR 565: Demo notebook updated with 3D graph

Changes:
1) Updated demo notebook with the 3D visualization
2) Formatting changes due to new black/flake8 git hook

Related work items: #17432

* Merged PR 341: Tests for cv_lib/metrics

This PR is dependent on the tests created in the previous branch !333. That's why the PR is to merge the tests into the vapaunic/metrics branch (so the changed files below only include the diff between these two branches). However, I can change this once vapaunic/metrics is merged.

I created these tests under cv_lib/ since metrics are a part of that library. I imagine we will have tests under deepseismic_interpretation/, and the top level /tests for integration testing.

Let me know if you have any comments on this test, or the structure. As agreed, I'm using pytest.

Related work items: #16955

* merged tests into this branch

* Merged PR 569: Minor PR: change to pre-commit configuration files

Related work items: #18350

* Merged PR 586: Purging unused files and experiments

Purging unused files and experiments

Related work items: #20499

* moved prepare data under scripts

* removed untested model configs

* fixed weird bug in penobscot data loader

* penobscot experiments working for hrnet, seresnet, no depth and patch depth

* removed a section loader bug in the penobscot loader

* fixed bugs in my previous 'fix'

* removed redundant _open_mask from subclasses

* Merged PR 601: Fixes to penobscot experiments

A few changes:
- Instructions in README on how to download and process Penobscot and F3 2D data sets
- moved prepare_data scripts to the scripts/ directory
- fixed a weird issue with a class method in Penobscot data loader
- fixed a bug in section loader (_add_extra_channel in section loader was not necessary and was causing an issue)
- removed config files that were not tested or working in Penobscot experiments
- modified default.py so it works when train.py is run without a config file

Related work items: #20694

* Merged PR 605: added common metrics to Waldeland model in Ignite

Related work items: #19550

* Removed redundant extract_metric_from

* formatting changes in metrics

* modified penobscot experiment to use new local metrics

* modified section experiment to pass device to metrics

* moved metrics out of dutchf3, modified distributed to work with the new metrics

* fixed other experiments after new metrics

* removed apex metrics from distributed train.py

* added ignite-based metrics to dutch voxel experiment

* removed apex metrics

* modified penobscot test script to use new metrics

* pytorch-ignite pre-release with new metrics until stable available

* removed cell output from the F3 notebook

* deleted .vscode

* modified metric import in test_metrics.py

* separated metrics out as a module

* relative logger file path, modified section experiment

* removed the REPO_PATH from init

* created util logging function, and moved logging file to each experiment

* modified demo experiment

* modified penobscot experiment

* modified dutchf3_voxel experiment

* no logging in voxel2pixel

* modified dutchf3 patch local experiment

* modified patch distributed experiment

* modified interpretation notebook

* minor changes to comments

* DOC: forking disclaimer and new build names. (#9)

* Updating README.md with introduction material (#10)

* Update README with introduction to DeepSeismic

Add intro material for DeepSeismic

* Adding logo file

* Adding image to readme

* Update README.md

* Updates the 3D visualisation to use itkwidgets (#11)

* Updates notebook to use itkwidgets for interactive visualisation

* Adds jupytext to pre-commit (#12)


* Add jupytext

* Adds demo notebook for HRNet (#13)

* Adding TF 2.0 to allow for tensorboard vis in notebooks

* Modifies hrnet config for notebook

* Add HRNet notebook for demo

* Updates HRNet notebook and tidies F3

* removed my username references (#15)

* moving 3D models into contrib folder (#16)

* Weetok (#17)

* Update it to include sections for imaging

* Update README.md

* Update README.md

* added system requirements to readme

* sdk 1.0.76; tested conda env vs docker image; extended readme

* removed reference to imaging

* minor md formatting

* clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12 - Issue #83

* Add Troubleshooting section for DSVM warnings #89

* Add Troubleshooting section for DSVM warnings, plus typo #89

* tested both yml conda env and docker; updated conda yml to have docker sdk

* tested both yml conda env and docker; updated conda yml to have docker sdk; added

* NVIDIA Tesla K80 (or V100 GPU for NCv2 series) - per Vanja's comment

* Update README.md

* BugBash2 Issue #83 and #89: clarify which DSVM we want to use - Ubuntu GPU-enabled VM, preferably NC12  (#88) (#3)

* azureml sdk 1.0.74; fixed a few issues around ACR access; added nb 030 for scalability testing

* merge upstream into my fork (#1)

* MINOR: addressing broken F3 download link (#73)

* Update main_build.yml for Azure Pipelines

* Update main_build.yml for Azure Pipelines

* BUILD: added build stat…