
Commit fb07dc6 (committed Jun 18, 2024)

add pre-release citation

1 parent d904294 · commit fb07dc6

1 file changed: +48 −9 lines

README.md
@@ -19,7 +19,7 @@ _Example predictions using one of our SegFormer-based models to predict canopy c
 
 The repository supports Mask-RCNN for instance segmentation and a variety of semantic segmentation models - we recommend SegFormer as a default, but we also provide trained UNets, which are more permissively licensed. Models are downloaded automatically the first time you run prediction, so there is no need to handle checkpoints manually. You can, of course, fine-tune your own models using the pipeline and provide local paths if you need to.
 
-Have a look at our [model zoo](https://huggingface.co/restor).
+Have a look at our [model zoo](zoo.md).
 
 ## Installation

@@ -91,16 +91,42 @@ python predict.py semantic data/5c15321f63d9810007f8b06f_10_00000.tif results_in
 
 which will run the pipeline on the test image in semantic and instance segmentation modes. The results are saved to the output folders, which include: geo-referenced canopy masks, shapefiles with detected trees and canopy regions, and overlaid visualisations of the predictions.
 
+### Screen prediction
+
+We provide a fun demo script that runs a model on a live screen capture. `screen_predict.py` lives in the `tools` folder:
+
+```bash
+pip install opencv-python mss
+python tools/screen_predict.py semantic
+```
+
+The script works best on dual-monitor setups, where you can view the output on one screen and move around on the other, but it works just fine on smaller screens. You may need to adjust the grab dimensions in the script to suit your hardware:
+
+```python
+mon = {"left": 0, "top": 0, "width": 1024, "height": 1024}
+```
+
+Tips:
+
+- The script has no idea about resolution, so you may need to zoom in/out to find the sweet spot. Remember that the models are optimised for 0.1 m/px.
+- If you pick a region that's outside the bounds of your monitor, the script will probably segfault - if that happens, double-check the region settings above.
+- Try browsing the web (for example, go to OpenAerialMap and zoom in).
+
 ## Citation
 
-If you use this pipeline for research or commercial work, we would appreciate that you cite (a) the dataset and (b) the release paper as appropriate.
+If you use this pipeline for research or commercial work, we would appreciate it if you cite (a) the dataset and (b) the release paper, as appropriate. We will update the citation with details of the preprint and/or peer-reviewed manuscript when released.
 
 ```latex
-\article{
-
+@unpublished{restortcd,
+  author = "Veitch-Michaelis, Josh and Cottam, Andrew and Schweizer, Daniella and Broadbent, Eben N. and Dao, David and Zhang, Ce and Almeyda Zambrano, Angelica and Max, Simeon",
+  title = "OAM-TCD: A globally diverse dataset of high-resolution tree cover maps",
+  note = "In prep.",
+  month = "6",
+  year = "2024"
 }
 ```
 
+
 ## Contributing
 
 We welcome contributions via pull request. Please note that we enforce the use of several pre-commit hooks, namely:
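One of the tips in the added screen-prediction section warns that an out-of-bounds capture region can crash the script. A minimal sketch of how such a region could be sanity-checked before capture - the `clamp_region` helper and the monitor dimensions are illustrative assumptions, not part of the repository:

```python
def clamp_region(mon, screen_w, screen_h):
    """Clamp an mss-style region dict so the grab stays inside the screen.

    `mon` follows the same {"left", "top", "width", "height"} layout used
    by the script above. The helper is hypothetical, shown only to
    illustrate the bounds check.
    """
    left = max(0, min(mon["left"], screen_w - 1))
    top = max(0, min(mon["top"], screen_h - 1))
    return {
        "left": left,
        "top": top,
        # Shrink the region if it would extend past the screen edges.
        "width": min(mon["width"], screen_w - left),
        "height": min(mon["height"], screen_h - top),
    }

# A 1024x1024 region starting at y=200 does not fit on a 1080-pixel-tall
# display, so its height is clamped to 880.
safe = clamp_region({"left": 0, "top": 200, "width": 1024, "height": 1024}, 1920, 1080)
print(safe)  # {'left': 0, 'top': 200, 'width': 1024, 'height': 880}
```

The clamped dict keeps the same shape as the `mon` dict above, so it can be passed straight to the capture call.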
@@ -124,21 +150,34 @@ Similarly please don't hesitate to suggest new features that you think would be
 
 This repository is released under the Apache 2.0 license, which permits a wide variety of downstream uses.
 
+### OAM-TCD Dataset
+
+For license information about the dataset, see the [dataset card](https://huggingface.co/datasets/restor/tcd).
+
+The majority of the dataset is licensed as CC BY 4.0, with a subset as CC BY-NC 4.0 (train and test) and CC BY-SA 4.0 (test only). These two less permissive image classes constitute around 10% of the dataset.
+
+The dataset DOI is: `10.5281/zenodo.11617167`.
+
 ### Models
 
 Currently our models are released under a CC BY-NC 4.0 license. We are retraining models on _only_ the CC BY 4.0 imagery so that we can confidently use the same license.
 
 Model usage must be attributed under the terms of the CC-BY license variants.
 
-### OAM-TCD Dataset
+#### Mask-RCNN
+
+To train Mask-RCNN (and other instance segmentation models), we use the Detectron2 library from FAIR/Meta, which is licensed as Apache 2.0.
+
+#### Segmentation Models Pytorch (SMP)
 
-For license information about the dataset, see the [dataset card]().
+UNet model implementations use the [SMP library](https://segmentation-models-pytorch.readthedocs.io/), under the MIT license.
 
-The majority of the dataset is licensed as CC-BY 4.0 with a subset as CC BY-NC 4.0 (train and test) and CC BY-SA 4.0 (test only). These two less permissive image classes consititute around 10% of the dataset.
+#### SegFormer
 
-### SegFormer
+The SegFormer architecture from NVIDIA is provided under [a research license](https://huggingface.co/docs/transformers/model_doc/segformer).
 
-The Segformer architecture from NVIDIA is provided under a research license. This does not allow commercial use without permission from NVIDIA - see [here](https://www.nvidia.com/en-us/research/inquiries/) - but you are free to use these models for research. **If you wish to use our models in a commercial setting, we recommend you use the UNet variants which still perform well.**
+This does not allow commercial use without permission from NVIDIA - see [here](https://www.nvidia.com/en-us/research/inquiries/) - but you are free to use these models for research. **If you wish to use our models in a commercial setting, we recommend you use the Mask-RCNN/U-Net variants (or train your own models with your preferred architecture).**
 
 ## Acknowledgements