_Example predictions using one of our SegFormer-based models to predict canopy cover._
The repository supports Mask-RCNN for instance segmentation and a variety of semantic segmentation models; we recommend SegFormer as a default, but we also provide trained U-Nets, which are more permissively licensed. Models are downloaded automatically the first time you run prediction, so there is no need to handle checkpoints manually. You can, of course, fine-tune your own models using the pipeline and provide local paths if you need to.
Have a look at our [model zoo](https://huggingface.co/restor).
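For example, zoo checkpoints can be loaded directly with the `transformers` library. Below is a minimal sketch; the model id `restor/tcd-segformer-mit-b0` is our assumed example of a zoo entry, so browse the organisation page for the checkpoints actually published:

```python
# Sketch: load a SegFormer checkpoint from the Hugging Face Hub and run
# it on one image. The model id is an assumed example; see
# https://huggingface.co/restor for the published checkpoints.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "restor/tcd-segformer-mit-b0"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("aerial_tile.tif").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_classes, H/4, W/4)
```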
Running the pipeline on the test image in semantic and instance segmentation modes saves results to the output folders, which include geo-referenced canopy masks, shapefiles with detected trees and canopy regions, and overlaid visualisations of the predictions.
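These outputs can be post-processed with standard geospatial tooling; here is a small sketch (the filenames are placeholders, substitute whatever your output folder actually contains):

```python
# Sketch: inspect pipeline outputs with rasterio and geopandas.
# Filenames below are placeholders for your actual output files.
import rasterio
import geopandas as gpd

with rasterio.open("output/canopy_mask.tif") as src:
    mask = src.read(1)       # first band of the geo-referenced mask
    print(src.crs, src.res)  # CRS and pixel resolution

trees = gpd.read_file("output/trees.shp")  # detected tree/canopy polygons
print(f"{len(trees)} polygons detected")
```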
### Screen prediction
We provide a fun demo script which will run a model on a live screen capture. `screen_predict.py` lives in the `tools` folder:
```bash
pip install opencv-python mss
python tools/screen_predict.py semantic
```
The script works best on dual-monitor setups, where you can view the output on one screen and move around on the other, but it will work just fine on smaller screens. You may need to adjust the grab dimensions in the script to suit your hardware (see the sketch after this list):

- The script has no idea about resolution, so you may need to zoom in/out to find the sweet spot. Remember that the models are optimised for 0.1 m/px.
- If you pick a region that's outside the bounds of your monitor, the script will probably segfault; if that happens, double-check the region settings above.
- Try browsing the web (for example, go on OpenAerialMap and zoom in).
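To make the "grab dimensions" concrete, the capture loop boils down to something like the sketch below (our own minimal version using `mss` and OpenCV; the variable names are ours, so check `tools/screen_predict.py` for the actual settings):

```python
# Minimal sketch of a screen-capture loop (hypothetical variable names;
# see tools/screen_predict.py for the real implementation).
import cv2
import mss
import numpy as np

# Adjust this region to fit *inside* your monitor, otherwise the
# capture may crash (see the note about segfaults above).
region = {"top": 0, "left": 0, "width": 1024, "height": 1024}

with mss.mss() as sct:
    while True:
        frame = np.array(sct.grab(region))[:, :, :3]  # BGRA -> BGR
        # ...run prediction on `frame` and draw the mask here...
        cv2.imshow("tcd", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

cv2.destroyAllWindows()
```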
## Citation
If you use this pipeline for research or commercial work, we would appreciate it if you cite (a) the dataset and (b) the release paper, as appropriate. We will update the citation with details of the preprint and/or peer-reviewed manuscript when released.
```latex
@unpublished{restortcd,
  author = "Veitch-Michaelis, Josh and Cottam, Andrew and Schweizer, Daniella and Broadbent, Eben N. and Dao, David and Zhang, Ce and Almeyda Zambrano, Angelica and Max, Simeon",
  title = "OAM-TCD: A globally diverse dataset of high-resolution tree cover maps",
  note = "In prep.",
  month = "6",
  year = "2024"
}
```
## Contributing
We welcome contributions via pull request. Please note that we enforce the use of several pre-commit hooks (defined in the repository's `.pre-commit-config.yaml`).
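If you haven't used pre-commit before, a typical workflow looks like this:

```bash
pip install pre-commit
pre-commit install          # register the git hook in your clone
pre-commit run --all-files  # run every hook against the whole repo
```

Once installed, the hooks run automatically on each commit.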
## License
This repository is released under the Apache 2.0 license, which permits a wide variety of downstream uses.
### OAM-TCD Dataset
For license information about the dataset, see the [dataset card](https://huggingface.co/datasets/restor/tcd).
The majority of the dataset is licensed as CC BY 4.0, with a subset as CC BY-NC 4.0 (train and test) and CC BY-SA 4.0 (test only). These two less permissive image classes constitute around 10% of the dataset.
The dataset DOI is: `10.5281/zenodo.11617167`.
### Models
Currently our models are released under a CC BY-NC 4.0 license. We are retraining models on _only_ the CC BY 4.0 imagery so that we can release them under that same, more permissive, license.

Model usage must be attributed under the terms of the relevant CC BY license variant.
#### Mask-RCNN
To train Mask-RCNN (and other instance segmentation models), we use the Detectron2 library from FAIR/Meta, which is licensed under Apache 2.0.
#### Segmentation Models Pytorch (SMP)
U-Net model implementations use the [SMP library](https://segmentation-models-pytorch.readthedocs.io/), which is released under the MIT license.
#### SegFormer
The SegFormer architecture from NVIDIA is provided under [a research license](https://huggingface.co/docs/transformers/model_doc/segformer).
This does not allow commercial use without permission from NVIDIA - see [here](https://www.nvidia.com/en-us/research/inquiries/) - but you are free to use these models for research. **If you wish to use our models in a commercial setting, we recommend you use the Mask-RCNN/U-Net variants (or train your own models with your preferred architecture).**