(Released on December 06, 2017)

- We annotate a point for each English character and correct some recognition annotations. The new annotations can be found here.
- End-to-end annotations of SCUT-CTW1500 have been updated (see data/README.md). Note that the new annotations differ slightly from the original detection annotations, so they should be used for end-to-end evaluation only.
- SCUT-CTW1500 is a text-line based dataset with both English and Chinese instances. If you are interested in word-level English curved text, we highly recommend Total-Text. In addition, the ICDAR 2019 Robust Reading Challenge on Arbitrary-Shaped Text (ArT), which extends SCUT-CTW1500 and Total-Text, was held to stimulate further progress on the arbitrary-shaped text reading task. The competition results of ArT can be found on ICDAR2019-ArT.
- Total-Text and SCUT-CTW1500 are now part of the training set of the largest curved-text dataset, ArT (Arbitrary-Shaped Text dataset). To retain the validity of future benchmarking on both datasets, the test-set images of CTW1500 (their corresponding IDs are provided in CTW1500_ID_vs_ArT_ID.txt) must be removed from the ArT dataset if you intend to leverage ArT as extra training data. We count on the trust of the research community to perform this removal and keep the benchmarking fair (a sketch of the removal step is given below).
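The snippet below is a minimal, hypothetical sketch of that removal step. It assumes CTW1500_ID_vs_ArT_ID.txt stores one `<ctw1500_id> <art_id>` pair per line and that ArT images are named by their ID; check the released file and your local ArT layout before using it.

```python
# Hypothetical sketch: drop CTW1500 test images from a local ArT training copy.
# Assumes CTW1500_ID_vs_ArT_ID.txt holds one "<ctw1500_id> <art_id>" pair per
# line and that ArT images are named "gt_<art_id>.jpg"; verify both against
# the released files before running.
import os

art_dir = "ArT/train_images"              # assumed local ArT image directory
mapping_file = "CTW1500_ID_vs_ArT_ID.txt"

with open(mapping_file) as f:
    art_ids = [line.split()[1] for line in f if len(line.split()) >= 2]

for art_id in art_ids:
    path = os.path.join(art_dir, "gt_%s.jpg" % art_id)
    if os.path.exists(path):
        os.remove(path)
        print("removed %s" % path)
```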
Method | Recall (%) | Precision (%) | Hmean (%) | Publication | TIoU-Hmean (%) | FPS |
---|---|---|---|---|---|---|
Proposed CTD [paper] | 65.2 | 74.3 | 69.5 | PR 2019 | - | |
Proposed CTD+TLOC [paper] | 69.8 | 74.3 | 73.4 | PR 2019 | 47.5 | 13.3 |
SLPR [paper] | 70.1 | 80.1 | 74.8 | arXiv 1801 | - | |
TextSnake [paper][code] | 85.3 | 67.9 | 75.6 | ECCV 2018 | - | |
Qin et al. [paper] | 78.2 | 73.8 | 76.0 | ICDAR 2019 | - | |
CSE [paper] | 76.1 | 78.7 | 77.4 | CVPR 2019 | - | |
LOMO [paper] | 69.6 | 89.2 | 78.4 | CVPR 2019 | - | |
LOMO MS [paper] | 76.5 | 85.7 | 80.8 | CVPR 2019 | - | |
SAE [paper] | 77.8 | 82.7 | 80.1 | CVPR 2019 | - | |
ATRR [paper] | 80.2 | 80.1 | 80.1 | CVPR 2019 | 58.0 | |
AGBL [paper] | 76.6 | 83.9 | 80.1 | SCIS 1912 | - | |
NASK [paper] | 78.3 | 82.8 | 80.5 | ICASSP 2020 | - | 12 |
LSN+CC [paper] | 78.8 | 83.2 | 80.8 | arXiv 1903 | 60.0 | |
SAST [paper] | 77.1 | 85.3 | 81.0 | ACM MM 2019 | - | 27.6 |
ICG [paper] | 79.8 | 82.8 | 81.3 | PR 2019 | - | |
TextField [paper][code] | 79.8 | 83.0 | 81.4 | TIP 2019 | 61.4 | |
ABCNet [paper][code] | 79.1 | 83.8 | 81.4 | CVPR2020 | - | 9.5 |
MSR [paper] | 79.0 | 84.1 | 81.5 | IJCAI 2019 | 61.3 | |
PSENet-1s [paper][code] | 79.7 | 84.8 | 82.2 | CVPR 2019 | 60.6 | 3.9 |
TextMountain [paper] | 83.4 | 82.9 | 83.2 | arXiv 1811 | 64.2 | |
Relation [paper] | 80.9 | 85.8 | 83.3 | ICDAR 2019 | - | |
DB-ResNet-50 [paper][code] | 80.2 | 86.9 | 83.4 | AAAI 2020 | - | 22 |
CRAFT [paper][code] | 81.1 | 86.0 | 83.5 | CVPR 2019 | 61.0 | |
TextDragon [paper] | 82.8 | 84.5 | 83.6 | ICCV 2019 | - | |
TextTubes [paper] | 80.0 | 87.65 | 83.65 | arXiv 1912 | - | |
PSENet_v2 [paper][unofficial code] | 81.2 | 86.4 | 83.7 | ICCV 2019 | - | 39.8 |
ContourNet [paper][code] | 84.1 | 83.7 | 83.9 | CVPR 2020 | - | 4.5 |
SA-TEXT MS [paper] | 85.4 | 83.3 | 84.4 | arXiv 1911 | - | - |
PuzzleNet [paper] | 84.7 | 84.1 | 84.4 | arXiv 2002 | - | - |
PAN Mask R-CNN [paper] | 83.2 | 86.8 | 85.0 | WACV 2019 | 65.2 | |
TextPerception [paper] | 81.8 | 88.8 | 85.2 | AAAI 2020 | - | |
TextCohesion [paper] | 84.7 | 88.0 | 86.3 | arXiv 1904 | - | |
*Note that the training data and backbones of different methods may not be the same, so the comparison is not strictly fair.*
Method | Dataset | E2E-Hmean (%) | Wordspotting-Hmean (%) | Publication |
---|---|---|---|---|
TextDragon [paper] | SynText800k + CTW1500 | 39.7 | - | ICCV 2019 |
TextPerception [paper] | SynText800k + CTW1500 | - | 57.0 | AAAI 2020 |
ABCNet [paper][code] | SynText150k + CTW1500 | 45.2 | - | CVPR 2020 |
We provide a brief evaluation script for researchers to evaluate their own methods on the CTW1500 dataset. Instructions and details are given in tools/ctw1500_evaluation/Readme.md. An easier way is to use the TIoU curved-text evaluation script (the original IoU result reported by the TIoU script is identical to the result from this evaluation script).
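For orientation, the core of such an evaluation is polygon IoU matching between ground truths and detections. Below is a minimal sketch of that step using shapely (installed in the compilation section); the greedy matching and the 0.5 threshold are illustrative simplifications, not the exact logic of the provided script.

```python
# Minimal sketch of polygon-IoU matching, the core of curved-text evaluation.
# Illustrative only; use tools/ctw1500_evaluation or the TIoU script for
# official numbers.
from shapely.geometry import Polygon

def polygon_iou(pts_a, pts_b):
    """IoU of two polygons given as [(x, y), ...] point lists."""
    a, b = Polygon(pts_a), Polygon(pts_b)
    if not a.is_valid or not b.is_valid:
        return 0.0
    inter = a.intersection(b).area
    union = a.union(b).area
    return inter / union if union > 0 else 0.0

def match(gts, dets, thresh=0.5):
    """Greedy one-to-one matching; returns (recall, precision, hmean)."""
    matched, used = 0, set()
    for gt in gts:
        for i, det in enumerate(dets):
            if i not in used and polygon_iou(gt, det) >= thresh:
                matched += 1
                used.add(i)
                break
    recall = matched / float(len(gts)) if gts else 0.0
    precision = matched / float(len(dets)) if dets else 0.0
    hmean = 2 * recall * precision / (recall + precision) if matched else 0.0
    return recall, precision, hmean
```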
Clone the Curve-Text-Detector repository
git clone https://github.com/Yuliang-Liu/Curve-Text-Detector.git --recursive
The SCUT-CTW1500 dataset can be downloaded through the following link:
(https://pan.baidu.com/s/1eSvpq7o PASSWORD: fatf) (BaiduYun. Size = 842Mb)
or (https://1drv.ms/u/s!Aplwt7jiPGKilH4XzZPoKrO7Aulk) (OneDrive)
Unzip the file into ROOT/data/:
a) Train/ - 1000 images.
b) Test/ - 500 images.
c) Each image contains at least one curved text instance.
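A quick sanity check of the unpacked layout (a sketch assuming the images land directly in data/Train/ and data/Test/ as .jpg files; adjust the paths if the archive nests them differently):

```python
# Sanity-check the unpacked dataset layout (assumed paths; adjust if the
# archive unpacks into different subdirectories).
import glob
import os

root = "data"
for split, expected in [("Train", 1000), ("Test", 500)]:
    imgs = glob.glob(os.path.join(root, split, "*.jpg"))
    print("%s: %d images (expected %d)" % (split, len(imgs), expected))
```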
The visualization of the annotated images can be downloaded through the following link:
(https://pan.baidu.com/s/1eR641zG PASSWORD: 5xei) (BaiduYun. Size = 696 Mb).
We use a ResNet-50 model as our pre-trained model, which can be downloaded through the following link:
(https://pan.baidu.com/s/1eSJBL5K PASSWORD: mcic) (Baidu Yun. Size = 102Mb)
or (https://1drv.ms/u/s!Aplwt7jiPGKilHwMsW2N_bfnb0Bx) (OneDrive)
Put the model in ROOT/data/imagenet_models/.
Our model trained on the SCUT-CTW1500 training set can be downloaded through the following link:
(https://pan.baidu.com/s/1gfs5vH5 PASSWORD: 1700) (BaiduYun. Size = 114Mb)
or (https://1drv.ms/u/s!Aplwt7jiPGKilH0rLDFrRof8qmRD) (OneDrive)
Put the model in ROOT/output/.
- test.sh: Download the dataset and our ctd_tloc.caffemodel, then run this file to evaluate our method on the SCUT-CTW1500 test set. Uncomment --vis to visualize the detection results.
- my_train.sh: This file shows how to train on the SCUT-CTW1500 dataset. Download the dataset and the ResNet-50 pre-trained model, then run my_train.sh to start training.
Both training and testing require less than 4 GB of video memory.
- demo.py: (cd tools/) then (python demo.py). This file shows how to test other images. With the provided model, it can produce results like the following.
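If you want to render detections on your own images in the same spirit as demo.py, here is a minimal sketch that draws a curved-text polygon with OpenCV; the coordinates are made up for illustration, and cv2 plus numpy must be available.

```python
# Sketch: draw a curved-text polygon on an image (CTW1500 polygons have 14
# points; 8 are shown here for brevity). Coordinates are illustrative, not
# real model output.
import cv2
import numpy as np

img = cv2.imread("1001.jpg")  # any test image
if img is None:
    raise SystemExit("image not found")
poly = np.array([[120, 80], [180, 70], [240, 68], [300, 75],      # top side
                 [300, 115], [240, 108], [180, 110], [120, 120]],  # bottom side
                dtype=np.int32)
cv2.polylines(img, [poly], isClosed=True, color=(0, 0, 255), thickness=2)
cv2.imwrite("1001_vis.jpg", img)
```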
The train and test files are placed under model/ctd/smooth_effect/, and both the training and testing procedures are the same as above.
To visualize CTD+TLOC, simply uncomment ctd at the end of test.prototxt, and vice versa. Below are the first three images in our test set:
If you are interested, you can train your own model for testing. Because training does not take much time, we do not upload this model (of course, you can email me for it).
For the labeling tool and specific details of the ground truths, please refer to data/README.md.
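As a rough illustration of the detection ground truths (a sketch assuming the 32-integer-per-line format, a bounding box followed by 14 point offsets; data/README.md is the authoritative reference):

```python
# Sketch: parse one CTW1500 detection ground-truth line into an absolute
# 14-point polygon. Assumes the line is
#   xmin,ymin,xmax,ymax,ox1,oy1,...,ox14,oy14
# with offsets relative to (xmin, ymin); check data/README.md for the
# authoritative format.
def parse_gt_line(line):
    vals = [int(v) for v in line.strip().split(",")]
    xmin, ymin = vals[0], vals[1]
    offsets = vals[4:]
    return [(xmin + offsets[i], ymin + offsets[i + 1])
            for i in range(0, len(offsets), 2)]
```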
If you find our method or the dataset useful for your research, please cite
@article{liu2019curved,
title={Curved scene text detection via transverse and longitudinal sequence connection},
author={Liu, Yuliang and Jin, Lianwen and Zhang, Shuaitao and Luo, Canjie and Zhang, Sheng},
journal={Pattern Recognition},
volume={90},
pages={337--345},
year={2019},
publisher={Elsevier}
}
- Clone this repository. ROOT refers to the directory where you clone it.
- cd ROOT/caffe/ and use your own Makefile.config to compile (make all && make pycaffe). If you are using Ubuntu 14.04, you may need to modify Makefile line 181 from (hdf5_serial_hl hdf5_serial) to (hdf5 hdf5_hl).
- cd ROOT/lib && make (based on Python 2).
- pip install shapely (enables computing polygon intersections).
Suggestions and opinions on this dataset (both positive and negative) are greatly welcomed. Please contact the authors by sending an email to liu.yuliang@mail.scut.edu.cn.
The SCUT-CTW1500 database is free to the academic community for research purposes only.
For commercial use, please contact Dr. Lianwen Jin: eelwjin@scut.edu.cn.
Copyright 2017, Deep Learning and Vision Computing Lab, South China University of Technology. http://www.dlvc-lab.net