🚀 The feature
This request can be split into three parts:

- ensure that any already integrated dataset which provides the information needed for the recognition task (boxes and text labels) can also be used for it (crop the boxes together with their corresponding labels)
- add a section to the documentation for datasets (like the one for models), also split into detection / recognition
- integrate a benchmarking script into references/ (or directly into the detection and recognition training scripts) that follows current papers / commonly used benchmark splits
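For the first part, a minimal sketch of what "crop boxes with corresponding labels" could look like — turning one detection-style sample (image + boxes + text labels) into recognition samples. The function name and field layout here are illustrative assumptions, not docTR's actual API:

```python
from typing import List, Tuple

import numpy as np


def crop_boxes_with_labels(
    img: np.ndarray,
    boxes: np.ndarray,   # shape (N, 4), absolute (xmin, ymin, xmax, ymax)
    labels: List[str],   # one text label per box
) -> List[Tuple[np.ndarray, str]]:
    """Return (crop, label) pairs usable as recognition samples."""
    samples = []
    for (xmin, ymin, xmax, ymax), label in zip(boxes.round().astype(int), labels):
        # Clip to image bounds to guard against slightly out-of-range boxes
        xmin, ymin = max(xmin, 0), max(ymin, 0)
        xmax, ymax = min(xmax, img.shape[1]), min(ymax, img.shape[0])
        # Skip degenerate boxes that collapse to zero width/height
        if xmax > xmin and ymax > ymin:
            samples.append((img[ymin:ymax, xmin:xmax], label))
    return samples
```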
detection:
- TODO
- train: COCO-Text / ...?
- val: IC03/IC13/...?

recognition:
- train: MJSynth/SynthText
- val: SVHN/SVT/IIIT5K/IC03/IC13 (+FUNSD/CORD)
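On the recognition side, the usual paper-style benchmark metrics over such val sets are word-level exact match and character error rate (CER). A small self-contained sketch, assuming predictions and ground-truth labels come as aligned lists of strings (the helper names are mine, not docTR's):

```python
from typing import List


def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def word_accuracy(preds: List[str], gts: List[str]) -> float:
    """Fraction of predictions that exactly match their ground truth."""
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)


def cer(preds: List[str], gts: List[str]) -> float:
    """Total edit distance normalized by total ground-truth length."""
    return sum(levenshtein(p, g) for p, g in zip(preds, gts)) / sum(len(g) for g in gts)
```

Reporting these two numbers per val split would already make results directly comparable with most recognition papers.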
Motivation, pitch
It would be great to get a comparison to other implementations or other OCR applications for research purposes.
This would make the entire library or its implemented models a little more transparent and easier to compare with others.
As a final point, I have to add that it's just great to see when an implementation reaches better benchmarks than others 😅
Additional context
Any feedback or suggestion is very welcome 💯