"Deep Context-Aware Descreening and Rescreening of Halftone Images" paper implementation.
The numbers reported in the paper are not reproducible at the moment.
This project concerns the automated descreening process. Descreening is the task of reconstructing the continuous-tone image from its halftoned version (halftoning is a mandatory step when images interact with devices such as printers, scanners, and monitors) while minimizing the amount of data loss.
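To make the forward problem concrete, the snippet below is a minimal, illustrative halftoning example (plain NumPy Floyd-Steinberg error diffusion, not code from this repository): each pixel is binarized and the quantization error is diffused to its neighbours, producing the binary dot patterns that descreening then has to invert.

```python
# Illustrative only: Floyd-Steinberg error diffusion, one classic halftoning
# algorithm. Descreening is the inverse problem: recovering the smooth image
# from such binary dot patterns.
import numpy as np

def floyd_steinberg_halftone(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image in [0, 1] by diffusing quantization error."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# Example: halftone a smooth gradient; the output contains only 0s and 1s.
halftone = floyd_steinberg_halftone(np.linspace(0, 1, 64 * 64).reshape(64, 64))
```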
- The first and only fully open-source implementation of this paper, written in PyTorch
- The implementation can be divided into the following separate projects (a sketch of how the networks might compose is given after this list):
  - CoarseNet: A modified U-Net architecture that acts as a low-pass filter to remove halftone patterns
  - DetailsNet: A deep CNN generator and two discriminators, trained simultaneously, that restore fine detail and improve image quality
  - EdgeNet: A simple CNN model that extracts Canny edge features to preserve details
  - ObjectNet: A modified "Pyramid Scene Parsing Network" that returns only the 25 major segmentation classes out of the original 150
  - Halftoning-Algorithms: Implementations of several halftoning algorithms from recent digital color halftoning books, used to generate ground truth
  - Places365-Preprocessing: A custom, extensible Dataset implementation that lazily loads the very large Places365 dataset (see the Dataset sketch after this list)
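As a rough picture of how these networks might fit together at inference time, here is a hypothetical composition. It assumes each network is a `torch.nn.Module` mapping image-like tensors to image-like tensors; the actual module interfaces, channel counts, and feature fusion in this repository may differ.

```python
# Hypothetical inference sketch; not the repository's actual wiring.
import torch
import torch.nn as nn

class DescreenPipeline(nn.Module):
    def __init__(self, coarse: nn.Module, edge: nn.Module,
                 objects: nn.Module, details: nn.Module):
        super().__init__()
        self.coarse = coarse      # U-Net-like low-pass filter (CoarseNet)
        self.edge = edge          # Canny-edge feature extractor (EdgeNet)
        self.objects = objects    # 25-class segmentation features (ObjectNet)
        self.details = details    # generator restoring fine detail (DetailsNet)

    def forward(self, halftone: torch.Tensor) -> torch.Tensor:
        coarse = self.coarse(halftone)        # remove the halftone pattern
        edges = self.edge(coarse)             # edge cues to preserve details
        segments = self.objects(coarse)       # semantic/context cues
        # Assumed fusion: concatenate along the channel dimension.
        features = torch.cat([coarse, edges, segments], dim=1)
        return self.details(features)         # refined, descreened image
```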
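For the Places365 lazy-loading idea, the following is a minimal sketch of a PyTorch `Dataset` that keeps only file paths in memory and decodes images on access. The directory layout, file extension, and transform handling are assumptions, not the repository's actual preprocessing code.

```python
# Minimal lazy-loading Dataset sketch (assumed file layout and transforms).
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class LazyImageDataset(Dataset):
    """Keeps only file paths in memory; images are decoded on access."""

    def __init__(self, root: str, transform=None):
        self.paths = sorted(Path(root).rglob("*.jpg"))  # assumed extension
        self.transform = transform

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, index: int):
        image = Image.open(self.paths[index]).convert("RGB")
        return self.transform(image) if self.transform else image
```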
Tae-hoon Kim and Sang Il Park. 2018. Deep Context-Aware Descreening and Rescreening of Halftone Images. ACM Trans. Graph. 37, 4, Article 48 (August 2018), 12 pages. DOI
Nikan Doosti. 2021. Nikronic/Deep-Halftoning: v0.1-Zenodo-pre-alpha. Zenodo. https://doi.org/10.5281/zenodo.5651805