- Other refs:
- Limitations of existing methods:
- Over-smoothing of non-rain regions: because rain streaks intrinsically overlap with background texture patterns, most methods tend to remove texture details in non-rain regions
- The degradation caused by rain is complex, and current rain models are insufficient to cover some important factors
- Spatial contextual information in larger regions, which has been proven to be useful for rain removal, is rarely used
- Our method:
- Step 1: introduce novel region-dependent rain models
- Rain-streak binary map, where 1 indicates the presence of individually visible rain streaks at a pixel, and 0 otherwise
- Model rain streak accumulation and the various shapes and directions of overlapping streaks to simulate heavy rain
- Step 2: construct a deep network that jointly detects and removes rain
- Detect rain-streak regions, then use them to constrain rain removal
- Capable of performing an adaptive operation on rain and non-rain regions, preserving richer details
- Step 3: Propose a contextualized dilated network to enlarge the receptive field --> retrieve more contextual information
- Step 4: Propose a recurrent rain detection and removal network that progressively removes rain streaks --> restore images captured in environments with both rain accumulation and various rain streak directions (see the sketch after this list)
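A minimal PyTorch-style sketch of how steps 2-4 could fit together, assuming one shared stage with three parallel dilated-convolution paths (dilations 1/2/3), a mask head for R, a streak head for S, and a small recurrent loop; the module names, channel sizes, and number of iterations are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn


class ContextualizedDilatedBlock(nn.Module):
    """Parallel dilated convolutions aggregate context over a larger
    receptive field without shrinking the feature map (illustrative layout)."""
    def __init__(self, channels=32):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 3)  # growing dilation -> growing receptive field
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        feats = [torch.relu(conv(x)) for conv in self.paths]
        return torch.relu(self.fuse(torch.cat(feats, dim=1)))


class JointDetectionRemovalStage(nn.Module):
    """One stage: contextual features, then joint prediction of the
    rain-streak binary map R and the streak layer S for the current input."""
    def __init__(self, channels=32):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.context = ContextualizedDilatedBlock(channels)
        self.rain_mask = nn.Conv2d(channels, 1, 3, padding=1)  # R head
        self.streaks = nn.Conv2d(channels, 3, 3, padding=1)    # S head

    def forward(self, o):
        f = self.context(torch.relu(self.embed(o)))
        r = torch.sigmoid(self.rain_mask(f))  # where streaks are individually visible
        s = self.streaks(f)                   # estimated streak layer
        b = o - r * s                         # adaptive: only rain regions are altered
        return b, r, s


def recurrent_removal(stage, o, n_iters=3):
    """Progressively peel off overlapping streak layers: each pass removes
    what it can detect and feeds the cleaner estimate to the next pass."""
    b = o
    for _ in range(n_iters):
        b, _, _ = stage(b)
    return b
```

Each pass subtracts streaks only where the predicted mask R is high, which is what lets the network act differently on rain and non-rain regions, and the recurrence handles overlapping streaks of different directions one layer at a time.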
- Step 1: introduce novel region-dependent rain models
- Rain streak accumulation model
- O: captured image with rain
- B: background scene without rain streaks
- S: rain streak layer
- R: region-dependent variable R (binary mask) to indicate the location of individually visible rain streaks
- A: global atmospheric light
- alpha: scene transmission (a synthesis sketch follows below)
- Accumulation model: $$O = \alpha \Big(B + \sum_{t=1}^{s} \tilde{S}_t R\Big) + (1-\alpha)A$$
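A small NumPy sketch of how this accumulation model could be used to synthesize a heavy-rain image; the function name, value ranges, and the choice of a scalar alpha (it could just as well be a per-pixel transmission map) are assumptions for illustration:

```python
import numpy as np


def synthesize_heavy_rain(B, streak_layers, R, A, alpha):
    """Render O = alpha * (B + sum_t S_t * R) + (1 - alpha) * A.

    B:             clean background, shape (H, W, 3), values in [0, 1]
    streak_layers: list of s streak layers S_t with varied shapes/directions
    R:             binary map, shape (H, W, 1); 1 where streaks are individually visible
    A:             global atmospheric light, e.g. np.array([0.9, 0.9, 0.9])
    alpha:         scene transmission in [0, 1]; lower values give stronger accumulation
    """
    S_sum = np.sum(streak_layers, axis=0)            # overlapping streak directions
    O = alpha * (B + S_sum * R) + (1.0 - alpha) * A  # region-dependent rain model
    return np.clip(O, 0.0, 1.0)
```

With a binary R, the atmospheric term weighted by (1 - alpha) veils the whole image (rain accumulation), while individually visible streaks are injected only where R = 1, matching the region-dependent formulation above.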