This project leverages satellite SAR images from the Copernicus dataset and a SegNet deep learning model to detect oil spills and calculate their size and surface area. It also includes a web platform as a prototype for visualizing and alerting response teams about oil spill incidents.
Video Demo Link
This project aims to address the environmental challenge of oil spills by combining AI and a web platform:
- AI Model:
- Detect oil spills in Copernicus SAR images by generating segmentation masks with a SegNet deep learning model, trained against Ground Truth Masks (GTM).
- Calculate the size and surface area of detected spills.
- Web Platform:
- Display detected oil spills on an interactive map.
- Send real-time alerts to response teams.
- Maintain a history of past oil spills for analysis.
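Once a binary spill mask is available, the surface-area calculation mentioned above reduces to counting spill pixels and multiplying by the ground area each pixel covers. The sketch below is a minimal illustration with a hypothetical 5x5 mask and an assumed 10 m pixel spacing (Sentinel-1 GRD products are commonly resampled to roughly that spacing, but the actual value depends on the product):

```python
import numpy as np

# Hypothetical 5x5 binary ground-truth mask (1 = oil spill pixel).
mask = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

PIXEL_SPACING_M = 10.0  # assumed ground sampling distance in metres

spill_pixels = int(mask.sum())                 # number of spill pixels
area_m2 = spill_pixels * PIXEL_SPACING_M ** 2  # surface area in square metres
area_km2 = area_m2 / 1e6

print(spill_pixels, area_m2, area_km2)  # 7 pixels -> 700.0 m^2 -> 0.0007 km^2
```

In a real pipeline the mask would come from the model's prediction and the pixel spacing from the product metadata, not hard-coded constants.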
The project consists of two main parts:
- Dataset: SAR image data from the European Space Agency (ESA) via the Copernicus Open Access Hub.
- Model: SegNet-based deep learning architecture for semantic segmentation.
- Output: Ground truth masks for oil spills, spill size, and surface area calculations.
View on Kaggle
- Frontend: Built with React.js for user interaction.
- Clone the repository:
git clone https://github.com/AhmedTrb/AESS-challenge-ENSI-SB.git
cd AESS-challenge-ENSI-SB
- Set up the environment for the AI model:
pip install -r requirements.txt
- Set up the web platform:
cd web-platform
npm install
- Running the AI model:
- Importing necessary libraries.
- Loading and processing data.
- Splitting the dataset.
- Loading pretrained model weights.
- Testing the model and displaying predictions.
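The data-loading and splitting steps above can be sketched as follows. This is a minimal stand-in using random arrays with hypothetical shapes (the real notebook loads the SAR tiles and GTMs from disk), showing a simple shuffled 80/20 train/test split:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for the real SAR images and ground-truth masks
# (hypothetical shapes; the actual dataset uses larger tiles).
images = rng.random((100, 64, 64, 1))
masks = (rng.random((100, 64, 64, 1)) > 0.5).astype(np.float32)

# Shuffle once, then take an 80/20 train/test split.
idx = rng.permutation(len(images))
split = int(0.8 * len(images))
train_idx, test_idx = idx[:split], idx[split:]

x_train, y_train = images[train_idx], masks[train_idx]
x_test, y_test = images[test_idx], masks[test_idx]

print(x_train.shape, x_test.shape)  # (80, 64, 64, 1) (20, 64, 64, 1)
```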
- Running the web platform:
cd web-platform
npm run dev
Open the platform in your browser at http://localhost:3000
In this project, a Conditional Generative Adversarial Network (CGAN) is used for oil spill detection by generating segmentation masks from SAR images. The CGAN consists of two main components: a generator and a discriminator. The generator is a U-Net-like architecture that predicts oil spill masks from input images, while the discriminator is a PatchGAN model that evaluates whether the predicted mask is realistic by comparing it with the ground truth.
The dataset is provided by this paper [1] and consists of preprocessed raw images and Ground Truth Masks (GTM). It was collected by the Sentinel-1 mission satellites, which are equipped with a C-band Synthetic Aperture Radar (SAR) system and provide Level-1 Ground Range Detected (GRD) data.
- Raw Images: Contains the original SAR images.
- Ground Truth Masks (GTM): Corresponding binary masks indicating the oil spill regions.
The generator is a Convolutional Neural Network (CNN) that consists of an encoder-decoder structure to process the input images and generate segmentation masks.
- Encoder: Uses convolutional layers with down-sampling to extract features.
- Decoder: Uses transposed convolutional layers to reconstruct the segmentation masks.
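To make the encoder's down-sampling concrete, the arithmetic below traces how feature-map sizes shrink through strided convolutions. The kernel/stride/padding values (4, 2, 1) and the four-stage depth are illustrative assumptions, not the project's exact configuration; the decoder's transposed convolutions mirror this progression in reverse:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Output size of a strided convolution (halves the input when k=4, s=2, p=1)."""
    return (size + 2 * padding - kernel) // stride + 1

size = 256  # hypothetical input tile width/height
encoder_sizes = [size]
for _ in range(4):  # four down-sampling stages (illustrative depth)
    size = conv_out(size)
    encoder_sizes.append(size)

print(encoder_sizes)  # [256, 128, 64, 32, 16]
```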
The discriminator evaluates whether a given pair of input image and mask contains the real ground-truth mask or a generated one.
During training, the generator learns to create accurate masks by minimizing a combination of adversarial loss (to fool the discriminator) and L1 loss (to ensure similarity to the ground truth). This adversarial setup allows the CGAN to generate high-quality segmentation masks, even for complex SAR image patterns.
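The combined generator objective described above can be sketched with plain NumPy. The L1 weight of 100 is a common choice in pix2pix-style CGANs and is assumed here rather than taken from the project's code; the toy inputs are likewise hypothetical:

```python
import numpy as np

LAMBDA = 100.0  # assumed L1 weight, as in pix2pix-style CGANs

def generator_loss(disc_pred_on_fake, fake_mask, real_mask, eps=1e-7):
    """Adversarial BCE term (generator wants D's output near 1) plus weighted L1 term."""
    p = np.clip(disc_pred_on_fake, eps, 1 - eps)
    adv = -np.mean(np.log(p))                    # BCE against an all-ones target
    l1 = np.mean(np.abs(real_mask - fake_mask))  # pixel-wise similarity to ground truth
    return adv + LAMBDA * l1

# Toy example: discriminator is half-fooled, masks differ on 1 of 4 pixels.
d_out = np.array([0.5])
fake = np.array([1.0, 0.0, 1.0, 0.0])
real = np.array([1.0, 0.0, 1.0, 1.0])
print(generator_loss(d_out, fake, real))  # -ln(0.5) + 100 * 0.25 ≈ 25.69
```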
[1] Deep neural network for oil spill detection using Sentinel-1 data: application to Egyptian coastal regions by Samira Ahmed, Tamer ElGharbawi, Mahmoud Salah & Mahmoud El-Mewafi link
[2] Mapping oil pollution in the Gulf of Suez in 2017–2021 using Synthetic Aperture Radar link
[3] Mediterranean Sea region briefing - The European environment — state and outlook 2015 link
[4] Global maritime traffic link
[5] More than 750 oil slicks spotlight pollution risks in Mediterranean Sea link
[6] Mediterranean Sea Chronic Oil Pollution Analysis: July 2020-January 2024 link