
IPL-UV/supers2


A Python package for enhancing the spatial resolution of Sentinel-2 satellite images up to 2.5 meters πŸš€



GitHub: https://github.com/IPL-UV/supers2 🌐 PyPI: https://pypi.org/project/supers2/ πŸ› οΈ



Overview 🌍

supers2 is a Python package designed to enhance the spatial resolution of Sentinel-2 satellite images to 2.5 meters using a set of neural network models.

Installation βš™οΈ

Install the latest version from PyPI:

pip install supers2

From GitHub:

pip install git+https://github.com/IPL-UV/supers2.git
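
To verify the installation, a quick import and version check works for either install path (a minimal sketch; importlib.metadata is part of the Python standard library, nothing supers2-specific is assumed):

import importlib.metadata

import supers2  # a successful import confirms the package is installed

# Print the installed version reported by the package metadata
print(importlib.metadata.version("supers2"))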

How to use πŸ› οΈ

Load libraries

import matplotlib.pyplot as plt
import numpy as np
import torch
import cubo

import supers2

Download Sentinel-2 L2A cube

# Create a Sentinel-2 L2A data cube for a specific location and date range
da = cubo.create(
    lat=39.49152740347753,
    lon=-0.4308725142800361,
    collection="sentinel-2-l2a",
    bands=["B02", "B03", "B04", "B05", "B06", "B07", "B08", "B8A", "B11", "B12"],
    start_date="2023-01-01",
    end_date="2023-12-31",
    edge_size=64,
    resolution=10
)
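
cubo returns an xarray.DataArray, so the cube can be inspected before further processing (a minimal sketch; the exact number of dates depends on the query above):

# The cube dimensions are (time, band, y, x)
print(da.dims)   # e.g. ('time', 'band', 'y', 'x')
print(da.shape)  # e.g. (number_of_dates, 10, 64, 64)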

Prepare the data (CPU and GPU usage)

When converting a NumPy array to a PyTorch tensor:

  • GPU: Move the tensor to the GPU with .cuda() or .to("cuda") when one is available; this speeds up inference on large patches or models.

  • CPU: If no GPU is available, PyTorch keeps tensors on the CPU by default, so no transfer call is needed.

Here’s how you can handle both scenarios dynamically:

# Check if CUDA is available, use GPU if possible
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Converting the data to a PyTorch tensor ensures efficient computation and device compatibility, while dividing by 10,000 rescales the Sentinel-2 digital numbers to surface-reflectance values (roughly in the [0, 1] range), which is the input scaling the models expect.

# Select one acquisition date (index 11 along the time axis), convert to NumPy, and scale to reflectance
original_s2_numpy = (da[11].compute().to_numpy() / 10_000).astype("float32")

# Create the tensor and move it to the appropriate device (CPU or GPU)
X = torch.from_numpy(original_s2_numpy).float().to(device)
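
A quick sanity check before inference confirms the layout the models expect, (bands, height, width), and the device the tensor lives on:

# Expected layout: (bands, height, width), float32, on the selected device
print(X.shape)   # torch.Size([10, 64, 64])
print(X.dtype)   # torch.float32
print(X.device)  # cuda:0 if a GPU is available, otherwise cpu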

Download and load the model

import mlstac

# Download the model
mlstac.download(
  file="https://huggingface.co/tacofoundation/supers2/resolve/main/simple_model/mlm.json",
  output_dir="models/CNN_Light_SR",
)

# Load the model from the same directory it was downloaded to
model = mlstac.load("models/CNN_Light_SR").compiled_model()
model = model.to(device)

# Apply the model (inference only, so gradient tracking is disabled)
with torch.no_grad():
    superX = model(X[None]).squeeze(0)
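
Since the model maps 10 m pixels to 2.5 m, the spatial dimensions grow by a factor of 4 (a minimal check, assuming the 64 × 64 patch created above):

# 10 m -> 2.5 m corresponds to 4x upsampling in each spatial dimension
print(X.shape)       # torch.Size([10, 64, 64])
print(superX.shape)  # torch.Size([10, 256, 256])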

The first plot shows the original Sentinel-2 RGB image (10m resolution). The second plot displays the enhanced version with finer spatial details (2.5m resolution) using a lightweight CNN.

fig, ax = plt.subplots(1, 2, figsize=(10, 5))
# The factor of 4 only brightens the reflectance values for display
ax[0].imshow(X[[2, 1, 0]].permute(1, 2, 0).cpu().numpy() * 4)
ax[0].set_title("Original S2")
ax[1].imshow(superX[[2, 1, 0]].permute(1, 2, 0).cpu().numpy() * 4)
ax[1].set_title("Enhanced Resolution S2")
plt.show()

Predict only RGBNIR bands

# Select the 10 m RGB and NIR bands (B04, B03, B02, B08) and super-resolve them
superX = supers2.predict_rgbnir(X[[2, 1, 0, 6]])
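
The RGB channels of the result can be displayed in the same way as before (a minimal sketch, assuming the output preserves the input channel order R, G, B, NIR):

# Channels follow the input order: 0 = Red (B04), 1 = Green (B03), 2 = Blue (B02), 3 = NIR (B08)
rgb = superX[[0, 1, 2]].permute(1, 2, 0).detach().cpu().numpy()
plt.imshow(rgb * 4)  # the factor of 4 only brightens the values for display
plt.title("Enhanced RGBNIR (RGB view)")
plt.show()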

Estimate the Local Attention Map of the model πŸ“Š

kde_map, complexity_metric, robustness_metric, robustness_vector = supers2.lam(
    X=X[[2, 1, 0, 6]].cpu(),  # The RGB + NIR input tensor
    model=model,              # The SR model to analyze (loaded above)
    h=240,                    # The height of the window
    w=240,                    # The width of the window
    window=128,               # The window size
    scales=["1x", "2x", "3x", "4x", "5x", "6x", "7x", "8x"]
)

# Visualize the results
plt.imshow(kde_map)
plt.title("Kernel Density Estimation")
plt.show()

plt.plot(robustness_vector)
plt.title("Robustness Vector")
plt.show()
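
Besides the two plots, supers2.lam also returns two scalar summaries, which can simply be printed:

# Scalar diagnostics returned alongside the KDE map and robustness vector
print(f"Complexity metric: {complexity_metric}")
print(f"Robustness metric: {robustness_metric}")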