This code was sourced from https://github.com/advimman/lama.
Specifically, bin/predict_single.py includes a wrapper class for the model, which streamlines the inpainting process for a single image and mask.
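As background for the wrapper's inputs, here is a minimal sketch (illustrative names only, not the repo's actual API): LaMa-style inpainting consumes an RGB image together with a binary mask, where nonzero mask pixels mark the region the model should fill in.

```python
import numpy as np

# Illustrative input shapes only -- these names are not the repo's API.
# LaMa-style inpainting takes an RGB image plus a binary mask; nonzero
# mask pixels mark the region the model should fill in.
height, width = 256, 256
image = np.zeros((height, width, 3), dtype=np.uint8)  # RGB image
mask = np.zeros((height, width), dtype=np.uint8)      # 0 = keep this pixel
mask[64:128, 64:128] = 255                            # 255 = inpaint this region
```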
- Download the model checkpoint from here
- Move best.ckpt into the assets/lama_checkpoint folder, so the file path is assets/lama_checkpoint/best.ckpt
- Create a virtual environment (see the next section for details)
- Activate the virtual environment
- Run example_usage.py or your own implementation.
Running this repository locally requires a virtual environment. If you already have one, just run 'pip install -r requirements.txt' inside it.
- You can create a Python virtual environment by running 'python -m venv .venv' in your project directory, and activate it by running 'source ./.venv/bin/activate'.
- Alternatively, you can create a conda environment by running 'conda create --name env' and activate it by running 'conda activate env'.
- Once your environment is active, run 'pip install -r requirements.txt' to install the required libraries.