This repository contains the Python implementation of the methods from the paper "Detecting critical treatment effect bias in small subgroups" (UAI 2024).
Motivation: Randomized trials are the gold standard for informed decision-making in medicine, yet they may not always capture the full scope of the population in clinical practice. Observational studies are usually more representative of the patient population but are susceptible to various biases, such as those arising from hidden confounding.
Benchmarking observational studies has become a popular strategy to assess the reliability of observational data when a randomized trial is available. The main idea behind this approach is first to emulate the procedures adopted in the randomized trial within the observational study, for example, using the Target Trial Emulation framework. Then, the treatment effect estimates from the emulated observational study are compared with those from the randomized trial. If the estimates are similar, we may be willing to trust the observational study results for patient populations where the randomized data is insufficient.
Contribution: To support the benchmarking framework, we propose a novel statistical test to compare treatment effect estimates between randomized and observational studies. In particular, our test satisfies two properties identified as essential for effective benchmarking: granularity and tolerance. Granularity allows the detection of bias at a subgroup or individual level, thereby improving the power of benchmarking. Tolerance permits the acceptance of studies with negligible bias that does not impact decision-making, thereby reducing false rejections. Further, we can use our test to estimate an asymptotically valid lower bound on the maximum bias strength for any individual.
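As a rough sketch of what granularity and tolerance mean formally (simplified notation of our own, not the exact formulation in the paper), the test can be thought of as comparing treatment effect functions from the two studies up to a user-specified tolerance $\delta$:

$$
H_0:\; \sup_{x}\,\bigl|\tau_{\mathrm{obs}}(x)-\tau_{\mathrm{rct}}(x)\bigr| \le \delta
\qquad \text{vs.} \qquad
H_1:\; \sup_{x}\,\bigl|\tau_{\mathrm{obs}}(x)-\tau_{\mathrm{rct}}(x)\bigr| > \delta,
$$

where $\tau_{\mathrm{rct}}$ and $\tau_{\mathrm{obs}}$ denote the conditional treatment effects identified by the randomized and observational data, respectively. Rejecting $H_0$ signals a bias exceeding the tolerance $\delta$ for at least some subgroup, and the same quantity underlies the lower bound on the maximum bias strength mentioned above.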
Dependencies:
- Python 3.9.18
- NumPy 1.26.2
- SciPy 1.11.4
- scikit-learn 1.1.3
- pandas 2.1.4
- scikit-uplift 0.5.1
- Optax 0.1.7
- JAX 0.4.23
- Flax 0.7.5
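If you prefer pip over conda for dependency management, the pinned versions above can be installed directly (a convenience command assembled from the list; the repository's setup files remain the authoritative source of dependency constraints):

```bash
pip install numpy==1.26.2 scipy==1.11.4 scikit-learn==1.1.3 pandas==2.1.4 scikit-uplift==0.5.1 optax==0.1.7 jax==0.4.23 flax==0.7.5
```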
To set up your environment and install the package, follow these steps:
Start by creating a Conda environment with Python 3.9.18. This step ensures your package runs in an environment with the correct Python version.
```bash
conda create -n myenv python=3.9.18
conda activate myenv
```
There are two ways to install the package:
- Local Installation:
  Start by cloning the repository from GitHub. Then, upgrade `pip` to its latest version and use the local setup files to install the package. This method is ideal for development or when you have the source code.

  ```bash
  git clone https://github.com/jaabmar/kernel-test-bias.git
  cd kernel-test-bias
  pip install --upgrade pip
  pip install -e .
  ```
- Direct Installation from GitHub (Recommended):
You can also install the package directly from GitHub. This method is straightforward and ensures you have the latest version.
  ```bash
  pip install git+https://github.com/jaabmar/kernel-test-bias.git
  ```
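After installing with either method, you can sanity-check that the heavier dependencies were resolved correctly (this only verifies the listed dependencies, not the package's own modules):

```bash
python -c "import jax, flax, optax; print(jax.__version__, flax.__version__, optax.__version__)"
```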
The `src` folder contains the core code of the package, organized as follows:

- `datasets`: This directory includes modules for loading and preprocessing datasets.
  - `bias_models.py`: Defines the bias models used in the paper.
  - `hillstrom.py`: Contains functions specific to the Hillstrom dataset processing.
- `tests`: Includes testing procedures for bias discussed in the paper.
  - `ate_test.py`: An implementation of the average treatment effect (ATE) test that allows for tolerance, inspired by De Bartolomeis et al. (a toy illustration of the tolerance idea follows this list).
  - `kernel_test.py`: Our proposed kernel-based test that offers both granularity and tolerance.
  - `utils_test.py`: Utility functions to support testing procedures.
- `experiment_utils.py`: Utility functions that facilitate the execution of experiments.
- `experiment.py`: Executes example experiments as per the paper, with parameters that can be customized by the user.
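As a self-contained toy illustration of the tolerance idea behind these tests (deliberately much simpler than the repository's ATE and kernel tests, and not their implementation), the snippet below simulates a randomized study and a confounded observational study, then checks whether the gap between their ATE estimates significantly exceeds a tolerance `delta`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000

# Randomized study: treatment assigned by a fair coin, no confounding.
x_rct = rng.normal(size=n)                      # observed covariate
t_rct = rng.binomial(1, 0.5, size=n)            # randomized treatment
y_rct = 1.0 * t_rct + x_rct + rng.normal(size=n)

# Observational study: a hidden confounder u drives both treatment and outcome,
# biasing the naive ATE estimate upward.
u = rng.normal(size=n)
x_obs = rng.normal(size=n)
t_obs = rng.binomial(1, 1 / (1 + np.exp(-u)))   # treatment depends on hidden u
y_obs = 1.0 * t_obs + x_obs + u + rng.normal(size=n)

def ate_and_se(y, t):
    """Difference-in-means ATE estimate and its standard error."""
    y1, y0 = y[t == 1], y[t == 0]
    ate = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return ate, se

ate_rct, se_rct = ate_and_se(y_rct, t_rct)
ate_obs, se_obs = ate_and_se(y_obs, t_obs)

# Toy tolerance test: only flag the observational study if the ATE gap
# is significantly larger than the user-specified tolerance delta.
delta = 0.1
gap = abs(ate_obs - ate_rct)
se_gap = np.sqrt(se_rct**2 + se_obs**2)
z = (gap - delta) / se_gap
p_value = 1 - stats.norm.cdf(z)
print(f"ATE gap = {gap:.3f}, p-value for 'bias > {delta}' = {p_value:.4f}")
```

The repository's tests go further: the comparison happens at the level of subgroups or individuals rather than a single ATE, which is what provides granularity.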
To run experiments using `experiment.py`, follow these instructions:
- Activate Your Environment: Ensure you have activated the Conda environment or virtual environment where the package is installed.
- Run the Script: From the terminal, navigate to the `src` directory where `experiment.py` is located, and run the following command:

  ```bash
  python experiment.py --test_type [TYPE] --bias_model [MODEL] --user_shift [SHIFT] ...
  ```
Replace [TYPE], [MODEL], [SHIFT], etc., with your desired values.
Example:
```bash
python experiment.py --test_type kernel_test --bias_model scenario_1 --user_shift 60.0 --epochs 2000 --lr 0.1
```
For a complete list of configurable parameters and their descriptions, consult the argument parser setup in `experiment.py`.
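If you want to run several configurations in one go, you can script the sweep around `experiment.py`. The values swept below are purely illustrative (the flags are taken from the example above; the shift values are an arbitrary choice of ours):

```python
import subprocess

# Illustrative sweep over tolerance values (--user_shift) for the kernel test.
# All flags below appear in the example command above; other defaults are left untouched.
for shift in [20.0, 40.0, 60.0]:
    subprocess.run(
        [
            "python", "experiment.py",
            "--test_type", "kernel_test",
            "--bias_model", "scenario_1",
            "--user_shift", str(shift),
            "--epochs", "2000",
            "--lr", "0.1",
        ],
        check=True,
    )
```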
We welcome contributions to improve this project. Here's how you can contribute:
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
For any inquiries, please reach out:
- Javier Abad Martinez - javier.abadmartinez@ai.ethz.ch
- Piersilvio de Bartolomeis - pdebartol@ethz.ch
- Konstantin Donhauser - konstantin.donhauser@ai.ethz.ch
If you find this code useful, please consider citing our paper:
```bibtex
@inproceedings{debartolomeis2024detecting,
  title     = {Detecting critical treatment effect bias in small subgroups},
  author    = {De Bartolomeis, Piersilvio and Abad, Javier and Donhauser, Konstantin and Yang, Fanny},
  booktitle = {The 40th Conference on Uncertainty in Artificial Intelligence},
  year      = {2024}
}
```