Systematic analysis of the impact of label noise correction on ML Fairness

This Python package implements the empirical methodology proposed in [1] for systematically evaluating how effective label noise correction techniques are at ensuring the fairness of models trained on biased datasets. The methodology works by manipulating the amount of label noise injected into the data, and it can be applied to fairness benchmarks as well as to standard ML datasets. Experiment tracking is done with mlflow.
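
The snippet below is a minimal, purely illustrative sketch of the core idea (it does not use this package's API): flip labels at a controlled rate, train a classifier on the clean and on the noisy labels, and compare a simple fairness metric. The dataset, model, and metric here are placeholders chosen only to make the example self-contained.

```python
# Illustrative sketch of the methodology (hypothetical example, not this package's API).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: features, a binary label, and a binary sensitive attribute.
n = 2000
sensitive = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Manipulate the amount of label noise: flip each label with probability `noise_rate`.
noise_rate = 0.2
flip = rng.random(n) < noise_rate
y_noisy = np.where(flip, 1 - y, y)

def demographic_parity_difference(y_pred, group):
    # Absolute difference in positive prediction rates between the two groups.
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Compare fairness of models trained on clean vs. noisy labels.
for name, labels in [("clean", y), ("noisy", y_noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    preds = model.predict(X)
    print(name, demographic_parity_difference(preds, sensitive))
```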

Installation

You can install the package using pip:

pip install fair_lnc_evaluation

Usage

Examples of how to use this package can be found in the examples folder.
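
Because experiment tracking is done with mlflow, an evaluation loop over different noise levels can be logged as separate runs. The sketch below is hypothetical: `run_name` and `noise_rate` are illustrative names, and the call that produces the fairness metric is left as a comment because the package's actual entry points are documented in the examples folder.

```python
# Hypothetical mlflow tracking sketch; names are illustrative, not the package's API.
import mlflow

for noise_rate in [0.0, 0.1, 0.2, 0.3]:
    with mlflow.start_run(run_name=f"noise_{noise_rate}"):
        mlflow.log_param("noise_rate", noise_rate)
        # ... run the evaluation at this noise level and obtain a fairness metric ...
        # mlflow.log_metric("demographic_parity_difference", value)
```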

References

Contributing

Contributions to this package are welcome! If you have a bug report, a feature request, or would like to contribute code improvements, please submit an issue or a pull request on the GitHub repository.

License

This package is distributed under the MIT License.

