Releases: EqualityAI/EqualityML
v0.2.1: Fix Pillow vulnerability
What's Changed
- Add integration test by @JoaoGranja in #25
- Package update by @JoaoGranja in #26
Full Changelog: https://github.com/EqualityAI/EqualityML/commits/v0.2.1
v0.2.0: FAIR, DiscriminationThreshold, Paired_ttest
FAIR
FAIR (Fairness Assessment and Inequality Reduction) empowers AI developers to assess the fairness of their machine learning applications and mitigate any observed bias. It contains methods to compute fairness metrics as well as a set of bias mitigation algorithms for reducing unfairness.
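To make the idea concrete, here is an illustrative sketch (not the FAIR class's actual API) of one common fairness metric, the statistical parity ratio: the rate of positive predictions for the unprivileged group divided by the rate for the privileged group. The function name and sample data below are hypothetical.

```python
# Illustrative sketch, not EqualityML's API: statistical parity ratio by hand.

def statistical_parity_ratio(preds, groups, privileged):
    """Ratio of positive-prediction rates: unprivileged / privileged.

    A value near 1.0 suggests parity; values well below 1.0 indicate the
    unprivileged group receives positive outcomes less often.
    """
    priv = [p for p, g in zip(preds, groups) if g == privileged]
    unpriv = [p for p, g in zip(preds, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical predictions for two groups "A" (privileged) and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_ratio(preds, groups, privileged="A"))  # 0.333...
```

A mitigation algorithm would then transform the data or model so that this ratio moves closer to 1.0.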
DiscriminationThreshold
The DiscriminationThreshold class helps decision makers determine the optimal discrimination threshold for a binary classification model. The discrimination threshold is the probability value that separates the positive and negative classes. The commonly used threshold is 0.5; however, adjusting it changes the model's sensitivity to false positives, since precision and recall move in opposite directions as the threshold varies.
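The underlying idea can be sketched as a simple threshold sweep (a hypothetical helper, not the DiscriminationThreshold API): evaluate precision and recall at each candidate threshold so the trade-off is visible.

```python
# Sketch of a discrimination-threshold search: sweep candidate thresholds
# and report precision/recall at each (illustrative, hypothetical helper).

def precision_recall_at(threshold, y_true, y_prob):
    """Precision and recall when positives are scores >= threshold."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical labels and predicted probabilities.
y_true = [0, 0, 1, 1, 1]
y_prob = [0.2, 0.6, 0.4, 0.7, 0.9]
for t in (0.3, 0.5, 0.7):
    p, r = precision_recall_at(t, y_true, y_prob)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

On this toy data, raising the threshold from 0.3 to 0.7 increases precision while lowering recall, which is exactly the inverse relationship the class lets decision makers navigate.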
Paired_ttest
The goal of this function is to perform statistical tests for classifier comparison. Two methods are provided: McNemar's test and the 5x2cv paired t-test.
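As an illustration of the first method (a hand-rolled sketch, not the package's Paired_ttest function), McNemar's test compares two classifiers evaluated on the same test set using only the discordant pairs: b samples that only the first model classifies correctly and c samples that only the second does.

```python
# Hand-rolled McNemar statistic (illustrative sketch, not the package's API).

def mcnemar_statistic(b, c):
    """Chi-square statistic with continuity correction, 1 degree of freedom.

    b: count of samples only classifier 1 got right.
    c: count of samples only classifier 2 got right.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical counts: classifier 1 alone correct on 12 samples,
# classifier 2 alone correct on 5.
chi2 = mcnemar_statistic(12, 5)
print(round(chi2, 3))  # 2.118
```

Since 2.118 is below the chi-square cutoff of 3.841 at alpha = 0.05, the observed difference between the two classifiers would not be considered significant in this toy example.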
What's Changed
- Continuous integration by @JoaoGranja in #1
- Refactor fair pkg by @JoaoGranja in #2
- Rename project by @JoaoGranja in #4
- Add r modules by @JoaoGranja in #5
- Combine metrics mitigation classes by @JoaoGranja in #6
- Sources of harm by @nyujwc331 in #7
- Refactor r code by @JoaoGranja in #8
- V0.1.0a1 release by @JoaoGranja in #9
- Add models comparison by @JoaoGranja in #20
- Update Requirements.txt to fix dependabot alerts #1 by @JoaoGranja in #21
- Fix/updates map bias mitigation by @JoaoGranja in #22