Fairness Metric Demo
```r
library(mlr3)
library(mlr3fairness)

learner = lrn("classif.rpart", cp = .01)
adult_train = tsk("adult_train")
adult_test = tsk("adult_test")
learner$train(adult_train)
predictions = learner$predict(adult_test)
```
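As a quick sanity check before computing any fairness metric, we can look at the confusion matrix and overall accuracy with standard mlr3 calls:

```r
# Quick sanity check of the trained model on the test task.
print(predictions$confusion)            # confusion matrix
predictions$score(msr("classif.acc"))   # overall accuracy
```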
- False Positive Rate Bias

```r
me = MeasureFairness$new("groupwise_abs_diff", base_measure = msr("classif.fpr"))
predictions$score(me, task = adult_test)
```
- False Negative Rate Bias

```r
me = MeasureFairness$new("groupwise_abs_diff", base_measure = msr("classif.fnr"))
predictions$score(me, task = adult_test)
```
- False Positive Rate Ratios

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.fpr"))
predictions$score(me, task = adult_test)
```
- True Positive Rate Ratios

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.tpr"))
predictions$score(me, task = adult_test)
```
Metrics based on Binary Outcome (source: Wikipedia)
- Predictive parity, also referred to as outcome test.

A classifier satisfies predictive parity if the subjects in the protected and unprotected groups have equal PPV. We can assess it with the following code:

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.ppv"))
predictions$score(me, task = adult_test)  # Should be close to 1
```
- False positive error rate balance, also referred to as predictive equality.

A classifier satisfies predictive equality if the subjects in the protected and unprotected groups have equal FPR. We can assess it with the following code:

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.fpr"))
predictions$score(me, task = adult_test)  # Should be close to 1
```
- False negative error rate balance, also referred to as equal opportunity.

A classifier satisfies equal opportunity if the subjects in the protected and unprotected groups have equal FNR. We can assess it with the following code:

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.fnr"))
predictions$score(me, task = adult_test)  # Should be close to 1
```
- Equalized Odds

Equalized Odds can be evaluated either by ratio or by absolute difference. A classifier satisfies Equalized Odds if the subjects in the protected and unprotected groups have equal TPR and equal FPR, so we can assess it with the following code:

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.tpr"))
predictions$score(me, task = adult_test)  # Should be close to 1
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.fpr"))
predictions$score(me, task = adult_test)  # Should be close to 1
```
- Conditional use accuracy equality

A classifier satisfies conditional use accuracy equality if the subjects in the protected and unprotected groups have equal PPV and equal NPV. We can use either the ratio or the absolute difference to assess it:

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.ppv"))
predictions$score(me, task = adult_test)  # Should be close to 1
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.npv"))
predictions$score(me, task = adult_test)  # Should be close to 1
```
- Overall accuracy equality

A classifier satisfies overall accuracy equality if the subjects in the protected and unprotected groups have equal prediction accuracy. We can assess it with the following code:

```r
me = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.acc"))
predictions$score(me, task = adult_test)  # Should be close to 1
```
- Treatment equality

A classifier satisfies treatment equality if the subjects in the protected and unprotected groups have an equal ratio of FN to FP. Let $A$ be the binary protected attribute with values $a$ and $b$; the definition then requires

$FN_{A=a} / FP_{A=a} = FN_{A=b} / FP_{A=b}$

Cross-multiplying, this is equivalent to $FN_{A=a} / FN_{A=b} = FP_{A=a} / FP_{A=b}$, so we can assess it by comparing two groupwise quotients:

```r
me1 = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.fp"))
me2 = MeasureFairness$new("groupwise_quotient", base_measure = msr("classif.fn"))
# The following two scores should be equal
predictions$score(me1, task = adult_test)
predictions$score(me2, task = adult_test)
```
The following metrics rely on the predicted probability score, so we need to change the learner's predict type:
```r
learner = lrn("classif.rpart", cp = .01)
adult_train = tsk("adult_train")
adult_test = tsk("adult_test")
learner$predict_type = "prob"
learner$train(adult_train)
predictions = learner$predict(adult_test)
```
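With `predict_type = "prob"`, the prediction object now carries a probability matrix with one column per class; the score $S$ used below is the predicted probability of the positive class:

```r
head(predictions$prob)  # predicted class probabilities, one column per class
```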
Metrics based on Predicted Probability Score (these would require an implementation of probability-score-based measures)
- Test-fairness, also known as calibration or matching conditional frequencies.

A classifier satisfies test-fairness if individuals with the same predicted probability score $S$ have the same probability of being classified in the positive class, whether they belong to the protected or the unprotected group: $P(Y = 1 \mid S = s, A = a) = P(Y = 1 \mid S = s, A = b)$ for all $s$. A manual sketch follows below.
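MeasureFairness does not cover these score-based definitions yet, so here is a minimal manual sketch of a calibration check: bin the predicted probabilities and compare the observed positive rate per group within each bin. The protected column name `"sex"` is an assumption about the adult task, not something the package guarantees:

```r
# Calibration check (sketch): compare observed positive rates per score
# bin across groups. Assumes the protected attribute column is "sex".
library(data.table)

positive = adult_test$positive                  # positive class label
dt = data.table(
  score = predictions$prob[, positive],         # predicted P(positive class)
  truth = predictions$truth == positive,        # actual outcome
  group = adult_test$data(cols = "sex")[[1]]    # protected attribute (assumed)
)
dt[, bin := cut(score, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)]

# Test-fairness: within each bin, the observed positive rate
# should be (approximately) equal across groups.
calib = dt[, .(pos_rate = mean(truth), n = .N), by = .(group, bin)]
print(calib[order(bin, group)])
```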
- Well-calibration is an extension of the previous definition.

A classifier satisfies well-calibration if, whenever individuals inside and outside the protected group have the same predicted probability score $S$, they have the same probability of being classified in the positive class, and this probability equals $S$: $P(Y = 1 \mid S = s, A = a) = P(Y = 1 \mid S = s, A = b) = s$.
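Continuing the sketch above (same assumptions), well-calibration additionally requires the observed positive rate in each bin to match the score itself, so we can compare `pos_rate` against the bin midpoint:

```r
# Well-calibration (sketch): the observed positive rate in each bin
# should also match the score itself, so compare against the bin midpoint.
mids = seq(0.05, 0.95, by = 0.1)                # midpoints of the 10 bins
calib[, midpoint := mids[as.integer(bin)]]
calib[, deviation := pos_rate - midpoint]       # should be close to 0
print(calib[order(bin, group)])
```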
- Balance for positive class.

A classifier satisfies this definition if the subjects constituting the positive class in both the protected and unprotected groups have an equal average predicted probability score $S$; that is, the expected score conditioned on a positive actual outcome $Y$ is the same for both groups: $E(S \mid Y = 1, A = a) = E(S \mid Y = 1, A = b)$. A sketch covering this and the next definition follows the next item.
- Balance for negative class.

A classifier satisfies this definition if the subjects constituting the negative class in both the protected and unprotected groups have an equal average predicted probability score $S$; that is, the expected score conditioned on a negative actual outcome $Y$ is the same for both groups, satisfying the formula $E(S \mid Y = 0, A = a) = E(S \mid Y = 0, A = b)$.
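Both balance definitions reduce to one grouped average over the table built earlier (same assumptions as before; an illustrative sketch, not a MeasureFairness call):

```r
# Balance for the positive and negative class (sketch): the mean predicted
# score, grouped by actual outcome and protected group, should be equal
# across groups within each outcome.
balance = dt[, .(mean_score = mean(score), n = .N), by = .(truth, group)]
print(balance[order(truth, group)])
# truth == TRUE rows check balance for the positive class,
# truth == FALSE rows check balance for the negative class.
```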