diff --git a/doc/source/methods/PartialDependenceVariance.ipynb b/doc/source/methods/PartialDependenceVariance.ipynb
index 99ea10cf0..027f058dc 100644
--- a/doc/source/methods/PartialDependenceVariance.ipynb
+++ b/doc/source/methods/PartialDependenceVariance.ipynb
@@ -51,14 +51,18 @@
     "\n",
-    "* the method offers a standardize procedure to quantify the feature importance for any learning algorithm. This contrasts with some internal feature importance for some tree-based algorithms such as [Random Forest](https://link.springer.com/article/10.1023/a:1010933404324)[[5]](#References) or [Gradient Boosting](https://www.jstor.org/stable/2699986)[[6]](#References), which have their own way to define the importance of a feature.\n",
+    "* the method offers a standardized procedure to quantify the feature importance for any learning algorithm. This contrasts with the internal feature importance measures of some tree-based algorithms such as [Random Forest](https://link.springer.com/article/10.1023/a:1010933404324)[[5]](#References) or [Gradient Boosting](https://www.jstor.org/stable/2699986)[[6]](#References), each of which defines feature importance in its own way.\n",
     "\n",
+    "\n",
     "* the method operates in the black-box regime (i.e., can be applied to any prediction model).\n",
     "\n",
+    "\n",
     "* the method can be adapted to quantify the strength of potential interaction effects.\n",
     "\n",
+    "\n",
     "**Drawbacks**:\n",
     "\n",
-    "* since the computation of the feature importance is based on the `PD`, the method captures only the main effect of a feature and ignores possible feature interactions. The `PD` plot can be flat as the feature affects the predictions manly through interactions. This is related to the masked heterogeneity.\n",
+    "* since the computation of the feature importance is based on the `PD`, the method captures only the main effect of a feature and ignores possible feature interactions. The `PD` plot can be flat when the feature affects the predictions mainly through interactions. This issue is related to masked heterogeneity.\n",
     "\n",
+    "\n",
     "* the method can fail to detect feature interactions even though those exist (see theoretical overview example below)."
    ]
   },
@@ -355,8 +359,10 @@
     "\n",
-    "* either to a fraction from the main effect of $X_2$.\n",
+    "* either to a fraction of the main effect of $X_2$,\n",
     "\n",
-    "* either to the interaction between $X_1$ and $X_2$.\n",
+    "\n",
+    "* to the interaction between $X_1$ and $X_2$,\n",
     "\n",
-    "* or a combination of both.\n",
+    "\n",
+    "* or to a combination of both.\n",
     "\n",
     "\n",
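
To make the masked-interaction drawback in the first hunk concrete, here is a minimal sketch of the variance-based importance (the standard deviation of a feature's partial dependence), written in plain NumPy rather than the library's API. The model `f(x1, x2) = x1 * x2` and the helper `pd_importance` are illustrative assumptions, not code from the notebook.

```python
import numpy as np

# Illustrative black-box model with a pure interaction effect:
# f(x1, x2) = x1 * x2, with x1, x2 ~ Uniform(-1, 1) drawn independently.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))

def predict(X):
    return X[:, 0] * X[:, 1]

def pd_importance(predict, X, feature, grid_size=50):
    """Importance of `feature` as the standard deviation of its
    partial dependence, estimated on an equidistant grid."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    pd_values = []
    for value in grid:
        X_clamped = X.copy()
        X_clamped[:, feature] = value                # fix the feature at the grid value
        pd_values.append(predict(X_clamped).mean())  # average prediction over the data
    return float(np.std(pd_values))

for j in range(2):
    print(f"importance(X_{j + 1}) = {pd_importance(predict, X, j):.4f}")
# Both values come out near zero: E[X_1 * X_2 | X_j] ~ 0 for each feature,
# so each PD plot is flat despite the strong interaction between X_1 and X_2.
```

Both importances are approximately zero even though the prediction depends entirely on the two features, which is exactly the failure mode the drawbacks list describes.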