I can only answer 1. The way we build the graph is with an unrolled loop that grows in complexity with the number of steps. We could provide an alternative with Scan (equivalent to a for loop), which shouldn't have this issue (I think this was actually the original approach, CC @juanitorduz). The switch happened when batching was added, but there's no reason we couldn't have a batch with Scan. However, it may also be worth checking whether your decay is fast enough that the contributions from the most distant steps are negligible for any practical purpose.
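To illustrate the difference, here is a minimal NumPy sketch of the scan idea (not PyMC's actual implementation): the recursion is expressed as a single step function applied repeatedly, which a graph framework such as `pytensor.scan` can compile to one loop node instead of unrolling `l_max` copies of the operation.

```python
import numpy as np

def scan(step, init, xs):
    # Minimal scan: apply `step` sequentially, collecting each output.
    # In a graph framework this becomes a single loop node, so graph
    # size stays constant regardless of len(xs).
    carry, ys = init, []
    for x_t in xs:
        carry = step(carry, x_t)
        ys.append(carry)
    return np.array(ys)

def geometric_adstock(x, alpha=0.5):
    # Carryover with no cutoff: x_decayed[t] = x[t] + alpha * x_decayed[t-1]
    return scan(lambda carry, x_t: x_t + alpha * carry, 0.0, x)

print(geometric_adstock(np.array([100.0, 0.0, 0.0, 0.0]), alpha=0.5))
```

As a rough check of the practical-cutoff point: with `alpha = 0.5`, the weight on spend from 20 steps back is `0.5**20 ≈ 1e-6`, so a much smaller `l_max` may already be indistinguishable in practice.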
---
Hello all,
first of all, thanks for providing this great package!
However, I have some questions regarding the mmm module:
1. I would like to use a geometric decay function without a maximum duration of the carryover effect (I am currently also using Robyn and would like this aspect to be consistent between the two packages). I tried setting l_max equal to the total number of observations, but this increases the runtime from around 5 minutes to over 1 hour. What is the reason for this happening, and is there another way to implement the geometric decay that avoids it? I already tried the following, which did not work:
```python
import numpy as np

def geometric_adstock(x, alpha: float = 0.5):
    x_decayed = np.zeros_like(x)
    x_decayed[0] = x[0]
    for xi in range(1, len(x_decayed)):
        x_decayed[xi] = x[xi] + alpha * x_decayed[xi - 1]
    return x_decayed
```
I am not quite sure how to implement it using the batched_convolution so a hint would be highly appreciated!
2. In the mmm-example.ipynb notebook, the sigma parameter of the half-normal prior for the channel coefficients is set using prior information about a channel's spend share. Since all channels are scaled using MaxAbsScaler, sigma is also scaled, but I don't quite understand how this is done. From my understanding, the scaling factor $\frac{1}{\sqrt{1-\frac{2}{\pi}}}$ is derived from the variance of the half-normal distribution, $\sigma^2\left(1-\frac{2}{\pi}\right)$, when specifying unit variance.
However, I don't understand why it is then multiplied by the number of channels.
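For what it's worth, the unit-variance part is easy to verify numerically (a quick Monte Carlo sketch; `sigma` here denotes the half-normal scale parameter):

```python
import numpy as np

# A HalfNormal(sigma) variable is sigma * |Z| with Z ~ Normal(0, 1),
# and Var(|Z|) = 1 - 2/pi, so Var(HalfNormal(sigma)) = sigma**2 * (1 - 2/pi).
# Choosing sigma = 1 / sqrt(1 - 2/pi) therefore gives unit variance.
sigma = 1.0 / np.sqrt(1.0 - 2.0 / np.pi)
rng = np.random.default_rng(0)
samples = sigma * np.abs(rng.normal(size=1_000_000))
print(samples.var())  # close to 1.0
```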
3. What is an appropriate way to specify sigma for the channel coefficients of organic marketing channels? As there is no spend information, calculating the spend share is not possible. Would it be appropriate to use spend share for all paid media channels and exposure share to scale the sigma for organic channels?
I would highly appreciate answers to any of these questions!