
Variance parameter faulty implementation #4

Open
WillDudley opened this issue Feb 15, 2019 · 4 comments

Comments

@WillDudley
Owner

A lower variance is not better - need to look into this asap

@WillDudley WillDudley pinned this issue Feb 15, 2019
@WillDudley
Owner Author

Yeah, so there are two things here:

  1. The target function f is currently the identity, so its codomain equals its domain (0, 1). This hurts interpretability, because it suggests the output is itself a probability. However, that is not true: a bell curve with mean close to 1 assigns nonzero probability to landing beyond 1 (see the sketch after this list).

  2. Switching the mean to be around the centre of the margin seems like an incorrect thing to do. The target is still at f(prob_b), and this should not be changed.
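
A minimal sketch of point 1, using made-up numbers (the mean and standard deviation below are not taken from the code): a Gaussian belief centred near 1 puts nonzero mass above 1, so the identity target's output can't honestly be read as a probability.

```python
# Minimal sketch of point 1, with hypothetical numbers (not from the repo):
# a bell curve with mean close to 1 assigns nonzero probability to values > 1,
# so treating the identity target f(x) = x as probability-valued is misleading.
from scipy.stats import norm

mean, std = 0.95, 0.05                               # hypothetical belief about prob_b
mass_above_one = norm.sf(1.0, loc=mean, scale=std)   # P(X > 1) under N(mean, std^2)
print(f"P(X > 1) = {mass_above_one:.4f}")            # 0.1587, clearly nonzero
```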

@WillDudley
Owner Author

Probably should change f from f(x) = x to f(x) = logit(x). This maps (0, 1) onto all of R instead of just (0, 1). The blow-up around 1 may be an issue, as O(logit) may dominate O(cost_function), but we'll see; we usually only consider probabilities up to 0.9, maybe 0.95 at most.
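
A small sketch of the proposed change, assuming f is simply applied elementwise to probabilities (the actual call site isn't shown here); it also shows the blow-up near 1 mentioned above.

```python
# Sketch of swapping f(x) = x for f(x) = logit(x), which maps (0, 1) onto all of R.
# The growth near 1 is the O(logit) vs O(cost_function) concern: logit(0.95) is
# already about 2.94 and it diverges as x -> 1.
import numpy as np
from scipy.special import logit

probs = np.array([0.5, 0.9, 0.95, 0.99, 0.999])
print(logit(probs))   # approx. [0.0, 2.197, 2.944, 4.595, 6.907]
```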

@WillDudley
Owner Author

> Probably should change f from f(x) = x to f(x) = logit(x). This maps (0, 1) onto all of R instead of just (0, 1). The blow-up around 1 may be an issue, as O(logit) may dominate O(cost_function), but we'll see; we usually only consider probabilities up to 0.9, maybe 0.95 at most.

Ignore this, this is garbage.

I've addressed problem 2, though, and at worst it's now a lot more readable. Checked by working it out manually.

Back to the variance issue. I think the problem lies in two things: (i) the ranges of the deltas and variances considered, and (ii) taking variance into account when calculating the margin midpoint, which is not what we want to do.

(i) - We can tackle this empirically if needed, or theoretically, idk. Currently focussing on (i).
(ii) - Indeed, if we do this, then the margin midpoint becomes very dependent on the variance. This is not what we want.
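
A purely hypothetical illustration of (ii); neither formula below is the repo's actual midpoint calculation, they just show the unwanted coupling.

```python
# Hypothetical illustration of (ii); neither formula is the package's real code.
def midpoint_with_variance(prob_a, prob_b, variance):
    # Folding variance into the midpoint makes it drift as variance changes.
    return (prob_a + prob_b) / 2 + variance

def midpoint_without_variance(prob_a, prob_b):
    # Variance-free midpoint: fixed by the margin endpoints alone.
    return (prob_a + prob_b) / 2

for var in (0.01, 0.05, 0.10):
    print(var, midpoint_with_variance(0.4, 0.6, var), midpoint_without_variance(0.4, 0.6))
```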

WillDudley added a commit that referenced this issue Feb 17, 2019
#4 and #6
WillDudley added a commit that referenced this issue Feb 18, 2019
#4  Fixed (hopefully!) needs more testing
@WillDudley
Owner Author

Should be fixed in V0.2.7. Attacker heatmaps have the intuitive gradient (lower variance is better), at least at minimum costs.
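
A rough way to sanity-check that gradient, with a stand-in cost function (toy_cost below is not the package's real cost function): sweep variance, take the minimum cost over delta, and check it doesn't improve as variance grows.

```python
# Rough sanity check of "lower variance is better at minimum costs".
# toy_cost is a stand-in, NOT the package's cost function.
import numpy as np

def toy_cost(delta, variance):
    # Hypothetical cost that grows with both delta and variance.
    return delta ** 2 + variance

variances = np.linspace(0.01, 0.2, 5)
deltas = np.linspace(0.0, 1.0, 50)
min_costs = [min(toy_cost(d, v) for d in deltas) for v in variances]

# With the intuitive gradient, the minimum cost should not improve as variance grows.
assert all(a <= b for a, b in zip(min_costs, min_costs[1:]))
print(list(zip(np.round(variances, 3), np.round(min_costs, 3))))
```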
