Resolve num_error vs result_error; switch all to float32 #18
Clarify difference between num_error and result_error.
-We rename self.error to result_error: the acceptable range by which our final result may differ from the theoretical value. num_error, by contrast, is the tolerance within which we treat two numbers as equal.
-It turns out median does not need num_error, despite its low scale, so we can treat it like the other functions.
-For mode, we hardcode num_error to 0.01. This may not make sense for a dataset in which all values differ from one another, since finding a mode there is meaningless. How best to handle num_error in the mode case is still to be discussed @mhchia
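To illustrate the two tolerances, here is a minimal sketch of a mode computation that uses num_error to group nearly-equal values and result_error to validate the final answer. The function name `mode_with_tolerance` and the `RESULT_ERROR` value are hypothetical, chosen for illustration; only the 0.01 num_error comes from the PR.

```python
import numpy as np

NUM_ERROR = 0.01     # hardcoded tolerance for treating two data values as "the same" (per the PR)
RESULT_ERROR = 0.05  # hypothetical: acceptable gap between our result and the theoretical value

def mode_with_tolerance(x, num_error=NUM_ERROR):
    """For each value, count how many entries lie within num_error of it;
    return the value with the largest such count."""
    x = np.asarray(x, dtype=np.float32)
    close = np.abs(x[:, None] - x[None, :]) <= num_error
    return x[close.sum(axis=1).argmax()]

data = [1.0, 1.005, 2.0, 3.0]
m = mode_with_tolerance(data)          # 1.0 and 1.005 are counted together
assert abs(m - 1.0) <= RESULT_ERROR    # final result checked against theory
```

Note that when every pairwise distance exceeds num_error, every value ties with count 1 and the "mode" is arbitrary, which is exactly the degenerate case flagged above.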
Switch everything to float32. torch.tensor() defaults to float32 while numpy defaults to float64, so mixing the two can make equality comparisons between otherwise identical numbers fail.
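The mismatch is easy to reproduce with numpy alone (torch omitted here to keep the snippet self-contained): the same decimal literal rounds differently in float32 and float64, so a direct `==` comparison fails.

```python
import numpy as np

a32 = np.float32(0.1)  # float32, the torch.tensor() default dtype
a64 = np.float64(0.1)  # float64, the numpy default dtype

# The two roundings of 0.1 differ, so exact equality fails:
print(a32 == a64)  # False

# Pinning one dtype everywhere (float32, as this PR does) avoids the issue:
print(a32 == np.float32(a64))  # True
```

This is why the PR converts everything to float32 rather than relying on exact cross-dtype comparisons.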