
resolve num_error, result error. all to float32 #18

Merged
merged 4 commits into feat/support-rest-operations on Feb 16, 2024

Conversation

JernKunpittaya (Collaborator) commented Feb 16, 2024

Clarify the difference between num_error and result_error:
- We rename our self.error to result_error: the acceptable range by which our final result may differ from the theoretical value. num_error, by contrast, is the tolerance within which we regard two numbers as the same.
- It turns out median doesn't need num_error despite its low scale, so we can treat it like the other functions.
- For mode, we hardcode num_error to 0.01. This may not behave sensibly on a dataset where every value is distinct, since finding a mode then carries little meaning. To be discussed how best to handle num_error for the mode case @mhchia
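As a sketch of the distinction between the two tolerances (hypothetical helper names, not the actual code in this PR; whether result_error is an absolute or relative bound is an assumption here, shown as absolute):

```python
NUM_ERROR = 0.01  # hardcoded tolerance for regarding two numbers as the same (mode case)

def is_same_number(a: float, b: float, num_error: float = NUM_ERROR) -> bool:
    """Regard a and b as the same number if they differ by at most num_error."""
    return abs(a - b) <= num_error

def result_in_range(result: float, expected: float, result_error: float) -> bool:
    """Check the final result against the theoretical value within result_error."""
    return abs(result - expected) <= result_error
```

So num_error governs pairwise equality inside a computation (e.g. counting a mode), while result_error only bounds how far the final output may drift from theory.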

Switch everything to float32. torch.tensor() defaults to float32 while NumPy defaults to float64, so mixing the two can make equality comparisons between otherwise-identical numbers fail.
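A minimal illustration of the mismatch, using NumPy scalars to stand in for both sides (the same effect occurs when comparing torch's float32 tensors against float64 NumPy results):

```python
import numpy as np

# 0.1 is not exactly representable in binary; float32 and float64 round it
# differently, so values computed at different precisions compare unequal.
a32 = np.float32(0.1)  # the precision torch.tensor(0.1) uses by default
a64 = np.float64(0.1)  # the precision plain NumPy uses by default

print(a32 == a64)              # False: a32 promoted to float64 differs from a64
print(a32 == np.float32(a64))  # True once both sides are cast to float32
```

Casting everything to float32 up front, as this PR does, sidesteps the problem by forcing both sides of every comparison to the same rounding.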

mhchia (Member) left a comment

I think it should be fine to hardcode num_error for now. If we need to make it configurable, we can easily modify the State to accept another num_error later.

@mhchia mhchia merged commit 2356f69 into feat/support-rest-operations Feb 16, 2024