
[MultilevelMonteCarloApplication] Unexpected failure of test_estimation_random #11212

Closed

avdg81 opened this issue May 31, 2023 · 7 comments

@avdg81 (Contributor) commented May 31, 2023

Description
While I was working on a new utility function (which converts an angle from degrees to radians, see PR #11112), I noticed that test_estimation_random from momentEstimatorTests.TestMultiMomentEstimator failed because a result was not close enough to the expected value. Since I had not touched the code of the MultilevelMonteCarloApplication, it was (and still is) unclear to me why this failure occurred. Perhaps some randomized input is being used and an unfortunate seed led to the failure? In any case, the test passed when it was rerun, without any changes to the code. Does anyone else have a similar experience? If so, could someone who is involved in the development of the MultilevelMonteCarloApplication please have a look at this issue? If I have made a mistake, I would like to learn what I did wrong. Otherwise, I would appreciate it if the test could be made more stable. Thanks.
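(For illustration only: if the test indeed draws random samples, the usual way to make such a statistical comparison deterministic is to pin the RNG seed in the test setup. The sketch below is generic Python/NumPy with a hypothetical estimator and tolerance; it is not the actual Kratos test code.)

```python
import unittest

import numpy as np


class TestEstimationReproducible(unittest.TestCase):
    """Generic illustration: a randomized statistical test made deterministic."""

    def setUp(self):
        # Pinning the seed means every run draws exactly the same samples,
        # so the assertion below cannot fail on an "unlucky" draw.
        self.rng = np.random.default_rng(seed=12345)

    def test_estimation_random(self):
        samples = self.rng.standard_normal(10_000)
        estimate = samples.mean()  # hypothetical estimator: the sample mean
        self.assertAlmostEqual(estimate, 0.0, delta=0.05)


if __name__ == "__main__":
    unittest.main()
```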

Scope
MultilevelMonteCarloApplication

To Reproduce
I have not found a consistent way to reproduce the issue, but you may want to have a look at the following CI log: https://github.com/KratosMultiphysics/Kratos/actions/runs/5092607680/jobs/9154174396.

Expected behavior
The test should not fail when the code it covers has not been touched.

@rubenzorrilla (Member) commented:

As far as I know, MultilevelMonteCarloApplication is no longer being maintained. I think we should consider removing it from the CI.

Pinging @RiccardoRossi, as I think he is the person responsible right now.

@matekelemen (Contributor) commented:

I've seen MultiLevelMonteCarlo tests fail too, and it's most probably not something you did. It's a good reminder never to write code that produces unreproducible results.

@rubenzorrilla Do we have plans on how to archive applications? At this rate, people do research, implement their stuff, and when their PhD is complete, their applications become unmaintained and almost impossible to get working again once they are kicked out of the CI. Just to be clear, I support removing unmaintained applications, but I think we should find a way to archive them alongside a version of Kratos they still work with.
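(Regarding the reproducibility point above, a complementary pattern, sketched under the assumption that the test intentionally uses varying random input: draw the seed explicitly and report it, so a failing run can at least be replayed. The helper name and the use of NumPy are illustrative and not taken from the Kratos code base.)

```python
import numpy as np


def make_rng(seed=None):
    """Return a NumPy random Generator, printing the seed so failures can be replayed."""
    if seed is None:
        # Draw a fresh seed, but record it: pasting the printed value back into
        # make_rng(seed=...) reproduces the exact sequence of samples.
        seed = np.random.SeedSequence().entropy
    print(f"Using random seed: {seed}")
    return np.random.default_rng(seed)
```

With something like this in place, a flaky failure in the CI log would carry the seed needed to reproduce it locally.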

@rubenzorrilla (Member) commented:

> @rubenzorrilla Do we have plans on how to archive applications? At this rate, people do research, implement their stuff, and when their PhD is complete, their applications become unmaintained and almost impossible to get working again once they are kicked out of the CI. Just to be clear, I support removing unmaintained applications, but I think we should find a way to archive them alongside a version of Kratos they still work with.

We created the KratosLegacyApplications for this. What is not clear at all is at which point an application should be moved there. You also raised a fair point: at which point in the repo's history are such applications guaranteed to work? Maybe we can put this info in the README.md of the legacy repo.

@avdg81 (Contributor, Author) commented Jun 1, 2023

> I've seen MultiLevelMonteCarlo tests fail too, and it's most probably not something you did. It's a good reminder never to write code that produces unreproducible results.
>
> @rubenzorrilla Do we have plans on how to archive applications? At this rate, people do research, implement their stuff, and when their PhD is complete, their applications become unmaintained and almost impossible to get working again once they are kicked out of the CI. Just to be clear, I support removing unmaintained applications, but I think we should find a way to archive them alongside a version of Kratos they still work with.

Thanks for confirming the issue. The point is that such a spurious test failure makes it harder to merge changes, since all tests need to pass first.

@roigcarlo (Member) commented:

I've opened a PR to remove it from the CI as a first stage. @avdg81, we have been aware of this problem for some time now. If nobody has taken care of it within one month, I will proceed to move MLMC to the Legacy Applications.

@avdg81 (Contributor, Author) commented Jun 1, 2023

> I've opened a PR to remove it from the CI as a first stage. @avdg81, we have been aware of this problem for some time now. If nobody has taken care of it within one month, I will proceed to move MLMC to the Legacy Applications.

@roigcarlo Great! Thank you very much for picking this up.

@rubenzorrilla (Member) commented:

> I've opened a PR to remove it from the CI as a first stage. @avdg81, we have been aware of this problem for some time now. If nobody has taken care of it within one month, I will proceed to move MLMC to the Legacy Applications.

Hey, I've briefly discussed this with @RiccardoRossi, and we think that in this case we can move it directly to the legacy repo.
