
Compiler benchmarks are failing and not reporting any results #9394

Closed
Akirathan opened this issue Mar 13, 2024 · 2 comments · Fixed by #9397
Assignees: Akirathan
Labels: --data corruption or loss, -compiler, -tooling, p-high

Comments

Akirathan (Member)

The following benchmarks are failing with org.graalvm.polyglot.PolyglotException: org.enso.editions.EditionResolutionError$CannotLoadEdition: Cannot load edition [2024.1.1-dev]: The edition was not found. (a sketch of the context setup involved follows the list below):

  • org.enso.compiler.benchmarks.inline.InlineCompilerBenchmark.longExpression
  • org.enso.compiler.benchmarks.inline.InlineCompilerErrorBenchmark.expressionWithErrors
  • org.enso.compiler.benchmarks.module.ManyErrorsBenchmark.manyErrors
  • org.enso.compiler.benchmarks.module.ManyLocalVarsBenchmark.longMethodWithLotOfLocalVars
  • org.enso.compiler.benchmarks.module.ManyNestedBlocksBenchmark.manyNestedBlocks
  • org.enso.compiler.benchmarks.module.ManySmallMethodsBenchmark.manySmallMethods

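For context, these benchmarks build an Enso polyglot Context before compiling anything, and the edition-resolution error is presumably thrown while that context and its default package environment are initialized. The following is only a minimal sketch of such a setup, not the actual benchmark harness; the option name and path are assumptions for illustration. The point is that if the configured language home has no edition file matching 2024.1.1-dev, context creation fails with CannotLoadEdition.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;

// Minimal sketch of a benchmark-style Enso context setup.
// The option name and the path below are assumptions for illustration;
// if the runtime cannot find an edition file matching 2024.1.1-dev under
// the configured language home, edition resolution fails with
// EditionResolutionError$CannotLoadEdition, as reported in this issue.
public class BenchmarkContextSketch {
    public static void main(String[] args) throws Exception {
        try (Context ctx = Context.newBuilder("enso")
                .allowAllAccess(true)
                // Hypothetical: point the runtime at a built distribution
                // whose editions/ directory contains 2024.1.1-dev.yaml.
                .option("enso.languageHomeOverride", "/path/to/built-distribution/component")
                .build()) {
            Source src = Source.newBuilder("enso", "main = 1 + 1", "Main.enso").build();
            ctx.eval(src);
        }
    }
}
```
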
Currently, the only way to tell that a benchmark is failing is to manually inspect its output and notice that no results for it were uploaded among the artifacts. In the future, failing benchmarks should be reported automatically.
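
One possible shape for such automatic reporting, sketched below purely as an illustration: after a run, compare the expected benchmark labels against the labels actually present in the uploaded results, and fail the job when any are missing. The results file name, its format, and the expected-label list are assumptions here, not the repository's actual tooling.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical post-run check: every expected benchmark label must appear
// somewhere in the uploaded results file, otherwise the CI job fails loudly
// instead of silently dropping the missing results.
public class MissingBenchmarkCheck {
    // Placeholder list; in practice this would be derived from the harness.
    private static final List<String> EXPECTED = List.of(
        "org.enso.compiler.benchmarks.inline.InlineCompilerBenchmark.longExpression",
        "org.enso.compiler.benchmarks.module.ManyErrorsBenchmark.manyErrors",
        "org.enso.compiler.benchmarks.module.ImportStandardLibrariesBenchmark.importStandardLibraries");

    public static void main(String[] args) throws Exception {
        // Assumed results file; pass the real path as the first argument.
        Path results = Path.of(args.length > 0 ? args[0] : "bench-report.xml");
        String report = Files.readString(results);
        List<String> missing = EXPECTED.stream()
            .filter(label -> !report.contains(label))
            .toList();
        if (!missing.isEmpty()) {
            missing.forEach(label -> System.err.println("Missing benchmark result: " + label));
            System.exit(1);
        }
    }
}
```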

The only compiler benchmark that succeeds is org.enso.compiler.benchmarks.module.ImportStandardLibrariesBenchmark.importStandardLibraries. No other benchmarks are visible on https://enso-org.github.io/engine-benchmark-results/engine-benchs.html

These benchmarks were introduced in #9158

Akirathan added the -tooling, p-high, -compiler, and --data corruption or loss labels on Mar 13, 2024
Akirathan self-assigned this on Mar 13, 2024
Akirathan moved this from ❓New to 📤 Backlog in Issues Board on Mar 13, 2024
Akirathan (Member, Author)

Duplicate of #9256

Akirathan marked this as a duplicate of #9256 on Mar 13, 2024
Akirathan moved this from 📤 Backlog to 🔧 Implementation in Issues Board on Mar 13, 2024
enso-bot (bot) commented on Mar 13, 2024

Pavel Marek reports a new STANDUP for today (2024-03-13):

Progress:
- Some general problems with benchmarks and their data collecting:

Akirathan linked a pull request on Mar 14, 2024 that will close this issue
mergify bot closed this as completed in #9397 on Mar 14, 2024
github-project-automation bot moved this from 🔧 Implementation to 🟢 Accepted in Issues Board on Mar 14, 2024