
Releases: openml/automlbenchmark

v2.0.2

09 Nov 09:05
  • Add the constraint sets used for the new evaluation (includes 100 GB of gp3 SSD storage)
  • Log information about the AWS volumes used (type, size, and ID)

v2.0.1

30 Sep 17:36
  • If a container image is built from a clean state on a commit with a version tag, that version tag is appended to the image tag
  • randomforest:latest and tunedrandomforest:latest now correctly pull from main instead of master (thanks to @eddiebergman)

V2.0

17 Sep 09:38
5b6d8bf

Almost a year has passed since the last release, and too much has changed to list everything. Some highlights include:

  • AWS spot instance support
  • Sparse dataset support
  • Optimized data loading from OpenML
  • Added frameworks:
    • MLNET
    • FLAML
    • Light AutoML
    • mlr3automl
  • Many bug fixes and improvements

Going forward, we hope to release new versions at more regular intervals.


Thanks to everyone who contributed through commits, issues, discussions or any other way.
In particular, we would like to thank the following contributors for their code contributions since v1.6:

Adding support for new frameworks

19 Aug 17:56
329c3e6

New frameworks added since 1.5:

  • AutoXGBoost
  • MLJar-supervised
  • MLPlan

Upgraded the versions of existing frameworks.

Improved framework version management

In most cases, users can try an older or newer version of a given framework simply by creating a local framework definition with the version they want to use (see https://github.com/openml/automlbenchmark/blob/master/docs/HOWTO.md#framework-definition) and forcing the framework setup (`python runbenchmark.py my_framework -s force`).
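As a sketch of what such a local definition might look like (the framework name, base definition, and version below are illustrative, not taken from the repository; see the HOWTO linked above for the exact schema):

```yaml
# Hypothetical local framework definition file.
# 'extends' reuses an existing framework definition; only the version is overridden.
my_framework:
  extends: RandomForest   # assumed base framework from the main frameworks.yaml
  version: "0.9"          # illustrative version pin, not a real release number
```

Running `python runbenchmark.py my_framework -s force` then forces the setup step so the pinned version is installed instead of any previously cached one.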

Run on OpenML suites and tasks directly

Specify the benchmark as openml/s/X or openml/t/Y to run on an OpenML suite or task respectively, e.g. `python runbenchmark.py randomforest openml/s/218`.

Bug fixes