When `xpack.ml.max_model_memory_limit` is set, it is possible that a particular data frame analytics job cannot be created and run successfully on the cluster at all. But I can imagine that the way this manifests itself in the UI will cause immense frustration.
In the following example `xpack.ml.max_model_memory_limit` was set to `410mb`.
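For reference, this is a cluster-wide setting configured in `elasticsearch.yml`:

```yaml
xpack.ml.max_model_memory_limit: 410mb
```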
After creating the initial config and clicking "Create" you get an error like this:
The obvious reaction will then be to edit the model memory limit to bring it down to the maximum permitted:
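Concretely, that means lowering `model_memory_limit` in the data frame analytics config to the cap; a minimal sketch of such a config (the index names and analysis type here are illustrative, not from the report above):

```json
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": { "outlier_detection": {} },
  "model_memory_limit": "410mb"
}
```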
This works, and you can create the job, but then when you try to start it you get this error:
Given that the backend code is so defensive about stopping you from running a job whose model memory limit is less than the estimated requirement, it would be better if the UI were also stricter and broke the bad news that you cannot do what you want at an earlier stage.
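The stricter pre-flight check proposed here can be sketched as follows. This is a hypothetical helper, not Kibana's actual implementation; the key point is that if the *estimated* memory requirement already exceeds the cluster cap, no value the user enters for the limit can ever let the job start:

```python
# Multipliers for Elasticsearch-style byte-size suffixes.
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_byte_size(value: str) -> int:
    """Parse a byte-size string such as '410mb' into a number of bytes."""
    value = value.strip().lower()
    # Try longer suffixes first so 'mb' is not mistaken for 'b'.
    for suffix in sorted(UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * UNITS[suffix]
    raise ValueError(f"unrecognised byte size: {value!r}")

def can_ever_start(estimated_mml: str, max_mml: str) -> bool:
    """False if the estimate already exceeds xpack.ml.max_model_memory_limit,
    in which case the UI should refuse creation up front."""
    return parse_byte_size(estimated_mml) <= parse_byte_size(max_mml)
```

With the example above, an estimate of `500mb` against a `410mb` cap would fail this check at creation time rather than at start time.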
#60496 is related to this: if we make the UI refuse job creation at an earlier stage based on its latest model memory estimate, then we need to make sure that the estimate reflects all the changes made to the config.
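One way to keep the estimate fresh would be to re-run the explain API against the current form state before validating, since it returns a `memory_estimation` for the supplied config; e.g. (index name and analysis type illustrative):

```console
POST _ml/data_frame/analytics/_explain
{
  "source": { "index": "my-source-index" },
  "analysis": { "outlier_detection": {} }
}
```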