per cluster dataroot for interactive apps #1328
This initializer patches the system so that batch connect apps can submit to schedulers with different file systems. Place the initializer at

```ruby
# extend the BatchConnect::Session so that it submits jobs on a 'per-cluster' basis.
module BatchConnect
  class Session
    class << self
      def dataroot(token = "", cluster: nil)
        OodAppkit.dataroot.join('batch_connect').join(cluster.to_s).join(token.to_s)
      end
    end

    def save(app:, context:, format: nil)
      self.id = SecureRandom.uuid
      self.token = app.token
      self.title = app.title
      self.view = app.session_view
      self.created_at = Time.now.to_i
      self.cluster_id = context.try(:cluster).to_s

      submit_script = app.submit_opts(context, fmt: format, staged_root: staged_root) # could raise an exception
      self.cluster_id = submit_script.fetch(:cluster, cluster_id).to_s
      raise(ClusterNotFound, I18n.t('dashboard.batch_connect_missing_cluster')) unless cluster_id.present?

      stage(app.root.join("template"), context: context) && submit(submit_script)
    rescue => e # rescue from all standard exceptions (app never crashes)
      errors.add(:save, e.message)
      Rails.logger.error("ERROR: #{e.class} - #{e.message}")
      false
    end

    def staged_root
      self.class.dataroot(token, cluster: cluster_id).join("output", id)
    end
  end
end
```

This changes the dataroot such that the cluster name is part of the path where the app is staged and run from.
The dataroot then turns into a path with the cluster in the directory structure.
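As a hedged illustration of that layout (the home directory, app token, and cluster names below are made up, not the dashboard's actual values), the per-cluster `dataroot` builds paths like this:

```ruby
require 'pathname'

# Sketch of the patched dataroot (hypothetical home dir, token, and clusters):
# the cluster id becomes a path component between 'batch_connect' and the token.
def dataroot(token = '', cluster: nil)
  Pathname.new('/home/annie/ondemand/data')
          .join('batch_connect').join(cluster.to_s).join(token.to_s)
end

puts dataroot('sys/bc_desktop', cluster: 'owens')
# => /home/annie/ondemand/data/batch_connect/owens/sys/bc_desktop
puts dataroot('sys/bc_desktop', cluster: 'pitzer')
# => /home/annie/ondemand/data/batch_connect/pitzer/sys/bc_desktop
```

Because the cluster id is a path component, two sessions of the same app on different clusters never share a staging directory.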
Now to get this to work with multiple file systems, sites would have to mount the other clusters' filesystems on the web node.
I'm going to open this back up until we have the job composer functionality as well.
Actually, we're not backporting this to the job composer.
There have been several Discourse topics (links needed) where a site has filesystems that are distinct to a given cluster. Specifically, their HOME directories are separate for each cluster.

Historically we've told folks they need a different OnDemand instance per cluster because paths like `dataroot` evaluate to `$HOME/ondemand/data`. The `staged_root` for a given job (the job's working directory; the webserver templates files and puts them here before submitting the job) is a subdirectory under this `dataroot`.
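To make that nesting concrete, here is a small sketch (the paths, token, and directory names are assumptions for illustration, not the dashboard's actual defaults) of how `staged_root` sits under `dataroot` in the single-filesystem case:

```ruby
require 'pathname'
require 'securerandom'

# Hypothetical single-cluster layout: every job's working directory
# (staged_root) is a subdirectory of the one dataroot under $HOME,
# so all clusters' jobs land on the same filesystem.
home     = Pathname.new(ENV.fetch('HOME', '/home/annie'))
dataroot = home.join('ondemand', 'data')

token = 'sys/bc_desktop'   # assumed app token
id    = SecureRandom.uuid  # session id generated at save time

staged_root = dataroot.join('batch_connect', token, 'output', id)
puts staged_root
# e.g. $HOME/ondemand/data/batch_connect/sys/bc_desktop/output/<uuid>
```

Since every `staged_root` is rooted under the single `$HOME`, a cluster that cannot see that filesystem cannot see its job's working directory, which is exactly the problem the per-cluster patch addresses.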
So clearly there's a need for some sites to have these directories on a per-cluster basis. If the site could come up with an sshfs scheme to mount the other filesystems, then the webserver could access two or more file systems.

What the $HOME filesystem is on the webserver is anyone's guess. Maybe it could be a local filesystem that mounts the others?
TODO