We gain the ability to preserve uncommitted changes when using Continue Working On. It would also be great to preserve the active kernel connection, which is possible with a remote kernel connection that stays alive and remains accessible independent of the current workspace location.
Although we can start with unauthenticated kernels for exploration purposes, this likely eventually requires a different auth story than what we have now. Since controllers are registered by extensions, and remote kernel connections are generally protected by some kind of auth, we ideally would not want extensions to have to prompt for auth when using Continue Working On either. The ideal user experience here IMO would be 'once I have authorized an extension on one machine where VS Code is installed, signing into VS Code in another location should be sufficient to authorize that extension there as well'...cc @TylerLeonhardt for that though :)
I'd like to know more about how these kernels are authenticated. I imagine these aren't just using GitHub/Microsoft auth but rather some other mechanism?
I'm trying to find a comparison... maybe a world where you could open a server distro and then Continue Working On via SSH... but that seems contrived.
I can explain how remote Jupyter server connections work today.
Users are able to provide their own Jupyter server URL, e.g. `localhost:3000?token=123`, to the Jupyter extension, which the Jupyter extension will use to detect kernel specs and connections. To support this scenario in Edit Sessions, we would only need the Jupyter extension to cache the Jupyter server URL, since it already carries the auth token needed to connect to the server.
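A minimal sketch of this first scenario, assuming the user-supplied URL embeds the token as a query parameter (the `http://` scheme and the exact persistence mechanism are illustrative assumptions, not the extension's actual implementation):

```typescript
// Hypothetical sketch: a user-supplied Jupyter server URL already embeds
// its auth token, so persisting the raw string is enough to reconnect
// later without any extra auth flow.
const cached = "http://localhost:3000/?token=123"; // value the extension would persist

// Parsing the cached string back recovers everything needed to connect.
const url = new URL(cached);
const token = url.searchParams.get("token");
console.log(`server: ${url.origin}, token: ${token}`);
```

In a real extension this string would live in something like `ExtensionContext.globalState`, which roams with the user when Settings Sync is enabled.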
It's a bit more complex when the server URLs come from a 3rd party extension, e.g., AML. The 3rd party extension is responsible for doing the user authentication, finding all servers available to this user, and preparing the right connection to the server. It can still be token based, but that's up to the 3rd party extension. In this scenario, the Jupyter extension might only cache the server URL, without any auth info. On restart, the Jupyter extension will ask the 3rd party to do the auth and prepare the connection info for us.
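The 3rd-party flow above can be sketched roughly as follows. The `ServerProvider` interface, its method names, and the placeholder auth step are all hypothetical, standing in for whatever contract the Jupyter extension and a provider like AML actually agree on:

```typescript
// Hypothetical sketch of the 3rd-party flow: only the server URL is
// cached across sessions; auth info is recreated by the provider
// extension after a restart.
interface ServerConnection {
  url: string;
  token?: string; // auth detail owned by the provider, never cached
}

interface ServerProvider {
  // The provider performs its own authentication and returns a
  // ready-to-use connection for the given server.
  resolveConnection(serverUrl: string): Promise<ServerConnection>;
}

// Example provider standing in for something like AML.
class ExampleProvider implements ServerProvider {
  async resolveConnection(serverUrl: string): Promise<ServerConnection> {
    const token = await this.authenticate(); // e.g. prompt or silent SSO
    return { url: serverUrl, token };
  }
  private async authenticate(): Promise<string> {
    return "fresh-token"; // placeholder for the provider's real auth flow
  }
}

// On restart, the Jupyter extension only has the cached URL and asks
// the provider to rebuild the full connection info.
async function reconnect(
  cachedUrl: string,
  provider: ServerProvider
): Promise<ServerConnection> {
  return provider.resolveConnection(cachedUrl);
}
```

The design point is that the cached state is deliberately auth-free, so nothing sensitive roams between machines; the provider re-establishes trust locally each time.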
The core/UI remembers which controller was used in the previous session, but it is responsible neither for that controller's lifecycle nor for how it handles authentication.