jsonrpc: reconnect automatically #1146
Conversation
lib/jsonrpc/server.go
```diff
@@ -53,7 +53,7 @@ func (s *RPCServer) handleWS(ctx context.Context, w http.ResponseWriter, r *http
 		conn:    c,
 		handler: s.methods,
 		exiting: make(chan struct{}),
-	}).handleWsConn(ctx)
+	}).handleWsConn(ctx, ReconnectConfig{})
```
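For context, here is a minimal sketch of what the opt-in reconnect configuration introduced above might contain. The field names and the backoff helper are assumptions for illustration, not the PR's actual definition; the zero value (`ReconnectConfig{}`, as passed on the server side) would mean "don't reconnect".

```go
package jsonrpc

import "time"

// ReconnectConfig sketch (hypothetical fields): the zero value disables
// reconnection, which is what the server-side call above relies on.
type ReconnectConfig struct {
	Enabled     bool          // only the client would set this
	MaxRetries  int           // 0 = retry forever
	InitialWait time.Duration // first backoff delay
	MaxWait     time.Duration // backoff cap
}

// backoff returns the delay before the given reconnect attempt,
// doubling each time and capping at MaxWait (illustrative only).
func (c ReconnectConfig) backoff(attempt int) time.Duration {
	d := c.InitialWait << uint(attempt)
	if d <= 0 || d > c.MaxWait {
		return c.MaxWait
	}
	return d
}
```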
Since this is focused on client use, by the point of seeing this, all the change is already done. Not quite a fan of it, tbh.
What would happen if I send a request and the stream dies before I get my response?
Thinking through the DX here, it seems like I want to be able to say "make this API call, and if the connection is interrupted, make it again until I get a response". We don't want to automatically resend all requests, really only idempotent (or mostly idempotent) calls.
In this PR, that means that the
For a moment I thought about making the retry at the API level, but felt that decision would be better controlled from the caller side. Since you're thinking about some built-in way to do that: what are you imagining?
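To make the caller-side option concrete, a retry wrapper along those lines could look like the sketch below. The error value and helper name are hypothetical; nothing here is taken from the library's actual API.

```go
package main

import (
	"context"
	"errors"
	"time"
)

// ErrConnectionLost stands in for whatever error the client surfaces when
// the websocket drops mid-request (the name is an assumption).
var ErrConnectionLost = errors.New("jsonrpc: connection lost")

// retryIdempotent re-issues an idempotent call until it succeeds, the
// context is cancelled, or a non-connection error comes back.
func retryIdempotent(ctx context.Context, call func(context.Context) error) error {
	wait := 100 * time.Millisecond
	for {
		err := call(ctx)
		if err == nil || !errors.Is(err, ErrConnectionLost) {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(wait):
		}
		if wait < 5*time.Second {
			wait *= 2 // simple exponential backoff, capped at 5s
		}
	}
}
```

The caller would then wrap only the calls it knows are safe to repeat, e.g. `retryIdempotent(ctx, func(ctx context.Context) error { _, err := node.SyncState(ctx); return err })`, where `node` is an assumed client handle.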
Considering what @whyrusleeping mentioned in the case of existing channels:
Before the PR, the
I did a quick test of this with the
Seems that the missing thing is calling some
Open to hearing if all this makes sense to the experts.
Call retrying - I really think this should be per method. We already use the proxy struct to carry permission info, so we may just add another tag for this; https://github.com/filecoin-project/lotus/blob/master/api/apistruct/struct.go#L61 would become:

```go
SyncState func(context.Context) (*api.SyncState, error) `perm:"read",retry:"true"`
```

Channels - those should get closed. We may be able to somehow rescue them in case the connection drops but the server is still running, but I feel like that creates more problems than it solves ().
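Roughly, the proposed annotation plus the plumbing to read it could look like the sketch below. Note that standard Go struct tags separate keys with spaces rather than commas; whether the client keys off `retry:"true"` exactly like this is an assumption.

```go
package apistruct

import (
	"context"
	"reflect"

	"github.com/filecoin-project/lotus/api"
)

// Sketch of the proxy struct entry with the extra tag (space-separated keys).
type FullNodeStruct struct {
	Internal struct {
		SyncState func(context.Context) (*api.SyncState, error) `perm:"read" retry:"true"`
	}
}

// shouldRetry reports whether a proxied method is annotated as retryable;
// the client wrapper could consult this before re-sending a request.
func shouldRetry(field reflect.StructField) bool {
	return field.Tag.Get("retry") == "true"
}
```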
Regarding the annotations, sounds good. I'll have to reconsider the original solution, since extending the current PR's direction doesn't seem that good.
Right. But what do you think about what I said before: callers should be responsible for distinguishing between a clean and a forced channel close. That isn't a problem now because it seems channels are never closed (even if the connection is lost). This can change the semantics of what the caller understands as a closed channel in existing logic. If a client receives a chan of bytes, I can imagine that being a problem. There isn't much this PR can do about that since it's a client-side concern, but I'm making the point in case you recall an existing case that should be considered somewhere.
Right, I'm not sure what would provide the 'cleanest' DX. If we treat channels like streams (files, sockets, io.Reader), then closing them on a dropped connection seems not unreasonable. This does put abnormal-closure detection on the client, but I don't think that in most cases it will be very hard:
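A sketch of what that client-side detection could look like, assuming the subscription channel is simply closed when the connection drops (all names here are illustrative, not the library's API):

```go
package main

import (
	"context"
	"fmt"
)

// consume drains a subscription channel and distinguishes a deliberate
// shutdown (context cancelled) from an abnormal closure (channel closed
// while the caller still wanted data).
func consume(ctx context.Context, updates <-chan []byte) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // clean: the caller asked to stop
		case msg, ok := <-updates:
			if !ok {
				if ctx.Err() != nil {
					return ctx.Err() // clean close racing with cancellation
				}
				return fmt.Errorf("subscription closed unexpectedly (connection lost?)")
			}
			fmt.Printf("got %d bytes\n", len(msg))
		}
	}
}
```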
SGTM.
Closes #127
I started to do this mostly from the client's perspective, but then realized wsConn was also used from the server side, so this seems a bit hacky. It unfolded this way since I'm using it only for the client. Configuration for this is optional and may serve for future configs too.
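To illustrate that opt-in shape, here is a rough sketch of how the reconnect path might be gated inside handleWsConn, building on the ReconnectConfig sketch above; serveConn and reconnect are hypothetical helpers, not the PR's actual code.

```go
// Server-side callers pass ReconnectConfig{} (see the diff above), so the
// reconnect branch never runs there; only the client opts in.
func (c *wsConn) handleWsConn(ctx context.Context, rc ReconnectConfig) {
	for attempt := 0; ; attempt++ {
		c.serveConn(ctx) // hypothetical: read and dispatch until the socket fails

		if !rc.Enabled || ctx.Err() != nil {
			return // server side, or the caller gave up: behave as before
		}
		if err := c.reconnect(ctx, rc.backoff(attempt)); err != nil { // hypothetical redial with backoff
			return
		}
	}
}
```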