Describe the bug

K8S provider cancels the create pod watcher (or, more specifically, its context) immediately:

https://github.com/asynkron/protoactor-go/blob/dev/cluster/clusterproviders/k8s/k8s_provider.go#L219-L225
As watching happens in a goroutine, the deferred close() will execute immediately once the function returns. Another issue in the logic is the watch timeout: even with the context cancellation fixed, the watch will time out, the watch logic will return, and it will never continue because the result channel is closed. Normally this (and other interruptions) is handled through a looping watch, i.e. the Watch call itself is moved inside a loop where it is reinitiated whenever it times out or fails.
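In outline, the problematic shape looks something like the following. This is a paraphrased sketch with hypothetical names (watchPods), not the actual provider code:

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func watchPods(clientset kubernetes.Interface, namespace string) error {
	ctx, cancel := context.WithCancel(context.Background())
	// The deferred cancel runs as soon as watchPods returns,
	// not when the goroutine below finishes...
	defer cancel()

	watcher, err := clientset.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}

	go func() {
		// ...so this loop exits almost immediately: cancelling the context
		// (like a server-side watch timeout) closes ResultChan for good.
		for event := range watcher.ResultChan() {
			log.Printf("pod event: %s", event.Type)
		}
	}()
	return nil
}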
Possible solutions

- Move the Watch call inside a loop in the goroutine (a rough sketch follows the informer example below).
- Simplify the code by using the Informer provided by the k8s Go client. It handles the boilerplate needed to make plain Watch calls fault tolerant; in other words, they stay alive and reconnect until they receive the stop signal:
// Uses "k8s.io/client-go/informers", "k8s.io/client-go/tools/cache",
// and v1 "k8s.io/apimachinery/pkg/apis/meta/v1"; clientset is an
// already-initialized client-go clientset.
factory := informers.NewSharedInformerFactory(clientset, 0)
informer := factory.Core().V1().Pods().Informer()
stopper := make(chan struct{})
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		mObj := obj.(v1.Object)
		log.Printf("New Pod Added to Store: %s", mObj.GetName())
	},
	// Note: client-go calls this DeleteFunc, not RemoveFunc.
	DeleteFunc: func(obj interface{}) {
		log.Printf("Pod Deleted from Store: %s", obj.(v1.Object).GetName())
	},
	UpdateFunc: func(oldObj, newObj interface{}) {
		log.Printf("Pod Updated in Store: %s", newObj.(v1.Object).GetName())
	},
})
informer.Run(stopper) // blocks until stopper is closed
// Later, somewhere else: close(stopper)
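For the first option, a looping watch might look roughly like this. Again a minimal sketch with hypothetical names (watchPodsLoop), assuming a plain client-go clientset:

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func watchPodsLoop(ctx context.Context, clientset kubernetes.Interface, namespace string) {
	for ctx.Err() == nil {
		watcher, err := clientset.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{})
		if err != nil {
			// Transient failure: back off briefly and retry.
			time.Sleep(time.Second)
			continue
		}
		// Drain events until the server times the watch out or the
		// connection drops; that closes ResultChan, and we re-watch.
		for event := range watcher.ResultChan() {
			log.Printf("pod event: %s", event.Type)
		}
	}
}

The informer does essentially this internally (plus caching and resync), which is why the second option is usually the simpler one.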