bugfix: fix querying local empty list error #326
Conversation
Force-pushed from 0a92f08 to 01c4a8b.
gvk, err = serializer.UnsafeDefaultRESTMapper.KindFor(schema.GroupVersionResource{
// storage.ErrStorageNotFound indicates that there is no corresponding resource locally,
// and will determine whether to return an empty list structure according to whether GVK is registered or not
if err == storage.ErrStorageNotFound {
- storage.ErrStorageNotFound does not indicate an empty list object on local disk; it indicates that the requested resource is not cached on local disk, so we need to return an error here.
- len(objs) == 0 is what indicates an empty list object on local disk.
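A minimal sketch of the flow the reviewer is describing, assuming a hypothetical queryListObject helper and the emptyListFor helper sketched further below (only storage.ErrStorageNotFound and the len check come from the diff):

objs, err := queryListObject(req) // hypothetical: loads the cached list objects from local disk
if err == storage.ErrStorageNotFound {
    // the resource was never cached locally: this is an error, not an empty list
    return nil, err
} else if err != nil {
    return nil, err
}
if len(objs) == 0 {
    // the resource was cached, but the list is empty: answer with an empty list object
    return emptyListFor(gvk), nil
}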
Thanks for reminding me; I had overlooked this situation. The recently submitted PR corrects this mistake.
if err != nil {
    return nil, err
}
gvk, e := serializer.YurtHubRESTMapperManager.UnsafeDefaultRESTMapper.KindFor(gvr)
It's not good to expose RESTMapper info outside of the serializer; how about adding a KindFor() func to the serializer?
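A hedged sketch of the suggested wrapper, assuming the mapper is kept as a private field of SerializerManager (the field name restMapperManager is hypothetical; only KindFor comes from the review comment):

// KindFor exposes only the GVR-to-GVK lookup instead of the whole RESTMapper.
func (sm *SerializerManager) KindFor(gvr schema.GroupVersionResource) (schema.GroupVersionKind, error) {
    return sm.restMapperManager.KindFor(gvr) // restMapperManager: hypothetical private field
}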
I combined RESTMapperManager with CacheManager, and the interface is exposed by CacheManager.
// Verify if DynamicRESTMapper (which stores the CRD info) needs to be updated
gvk := apiVersion.WithKind(kind)
if err = serializer.YurtHubRESTMapperManager.UpdateCRDRESTMapper(gvk); err != nil {
How about adding an UpdateKind() func to the serializer?
I combined RESTMapperManager with CacheManager, and the interface is exposed by CacheManager.
// make up a storage object that represents no resources
// on local disk, so that when the cloud-edge network is disconnected,
// yurthub can return empty objects instead of a 404 (not found) code
if len(items) == 0 {
We need to keep the len(items) == 0 case for empty item list objects.
Thanks for the reminder. The recently submitted PR restores this behavior.
// For a CRD, if there are no objects in the cloud cluster, we need to store the GVK info of the CRD,
// so that when the cloud-edge network is disconnected,
// yurthub can return empty objects (built from the GVK info) instead of a 404 (not found) code
A 404 code and an empty items list object are different:
- a 404 code indicates that the client has never requested this kind of resource
- an empty items list object indicates that the client has requested this kind of resource, but got an empty items list in response
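A minimal sketch of building such an empty list response from stored GVK info, using the unstructured types from k8s.io/apimachinery (the helper name emptyListFor matches the sketch above and is hypothetical):

import (
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
)

// emptyListFor returns an empty list object for the given kind, so a request
// for a known-but-empty resource can be answered without a 404.
func emptyListFor(gvk schema.GroupVersionKind) *unstructured.UnstructuredList {
    list := &unstructured.UnstructuredList{}
    // list kinds conventionally append "List" to the item kind
    list.SetGroupVersionKind(gvk.GroupVersion().WithKind(gvk.Kind + "List"))
    return list
}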
Thanks for the reminder. The recently submitted PR corrects this mistake.
}{
    data: map[string]struct{}{},
},
},
We need to keep these test cases.
The recently submitted PR restores this test case.
err: true,
},
queryErr: storage.ErrStorageNotFound,
},
We need to keep these test cases.
The recently submitted PR restores this test case.
func NewDynamicRESTMapper() *DynamicRESTMapper {
    m := make(map[schema.GroupVersionResource]schema.GroupVersionKind)
    s, err := factory.CreateStorage()
Do not create the Storage here; how about using the StorageWrapper from YurtHubConfiguration?
I combined RESTMapperManager with CacheManager, and the StorageWrapper of CacheManager will be used.
dm.resourceToKind[singular] = gvk
dm.resourceToKind[plural] = gvk

return dm.updateCachedDynamicRESTMapper()
Make sure the cache is not updated redundantly if the GVK has already been recorded.
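A minimal sketch of such a guard, reusing the field and helper names visible in the surrounding diff (the func name updateKind and the early-return check are illustrative):

func (dm *DynamicRESTMapper) updateKind(gvk schema.GroupVersionKind) error {
    plural, singular := meta.UnsafeGuessKindToResource(gvk)
    dm.Lock()
    defer dm.Unlock()
    // skip the disk write if this GVK is already recorded for both resources
    if dm.resourceToKind[plural] == gvk && dm.resourceToKind[singular] == gvk {
        return nil
    }
    dm.resourceToKind[singular] = gvk
    dm.resourceToKind[plural] = gvk
    return dm.updateCachedDynamicRESTMapper()
}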
Thanks for the reminder. The recently submitted PR adds this validation logic.
pkg/yurthub/proxy/remote/remote.go (outdated)
    Version:  info.APIVersion,
    Resource: info.Resource,
}
gvk, err := serializer.YurtHubRESTMapperManager.CRDRESTMapper.KindFor(gvr)
Please try not to reference serializer.YurtHubRESTMapperManager outside of the serializer package.
Thanks for the reminder. I combined RESTMapperManager with CacheManager, and the interface will be exposed by CacheManager.
Force-pushed from db266f1 to cbe15bd.
Force-pushed from cbe15bd to 80f4a33.
    }
}

func (rm *RESTMapperManager) IsSchemeResource(gvr schema.GroupVersionResource) (bool, schema.GroupVersionKind) {
I think it's not friendly to expose RESTMapperManager outside of serializer.go, so we should delete the IsSchemeResource and IsCustomResource funcs and add a KindFor func to Serializer instead.
Thanks for the suggestion, the code has been updated.
    sepForGVR = "/"
)

func (cm *cacheManager) initRESTMapperManager() {
In my opinion, RESTMapper info is not part of the cacheManager, so it's not friendly to add RESTMapper funcs to the cacheManager. How about moving the restMapper cache into serializer.go?
Thanks for the suggestion, the code has been updated.
@@ -52,12 +52,16 @@ type CacheManager interface {
    UpdateCacheAgents(agents []string) error
    ListCacheAgents() []string
    CanCacheFor(req *http.Request) bool
    UpdateKind(gvk schema.GroupVersionKind) error
    DeleteKindFor(gvr schema.GroupVersionResource) error
    ListCacheCRD() map[schema.GroupVersionResource]schema.GroupVersionKind
It's not friendly to add funcs that process gvr or gvk to the cacheManager; in the original design, only serializer.go processes gvr and gvk.
Thanks for the suggestion, the code has been updated.
var (
    // unsafeDefaultRESTMapper is used to store the mapping relationship between GVK and GVR in scheme
    unsafeDefaultRESTMapper = NewDefaultRESTMapperFromScheme()
The var unsafeDefaultRESTMapper has the same name as the struct field RESTMapperManager.unsafeDefaultRESTMapper; I think it's better to use different names for these two definitions.
The code is updated; the global variable name is changed to unsafeSchemeRESTMapper.
func (rm *RESTMapperManager) deleteKind(gvk schema.GroupVersionKind) error {
    rm.Lock()
    kindName := strings.TrimSuffix(gvk.Kind, "List")
    plural, singular := meta.UnsafeGuessKindToResource(gvk.GroupVersion().WithKind(kindName))
The lines
kindName := strings.TrimSuffix(gvk.Kind, "List")
plural, singular := meta.UnsafeGuessKindToResource(gvk.GroupVersion().WithKind(kindName))
should be located before rm.Lock().
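A sketch of the reordering, with the resource-name computation moved out of the critical section (the deletion body and the final cache write reuse names from other diffs in this PR and are illustrative):

func (rm *RESTMapperManager) deleteKind(gvk schema.GroupVersionKind) error {
    // compute the resource names before taking the lock
    kindName := strings.TrimSuffix(gvk.Kind, "List")
    plural, singular := meta.UnsafeGuessKindToResource(gvk.GroupVersion().WithKind(kindName))
    rm.Lock()
    defer rm.Unlock()
    delete(rm.dynamicRESTMapper, plural)
    delete(rm.dynamicRESTMapper, singular)
    return rm.updateCachedDynamicRESTMapper()
}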
The code is updated.
@@ -25,8 +25,11 @@ import (
    "github.com/openyurtio/openyurt/pkg/yurthub/cachemanager"
    "github.com/openyurtio/openyurt/pkg/yurthub/certificate/interfaces"
    "github.com/openyurtio/openyurt/pkg/yurthub/healthchecker"
    "github.com/openyurtio/openyurt/pkg/yurthub/kubernetes/meta"
    util2 "github.com/openyurtio/openyurt/pkg/yurthub/proxy/util"
util2 --> proxyutil?
The code is updated.
pkg/yurthub/proxy/util/util.go (outdated)
@@ -292,3 +294,28 @@ func WithRequestTimeout(handler http.Handler) http.Handler {
        handler.ServeHTTP(w, req)
    })
}

// WithUpdateRESTMapper used to update the RESTMapper.
func WithUpdateRESTMapper(handler http.Handler, manager *meta.RESTMapperManager) http.Handler {
Why not put all of the restMapper operations in this func?
The delete operation for the invalid GVK has been moved to the modifyResponse(..) function (in pkg/yurthub/proxy/remote/remote.go).
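For context, a hedged sketch of a filter in the style being discussed (the handler chaining is standard net/http; where exactly the mapper update would be called is illustrative):

import (
    "net/http"

    "github.com/openyurtio/openyurt/pkg/yurthub/kubernetes/meta"
)

// WithUpdateRESTMapper wraps the proxy handler so RESTMapper maintenance can
// live in one place on the request path.
func WithUpdateRESTMapper(handler http.Handler, manager *meta.RESTMapperManager) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        // a fuller version would detect CRD create/update requests here and
        // refresh the mapper before or after proxying the request
        handler.ServeHTTP(w, req)
    })
}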
pkg/yurthub/storage/disk/storage.go (outdated)
@@ -87,6 +88,21 @@ func (ds *diskStorage) Create(key string, contents []byte) error {
    }
    return err
} else if info.IsDir() {
    // ensure that there are no files left locally before creating a dir
    if fileInfoList, err := ioutil.ReadDir(keyPath); err != nil {
ioutil.ReadDir(keyPath) returns a sorted list of files, but we only need to list all the file names, so how about using the following code:
f, err := os.Open(keyPath)
if err != nil {
    return err
}
defer f.Close()
names, err := f.Readdirnames(-1)
if err != nil {
    return err
}
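(For what it's worth: Readdirnames(-1) only reads the directory entry names, while ioutil.ReadDir additionally stats each entry and sorts the result, so the suggested version does strictly less work when only the names are needed.)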
The code is updated: Create(..) no longer cleans up old files, and the related logic has moved into Replace(..).
Force-pushed from a8a5a24 to 8948c7d.
@@ -600,3 +625,8 @@ func (cm *cacheManager) CanCacheFor(req *http.Request) bool {
    }
    return true
}

// DeleteInvalidKindFor is used to delete the invalid Kind (which is not registered in the cloud)
func (cm *cacheManager) DeleteInvalidKindFor(gvr schema.GroupVersionResource) error {
How about changing the func name from DeleteInvalidKindFor to DeleteKindFor?
The code has been updated, please review.
@@ -2237,13 +2354,46 @@ func compareObjectsAndKeys(t *testing.T, objs []runtime.Object, namespaced bool,
    return true
}

// used to reset the RESTMapper
func resetRESTMapper(dStorage storage.Store, rm *hubmeta.RESTMapperManager) error {
Please add a reset func to RESTMapperManager.
The code has been updated, please review.
@@ -1098,7 +1115,7 @@ func TestCacheListResponse(t *testing.T) {
    path: "/api/v1/node",
    resource: "nodes",
    namespaced: false,
-   cacheErr: storage.ErrStorageNotFound,
+   queryErr: storage.ErrStorageNotFound,
I think we only need expectResult to verify the return values, so please delete the cacheErr or queryErr field.
The code has been updated, please review.
Force-pushed from 8948c7d to 36e3353.
func NewRESTMapperManager(rootDir string) *RESTMapperManager {
    var dm map[schema.GroupVersionResource]schema.GroupVersionKind
    dStorage, err := disk.NewDiskStorage(rootDir)
Why not make the disk storage a parameter of NewRESTMapperManager?
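A minimal sketch of the suggested signature, using the storage.Store interface that appears elsewhere in this PR (the struct fields reuse names from other diffs here; the body is illustrative):

func NewRESTMapperManager(dStorage storage.Store) *RESTMapperManager {
    return &RESTMapperManager{
        dynamicRESTMapper: make(map[schema.GroupVersionResource]schema.GroupVersionKind),
        storage:           dStorage, // injected by the caller instead of created here
    }
}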
The code is updated; the disk storage is now passed into NewRESTMapperManager as a parameter.
        klog.Infof("initialize an empty DynamicRESTMapper")
    }
} else {
    klog.Errorf("failed to create disk storage, %v, only initialize an empty DynamicRESTMapper in memory ", err)
If creating the disk storage fails, I think we need to return an error.
The code is updated; the disk storage is passed into NewRESTMapperManager as a parameter, so the storage creation step has been removed.
// and delete the corresponding file in the disk (cache-crd-restmapper.conf), it should be used carefully.
func (rm *RESTMapperManager) ResetRESTMapper() error {
    rm.dynamicRESTMapper = make(map[schema.GroupVersionResource]schema.GroupVersionKind)
    err := rm.storage.DeleteCollection("_internal/restmapper")
"_internal/restmapper" is hard-coded; please define a constant for it.
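A sketch of the suggested constant (the constant name is illustrative):

// cachedDynamicRESTMapperKey is the storage key for the cached CRD RESTMapper.
const cachedDynamicRESTMapperKey = "_internal/restmapper"

// then, instead of the string literal:
err := rm.storage.DeleteCollection(cachedDynamicRESTMapperKey)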
The code is updated, please review.
func (rm *RESTMapperManager) dynamicKindFor(gvr schema.GroupVersionResource) (schema.GroupVersionKind, error) {
    rm.RLock()
    defer rm.RUnlock()
    resource := gvr
I think the resource variable is not needed; we can use gvr directly.
The code is updated, please review.
// Obtain gvk according to gvr in dynamicRESTMapper
func (rm *RESTMapperManager) dynamicKindFor(gvr schema.GroupVersionResource) (schema.GroupVersionKind, error) {
    rm.RLock()
    defer rm.RUnlock()
The rm.RLock() / defer rm.RUnlock() block can be moved to before the if hasVersion condition.
The code is updated, and that code block has been removed.
Force-pushed from 36e3353 to 0476391.
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.9", GitCommit:"94f372e501c973a7fa9eb40ec9ebd2fe7ca69848", GitTreeState:"clean", BuildDate:"2020-09-16T13:56:40Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.8-aliyunedge.1", GitCommit:"d0a01f1", GitTreeState:"", BuildDate:"2021-05-24T03:00:17Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
yurthub version: projectinfo.Info{GitVersion:"v0.8.6", GitCommit:"3e8d68e", BuildDate:"2021-07-30T08:23:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
I0730 08:43:32.326194 1 start.go:62] FLAG: --access-server-through-hub="false"
I0730 08:43:32.326223 1 start.go:62] FLAG: --add_dir_header="false"
I0730 08:43:32.326228 1 start.go:62] FLAG: --alsologtostderr="false"
I0730 08:43:32.326231 1 start.go:62] FLAG: --bind-address="127.0.0.1"
I0730 08:43:32.326236 1 start.go:62] FLAG: --cert-mgr-mode="hubself"
I0730 08:43:32.326240 1 start.go:62] FLAG: --disabled-resource-filters="[]"
I0730 08:43:32.326246 1 start.go:62] FLAG: --disk-cache-path="/etc/kubernetes/cache/"
I0730 08:43:32.326250 1 start.go:62] FLAG: --dummy-if-ip="169.254.2.1"
I0730 08:43:32.326253 1 start.go:62] FLAG: --dummy-if-name="yurthub-dummy0"
I0730 08:43:32.326256 1 start.go:62] FLAG: --enable-dummy-if="true"
I0730 08:43:32.326260 1 start.go:62] FLAG: --enable-iptables="true"
I0730 08:43:32.326263 1 start.go:62] FLAG: --enable-resource-filter="true"
I0730 08:43:32.326266 1 start.go:62] FLAG: --gc-frequency="120"
I0730 08:43:32.326271 1 start.go:62] FLAG: --heartbeat-failed-retry="3"
I0730 08:43:32.326274 1 start.go:62] FLAG: --heartbeat-healthy-threshold="2"
I0730 08:43:32.326277 1 start.go:62] FLAG: --heartbeat-timeout-seconds="2"
I0730 08:43:32.326280 1 start.go:62] FLAG: --help="false"
I0730 08:43:32.326283 1 start.go:62] FLAG: --join-token="12345c.7dad0c6abcde36a8"
I0730 08:43:32.326286 1 start.go:62] FLAG: --kubelet-ca-file="/etc/kubernetes/pki/ca.crt"
I0730 08:43:32.326290 1 start.go:62] FLAG: --kubelet-client-certificate="/var/lib/kubelet/pki/kubelet-client-current.pem"
I0730 08:43:32.326294 1 start.go:62] FLAG: --lb-mode="rr"
I0730 08:43:32.326297 1 start.go:62] FLAG: --log-flush-frequency="5s"
I0730 08:43:32.326302 1 start.go:62] FLAG: --log_backtrace_at=":0"
I0730 08:43:32.326307 1 start.go:62] FLAG: --log_dir=""
I0730 08:43:32.326311 1 start.go:62] FLAG: --log_file=""
I0730 08:43:32.326314 1 start.go:62] FLAG: --log_file_max_size="1800"
I0730 08:43:32.326317 1 start.go:62] FLAG: --logtostderr="true"
I0730 08:43:32.326320 1 start.go:62] FLAG: --max-requests-in-flight="250"
I0730 08:43:32.326324 1 start.go:62] FLAG: --node-name="n137"
I0730 08:43:32.326327 1 start.go:62] FLAG: --profiling="true"
I0730 08:43:32.326333 1 start.go:62] FLAG: --proxy-port="10261"
I0730 08:43:32.326336 1 start.go:62] FLAG: --proxy-secure-port="10268"
I0730 08:43:32.326339 1 start.go:62] FLAG: --root-dir="/var/lib/yurthub"
I0730 08:43:32.326342 1 start.go:62] FLAG: --serve-port="10267"
I0730 08:43:32.326345 1 start.go:62] FLAG: --server-addr="https://apiserver.demo:6443"
I0730 08:43:32.326348 1 start.go:62] FLAG: --skip_headers="false"
I0730 08:43:32.326351 1 start.go:62] FLAG: --skip_log_headers="false"
I0730 08:43:32.326354 1 start.go:62] FLAG: --stderrthreshold="2"
I0730 08:43:32.326357 1 start.go:62] FLAG: --v="2"
I0730 08:43:32.326360 1 start.go:62] FLAG: --version="false"
I0730 08:43:32.326363 1 start.go:62] FLAG: --vmodule=""
I0730 08:43:32.326388 1 config.go:179] yurthub would connect remote servers: https://apiserver.demo:6443
I0730 08:43:32.327851 1 restmapper.go:83] reset DynamicRESTMapper to map[apps.openyurt.io/v1alpha1, Resource=nodepool:apps.openyurt.io/v1alpha1, Kind=NodePool apps.openyurt.io/v1alpha1, Resource=nodepools:apps.openyurt.io/v1alpha1, Kind=NodePool]
I0730 08:43:32.328424 1 filter.go:93] Filter servicetopology registered successfully
I0730 08:43:32.328432 1 filter.go:93] Filter masterservice registered successfully
I0730 08:43:32.328476 1 start.go:72] yurthub cfg: &config.YurtHubConfiguration{LBMode:"rr", RemoteServers:[]*url.URL{(*url.URL)(0xc0005c4680)}, YurtHubServerAddr:"127.0.0.1:10267", YurtHubProxyServerAddr:"127.0.0.1:10261", YurtHubProxyServerSecureAddr:"127.0.0.1:10268", YurtHubProxyServerDummyAddr:"169.254.2.1:10261", YurtHubProxyServerSecureDummyAddr:"169.254.2.1:10268", GCFrequency:120, CertMgrMode:"hubself", KubeletRootCAFilePath:"/etc/kubernetes/pki/ca.crt", KubeletPairFilePath:"/var/lib/kubelet/pki/kubelet-client-current.pem", NodeName:"n137", HeartbeatFailedRetry:3, HeartbeatHealthyThreshold:2, HeartbeatTimeoutSeconds:2, MaxRequestInFlight:250, JoinToken:"12345c.7dad0c6abcde36a8", RootDir:"/var/lib/yurthub", EnableProfiling:true, EnableDummyIf:true, EnableIptables:true, HubAgentDummyIfName:"yurthub-dummy0", StorageWrapper:(*cachemanager.storageWrapper)(0xc0002e2400), SerializerManager:(*serializer.SerializerManager)(0xc0002e2440), RESTMapperManager:(*meta.RESTMapperManager)(0xc0002e2480), TLSConfig:(*tls.Config)(nil), MutatedMasterServiceAddr:"apiserver.demo:6443", Filters:(*filter.Filters)(0xc000601170), SharedFactory:(*informers.sharedInformerFactory)(0xc00048ff90), YurtSharedFactory:(*externalversions.sharedInformerFactory)(0xc000162050)}
I0730 08:43:32.328642 1 start.go:87] 1. register cert managers
I0730 08:43:32.328662 1 certificate.go:60] Registered certificate manager kubelet
I0730 08:43:32.328667 1 certificate.go:60] Registered certificate manager hubself
I0730 08:43:32.328683 1 start.go:93] 2. create cert manager with hubself mode
I0730 08:43:32.328705 1 cert_mgr.go:214] /var/lib/yurthub/pki/ca.crt file already exists, so skip to create ca file
I0730 08:43:32.328710 1 cert_mgr.go:127] use /var/lib/yurthub/pki/ca.crt ca file to bootstrap yurthub
I0730 08:43:32.328798 1 cert_mgr.go:289] yurthub bootstrap conf file already exists, skip init bootstrap
I0730 08:43:32.328821 1 certificate_store.go:130] Loading cert/key pair from "/var/lib/yurthub/pki/yurthub-current.pem".
I0730 08:43:32.338308 1 certificate_manager.go:282] Certificate rotation is enabled.
I0730 08:43:32.338350 1 cert_mgr.go:412] yurthub config file already exists, skip init config file
I0730 08:43:32.338358 1 start.go:101] 3. new transport manager
I0730 08:43:32.338364 1 transport.go:57] use /var/lib/yurthub/pki/ca.crt ca cert file to access remote server
I0730 08:43:32.338473 1 start.go:109] 4. create health checker for remote servers
I0730 08:43:32.338480 1 certificate_manager.go:553] Certificate expiration is 2031-07-27 08:44:12 +0000 UTC, rotation deadline is 2028-12-12 08:41:57.628805822 +0000 UTC
I0730 08:43:32.338502 1 certificate_manager.go:288] Waiting 64607h58m25.290305841s for next certificate rotation
I0730 08:43:32.343438 1 connrotation.go:145] create a connection from 192.168.0.137:52156 to apiserver.demo:6443, total 1 connections in transport manager dialer
I0730 08:43:32.371049 1 start.go:118] 5. new restConfig manager for hubself mode
I0730 08:43:32.371076 1 start.go:126] 6. create tls config for secure servers
I0730 08:43:32.372155 1 config.go:107] re-fix hub rest config host successfully with server https://apiserver.demo:6443
I0730 08:43:32.373415 1 certmanager.go:47] subject of yurthub server certificate
I0730 08:43:32.373450 1 certificate_store.go:130] Loading cert/key pair from "/var/lib/yurthub/pki/yurthub-server-current.pem".
I0730 08:43:32.373670 1 certificate_manager.go:282] Certificate rotation is enabled.
I0730 08:43:32.373764 1 certificate_manager.go:553] Certificate expiration is 2031-07-28 08:34:54 +0000 UTC, rotation deadline is 2029-05-09 13:09:09.305597776 +0000 UTC
I0730 08:43:32.373785 1 certificate_manager.go:288] Waiting 68164h25m36.9318152s for next certificate rotation
I0730 08:43:37.373915 1 start.go:134] 7. new cache manager with storage wrapper and serializer manager
I0730 08:43:37.374031 1 cache_agent.go:69] reset cache agents to [kubelet kube-proxy flanneld coredns yurttunnel-agent yurthub]
I0730 08:43:37.376433 1 start.go:142] 8. new gc manager for node n137, and gc frequency is a random time between 120 min and 360 min
I0730 08:43:37.376535 1 gc.go:97] list pod keys from storage, total: 6
I0730 08:43:37.377152 1 config.go:107] re-fix hub rest config host successfully with server https://apiserver.demo:6443
I0730 08:43:37.414575 1 gc.go:125] list all of pod that on the node: total: 5
I0730 08:43:37.414725 1 gc.go:143] gc pod kubelet/pods/kube-system/yurthub-n137 successfully
I0730 08:43:37.414751 1 start.go:151] 9. new filter chain for mutating response body
I0730 08:43:37.414795 1 filter.go:70] Filter servicetopology initialize successfully
I0730 08:43:37.414810 1 filter.go:70] Filter masterservice initialize successfully
I0730 08:43:37.414817 1 start.go:159] 10. new reverse proxy handler for remote servers
I0730 08:43:37.414835 1 start.go:168] 11. create dummy network interface yurthub-dummy0 and init iptables manager
I0730 08:43:37.414843 1 gc.go:74] start gc events after waiting 87.516µs from previous gc
I0730 08:43:37.415436 1 config.go:107] re-fix hub rest config host successfully with server https://apiserver.demo:6443
I0730 08:43:37.416600 1 gc.go:163] list kubelet event keys from storage, total: 12
I0730 08:43:37.527604 1 start.go:176] 12. new yurthub server and begin to serve, dummy proxy server: 169.254.2.1:10261, secure dummy proxy server: 169.254.2.1:10268
I0730 08:43:37.527649 1 start.go:185] 12. new yurthub server and begin to serve, proxy server: 127.0.0.1:10261, secure proxy server: 127.0.0.1:10268, hub server: 127.0.0.1:10267
I0730 08:43:37.527731 1 reflector.go:175] Starting reflector *v1.Service (24h0m0s) from pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:125
I0730 08:43:37.528077 1 reflector.go:175] Starting reflector *v1alpha1.NodePool (24h0m0s) from pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:125
E0730 08:43:37.528398 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.8/tools/cache/reflector.go:125: Failed to list *v1.Service: Get http://127.0.0.1:10261/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:10261: connect: connection refused
I0730 08:43:37.528437 1 util.go:232] start proxying: get /apis/apps.openyurt.io/v1alpha1/nodepools?limit=500&resourceVersion=0, in flight requests: 1
I0730 08:43:37.534356 1 util.go:215] yurthub list nodepools: /apis/apps.openyurt.io/v1alpha1/nodepools?limit=500&resourceVersion=0 with status code 200, spent 5.83471ms
I0730 08:43:37.534407 1 serializer.go:199] schema.GroupVersionResource{Group:"apps.openyurt.io", Version:"v1alpha1", Resource:"nodepools"} is not found in client-go runtime scheme
I0730 08:43:37.535397 1 util.go:232] start proxying: get /apis/apps.openyurt.io/v1alpha1/nodepools?allowWatchBookmarks=true&resourceVersion=2609783&timeout=7m7s&timeoutSeconds=427&watch=true, in flight requests: 1
I0730 08:43:37.736884 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/pods/yurthub-n137, in flight requests: 2
I0730 08:43:37.745580 1 util.go:215] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurthub-n137 with status code 404, spent 8.595304ms
I0730 08:43:37.823413 1 gc.go:160] no kube-proxy events in local storage, skip kube-proxy events gc
I0730 08:43:38.033906 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns-token-dkfcc&resourceVersion=2592959&timeout=6m41s&timeoutSeconds=401&watch=true, in flight requests: 2
I0730 08:43:38.034552 1 util.go:232] start proxying: get /apis/network.openyurt.io/v1alpha1/nodenetworkconfigurations?allowWatchBookmarks=true&resourceVersion=1255477&timeout=8m41s&timeoutSeconds=521&watch=true, in flight requests: 3
I0730 08:43:38.034962 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Ddefault-token-bp6tc&resourceVersion=2592959&timeout=6m28s&timeoutSeconds=388&watch=true, in flight requests: 4
I0730 08:43:38.035303 1 util.go:232] start proxying: get /api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dn137&resourceVersion=2795654&timeout=5m58s&timeoutSeconds=358&watch=true, in flight requests: 5
I0730 08:43:38.035677 1 util.go:232] start proxying: get /api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1241767&timeout=5m23s&timeoutSeconds=323&watch=true, in flight requests: 6
I0730 08:43:38.036278 1 util.go:232] start proxying: get /api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1245131&timeout=5m5s&timeoutSeconds=305&watch=true, in flight requests: 7
I0730 08:43:38.036439 1 util.go:232] start proxying: get /apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1241589&timeout=9m58s&timeoutSeconds=598&watch=true, in flight requests: 8
I0730 08:43:38.036686 1 util.go:232] start proxying: get /api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=1255718&timeout=9m17s&timeoutSeconds=557&watch=true, in flight requests: 9
I0730 08:43:38.037355 1 util.go:232] start proxying: get /apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1255717&timeout=7m3s&timeoutSeconds=423&watch=true, in flight requests: 10
I0730 08:43:38.037434 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2775350&timeout=6m35s&timeoutSeconds=395&watch=true, in flight requests: 11
I0730 08:43:38.037672 1 util.go:232] start proxying: get /api/v1/services?allowWatchBookmarks=true&resourceVersion=1245131&timeout=6m32s&timeoutSeconds=392&watch=true, in flight requests: 12
I0730 08:43:38.038517 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dalibaba-log-configuration&resourceVersion=2774753&timeout=8m5s&timeoutSeconds=485&watch=true, in flight requests: 13
I0730 08:43:38.039584 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns&resourceVersion=2775546&timeout=7m37s&timeoutSeconds=457&watch=true, in flight requests: 14
I0730 08:43:38.039642 1 serializer.go:199] schema.GroupVersionResource{Group:"network.openyurt.io", Version:"v1alpha1", Resource:"nodenetworkconfigurations"} is not found in client-go runtime scheme
I0730 08:43:38.040813 1 util.go:232] start proxying: get /apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=1241591&timeout=8m30s&timeoutSeconds=510&watch=true, in flight requests: 15
I0730 08:43:38.042044 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=2779485&timeout=9m5s&timeoutSeconds=545&watch=true, in flight requests: 16
I0730 08:43:38.042839 1 util.go:232] start proxying: get /api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dn137&resourceVersion=2801118&timeoutSeconds=353&watch=true, in flight requests: 17
I0730 08:43:38.043953 1 util.go:232] start proxying: get /api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dn137&resourceVersion=2795654&timeout=7m19s&timeoutSeconds=439&watch=true, in flight requests: 18
I0730 08:43:38.044901 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dyurttunnel-nodes&resourceVersion=2776439&timeout=9m54s&timeoutSeconds=594&watch=true, in flight requests: 19
I0730 08:43:38.046058 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy-token-7fmxs&resourceVersion=2592959&timeout=6m46s&timeoutSeconds=406&watch=true, in flight requests: 20
I0730 08:43:38.047085 1 util.go:232] start proxying: get /api/v1/services?allowWatchBookmarks=true&resourceVersion=1245131&timeoutSeconds=597&watch=true, in flight requests: 21
I0730 08:43:38.048211 1 cache_manager.go:323] pod(kubelet/pods/kube-system/yurthub-n137) is DELETED
I0730 08:43:38.048593 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns-cloud&resourceVersion=2772025&timeout=6m17s&timeoutSeconds=377&watch=true, in flight requests: 22
I0730 08:43:38.049533 1 util.go:232] start proxying: delete /api/v1/namespaces/kube-system/pods/yurthub-n137, in flight requests: 23
I0730 08:43:38.053586 1 cache_manager.go:335] unable to understand watch event &v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"too old resource version: 2772025 (2773736)", Reason:"Expired", Details:(*v1.StatusDetails)(nil), Code:410}
I0730 08:43:38.053712 1 util.go:215] kubelet watch configmaps: /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns-cloud&resourceVersion=2772025&timeout=6m17s&timeoutSeconds=377&watch=true with status code 200, spent 5.052794ms
I0730 08:43:38.053800 1 cache_manager.go:277] kubelet watch configmaps: /api/v1/namespaces/kube-system/configmaps get 0 objects(add:0/update:0/del:0)
I0730 08:43:38.057749 1 util.go:215] kubelet delete pods: /api/v1/namespaces/kube-system/pods/yurthub-n137 with status code 404, spent 8.172162ms
I0730 08:43:39.054371 1 util.go:232] start proxying: get /api/v1/services?limit=500&resourceVersion=0, in flight requests: 22
I0730 08:43:39.060115 1 util.go:215] yurthub list services: /api/v1/services?limit=500&resourceVersion=0 with status code 200, spent 5.63453ms
I0730 08:43:39.062103 1 util.go:232] start proxying: get /api/v1/services?allowWatchBookmarks=true&resourceVersion=1245131&timeout=9m55s&timeoutSeconds=595&watch=true, in flight requests: 22
I0730 08:43:39.117605 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns-cloud&resourceVersion=2772025, in flight requests: 23
I0730 08:43:39.123163 1 util.go:215] kubelet list configmaps: /api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns-cloud&resourceVersion=2772025 with status code 200, spent 5.492708ms
I0730 08:43:39.123465 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dcoredns-cloud&resourceVersion=2801280&timeout=8m48s&timeoutSeconds=528&watch=true, in flight requests: 23
I0730 08:43:39.845571 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dflannel-token-xgm5l&resourceVersion=2592959, in flight requests: 24
I0730 08:43:39.851172 1 util.go:215] kubelet list secrets: /api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dflannel-token-xgm5l&resourceVersion=2592959 with status code 200, spent 5.487044ms
I0730 08:43:39.851485 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dflannel-token-xgm5l&resourceVersion=2788193&timeout=9m8s&timeoutSeconds=548&watch=true, in flight requests: 24
I0730 08:43:42.929686 1 util.go:232] start proxying: get /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:43:42.930051 1 util.go:215] kubelet get leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 269.09µs
I0730 08:43:42.930471 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:43:42.930543 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 35.442µs
I0730 08:43:45.677349 1 util.go:232] start proxying: patch /api/v1/namespaces/kube-system/events/yurthub-n137.169684620d001290, in flight requests: 25
I0730 08:43:45.689906 1 util.go:215] kubelet patch events: /api/v1/namespaces/kube-system/events/yurthub-n137.169684620d001290 with status code 200, spent 12.496911ms
I0730 08:43:45.690403 1 util.go:232] start proxying: patch /api/v1/namespaces/kube-system/events/yurthub-n137.169684620d02d80e, in flight requests: 25
I0730 08:43:45.698954 1 util.go:215] kubelet patch events: /api/v1/namespaces/kube-system/events/yurthub-n137.169684620d02d80e with status code 404, spent 8.50604ms
I0730 08:43:45.699268 1 util.go:232] start proxying: post /api/v1/namespaces/kube-system/events, in flight requests: 25
I0730 08:43:45.708393 1 util.go:215] kubelet create events: /api/v1/namespaces/kube-system/events with status code 201, spent 9.090475ms
I0730 08:43:45.708766 1 util.go:232] start proxying: patch /api/v1/namespaces/kube-system/events/yurthub-n137.169684620d001290, in flight requests: 25
I0730 08:43:45.720865 1 util.go:215] kubelet patch events: /api/v1/namespaces/kube-system/events/yurthub-n137.169684620d001290 with status code 200, spent 12.060213ms
I0730 08:43:45.721262 1 util.go:232] start proxying: patch /api/v1/namespaces/kube-system/events/yurthub-n137.169684621f684cfd, in flight requests: 25
I0730 08:43:45.732910 1 util.go:215] kubelet patch events: /api/v1/namespaces/kube-system/events/yurthub-n137.169684621f684cfd with status code 200, spent 11.618758ms
I0730 08:43:45.733300 1 util.go:232] start proxying: patch /api/v1/namespaces/kube-system/events/yurthub-n137.16968462278e1ffe, in flight requests: 25
I0730 08:43:45.745434 1 util.go:215] kubelet patch events: /api/v1/namespaces/kube-system/events/yurthub-n137.16968462278e1ffe with status code 200, spent 12.105218ms
I0730 08:43:52.931163 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:43:52.931281 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 61.971µs
I0730 08:44:02.931939 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:44:02.932127 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 131.858µs
I0730 08:44:12.932880 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:44:12.933005 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 66.911µs
I0730 08:44:22.933733 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:44:22.933860 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 59.063µs
I0730 08:44:32.934554 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:44:32.934661 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 58.564µs
I0730 08:44:42.935337 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:44:42.935468 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 65.075µs
I0730 08:44:52.936304 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:44:52.936421 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 59.957µs
I0730 08:45:01.736969 1 util.go:232] start proxying: post /api/v1/namespaces/kube-system/pods, in flight requests: 25
I0730 08:45:01.755368 1 util.go:215] kubelet create pods: /api/v1/namespaces/kube-system/pods with status code 201, spent 18.320342ms
I0730 08:45:01.756958 1 storage.go:464] key(kubelet/pods/kube-system/yurthub-n137) storage is pending, just skip it
I0730 08:45:01.756978 1 cache_manager.go:468] skip to cache object because key(kubelet/pods/kube-system/yurthub-n137) is under processing
I0730 08:45:01.758803 1 cache_manager.go:323] pod(kubelet/pods/kube-system/yurthub-n137) is ADDED
I0730 08:45:02.937204 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:45:02.937322 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 65.339µs
I0730 08:45:06.436317 1 util.go:232] start proxying: patch /api/v1/nodes/n137/status?timeout=10s, in flight requests: 25
I0730 08:45:06.451718 1 util.go:215] kubelet patch nodes: /api/v1/nodes/n137/status?timeout=10s with status code 200, spent 15.338498ms
I0730 08:45:06.453041 1 storage.go:464] key(kubelet/nodes/n137) storage is pending, just skip it
I0730 08:45:06.453065 1 cache_manager.go:468] skip to cache object because key(kubelet/nodes/n137) is under processing
I0730 08:45:07.736729 1 util.go:232] start proxying: get /api/v1/namespaces/kube-system/pods/yurthub-n137, in flight requests: 25
I0730 08:45:07.745743 1 util.go:215] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurthub-n137 with status code 200, spent 8.95224ms
I0730 08:45:07.747148 1 util.go:232] start proxying: patch /api/v1/namespaces/kube-system/pods/yurthub-n137/status, in flight requests: 25
I0730 08:45:07.758577 1 util.go:215] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurthub-n137/status with status code 200, spent 11.381656ms
I0730 08:45:07.758844 1 storage.go:464] key(kubelet/pods/kube-system/yurthub-n137) storage is pending, just skip it
I0730 08:45:07.758974 1 storage.go:464] key(kubelet/pods/kube-system/yurthub-n137) storage is pending, just skip it
I0730 08:45:07.758990 1 cache_manager.go:468] skip to cache object because key(kubelet/pods/kube-system/yurthub-n137) is under processing
I0730 08:45:07.760464 1 cache_manager.go:323] pod(kubelet/pods/kube-system/yurthub-n137) is MODIFIED
I0730 08:45:12.938152 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:45:12.938268 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 63.741µs
I0730 08:45:22.938995 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:45:22.939133 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 74.689µs
I0730 08:45:32.939825 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:45:32.939946 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 64.773µs
I0730 08:45:42.940814 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:45:42.940955 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 85.836µs
I0730 08:45:52.941728 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:45:52.941852 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 66.343µs
I0730 08:46:02.942653 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:46:02.942799 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 88.757µs
I0730 08:46:12.943461 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:46:12.943598 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 79.985µs
I0730 08:46:22.944348 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:46:22.944471 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 66.563µs
I0730 08:46:32.945199 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:46:32.945319 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 66.39µs
I0730 08:46:42.946026 1 util.go:232] start proxying: put /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s, in flight requests: 25
I0730 08:46:42.946183 1 util.go:215] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/n137?timeout=10s with status code 200, spent 70.764µs
I0730 08:46:45.842612 1 util.go:232] start proxying: post /apis/authorization.k8s.io/v1/subjectaccessreviews, in flight requests: 25
I0730 08:46:45.848483 1 util.go:215] kubelet create subjectaccessreviews: /apis/authorization.k8s.io/v1/subjectaccessreviews with status code 201, spent 5.815005ms
Force-pushed from 6b4da91 to 3e8d68e.
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: qclc, rambohe-ch. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #325
Special notes for your reviewer:
/assign @rambohe-ch
Does this PR introduce a user-facing change?
Other Note