
smbplugin.exe on Windows nodes causing blue screens #261

Closed

WangMosquito opened this issue May 12, 2021 · 6 comments

Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)

Comments

@WangMosquito

What happened:
After running for one to two days, the Windows nodes hit a blue screen.

What you expected to happen:
SMB shares can be mounted from Windows nodes without crashing the node.

How to reproduce it:
We use an on-premises Kubernetes cluster with both Windows and Linux nodes. We installed csi-driver-smb v0.6.0 via kubectl in the kube-system namespace, with Windows support enabled.
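For reference, the usual remote install of this driver looks roughly like the following; the exact command used for this cluster is not shown in the report, and whether the v0.6.0 install script already accepted the windows argument is an assumption:

# hedged sketch: install csi-driver-smb v0.6.0 with the Windows daemonset enabled
curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/v0.6.0/deploy/install-driver.sh | bash -s v0.6.0 windows --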

In addition, we have created a StorageClass and PV/PVCs according to the following templates:

StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mount-readwritemany
mountOptions:
  - dir_mode=0777
  - file_mode=0777
parameters:
  createSubDir: 'false'
  csi.storage.k8s.io/node-stage-secret-name: mount-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  source: \\10.33.3.1\share
provisioner: smb.csi.k8s.io
reclaimPolicy: Retain
volumeBindingMode: Immediate
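The StorageClass above points the node-stage secret at mount-secret in the default namespace; the driver reads the SMB credentials from the username and password keys of that secret. A minimal sketch of creating it (the credential values are placeholders, not taken from the report):

# create the node-stage secret referenced by the StorageClass (placeholder credentials)
kubectl create secret generic mount-secret -n default --from-literal=username=EXAMPLE_USER --from-literal=password=EXAMPLE_PASSWORD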

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iis-log-web-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: mount-readwritemany

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
      name: web
    spec:
      containers:
        - image: 'webserver:v1'
          name: web
          volumeMounts:
            - mountPath: 'C:\inetpub\logs\LogFiles\W3SVC1'
              name: iis-log-web
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
        - name: iis-log-web
          persistentVolumeClaim:
            claimName: iis-log-web-pvc
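Before the crash occurs, the claim and the pods can be sanity-checked with standard kubectl commands (these are not part of the original report; the names match the manifests above):

# confirm the claim bound and both replicas scheduled onto Windows nodes
kubectl get pvc iis-log-web-pvc -n default
kubectl get pods -n default -l app=web -o wide
# the SMB mount events for each pod appear in the describe output
kubectl describe pod -n default -l app=web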

Anything else we need to know?:
Windows Event Log: (screenshots omitted)
WinDbg output: (screenshots omitted)

Environment:

  • CSI Proxy version: v0.2.2
  • CSI Driver version: v0.6.0
  • Kubernetes version: v1.19.10
  • OS: Windows Server 2019 Datacenter, Version 1809, OS Build 17763.1817
@andyzhangx
Member

Have you installed csi-proxy? I am surprised that this driver would cause a blue screen; it only performs mount and unmount operations through csi-proxy. Below are the Windows node versions we use in e2e tests, and they work well:

NAME                    STATUS   ROLES    AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
2046k8s000              Ready    agent    9m31s   v1.18.17   10.240.0.35    <none>        Windows Server 2019 Datacenter   10.0.17763.1397    docker://19.3.14
2046k8s001              Ready    agent    9m32s   v1.18.17   10.240.0.4     <none>        Windows Server 2019 Datacenter   10.0.17763.1397    docker://19.3.14
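For reference, a few checks that are commonly used to confirm the Windows node plugin and csi-proxy are healthy; the DaemonSet label (app=csi-smb-node-win) and container name (smb) are taken from the driver's default kube-system manifests and may differ in a customized install:

# driver node-plugin pods running on the Windows nodes
kubectl get pods -n kube-system -l app=csi-smb-node-win -o wide
kubectl logs -n kube-system -l app=csi-smb-node-win -c smb --tail=100
# on the Windows node itself (PowerShell), using the service name and log path from the reply below
Get-Service csiproxy
Get-Content C:\etc\kubernetes\logs\csi-proxy.log -Tail 50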

@WangMosquito
Author

WangMosquito commented May 12, 2021

Yes, I have installed csi-proxy. The node does not crash immediately; it runs fine for one to two days before the blue screen occurs.

csi-proxy is installed to C:\etc\kubernetes\node\bin:

Invoke-WebRequest https://kubernetesartifacts.azureedge.net/csi-proxy/v0.2.2/binaries/csi-proxy.tar.gz -OutFile C:\etc\kubernetes\node\bin\csi-proxy.tar.gz; 

csi-proxy.exe is installed and runs either as a binary or as a Windows service on each Windows node:

sc.exe create csiproxy binPath= "C:\etc\kubernetes\node\bin\csi-proxy.exe -windows-service -log_file=C:\etc\kubernetes\logs\csi-proxy.log -logtostderr=false"
sc.exe failure csiproxy reset= 0 actions= restart/10000
sc.exe start csiproxy
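The snippet above downloads csi-proxy.tar.gz but does not show extracting it, so presumably the archive was unpacked before the service was registered. A minimal sketch of that step, assuming the same paths as above (the archive layout, i.e. whether csi-proxy.exe ends up at the top level or under a bin\ subfolder, can vary by release, so the binPath given to sc.exe has to match wherever the binary actually lands):

# Windows Server 2019 ships tar.exe; unpack next to the download
tar -xzf C:\etc\kubernetes\node\bin\csi-proxy.tar.gz -C C:\etc\kubernetes\node\bin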

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 10, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 9, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to the /close command in the triage bot's comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
