
s3Upload() doesn't work if includePathPattern matches multiple files #83

Open
swisspol opened this issue May 2, 2018 · 25 comments

@swisspol
swisspol commented May 2, 2018

Version 1.26

Assuming we have the following content in the build directory:

Houseparty-arm64-v8a.apk
Houseparty-armeabi-v7a.apk
Houseparty-x86.apk
mapping.txt

And we want to upload it to S3:

This works:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-arm64-v8a.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-armeabi-v7a.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-x86.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')

This doesn't work and only uploads mapping.txt:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')

This doesn't work either and doesn't upload anything:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*', workingDir: 'build')
@hoegertn
Contributor

Can you try again with 1.27?

@llater

llater commented Jul 12, 2018

Version 1.27

I have the build directory:

Houseparty-arm64-v8a.apk
Houseparty-armeabi-v7a.apk
Houseparty-x86.apk
mapping.txt

I want to upload it to S3:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')

It only uploads mapping.txt. This is still unresolved in 1.27.

@ryanneufeld

This issue is still unresolved in 1.31

@hoegertn
Contributor

Can you please describe your setup? I cannot reproduce the problem.

@ryanneufeld

ryanneufeld commented Sep 13, 2018

My pipeline looks similar to this:

dir('build') {
    withEnv(['GIT_SSH=run_ssh.sh']) {
        sh """
            ./make-package
            # This produces ${BUILD_PARENT}.tar.gz
            # This produces ${BUILD_PARENT}.tar.gz.sha1
        """
    }
}
withAWS(credentials: 'dash-build-s3upload', region: 'us-west-1') {
    s3Upload bucket: 's3bucketname', includePathPattern: "*tar*", workingDir: 'build'
}

I can confirm that the tar files are in the build directory.

And the following is what I'm actually using:

s3Upload bucket: 's3bucketname', file: "${BUILD_PARENT}.tar.gz", workingDir: 'build'
s3Upload bucket: 's3bucketname', file: "${BUILD_PARENT}.tar.gz.sha1", workingDir: 'build'

@CroModder

It seems this is a known bug, but it is still waiting to be addressed:
https://issues.jenkins-ci.org/browse/JENKINS-47046

One workaround would be to use the pipeline findFiles step:
https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#findfiles-find-files-in-the-workspace

@hoegertn
Contributor

The problem is that I cannot reproduce it on my test setup to debug the root cause.

@CroModder

The problem is that I cannot reproduce it on my test setup to debug the root cause.

Is there an online playground for Jenkins Pipeline, or some other way to share the whole build job? Because the setup that is failing for me is literally the official example:
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*', workingDir:'dist', excludePathPattern:'**/*.svg')

@kraenhansen

I am also seeing this error :/

@eplodn

eplodn commented Jan 3, 2019

Same here.

This works:

pipeline{
    agent { node { label 'jenkins-host' } }
    stages {
        stage('test') {
            steps {
                script {
                    sh "rm -rf txt_dir || true"
                    sh "echo test1 >> test1.txt"
//                    sh "echo test2 >> test2.txt"
                    sh "mkdir -p txt_dir"
                    sh "mv *txt txt_dir" 
                    archiveArtifacts allowEmptyArchive: true,
                        artifacts: "**/*txt",
                        caseSensitive: false,
                        defaultExcludes: false,
                        onlyIfSuccessful: false
                
                    withAWS(endpointUrl:'http://100.64.0.165:9000',  // local minio.io
                            credentials:'128e57fa-140a-4463-ad37-b3821371f735') {
                        s3Upload bucket:'jenkins', path:"build-${env.BUILD_NUMBER}/", 
                                includePathPattern:'**/*txt', workingDir: "${env.WORKSPACE}"
                    }
                }
            }
        }
    }
}

Uncomment the sh "echo test2 >> test2.txt" line and it no longer works.
It also doesn't report a failure, just "Upload complete".

Is there something I can do on my end to allow debugging/log files/etc.?

@brockers

Having the same problem. I'm trying to upload the contents of an entire folder to the root of the S3 bucket, where the file list is something like:

ls -l assets/marketing/
-rw-rw-r--. 1 jenkins jenkins  85598 Jan 11 16:52 ai_logo.png
-rw-rw-r--. 1 jenkins jenkins   1559 Jan 11 16:52 favicon-16x16.png
-rw-rw-r--. 1 jenkins jenkins   2366 Jan 11 16:52 favicon-32x32.png
-rw-rw-r--. 1 jenkins jenkins   1150 Jan 11 16:52 favicon.ico
-rw-rw-r--. 1 jenkins jenkins 180092 Jan 11 16:52 header.jpg
-rw-rw-r--. 1 jenkins jenkins   3635 Jan 15 13:19 index.html
-rw-rw-r--. 1 jenkins jenkins  15173 Jan 11 16:52 logo.png
-rw-rw-r--. 1 jenkins jenkins    268 Jan 15 10:48 README.md
-rw-rw-r--. 1 jenkins jenkins    487 Jan 11 17:35 ribbon.css
-rw-rw-r--. 1 jenkins jenkins   1825 Jan 11 16:52 style.css

The following will fail to upload anything:

stage('deploy') {
	when {
		branch 'master'
	}
	steps {
		withAWS(endpointUrl:'https://s3.amazonaws.com', credentials:'aws_cred_id'){
			s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*', workingDir: 'assets/website/', acl:'PublicRead')
		}
	}
}

And this version will upload only one of each file type:

stage('deploy') {
	when {
		branch 'master'
	}
	steps {
		withAWS(endpointUrl:'https://s3.amazonaws.com', credentials:'aws_cred_id'){
			s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.css', workingDir: 'assets/website/', acl:'PublicRead')
			s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.png', workingDir: 'assets/website/', acl:'PublicRead')
			s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.jpg', workingDir: 'assets/website/', acl:'PublicRead')
			s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.ico', workingDir: 'assets/website/', acl:'PublicRead')
		}
	}
}

@brockers

brockers commented Jan 18, 2019

The only useful workaround while still using withAWS and s3Upload is currently to use a findFiles glob and loop through the resulting list. It works fine and will make it easy to convert a Jenkinsfile back once s3Upload gets fixed. Here is an example for anyone else:

stage('deploy') {
	when {
		branch 'master'
	}
	steps {
		script {
			FILES = findFiles(glob: 'assets/website/**')
			withAWS(endpointUrl:'https://s3.amazonaws.com', credentials:'aws_cred_id'){
				FILES.each{ item -> 
					s3Upload(bucket: 'staticwebsite-bucket', acl: 'PublicRead', path: '', file: "${item.path}")
				}
			}
		}
	}
}

@weidonglian

The only useful workaround while still using withAWS and s3Upload is currently to use a findFiles glob and loop through the resulting list. […]

It does not keep the relative path; it uploads the files from every subfolder into the root of the bucket.
Is there an easy way to get the relative path?
I really hope this ticket will be fixed soon.

@brockers

Hey @weidonglian, I haven't tried it, but findFiles also exposes

${item.directory}

as a value. I would check that out (you may have to remove $PWD) and then put the value in for path:

I know it is an awful workaround, but it is the only thing I've been able to make work so far.
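
A minimal sketch of one way to keep the relative layout with that findFiles workaround (the glob, bucket, and credentials are just the ones from the examples above, and the prefix stripping is an illustration, not something the plugin provides):

script {
    def files = findFiles(glob: 'assets/website/**').findAll { !it.directory }
    withAWS(endpointUrl: 'https://s3.amazonaws.com', credentials: 'aws_cred_id') {
        files.each { item ->
            // item.path is relative to the workspace, e.g. assets/website/css/style.css;
            // strip the leading folder so the S3 key mirrors the directory layout.
            def key = item.path.replaceFirst('^assets/website/', '')
            s3Upload(bucket: 'staticwebsite-bucket', acl: 'PublicRead', path: key, file: item.path)
        }
    }
}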

@ryan-summers

ryan-summers commented Feb 21, 2019

I believe I may be able to clarify the issue a bit here. The problem appears to arise on Windows agents, but not on *nix agents.

Example pipeline

pipeline {
    agent { label 'master' }

    stages {
        stage('Generic files')
        {
            steps {
                dir('test') {
                    writeFile file: 'test.csv', text: 'fake csv file for testing'
                    writeFile file: 'test.log', text: 'fake log file for testing'

                    dir ('results') {
                        writeFile file: 'test.csv', text: 'fake csv file within results directory'
                        writeFile file: 'test.log', text: 'fake log file within results directory'
                    }
                }
            }
        }
    }

    post {
        always {
            withAWS(credentials: 'MY_CREDENTIALS', region: 'MY_REGION') {
                s3Upload(bucket: "test-bucket", includePathPattern: "test/results/*", path: "test-dir/")
                s3Upload(bucket: "test-bucket", includePathPattern: "test/results/test*", path: "test-dir/")
                s3Upload(bucket: "test-bucket", includePathPattern: "test/test*", path: "test-dir/")
            }
        }
    }
}

When running on a *nix agent, the commands all properly upload files.

When running on a Windows agent (e.g. changing the master agent to a Windows-specific agent), none of the s3Upload commands will upload a file.

@hoegertn
Contributor

Can you try what happens if you use \ as the path separator?
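
For example, something along these lines in the pipeline above (the doubled backslashes are only needed to escape the character inside a Groovy string):

s3Upload(bucket: "test-bucket", includePathPattern: "test\\results\\*", path: "test-dir/")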

@swisspol
Author

swisspol commented Feb 21, 2019 via email

@ryan-summers

ryan-summers commented Feb 21, 2019

Using \\ as the path separator in the pipeline does not make the problem go away on a Windows agent. I can reliably upload using the above pipeline on Linux, but it fails every time from a Windows agent.

A potential workaround for this issue (if both Windows-like and Unix-like nodes are available) is to stash the files on the Windows node and unstash them on a *nix node before the s3Upload command is executed.
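
A minimal sketch of that stash/unstash approach, assuming one agent labeled 'windows' and one labeled 'linux' (the labels, bucket, and credentials are placeholders taken from the earlier example):

node('windows') {
    // ... build steps that produce the files under test/results/ ...
    stash name: 'upload-artifacts', includes: 'test/results/**'
}

node('linux') {
    unstash 'upload-artifacts'
    withAWS(credentials: 'MY_CREDENTIALS', region: 'MY_REGION') {
        // The glob is now evaluated on the *nix agent, where it matches multiple files reliably.
        s3Upload(bucket: 'test-bucket', includePathPattern: 'test/results/*', path: 'test-dir/')
    }
}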

@ryanneufeld

@ryan-summers it totally happens on *nix agents as well. In fact, I didn't even know you could use a Windows agent.

@ryan-summers

Perhaps the Windows vs. *nix agent behavior is a different issue then.

@smastrorocco

Any progress on this? I'm running into the same issue on *nix agents as well.

@tommymccallig

This seems to be an issue only on slaves. If you run the exact same pipeline on the master, it appears to work. See: https://issues.jenkins-ci.org/browse/JENKINS-44000

@everactivetim

I was having this exact same problem; building the plugin from source resolved my issue (Docker slaves).

@techarchconsulting

This works fine on a slave too.

Example Pipeline stage

stage('Deployment') {
    steps {
        script {
            def files = findFiles(glob: 'build/*.*')
            withAWS(region: 'us-east-1', credentials: 'AutoDeployer') {
                files.each { s3Upload(file: "${it}", bucket: 'mymnr.dev', path: '', pathStyleAccessEnabled: true, payloadSigningEnabled: true, acl: 'PublicRead') }
            }

            files = findFiles(glob: 'build/static/css/*.*')
            withAWS(region: 'us-east-1', credentials: 'AutoDeployer') {
                files.each { s3Upload(file: "${it}", bucket: 'mymnr.dev', path: 'static/css/', pathStyleAccessEnabled: true, payloadSigningEnabled: true, acl: 'PublicRead') }
            }

            files = findFiles(glob: 'build/static/js/**')
            withAWS(region: 'us-east-1', credentials: 'AutoDeployer') {
                files.each { s3Upload(file: "${it}", bucket: 'mymnr.dev', path: 'static/js/', pathStyleAccessEnabled: true, payloadSigningEnabled: true, acl: 'PublicRead') }
            }
        }
    }
}

@rudionrails

rudionrails commented Jun 3, 2019

Hi, is there any progress on this, or has it been resolved? I am on v1.36, using it to separate regular vs. gz files. Perhaps I am also using it the wrong way. Help appreciated.

    stage('Deploy') {
      when {
        branch "master"
      }

      steps {
        s3Upload(
          bucket: "${S3_BUCKET}",
          path: "${S3_PATH}",
          workingDir: "dist",
          includePathPattern: "**/*",
          excludePathPattern: "**/*.gz",
          acl: "PublicRead"
        )

        s3Upload(
          bucket: "${S3_BUCKET}",
          path: "${S3_PATH}",
          workingDir: "dist",
          includePathPattern: "**/*.gz",
          contentEncoding: "gzip",
          acl: "PublicRead"
        )
      }
    }
