This repository has been archived by the owner on Feb 12, 2021. It is now read-only.

Feature/travis ci #119

Open
wants to merge 15 commits into master
22 changes: 22 additions & 0 deletions .travis.yml
@@ -0,0 +1,22 @@
sudo: required
services:
- docker
language: node_js
node_js:
- "4.1"
- "4.0"
- "0.12"
- "0.11"
- "0.10"
- "0.8"
- "0.6"
- "iojs"
before_install:
- docker pull 66pix/s3ninja:latest
before_script:
- docker run -p "9444:9444" 66pix/s3ninja:latest &
- sleep 10
addons:
hosts:
- node-s3-test.localhost
- s3-bucket.localhost
16 changes: 11 additions & 5 deletions lib/index.js
@@ -64,6 +64,12 @@ function Client(options) {
}
}

Client.prototype.createBucket = function createBucket(bucket, cb) {
this.s3.createBucket({
Bucket: bucket
}, cb);
}

Contributor:

Should this new function also be documented in the readme?

Contributor Author:

Yep, will do that if and when we actually manage to get CI working with S3Ninja, which I am still not 100% convinced will be a thing.

Also I hacked this in and have no idea if it matches the expected format for a new API function.
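For reference, a minimal usage sketch of the new wrapper, assuming the usual createClient setup from the readme; the require path, bucket name, and logging here are illustrative only, not part of this PR:

var s3 = require('s3');

var client = s3.createClient({
  s3Options: {
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET
  }
});

// The wrapper just proxies to the AWS SDK's createBucket, so err and data
// come straight from the SDK callback.
client.createBucket('node-s3-test', function(err, data) {
  if (err) return console.error('createBucket failed:', err);
  console.log('bucket ready');
});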

Client.prototype.deleteObjects = function(s3Params) {
var self = this;
var ee = new EventEmitter();
@@ -102,7 +108,7 @@ Client.prototype.deleteObjects = function(s3Params) {
cb();
}
});
});
});

function tryDeletingObjects(cb) {
self.s3Pend.go(function(pendCb) {
@@ -466,7 +472,7 @@ Client.prototype.downloadFile = function(params) {
if (statusCode >= 300) {
handleError(new Error("http status code " + statusCode));
return;
}
}
if (headers['content-length'] == undefined) {
var outStream = fs.createWriteStream(localFile);
outStream.on('error', handleError);
@@ -476,14 +482,14 @@
downloader.progressTotal += chunk.length;
downloader.progressAmount += chunk.length;
downloader.emit('progress');
outStream.write(chunk);
outStream.write(chunk);
})

request.on('httpDone', function() {
request.on('httpDone', function() {
if (errorOccurred) return;
downloader.progressAmount += 1;
downloader.emit('progress');
outStream.end();
outStream.end();
cb();
})
} else {
4 changes: 2 additions & 2 deletions test/test.js
@@ -37,6 +37,7 @@ function createClient() {
accessKeyId: process.env.S3_KEY,
secretAccessKey: process.env.S3_SECRET,
endpoint: process.env.S3_ENDPOINT,
Contributor Author:

@pennycoder here is where the S3_ENDPOINT var is used.

Had to disable SSL for the mock. This should be converted to an env var as well, so the current S3 behaviour is preserved, as you mentioned.

sslEnabled: false
},
});
}
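A rough sketch of the env-var idea from the comment above (keeping SSL on by default so real S3 behaviour is unchanged); S3_SSL_ENABLED is a hypothetical variable name, not something this PR defines:

var s3 = require('s3');

// SSL stays on unless the variable is explicitly set to "false"
// (e.g. when pointing the tests at the s3ninja mock).
var client = s3.createClient({
  s3Options: {
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET,
    endpoint: process.env.S3_ENDPOINT,
    sslEnabled: process.env.S3_SSL_ENABLED !== 'false'
  }
});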
@@ -97,8 +98,7 @@ describe("s3", function () {
Prefix: remoteRoot,
Bucket: s3Bucket,
};
var deleter = client.deleteDir(s3Params);
deleter.on('end', function() {
client.createBucket(s3Bucket, function() {
Contributor:

It would be nice if the current S3 behavior could also be preserved. Perhaps make this option a separate environment variable, or add a variable that switches the tests between running locally and running against real S3, so the behavior changes depending on how it is set?

Contributor Author:

There is the S3_ENDPOINT env var that one can use to determine what ... endpoint to use.

I'm not sure of the spec for createBucket - does it fail catastrophically if one attempts to create a bucket that already exists?

done();
});
});
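On the already-exists question above: against real S3 the SDK normally surfaces that case as an error code ("BucketAlreadyOwnedByYou", or "BucketAlreadyExists" when another account owns the bucket) rather than anything catastrophic, though the mock may behave differently. A sketch of how the test setup could tolerate it; ensureBucket is a made-up helper, not part of the PR:

// Treat "bucket already there" as success so repeated test runs don't fail.
function ensureBucket(client, bucket, cb) {
  client.createBucket(bucket, function(err) {
    if (err && err.code !== 'BucketAlreadyOwnedByYou' &&
        err.code !== 'BucketAlreadyExists') {
      return cb(err);
    }
    cb();
  });
}

// e.g. in the test setup:
// ensureBucket(client, s3Bucket, done);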