Compress and encrypt files and streams before saving them to the final Flysystem destination.
All the magic here is `php_user_filter`: everything happens on streams, on the fly, chunk by chunk, with no impact on local disk and almost zero RAM overhead.
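As a plain-PHP illustration of the mechanism (not this library's code), a stream filter transforms data chunk by chunk while the stream is being copied:

```php
// Plain PHP: the zlib.deflate filter compresses each chunk as it is read,
// so no uncompressed temporary file is ever written
$source = fopen('my-huge-file.txt', 'rb');
stream_filter_append($source, 'zlib.deflate', STREAM_FILTER_READ, ['level' => 9]);

$target = fopen('my-huge-file.deflated', 'wb');
stream_copy_to_stream($source, $target); // chunks flow through the filter on the fly

fclose($source);
fclose($target);
```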
Use Composer to install the packages you need:

| Package name | Stream filter type | Adapter class |
|---|---|---|
| `slam/flysystem-v1encrypt-proxy` | XChaCha20-Poly1305 encryption | `V1EncryptProxyAdapter` |
| `slam/flysystem-gzip-proxy` | Gzip compression | `GzipProxyAdapter` |
| `slam/flysystem-zip-proxy` | Zip compression | `ZipProxyAdapter` |
```php
use SlamFlysystem\Gzip\GzipProxyAdapter;
use SlamFlysystem\V1Encrypt\V1EncryptProxyAdapter;
use League\Flysystem\Local\LocalFilesystemAdapter;

// Create a strong key and save it somewhere
$key = V1EncryptProxyAdapter::generateKey();

// Create the final FilesystemAdapter
$remoteAdapter = new LocalFilesystemAdapter(/* ... */);
$adapter = new GzipProxyAdapter(new V1EncryptProxyAdapter(
    $remoteAdapter,
    $key
));

// The FilesystemOperator
$filesystem = new \League\Flysystem\Filesystem($adapter);

// Upload a file, with stream
$handle = fopen('my-huge-file.txt', 'r');
$filesystem->writeStream('data.txt', $handle);
fclose($handle);
// Remotely, a data.txt.gz.v1encrypted file has now been created

// Download a file, with stream
$handle = $filesystem->readStream('data.txt');
file_put_contents('my-huge-file.txt', $handle);
fclose($handle);
```
Both write and read operations leverage streams to keep memory usage low. A 10 GB `mysqldump` output can be streamed into a 1 GB `dump.sql.gz.v1encrypted` file with a 10 MB RAM footprint for the running PHP process, and no additional local disk space required.
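For example, a dump can be piped straight into the remote file (a minimal sketch reusing the `$filesystem` from above; the database name and `mysqldump` flags are illustrative):

```php
// Stream a mysqldump directly into a compressed, encrypted remote file,
// without ever writing an uncompressed copy to the local disk
$handle = popen('mysqldump --single-transaction my_database', 'r');
$filesystem->writeStream('dump.sql', $handle);
pclose($handle);
// Remotely, a dump.sql.gz.v1encrypted file has been created
```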
In order to upload a file to AWS S3, the payload's content length and hash must be specified in the request headers before the payload itself is sent. This requirement conflicts with the dynamic nature of the streams that this library edits on the fly.
You can solve this issue by buffering the payload in a local file before the upload; for this purpose you can use `slam/flysystem-local-cache-proxy`, which also acts as a local cache to speed up subsequent reads:
```php
use SlamFlysystem\Gzip\GzipProxyAdapter;
use SlamFlysystem\V1Encrypt\V1EncryptProxyAdapter;
use SlamFlysystem\LocalCache\LocalCacheProxyAdapter;
use League\Flysystem\AsyncAwsS3\AsyncAwsS3Adapter;

$adapter = new GzipProxyAdapter(            // 1st: compress data
    new V1EncryptProxyAdapter(              // 2nd: encrypt data
        new LocalCacheProxyAdapter(         // 3rd: buffer resulting data to a local file
            new AsyncAwsS3Adapter(/* ... */), // 4th: upload the data
            sys_get_temp_dir().'/flysystem'
        ),
        $key
    )
);
```
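Usage then stays the same (a sketch, reusing the `$key` from the first example):

```php
$filesystem = new \League\Flysystem\Filesystem($adapter);

// The payload is compressed, encrypted, buffered to a local file, then uploaded
$handle = fopen('my-huge-file.txt', 'r');
$filesystem->writeStream('data.txt', $handle);
fclose($handle);
```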
Security is a moving target, and we need to leave room for future, more secure protocols.
The lack of algorithm names and configuration options is intentional: cipher agility is bad.
When you first adopt an encryption stream, use only the latest one available. If you are already using an encryption stream and a new version is released, you are invited to re-encrypt all your assets with the new version.
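Re-encryption can be done with plain Flysystem calls. In this sketch, `V2EncryptProxyAdapter` is a hypothetical future adapter (it does not exist today), and the listing is assumed to expose the logical paths; the copy loop itself uses only the standard Flysystem API:

```php
// Old stack, encrypted with the current adapter and key
$oldFilesystem = new \League\Flysystem\Filesystem(
    new V1EncryptProxyAdapter(new LocalFilesystemAdapter('/storage/old'), $oldKey)
);
// New stack, using a hypothetical future adapter and a fresh key
$newFilesystem = new \League\Flysystem\Filesystem(
    new V2EncryptProxyAdapter(new LocalFilesystemAdapter('/storage/new'), $newKey)
);

// Copy every file: decrypted on read, re-encrypted on write, stream by stream
foreach ($oldFilesystem->listContents('', true) as $item) {
    if (!$item->isFile()) {
        continue;
    }
    $handle = $oldFilesystem->readStream($item->path());
    $newFilesystem->writeStream($item->path(), $handle);
    fclose($handle);
}
```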
This one combines Flysystem with a `php_user_filter`, which allows compression without knowing either the source or the destination of the stream.
- PHP's `ZipArchive` doesn't support streams for writes, and for reads it only provides a stream after you have already saved a local copy of the archive.
- Flysystem's `ZipArchive` adapter acts as one big final bucket; here instead we transparently zip content from the source to the final bucket, per file.
- @maennchen's `ZipStream-PHP`, which is awesome, can stream to Flysystem only after the whole archive has been written somewhere, see https://github.com/maennchen/ZipStream-PHP/wiki/FlySystem-example; you can stream the zip to S3, but not with Flysystem: https://github.com/maennchen/ZipStream-PHP/wiki/Stream-S3
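A minimal usage sketch, assuming `ZipProxyAdapter` lives under a `SlamFlysystem\Zip` namespace and wraps an inner adapter the same way the Gzip proxy does (both are assumptions; check the package's own documentation for the exact constructor and resulting filename):

```php
use League\Flysystem\Local\LocalFilesystemAdapter;
use SlamFlysystem\Zip\ZipProxyAdapter;

// Each written file is zipped on the fly, per file, on its way to the inner adapter
$adapter = new ZipProxyAdapter(new LocalFilesystemAdapter(/* ... */));
$filesystem = new \League\Flysystem\Filesystem($adapter);

$handle = fopen('my-huge-file.txt', 'r');
$filesystem->writeStream('data.txt', $handle);
fclose($handle);
```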
The Zip proxy wouldn't have been possible without copying what we needed from https://github.com/maennchen/ZipStream-PHP, so I strongly recommend supporting that package financially if you like their package, or ours, for Zip compression.
Some Flysystem adapters, like the Local one, try to guess the file's mime type from its content or extension: in such cases the guess will fail due to the custom extension and the encrypted content.
Other adapters, like the AWS S3 one, let you specify it manually (for example with the `ContentType` key in the Config): it is a good idea to always inject it manually if you want the `Filesystem::mimeType($path)` call to be reliable.
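For instance (a sketch against the S3 stack above; the mime type value is illustrative):

```php
// Declare the original mime type explicitly, since it cannot be guessed
// from the .gz.v1encrypted suffix or the encrypted content
$handle = fopen('my-huge-file.txt', 'r');
$filesystem->writeStream('data.txt', $handle, [
    'ContentType' => 'text/plain',
]);
fclose($handle);

$filesystem->mimeType('data.txt'); // expected to return "text/plain"
```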
The file size returned by `Filesystem::fileSize($path)` relates to the compressed and encrypted file, not the original one.