Fix S3 file cache sharing temp folder across processes #1650
@@ -18,6 +18,7 @@
 # copyright holder

 require 'fileutils'
+require 'tmpdir'
 require 'cosmos'
 require 'cosmos/utilities/s3'
@@ -48,7 +49,7 @@ def initialize(s3_path, size = 0, priority = 0)

   def retrieve
     local_path = "#{S3FileCache.instance.cache_dir}/#{File.basename(@s3_path)}"
-    Cosmos::Logger.info "Retrieving #{@s3_path} from logs bucket"
+    Cosmos::Logger.debug "Retrieving #{@s3_path} from logs bucket"
     @rubys3_client.get_object(bucket: "logs", key: @s3_path, response_target: local_path)
     if File.exist?(local_path)
       @size = File.size(local_path)
@@ -57,6 +58,7 @@ def retrieve
   rescue => err
     @error = err
     Cosmos::Logger.error "Failed to retrieve #{@s3_path}\n#{err.formatted}"
+    raise err
   end
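The hunk above changes `retrieve` to record the error, log it, and then re-raise it so that direct callers also see the failure instead of getting a silent no-op. A minimal sketch of that rescue-log-reraise pattern (the class and method names here are illustrative stand-ins, not the real COSMOS API):

```ruby
# Sketch of the rescue-log-reraise pattern added in this hunk.
class Retriever
  attr_reader :error

  def retrieve
    raise IOError, 'download failed'  # stand-in for the real S3 get_object call
  rescue => err
    @error = err                      # remember the failure for later inspection
    puts "Failed to retrieve: #{err.message}"  # Cosmos::Logger.error in the real code
    raise err                         # re-raise so direct callers can react too
  end
end

r = Retriever.new
begin
  r.retrieve
rescue IOError
  # a direct caller now sees the same error
end
```

A caller that wants the old swallow-and-continue behavior wraps `retrieve` in its own `begin`/`rescue`, which is exactly what the background retrieval thread does later in this PR.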
  def reserve
@@ -165,11 +167,11 @@ def initialize(name = 'default', max_disk_usage = MAX_DISK_USAGE)
     end

     # Create local file cache location
-    @cache_dir = File.join(Dir.tmpdir, 'cosmos', 'file_cache', name)
+    @cache_dir = Dir.mktmpdir
     FileUtils.mkdir_p(@cache_dir)

-    # Clear out local file cache
-    FileUtils.rm_f Dir.glob("#{@cache_dir}/*")
+    at_exit do
+      FileUtils.remove_dir(@cache_dir, true)
+    end

     @cached_files = S3FileCollection.new
Reviewer: I assume the at_exit applies to the Thread you create below?

Author: at_exit applies to the entire Ruby process.
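The change above replaces the shared, name-based cache path with `Dir.mktmpdir`, which hands every process its own unique directory, and registers cleanup with `at_exit`, which Ruby runs when the whole process terminates (not when any single thread ends). A small standalone sketch of both behaviors:

```ruby
require 'fileutils'
require 'tmpdir'

# Each call to Dir.mktmpdir creates a fresh, uniquely named directory, so two
# processes (or two caches in one process) can no longer collide.
cache_dir = Dir.mktmpdir
other_dir = Dir.mktmpdir

# at_exit handlers run when the Ruby process exits, regardless of which thread
# registered them; the `true` argument forces removal even of a non-empty dir.
at_exit { FileUtils.remove_dir(cache_dir, true) }
at_exit { FileUtils.remove_dir(other_dir, true) }
```

Because cleanup is deferred to process exit, a crash that bypasses `at_exit` (e.g. SIGKILL) can still leave a stray temp directory behind; that is the usual trade-off with this pattern.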
@@ -178,7 +180,11 @@ def initialize(name = 'default', max_disk_usage = MAX_DISK_USAGE)
           file = @cached_files.get_next_to_retrieve
           # Cosmos::Logger.debug "Next file: #{file}"
           if file and (file.size + @cached_files.current_disk_usage()) <= @max_disk_usage
-            file.retrieve
+            begin
+              file.retrieve
+            rescue
+              # Will be automatically retried
+            end
           else
             sleep(1)
           end

Reviewer: What's the point of raising the error and then silently rescuing and retrying?

Author: The raise wasn't there before; I added it for cases where retrieve is called by itself.
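Because `retrieve` now re-raises, the background loop above wraps it in `begin`/`rescue` so a transient failure is swallowed and the file is simply tried again on a later pass. A self-contained sketch of that retry behavior (the lambda is a stand-in for `file.retrieve`, not the real method):

```ruby
attempts = 0
retrieved = false

# Stand-in for file.retrieve: fail twice with a transient error, then succeed.
retrieve = lambda do
  attempts += 1
  raise 'transient S3 error' if attempts < 3
  retrieved = true
end

until retrieved
  begin
    retrieve.call
  rescue
    # Will be automatically retried on the next loop iteration
  end
end

puts attempts  # 3
```

The bare `rescue` here (like the one in the diff) only catches `StandardError` descendants; fatal exceptions such as `SignalException` still propagate and terminate the loop.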
Reviewer: Is this what you expect is happening with the reducer error?

Author: Not sure what this comment means, but this error code is buggy and I need to fix it.