
b*1 to TSM converter #5226

Merged: 1 commit merged into master from b_converter on Dec 29, 2015

Conversation

@otoolep (Contributor) commented Dec 26, 2015

Started with #5159

This tool converts, on a database-by-database basis, b1 and bz1 shards to TSM format. The main components include:

  • bz1 reader
  • b1 reader
  • conversion logic, which drains each of these readers and writes one or more TSM files per shard.

Each reader exposes an iterator whose emitted chunks are suitable for feeding directly into a TSMWriter: each series key is the combination of measurement, tag set, and field. Series keys are emitted in alphabetical order, and the timestamps of all points are increasing, as TSMWriters require.
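
To make that contract concrete, here is a minimal sketch of the shape of things; every name below is a hypothetical illustration, not the actual influx_tsm code:

package conversion

// Value is one timestamped field value (a stand-in for the engine's
// value type; hypothetical, as are all names in this sketch).
type Value struct {
    Time  int64
    Value interface{}
}

// ShardReader is the iterator shape each of the b1/bz1 readers exposes:
// series keys arrive in alphabetical order, and each key's values
// arrive with increasing timestamps.
type ShardReader interface {
    Next() bool                                    // advance; false once drained
    Read() (key string, values []Value, err error) // current chunk
    Close() error
}

// TSMWriter is the subset of the TSM writer the converter needs.
type TSMWriter interface {
    Write(key string, values []Value) error
    WriteIndex() error
    Close() error
}

// Convert drains one reader straight into one TSM writer. Because keys
// are pre-sorted and timestamps are increasing, no buffering or
// re-sorting is needed here.
func Convert(r ShardReader, w TSMWriter) error {
    defer r.Close()
    for r.Next() {
        key, values, err := r.Read() // key is measurement + tag set + field
        if err != nil {
            return err
        }
        if err := w.Write(key, values); err != nil {
            return err
        }
    }
    if err := w.WriteIndex(); err != nil {
        return err
    }
    return w.Close()
}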

A database is backed up before any changes are made to it.
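
The backup itself is conceptually a recursive copy of the database directory, taken before any shard is converted. A sketch with a hypothetical helper (the tool's real backup logic may differ):

import (
    "io"
    "os"
    "path/filepath"
    "strings"
)

// backup recursively copies a database directory aside before any of
// its shards are touched.
func backup(dbDir, backupDir string) error {
    return filepath.Walk(dbDir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        dst := filepath.Join(backupDir, strings.TrimPrefix(path, dbDir))
        if info.IsDir() {
            return os.MkdirAll(dst, info.Mode())
        }
        src, err := os.Open(path)
        if err != nil {
            return err
        }
        defer src.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, src)
        return err
    })
}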

Much of this work involved copying existing b1 and bz1 code, so as to avoid conflicts with in-flight changes to that code (which will be going away soon). There is also significant duplication between the b1 and bz1 readers. This could be factored out if necessary, but simplicity and correctness seem key for this tool.

Remaining

  • Allow parallel conversion.
  • Large-scale testing.
  • Full instructions in README.md.
  • Testing notes.

Example output:

$ influx_tsm ~/.influxdb/data/

b1 and bz1 shard conversion.
-----------------------------------
Data directory is: /home/philip/.influxdb/data/
Databases specified: all
Parallel mode enabled: no
Database backups enabled: yes
1 shard(s) detected, 1 remain after filtering.

Database        Retention       Path                                            Engine  Size
_internal       monitor         /home/philip/.influxdb/data/_internal/monitor/1 bz1     262144

These shards will be converted. Proceed? y/N: y
Conversion starting....
Database _internal backed up.
Conversion of /home/philip/.influxdb/data/_internal/monitor/1 successful (23.453011ms)

@otoolep force-pushed the b_converter branch 7 times, most recently from 85b21eb to 1ffcbd3 (December 28, 2015 00:23)
@otoolep otoolep added this to the 0.10.0 milestone Dec 28, 2015
@otoolep otoolep self-assigned this Dec 28, 2015
@otoolep force-pushed the b_converter branch 6 times, most recently from de2965c to 7e6ea23 (December 29, 2015 18:47)
@otoolep (Contributor, Author) commented Dec 29, 2015

This utility is ready to go. There is a fair chunk of code here, but much of it is copy-n-paste.

@jwilder @pauldix @benbjohnson @dgnorton

@pauldix (Member) commented Dec 29, 2015

LGTM, let the testing begin!!

@otoolep (Contributor, Author) commented Dec 29, 2015

@jwilder also reviewed an early draft of main.go and gave it a +1.

@benbjohnson (Contributor) commented Dec 29, 2015

I briefly looked over it and it LGTM. However, I would suggest adding a lot of logging. If we get bug reports, they're going to be hard to track down without any logging.

@otoolep (Contributor, Author) commented Dec 29, 2015

@benbjohnson -- I will review the code for places I should add more output to help the user in the case of failure.

@otoolep (Contributor, Author) commented Dec 29, 2015

I ran influx_stress for 45 minutes. This wrote about 500 million points, across 100,000 series, into bz1 shards.

Conversion by influx_tsm is shown below, including processing times.

$  influx_tsm -parallel ~/.influxdb/data/

b1 and bz1 shard conversion.
-----------------------------------
Data directory is: /home/philip/.influxdb/data/
Databases specified: all
Parallel mode enabled: yes
Database backups enabled: yes
3 shard(s) detected, 3 non-TSM shards detected.

Database        Retention       Path                                            Engine  Size
_internal       monitor         /home/philip/.influxdb/data/_internal/monitor/2 bz1     1048576
stress          default         /home/philip/.influxdb/data/stress/default/3    bz1     2097152
stress          default         /home/philip/.influxdb/data/stress/default/1    bz1     13958643712

These shards will be converted. Proceed? y/N: y
Conversion starting....
Database _internal backed up (5.925165ms)
Database stress backed up (1m12.100326507s)
Conversion of /home/philip/.influxdb/data/_internal/monitor/2 successful (409.878476ms)
Conversion of /home/philip/.influxdb/data/stress/default/3 successful (410.761709ms)
Conversion of /home/philip/.influxdb/data/stress/default/1 successful (8m22.913080395s)
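
Parallel mode presumably fans shards out to concurrent goroutines; a minimal sketch of that shape, continuing the hypothetical names from the earlier sketch (ShardInfo and convertShard are assumptions, not the tool's actual code):

import "sync"

// convertAll converts every shard, concurrently when parallel is set.
func convertAll(shards []ShardInfo, parallel bool) error {
    var wg sync.WaitGroup
    errs := make(chan error, len(shards))
    for _, s := range shards {
        wg.Add(1)
        go func(s ShardInfo) {
            defer wg.Done()
            if err := convertShard(s); err != nil {
                errs <- err
            }
        }(s)
        if !parallel {
            wg.Wait() // serial mode: finish this shard before starting the next
        }
    }
    wg.Wait()
    close(errs)
    for err := range errs {
        return err // surface the first failure
    }
    return nil
}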

@otoolep (Contributor, Author) commented Dec 29, 2015

I double-checked the code. All errors bubble up the stack and are clearly associated with the failing shard. I'd like to get this merged now and see how it goes during our testing. Then I will see operationally where the logging needs to be enhanced.
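
In that spirit, associating an error with its shard is a matter of wrapping it with the shard's path as it bubbles up. A sketch, again with hypothetical names (openShard is an assumed helper):

import "fmt"

// convertShard wraps any failure with the shard it came from, so the
// error that reaches the user identifies the failing shard.
func convertShard(s ShardInfo) error {
    r, w, err := openShard(s)
    if err != nil {
        return fmt.Errorf("shard %s: open: %v", s.Path, err)
    }
    if err := Convert(r, w); err != nil {
        return fmt.Errorf("shard %s: convert: %v", s.Path, err)
    }
    return nil
}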

otoolep added a commit that referenced this pull request Dec 29, 2015
@otoolep otoolep merged commit def0148 into master Dec 29, 2015
@otoolep otoolep deleted the b_converter branch December 29, 2015 21:09
if err != nil {
    return err
}
defer w.Close()
A reviewer (Contributor) commented on the diff:

Won't this miss a call to Close if the file rolls over on L61? It may also double close the file.

@otoolep (Contributor, Author) replied:

Yeah, the potential double-close is an issue. Let me fix that up.
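
One pattern that fixes both problems, guaranteeing exactly one Close per writer even across rollovers, looks roughly like this (a sketch under assumed helpers, not necessarily the fix that landed):

// convertWithRollover writes TSM files, rolling to a new file when the
// current one grows too large, while guaranteeing exactly one Close per
// writer. newTSMWriter, nextPath, maxTSMFileSize, and Size() are
// hypothetical.
func convertWithRollover(r ShardReader) error {
    w, err := newTSMWriter(nextPath())
    if err != nil {
        return err
    }
    // The deferred close fires only if w is still open on return.
    defer func() {
        if w != nil {
            w.Close()
        }
    }()

    for r.Next() {
        // ... read the next chunk from r and write it to w ...

        if w.Size() > maxTSMFileSize { // roll over to a new file
            if err := w.Close(); err != nil {
                return err
            }
            w = nil // a failure opening the next file must not double-close
            if w, err = newTSMWriter(nextPath()); err != nil {
                return err
            }
        }
    }
    return nil
}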
