Metrictank comes with a bunch of helper tools.
Here is an overview of them all.
This file is generated by tools-to-doc
mt-aggs-explain
Usage:
mt-aggs-explain [flags] [config-file]
(config file defaults to /etc/metrictank/storage-aggregation.conf)
Flags:
-metric string
specify a metric name to see which aggregation rule it matches
-version
print version string
mt-backfill
Consumes data from Kafka and backfills chunks to Cassandra (the only supported store).
Does not update the index table. Useful when existing series in Metrictank need historical data.
Parameters:
-chunk-max-stale string
chunk max stale age. (default "1m")
-config string
config file path (default "/etc/metrictank/metrictank.ini")
-gc-interval string
gc interval. (default "2m")
-log-level string
log level. panic|fatal|error|warning|info|debug (default "info")
-metric-max-stale string
metric max stale age. (default "5m")
-public-org int
org Id
-timeout int
the tool will exit if no kafka message is received during this interval (default 10)
Config file supports same elements as `metrictank` command, but only supports Kafka in and Cassandra out.
Example:
mt-backfill -config /etc/metrictank/backfill.ini -timeout 600
mt-control-server
Run a control server that can be used to issue control messages to a metrictank cluster.
Flags:
-config string
configuration file path (optional)
mt-explain
Explains the execution plan for a given query / set of targets
Usage:
mt-explain
-from string
get data from (inclusive) (default "-24h")
-mdp int
max data points to return (default 800)
-mdp-optimization
enable MaxDataPoints optimization (experimental)
-pre-normalization
enable pre-normalization optimization (default true)
-stable
whether to use only functionality marked as stable (default true)
-time-zone string
time-zone to use for interpreting from/to when needed. (check your config) (default "local")
-to string
get data until (exclusive) (default "now")
Example:
mt-explain -from -24h -to now -mdp 1000 "movingAverage(sumSeries(foo.bar), '2min')" "alias(averageSeries(foo.*), 'foo-avg')"
mt-fakemetrics
Generates fake metrics workload
Usage:
mt-fakemetrics [command]
Available Commands:
agents Mimic independent agents
agginput A particular workload good to test performance of carbon-relay-ng aggregators
backfill backfills old data and stops when 'now' is reached
bad Sends out invalid/out-of-order/duplicate metric data
containers Mimic a set of containers - with churn - whose stats get reported at the same time
feed Publishes a realtime feed of data
help Help about any command
resolutionchange Sends out metric with changing intervals, time range 24hours
schemasbackfill backfills and sends a set of metrics for each encountered storage-schemas.conf rule. Note: patterns must be a static string + wildcard at the end (e.g. foo.bar.*)!
storageconf Sends out one or more set of 10 metrics which you can test aggregation and retention rules on
version Print the version number
Flags:
-t, --add-tags add the built-in tags to generated metrics (default false)
--carbon-addr string carbon TCP address. e.g. localhost:2003
--config string config file (default is $HOME/.mt-fakemetrics.yaml)
--custom-tags strings A list of comma separated tags (i.e. "tag1=value1,tag2=value2")(default empty) conflicts with add-tags
--filter strings A list of comma separated filters to apply. E.g. 'offset:-1h,period:patt1=10:patt2=5'
--gnet-addr string gnet address. e.g. http://localhost:8081
--gnet-key string gnet api key
-h, --help help for mt-fakemetrics
--kafka-comp string compression: none|gzip|snappy (default "snappy")
--kafka-mdm-addr string kafka TCP address for MetricData-Msgp messages. e.g. localhost:9092
--kafka-mdm-topic string kafka topic for MetricData-Msgp messages (default "mdm")
--kafka-mdm-v2 enable MetricPoint optimization (send MetricData first, then optimized MetricPoint payloads) (default true)
--listen string http listener address for pprof. (default ":6764")
--log-level int log level. 0=TRACE|1=DEBUG|2=INFO|3=WARN|4=ERROR|5=CRITICAL|6=FATAL (default 2)
--num-unique-custom-tags int a number between 0 and the length of custom-tags. when using custom-tags this will make the tags unique (default 0)
--num-unique-tags int a number between 0 and 10. when using add-tags this will add a unique number to some built-in tags (default 1)
--partition-scheme string method used for partitioning metrics (kafka-mdm-only). (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv|lastNum) (default "bySeries")
--statsd-addr string statsd TCP address. e.g. 'localhost:8125'
--statsd-type string statsd type: standard or datadog (default "standard")
--stdout enable emitting metrics to stdout
Use "mt-fakemetrics [command] --help" for more information about a command.
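To illustrate how a bySeries-style partition scheme (see --partition-scheme above) can map a series name to a kafka partition, here is a minimal Python sketch. This is not metrictank's actual Go implementation; hashing the full name with fnv32a is an assumption here, and the WithTags/Fnv variants differ in what exactly they hash.

```python
def fnv32a(data: bytes) -> int:
    # 32-bit FNV-1a: offset basis 0x811c9dc5, prime 0x01000193
    h = 0x811C9DC5
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

def partition_by_series(name: str, num_partitions: int) -> int:
    # sketch: hash the series name and take it modulo the partition count,
    # so the same series always lands on the same partition
    return fnv32a(name.encode()) % num_partitions
```

The important property, regardless of the exact hash, is determinism: a given series name always maps to the same partition.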
mt-gateway
Provides an HTTP gateway for interacting with metrictank, including metrics ingestion
Usage:
mt-gateway [flags]
Flags:
-addr string
http service address (default ":6059")
-config string
configuration file path (default "/etc/metrictank/mt-gateway.ini")
-default-org-id int
default org ID to send to downstream services if none is provided (default -1)
-discard-prefixes string
discard data points starting with one of the given prefixes separated by | (may be given multiple times, once per topic specified in 'metrics-topic', as a comma-separated list)
-graphite-url string
graphite-api address (default "http://localhost:8080")
-importer-url string
mt-whisper-importer-writer address
-kafka-tcp-addr string
kafka tcp address(es) for metrics, in csv host[:port] format (default "localhost:9092")
-kafka-version string
Kafka version in semver format. All brokers must be this version or newer. (default "0.10.0.0")
-log-level string
log level. panic|fatal|error|warning|info|debug (default "info")
-metrics-flush-freq duration
The best-effort frequency of flushes to kafka (default 50ms)
-metrics-kafka-comp string
compression: none|gzip|snappy (default "snappy")
-metrics-max-messages int
The maximum number of messages the producer will send in a single request (default 5000)
-metrics-partition-scheme string
method used for partitioning metrics. (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv) (may be given multiple times, once per topic, as a comma-separated list) (default "bySeries")
-metrics-publish
enable metric publishing
-metrics-topic string
topic for metrics (may be given multiple times as a comma-separated list) (default "mdm")
-metrictank-url string
metrictank address (default "http://localhost:6060")
-only-org-id value
restrict publishing data belonging to org id; 0 means no restriction (may be given multiple times, once per topic specified in 'metrics-topic', as a comma-separated list)
-schemas-file string
path to carbon storage-schemas.conf file (default "/etc/metrictank/storage-schemas.conf")
-stats-addr string
graphite address (default "localhost:2003")
-stats-buffer-size int
how many messages (holding all measurements from one interval) to buffer up in case graphite endpoint is unavailable. (default 20000)
-stats-enabled
enable sending graphite messages for instrumentation
-stats-interval int
interval in seconds to send statistics (default 10)
-stats-prefix string
stats prefix (will add trailing dot automatically if needed) (default "mt-gateway.stats.default.$hostname")
-stats-timeout duration
timeout after which a write is considered not successful (default 10s)
-v2
enable optimized MetricPoint payload (default true)
-v2-clear-interval duration
interval after which we always resend a full MetricData (default 1h0m0s)
-v2-org
encode org-id in messages (default true)
-version
print version string
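As an illustration of how the -discard-prefixes value described above could be parsed and applied - one '|'-separated prefix list per topic, topics separated by commas - here is a Python sketch of one plausible reading, not mt-gateway's actual code:

```python
def parse_discard_prefixes(flag_value: str):
    # one entry per topic (comma separated), each entry a
    # '|'-separated list of prefixes; empty prefixes are dropped
    return [
        [p for p in entry.split("|") if p]
        for entry in flag_value.split(",")
    ]

def should_discard(name: str, prefixes) -> bool:
    # discard a data point if its name starts with any configured prefix
    return any(name.startswith(p) for p in prefixes)
```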
mt-index-cat
Retrieves a metrictank index and dumps it in the requested format
In particular, the vegeta outputs are handy to pipe requests for given series into the vegeta http benchmark tool
Usage:
mt-index-cat [global config flags] <idxtype> [idx config flags] output
global config flags:
-addr string
graphite/metrictank address (default "http://localhost:6060")
-bt-total-partitions int
total number of partitions (when using bigtable and partitions='*') (default -1)
-from string
for vegeta outputs, will generate requests for data starting from now minus... eg '30min', '5h', '14d', etc. or a unix timestamp (default "30min")
-limit int
only show this many metrics. use 0 to disable
-max-stale string
exclude series that have not been seen for this much time (compared against LastUpdate). use 0 to disable (default "6h30min")
-min-stale string
exclude series that have been seen in this much time (compared against LastUpdate). use 0 to disable (default "0")
-org int
show only metrics with this OrgID (-1 to disable) (default -1)
-partitions string
only show metrics from the comma separated list of partitions or * for all (default "*")
-prefix string
only show metrics that have this prefix
-regex string
only show metrics that match this regex
-substr string
only show metrics that have this substring
-suffix string
only show metrics that have this suffix
-tags string
tag filter. empty (default), 'some', 'none', 'valid', or 'invalid'
-verbose
print stats to stderr
tags filter:
'' no filtering based on tags
'none' only show metrics that have no tags
'some' only show metrics that have one or more tags
'valid' only show metrics whose tags (if any) are valid
'invalid' only show metrics that have one or more invalid tags
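A simplified Python sketch of how these tag filter categories could be decided. The key=value check below is an assumption for illustration; metrictank's real tag validation is stricter:

```python
def classify(tags) -> str:
    # 'none': the metric has no tags at all
    if not tags:
        return "none"

    def valid(tag: str) -> bool:
        # simplified validity: tag must look like 'key=value'
        # with a non-empty key and a non-empty value
        key, sep, value = tag.partition("=")
        return bool(sep) and bool(key) and bool(value)

    # 'valid': all tags pass the check; 'invalid': at least one fails
    return "valid" if all(valid(t) for t in tags) else "invalid"
```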
idxtype: 'cass' (cassandra) or 'bt' (bigtable)
cass config flags:
-archive-table string
Cassandra table to archive metricDefinitions in. (default "metric_idx_archive")
-auth
enable cassandra user authentication
-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-connection-check-interval duration
interval at which to perform a connection check to cassandra, set to 0 to disable. (default 5s)
-connection-check-timeout duration
maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval. (default 30s)
-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-create-keyspace
enable the creation of the index keyspace and tables, only one node needs this (default true)
-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-enabled
(default true)
-host-verification
host (hostname and server cert) verification when using SSL (default true)
-hosts string
comma separated list of cassandra addresses in host:port form (default "localhost:9042")
-init-load-concurrency int
Number of partitions to load concurrently on startup. (default 1)
-init-load-retries int
Number of times to retry loading a partition on startup before failing. (default 3)
-keyspace string
Cassandra keyspace to store metricDefinitions in. (default "metrictank")
-num-conns int
number of concurrent connections to cassandra (default 10)
-password string
password for authentication (default "cassandra")
-protocol-version int
cql protocol version to use (default 4)
-prune-interval duration
Interval at which the index should be checked for stale series. (default 3h0m0s)
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-ssl
enable SSL connection to cassandra
-table string
Cassandra table to store metricDefinitions in. (default "metric_idx")
-timeout duration
cassandra request timeout (default 1s)
-update-cassandra-index
synchronize index changes to cassandra. not all your nodes need to do this. (default true)
-update-interval duration
frequency at which we should update the metricDef lastUpdate field, use 0s for instant updates (default 3h0m0s)
-username string
username for authentication (default "cassandra")
-write-queue-size int
Max number of metricDefs allowed to be unwritten to cassandra (default 100000)
bigtable config flags:
-bigtable-instance string
Name of bigtable instance (default "default")
-create-cf
enable the creation of the table and column families (default true)
-enabled
-gcp-project string
Name of GCP project the bigtable cluster resides in (default "default")
-prune-interval duration
Interval at which the index should be checked for stale series. (default 3h0m0s)
-table-name string
Name of bigtable table used for metricDefs (default "metrics")
-update-bigtable-index
synchronize index changes to bigtable. not all your nodes need to do this. (default true)
-update-interval duration
frequency at which we should update the metricDef lastUpdate field, use 0s for instant updates (default 3h0m0s)
-write-concurrency int
Number of writer threads to use (default 5)
-write-max-flush-size int
Max number of metricDefs in each batch write to bigtable (default 10000)
-write-queue-size int
Max number of metricDefs allowed to be unwritten to bigtable. Must be larger than write-max-flush-size (default 100000)
output:
* presets: dump|list|vegeta-render|vegeta-render-patterns
* templates, which may contain:
- fields, e.g. '{{.Id}} {{.OrgId}} {{.Name}} {{.Interval}} {{.Unit}} {{.Mtype}} {{.Tags}} {{.LastUpdate}} {{.Partition}}'
- methods, e.g. '{{.NameWithTags}}' (works basically the same as a field)
- processing functions:
pattern: transforms a graphite.style.metric.name into a pattern with wildcards inserted
an operation is randomly selected from: replacing a node with a wildcard, replacing a character with a wildcard, and passthrough
patternCustom: transforms a graphite.style.metric.name into a pattern with wildcards inserted according to rules provided:
patternCustom <chance> <operation>[ <chance> <operation>...]
the chances need to add up to 100
operation is one of:
* pass (passthrough)
* <digit>rcnw (replace a randomly chosen sequence of <digit (0-9)> consecutive nodes with wildcards)
* <digit>rccw (replace a randomly chosen sequence of <digit (0-9)> consecutive characters with wildcards)
example: {{.Name | patternCustom 15 "pass" 40 "1rcnw" 15 "2rcnw" 10 "3rcnw" 10 "3rccw" 10 "2rccw"}}\n
age: subtracts the passed integer (typically .LastUpdate) from the query time
roundDuration: formats an integer-seconds duration using aggressive rounding. for the purpose of getting an idea of overall metrics age
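The patternCustom mechanics above can be sketched in Python: an '<n>rcnw'-style operation replaces consecutive nodes with wildcards, and which operation runs is picked by a weighted roll whose chances must add up to 100. This is an illustrative sketch, not mt-index-cat's implementation:

```python
def replace_nodes(name: str, start: int, count: int) -> str:
    # apply an '<n>rcnw'-style operation: replace `count` consecutive
    # nodes, starting at `start`, with wildcards
    nodes = name.split(".")
    nodes[start:start + count] = ["*"] * count
    return ".".join(nodes)

def pick_operation(rules, roll: int) -> str:
    # `rules` is a list of (chance, operation) pairs whose chances add
    # up to 100; `roll` is a number in [0, 100), normally drawn at random
    total = 0
    for chance, op in rules:
        total += chance
        if roll < total:
            return op
    raise ValueError("chances must add up to 100")
```

For example, with rules [(15, "pass"), (85, "1rcnw")], rolls 0-14 pass the name through and rolls 15-99 replace one node with a wildcard.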
Cassandra Examples:
mt-index-cat -from 60min cass -hosts cassandra:9042 list
mt-index-cat -from 60min cass -hosts cassandra:9042 'sumSeries({{.Name | pattern}})'
mt-index-cat -from 60min cass -hosts cassandra:9042 'GET http://localhost:6060/render?target=sumSeries({{.Name | pattern}})&from=-6h\nX-Org-Id: 1\n\n'
mt-index-cat cass -hosts cassandra:9042 -timeout 60s '{{.LastUpdate | age | roundDuration}}\n' | sort | uniq -c
mt-index-cat cass -hosts localhost:9042 -schema-file ../../scripts/config/schema-idx-cassandra.toml '{{.Name | patternCustom 15 "pass" 40 "1rcnw" 15 "2rcnw" 10 "3rcnw" 10 "3rccw" 10 "2rccw"}}\n'
Bigtable Examples:
mt-index-cat -max-stale 0 -bt-total-partitions 128 bt -gcp-project your_project -bigtable-instance the_bt_instance -table-name metric_idx -create-cf false list
mt-index-cat -max-stale 768h -partitions 1,2,3 bt -gcp-project your_project -bigtable-instance the_bt_instance -table-name metric_idx -create-cf false '{{.NameWithTags}} {{.Id}} {{.OrgId}} {{.LastUpdate}} {{.Partition}}
'
mt-index-deleter
Reads rowkeys from stdin and deletes them from the index. Only BigTable is supported right now.
-batch int
batch size of each delete (default 10)
-bigtable-instance string
Name of bigtable instance (default "default")
-bigtable-table string
Name of bigtable table used for metricDefs (default "metric_idx")
-concurrency int
number of concurrent delete workers (default 20)
-gcp-project string
Name of GCP project the bigtable cluster resides in (default "default")
mt-indexdump-rules-analyzer
Reads metric names from stdin and reports the number of metrics that match each index-rules.conf rule
-index-rules-file string
name of file which defines the max-stale times (default "/etc/metrictank/index-rules.conf")
mt-index-migrate
Migrate metric index from one cassandra keyspace to another.
This tool can be used for moving data to a different keyspace or cassandra cluster
or for resetting partition information when the number of partitions being used has changed.
Flags:
-dry-run
run in dry-run mode. No changes will be made. (default true)
-dst-cass-addr string
Address of cassandra host to migrate to. (default "localhost")
-dst-keyspace string
Cassandra keyspace in use on destination. (default "raintank")
-dst-table string
Cassandra table name in use on destination. (default "metric_idx")
-log-level string
log level. panic|fatal|error|warning|info|debug (default "info")
-num-partitions int
number of partitions in cluster (default 1)
-partition-scheme string
method used for partitioning metrics. (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv) (default "byOrg")
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-src-cass-addr string
Address of cassandra host to migrate from. (default "localhost")
-src-keyspace string
Cassandra keyspace in use on source. (default "raintank")
-src-table string
Cassandra table name in use on source. (default "metric_idx")
mt-index-prune
Retrieves a metrictank index and moves all deprecated entries into an archive table
Usage:
mt-index-prune [global config flags] <idxtype> [idx config flags]
global config flags:
-index-rules-file string
name of file which defines the max-stale times (default "/etc/metrictank/index-rules.conf")
-no-dry-run
do not only plan and print what to do, but also execute it
-partition-from int
the partition to start at
-partition-to int
prune all partitions up to this one (exclusive). If unset, only the partition defined with "--partition-from" gets pruned (default -1)
-verbose
print every metric name that gets archived
idxtype: only 'cass' supported for now
cass config flags:
-archive-table string
Cassandra table to archive metricDefinitions in. (default "metric_idx_archive")
-auth
enable cassandra user authentication
-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-connection-check-interval duration
interval at which to perform a connection check to cassandra, set to 0 to disable. (default 5s)
-connection-check-timeout duration
maximum total time to wait before considering a connection to cassandra invalid. This value should be higher than connection-check-interval. (default 30s)
-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-create-keyspace
enable the creation of the index keyspace and tables, only one node needs this (default true)
-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-enabled
(default true)
-host-verification
host (hostname and server cert) verification when using SSL (default true)
-hosts string
comma separated list of cassandra addresses in host:port form (default "localhost:9042")
-init-load-concurrency int
Number of partitions to load concurrently on startup. (default 1)
-init-load-retries int
Number of times to retry loading a partition on startup before failing. (default 3)
-keyspace string
Cassandra keyspace to store metricDefinitions in. (default "metrictank")
-num-conns int
number of concurrent connections to cassandra (default 10)
-password string
password for authentication (default "cassandra")
-protocol-version int
cql protocol version to use (default 4)
-prune-interval duration
Interval at which the index should be checked for stale series. (default 3h0m0s)
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-ssl
enable SSL connection to cassandra
-table string
Cassandra table to store metricDefinitions in. (default "metric_idx")
-timeout duration
cassandra request timeout (default 1s)
-update-cassandra-index
synchronize index changes to cassandra. not all your nodes need to do this. (default true)
-update-interval duration
frequency at which we should update the metricDef lastUpdate field, use 0s for instant updates (default 3h0m0s)
-username string
username for authentication (default "cassandra")
-write-queue-size int
Max number of metricDefs allowed to be unwritten to cassandra (default 100000)
EXAMPLES:
mt-index-prune --verbose --partition-from 0 --partition-to 8 cass -hosts cassandra:9042
mt-kafka-mdm-sniff
Inspects what's flowing through kafka (in mdm format) and reports it to you
Flags:
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-format-md string
template to render MetricData with (default "{{.Part}} {{.OrgId}} {{.Id}} {{.Name}} {{.Interval}} {{.Value}} {{.Time}} {{.Unit}} {{.Mtype}} {{.Tags}}")
-format-point string
template to render MetricPoint data with (default "{{.Part}} {{.MKey}} {{.Value}} {{.Time}}")
-invalid
only show metrics that are invalid
-prefix string
only show metrics that have this prefix
-substr string
only show metrics that have this substring
you can also use functions in templates:
date: formats a unix timestamp as a date
example: mt-kafka-mdm-sniff -format-point '{{.Time | date}}'
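The `date` function in the example above formats a unix timestamp as a date; a Python sketch of the same idea (the real tool's exact output format may differ):

```python
from datetime import datetime, timezone

def date(ts: int) -> str:
    # format a unix timestamp as a human-readable UTC date
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
```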
mt-kafka-mdm-sniff-out-of-order
Inspects what's flowing through kafka (in mdm format) and reports out of order data (does not take into account reorder buffer)
# Mechanism
* it sniffs points being added on a per-series (metric Id) level
* for every series, tracks the last 'correct' point, i.e. a point that could be added to the series because its timestamp is higher than any previous timestamp
* if, for any series, a point comes in with a timestamp equal to or lower than that of the last correct point - which metrictank would not add unless it falls within the reorder buffer - it triggers an event for this out-of-order point
every event is printed using the format specified for its message type
# Event formatting
Uses standard golang templating, e.g. {{.Field}}, with these available fields:
NumBad - number of failed points since last successful add
DeltaTime - delta between Head and Bad time properties in seconds (point timestamps)
DeltaSeen - delta between Head and Bad seen time in seconds (consumed from kafka)
.Head.* - head is last successfully added message
.Bad.* - Bad is the current (last seen) point that could not be added (assuming no re-order buffer)
under Head and Bad, the following subfields are available:
Part (partition) and Seen (when the msg was consumed from kafka)
for MetricData, prefix these with Md. : Time OrgId Id Name Metric Interval Value Unit Mtype Tags
for MetricPoint, prefix these with Mp. : Time MKey Value
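The detection mechanism described above can be sketched in Python: track a per-series 'head' timestamp and flag any point that does not advance it (assuming no reorder buffer, as this tool does):

```python
class OutOfOrderSniffer:
    def __init__(self):
        # series id -> timestamp of the last successfully added point ("head")
        self.head = {}

    def add(self, series_id: str, ts: int) -> bool:
        # return True if the point is in order (and advance the head),
        # False if it is out of order (timestamp <= head)
        last = self.head.get(series_id)
        if last is not None and ts <= last:
            return False
        self.head[series_id] = ts
        return True
```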
Flags:
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-do-unknown-mp
process MetricPoint messages for which no MetricData messages have been seen. If you use prefix/substr filter, this may report on metrics you wanted to filter out! (default true)
-format string
template to render event with (default "{{.Bad.Md.Id}} {{.Bad.Md.Name}} {{.Bad.Mp.MKey}} {{.DeltaTime}} {{.DeltaSeen}} {{.NumBad}}")
-prefix string
only show metrics with a name that has this prefix
-substr string
only show metrics with a name that has this substring
mt-kafka-persist-sniff
Print what's flowing through kafka metric persist topic
Flags:
-backlog-process-timeout string
Maximum time backlog processing can block during metrictank startup. Setting to a low value may result in data loss (default "60s")
-brokers string
tcp address for kafka (may be given multiple times as comma separated list) (default "kafka:9092")
-enabled
-kafka-version string
Kafka version in semver format. All brokers must be this version or newer. (default "2.0.0")
-offset string
Set the offset to start consuming from. Can be oldest, newest or a time duration (default "newest")
-partitions string
kafka partitions to consume. use '*' or a comma separated list of id's. This should match the partitions used for kafka-mdm-in (default "*")
-sasl-enabled
Whether to enable SASL
-sasl-mechanism string
The SASL mechanism configuration (possible values: SCRAM-SHA-256, SCRAM-SHA-512, PLAINTEXT)
-sasl-password string
Password for client authentication (use with -sasl-enabled and -sasl-user)
-sasl-username string
Username for client authentication (use with -sasl-enabled and -sasl-password)
-tls-client-cert string
Client cert for client authentication (use with -tls-enabled and -tls-client-key)
-tls-client-key string
Client key for client authentication (use with -tls-enabled and -tls-client-cert)
-tls-enabled
Whether to enable TLS
-tls-skip-verify
Whether to skip TLS server cert verification
-topic string
kafka topic (default "metricpersist")
mt-keygen
mt-keygen gives you the MKey for a specific MetricDefinition
It fills a temp file with a template MetricDefinition
It launches vim
You fill in the important details - name / interval / tags /...
It prints the MKey
-version
print version string
mt-parrot
Generates deterministic metrics for each metrictank partition, queries them back and reports on correctness and performance
Correctness:
Monitor the parrot.monitoring.error series. There are 3 potential issues:
* parrot.monitoring.error;error=http // could not execute http request
* parrot.monitoring.error;error=decode // could not decode http response
* parrot.monitoring.error;error=invalid // any other problem with the response itself
Performance:
In addition to the black-and-white measurements above, there are also more subjective measurements:
* parrot.monitoring.lag // how far the response is lagging behind
* parrot.monitoring.nans // number of nans included in the response
Usage:
mt-parrot [flags]
Flags:
--gateway-address string the url of the metrics gateway (default "http://localhost:6059")
--gateway-key string the bearer token to include with gateway requests
-h, --help help for mt-parrot
--log-level string log level. panic|fatal|error|warning|info|debug (default "info")
--lookback-period duration how far to look back when validating metrics (default 5m0s)
--org-id int org id to publish parrot metrics to (default 1)
--partition-count int32 number of kafka partitions in use (default 8)
--partition-method string the partition method in use on the gateway, must be one of bySeries|bySeriesWithTags|bySeriesWithTagsFnv (default "bySeries")
--query-interval duration interval to query to validate metrics (default 10s)
--stats-address string address to send monitoring statistics to (default "localhost:2003")
--stats-buffer-size int how many messages (holding all measurements from one interval) to buffer up in case graphite endpoint is unavailable. (default 20000)
--stats-prefix string stats prefix (will add trailing dot automatically if needed)
--stats-timeout duration timeout after which a write is considered not successful (default 10s)
--test-metrics-interval duration interval to send test metrics (default 10s)
mt-schemas-explain
Usage:
mt-schemas-explain [flags] [config-file]
(config file defaults to /etc/metrictank/storage-schemas.conf)
Flags:
-int int
specify an interval to apply interval-based matching in addition to metric matching (e.g. to simulate kafka-mdm input)
-metric string
specify a metric name to see which schema it matches
-version
print version string
-window-factor int
size of compaction window relative to TTL (default 20)
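storage-schemas.conf matching follows carbon semantics: rules are tried in file order and the first pattern (a regex) that matches the metric name wins. A minimal Python sketch of that lookup:

```python
import re

def match_schema(rules, metric: str):
    # rules: list of (name, pattern, retentions) tuples, in file order;
    # the first rule whose pattern matches the metric name wins
    for name, pattern, retentions in rules:
        if re.search(pattern, metric):
            return name, retentions
    return None
```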
mt-split-metrics-by-ttl [flags] ttl [ttl...]
Creates a schema of metric tables split by TTLs and
assists in migrating the data to the new tables.
Flags:
-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-cassandra-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-cql-protocol-version int
cql protocol version to use (default 4)
mt-store-cat
Retrieves timeseries data from the cassandra store, either raw or with minimal processing
Usage:
mt-store-cat [flags] tables
mt-store-cat [flags] <table-selector> <metric-selector> <format>
table-selector: '*' or name of a table. e.g. 'metric_128'
metric-selector: '*' or an id (of raw or aggregated series) or prefix:<prefix> or substr:<substring> or glob:<pattern>
format:
- points
- point-summary
- chunk-summary (shows TTLs, optionally bucketed. See groupTTL flag)
- chunk-csv (for importing into cassandra)
EXAMPLES:
mt-store-cat -cassandra-keyspace metrictank -from='-1min' '*' '1.77c8c77afa22b67ef5b700c2a2b88d5f' points
mt-store-cat -cassandra-keyspace metrictank -from='-1month' '*' 'prefix:fake' point-summary
mt-store-cat -cassandra-keyspace metrictank '*' 'prefix:fake' chunk-summary
mt-store-cat -groupTTL h -cassandra-keyspace metrictank 'metric_512' '1.37cf8e3731ee4c79063c1d55280d1bbe' chunk-summary
Flags:
-archive string
archive to fetch for given metric. e.g. 'sum_1800'
-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-create-keyspace
enable the creation of the mdata keyspace and tables, only one node needs this (default true)
-cassandra-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-cassandra-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-omit-read-timeout string
if a read is older than this, it will directly be omitted without executing (default "60s")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-read-concurrency int
max number of concurrent reads to cassandra. (default 20)
-cassandra-read-queue-size int
max number of outstanding reads before reads will be dropped. This is important if you run queries that result in many reads in parallel. (default 200000)
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-store-cassandra.toml")
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-cql-protocol-version int
cql protocol version to use (default 4)
-fix int
fix data to this interval like metrictank does quantization. only for points and point-summary format
-from string
get data from (inclusive). only for points and point-summary format (default "-24h")
-groupTTL string
group chunks in TTL buckets: s (second. means unbucketed), m (minute), h (hour) or d (day). only for chunk-summary format (default "d")
-index-archive-table string
Cassandra table to archive metricDefinitions in. (default "metric_idx_archive")
-index-init-load-concurrency int
Number of partitions to load concurrently on startup. (default 1)
-index-schema-file string
File containing the needed index schemas in case database needs initializing (default "/etc/metrictank/schema-idx-cassandra.toml")
-index-table string
Cassandra table to store metricDefinitions in. (default "metric_idx")
-index-timeout duration
cassandra request timeout (default 1s)
-print-ts
print time stamps instead of formatted dates. only for points and point-summary format
-time-zone string
time-zone to use for interpreting from/to when needed. (check your config) (default "local")
-to string
get data until (exclusive). only for points and point-summary format (default "now")
-verbose
verbose (print stuff about the request)
-version
print version string
-window-factor int
size of compaction window relative to TTL (default 20)
Notes:
* Using `*` as metric-selector may bring down your cassandra: chunk-summary in particular ignores from/to and queries all data.
With great power comes great responsibility
* points that are not in the `from <= ts < to` range are prefixed with `-`; points in range are prefixed with `>`
* When using chunk-summary, if there's data that should have been expired by cassandra but for some reason wasn't, we won't see or report it
* Doesn't automatically return data for aggregated series. It's up to you to query for an AMKey (id_<rollup>_<span>) when appropriate
* (rollup is one of sum, cnt, lst, max, min and span is a number in seconds)
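The AMKey format described above can be illustrated with a small sketch (the `amkey` helper and the example metric id are hypothetical; only the `id_<rollup>_<span>` layout comes from the notes above):

```python
# Builds an AMKey for a rolled-up series, following the
# id_<rollup>_<span> format described in the notes above.
VALID_ROLLUPS = {"sum", "cnt", "lst", "max", "min"}

def amkey(metric_id: str, rollup: str, span_seconds: int) -> str:
    if rollup not in VALID_ROLLUPS:
        raise ValueError(f"unknown rollup: {rollup}")
    return f"{metric_id}_{rollup}_{span_seconds}"

# e.g. query the hourly sum rollup of a series (metric id is made up):
print(amkey("1.0123456789abcdef0123456789abcdef", "sum", 3600))
```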
mt-store-cp [flags] table-in [table-out]
Copies data in Cassandra to another table (and possibly another cluster).
It is up to you to ensure table-out exists before running this tool
This tool is EXPERIMENTAL and needs double-checking whether data is successfully written to Cassandra
see https://github.com/grafana/metrictank/pull/909 for details
Please report good or bad experiences in the above ticket or in a new one
Flags:
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-concurrency int
max number of concurrent reads to cassandra. (default 20)
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-disable-host-lookup
disable host lookup (useful if going through proxy)
-cassandra-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-cql-protocol-version int
cql protocol version to use (default 4)
-dest-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-end-timestamp int
timestamp at which to stop, defaults to int max (default 2147483647)
-end-token int
token to stop at (inclusive), defaults to math.MaxInt64 (default 9223372036854775807)
-idx-table string
idx table in cassandra (default "metric_idx")
-max-batch-size int
max number of queries per batch (default 10)
-partitions string
process ids for these partitions (comma separated list of partition numbers or '*' for all) (default "*")
-progress-rows int
number of rows between progress output (default 1000000)
-source-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-start-timestamp int
timestamp at which to start, defaults to 0
-start-token int
token to start at (inclusive), defaults to math.MinInt64 (default -9223372036854775808)
-threads int
number of workers to use to process data (default 1)
-verbose
show every record being processed
mt-update-ttl [flags] ttl-old ttl-new
Adjusts the data in Cassandra to use a new TTL value. The TTL is applied counting from the timestamp of the data
Automatically resolves the corresponding tables based on the TTL values. If the table stays the same, it updates in place. Otherwise it copies the data to the new table, leaving the input data untouched
Unless you disable create-keyspace, tables are created as needed
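Since the TTL counts from each point's timestamp, the remaining lifetime written for a point can be sketched as follows (a simplified illustration, not the tool's actual code):

```python
def remaining_ttl(point_ts: int, ttl_new: int, now: int) -> int:
    # The TTL is applied counting from the data's timestamp, so a point
    # rewritten now only gets whatever lifetime it has left; points past
    # their new TTL get nothing.
    return max(0, point_ts + ttl_new - now)

# a point from 1 hour ago, with a new TTL of 1 day -> 23 hours left
print(remaining_ttl(point_ts=1_000_000 - 3600, ttl_new=86400, now=1_000_000))
```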
Flags:
-cassandra-addrs string
cassandra host (may be given multiple times as comma-separated list) (default "localhost")
-cassandra-auth
enable cassandra authentication
-cassandra-ca-path string
cassandra CA certificate path when using SSL (default "/etc/metrictank/ca.pem")
-cassandra-concurrency int
number of concurrent connections to cassandra. (default 20)
-cassandra-consistency string
write consistency (any|one|two|three|quorum|all|local_quorum|each_quorum|local_one) (default "one")
-cassandra-disable-initial-host-lookup
instruct the driver to not attempt to get host info from the system.peers table
-cassandra-host-verification
host (hostname and server cert) verification when using SSL (default true)
-cassandra-keyspace string
cassandra keyspace to use for storing the metric data table (default "metrictank")
-cassandra-password string
password for authentication (default "cassandra")
-cassandra-retries int
how many times to retry a query before failing it
-cassandra-ssl
enable SSL connection to cassandra
-cassandra-timeout string
cassandra timeout (default "1s")
-cassandra-username string
username for authentication (default "cassandra")
-cql-protocol-version int
cql protocol version to use (default 4)
-create-keyspace
enable the creation of the keyspace and tables (default true)
-end-timestamp int
timestamp at which to stop, defaults to int max (default 2147483647)
-host-selection-policy string
(default "tokenaware,hostpool-epsilon-greedy")
-schema-file string
File containing the needed schemas in case database needs initializing (default "/etc/metrictank/schema-store-cassandra.toml")
-start-timestamp int
timestamp at which to start, defaults to 0
-status-every int
print status every x keys (default 100000)
-threads int
number of workers to use to process data (default 10)
-verbose
show every record being processed
-window-factor int
size of compaction window relative to TTL (default 20)
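The relationship `-window-factor` expresses between TTL and compaction window can be sketched as below (an assumption based purely on the flag's description, "size of compaction window relative to TTL"; the tool's actual rounding and units may differ):

```python
def compaction_window(ttl_seconds: int, window_factor: int = 20) -> int:
    # window-factor sizes the compaction window relative to the TTL:
    # window = TTL / factor, never less than one second.
    return max(1, ttl_seconds // window_factor)

# a 90-day TTL with the default factor of 20 -> 4.5-day windows
print(compaction_window(90 * 86400))  # 388800 seconds
```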
mt-view-boundaries
Shows boundaries of rows in cassandra and of spans of specified size.
To see UTC times, prefix the command with TZ=UTC
-span string
see boundaries for chunks of this span
-version
print version string
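The span boundaries the tool shows can be approximated with a small sketch (a simplified model assuming chunks are aligned to multiples of their span; the Cassandra row bucketing itself is not modeled here and may differ):

```python
def span_boundaries(ts: int, span: int) -> tuple:
    # chunks are aligned to multiples of their span, so the chunk
    # containing ts starts at the previous multiple of span
    start = ts - ts % span
    return (start, start + span)

# which 6h chunk does this timestamp fall into?
print(span_boundaries(1_700_000_000, 6 * 3600))
```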
Usage of ./mt-whisper-importer-reader:
-custom-headers string
headers to add to every request, in the format "<name>:<value>;<name>:<value>"
-dst-schemas string
The filename of the output schemas definition file
-http-auth string
The credentials used to authenticate in the format "user:password"
-http-endpoint string
The http endpoint to send the data to (default "http://127.0.0.1:8080/metrics/import")
-import-from uint
Only import starting from the specified timestamp
-import-until uint
Only import up to, but not including, the specified timestamp (default 4294967295)
-insecure-ssl
Disables ssl certificate verification
-name-filter string
A regex pattern to be applied to all metric names, only matching ones will be imported
-name-prefix string
Prefix to prepend before every metric name, should include the '.' if necessary
-position-file string
file to store position and load position from
-threads int
Number of worker threads to process and convert .wsp files (default 10)
-verbose
More detailed logging
-whisper-directory string
The directory that contains the whisper file structure (default "/opt/graphite/storage/whisper")
-write-unfinished-chunks
Defines if chunks that have not completed their chunk span should be written
Usage of ./mt-whisper-importer-writer:
-config string
configuration file path (default "/etc/metrictank/metrictank.ini")
-exit-on-error
Exit with a message when there's an error
-http-endpoint string
The http endpoint to listen on (default "0.0.0.0:8080")
-log-level string
log level. panic|fatal|error|warning|info|debug (default "info")
-num-partitions int
Number of Partitions (default 1)
-partition-scheme string
method used for partitioning metrics. This should match the settings of tsdb-gw. (byOrg|bySeries|bySeriesWithTags|bySeriesWithTagsFnv) (default "bySeries")
-uri-path string
the URI on which we expect chunks to get posted (default "/metrics/import")
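To illustrate how a bySeries-style scheme might map a metric name onto one of `-num-partitions` partitions, here is a hedged sketch; the fnv32a hash is suggested by the `bySeriesWithTagsFnv` scheme name, but the exact key layout and hashing tsdb-gw uses may differ:

```python
def fnv1a_32(data: bytes) -> int:
    # 32-bit FNV-1a: xor each byte into the hash, then multiply by the prime
    h = 0x811C9DC5  # offset basis
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

def partition_by_series(name: str, num_partitions: int) -> int:
    # hash the series name, then reduce modulo the partition count
    return fnv1a_32(name.encode()) % num_partitions

print(partition_by_series("some.metric.name", 8))
```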
mt-write-delay-schema-explain
use this tool to diagnose which retentions may be subject to chunks not showing when your write queue (temporarily) doesn't drain well
if the following applies to you:
* metrictank is struggling to clear the write queue (e.g. cassandra/bigtable has become slow)
* metrictank only keeps a limited amount of chunks in its ringbuffers (tank)
* you are concerned that chunks needed to satisfy read queries are not available in the chunk store
then this tool will help you understand which retentions are affected, which is especially useful if you have multiple retentions with different chunkspans and/or numchunks
Usage:
mt-write-delay-schema-explain [flags] last-drain-time [schemas-file]
(config file defaults to /etc/metrictank/storage-schemas.conf)
last-drain-time is the last time the queue was empty
Flags:
-version
print version string
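The reasoning above can be approximated by a deliberately simplified model (all names and the formula below are illustrative, not the tool's actual code): a retention's ring buffer covers roughly `numchunks * chunkspan` seconds of recent data, so if the queue hasn't drained for longer than that, the oldest unpersisted chunks may already have been evicted from memory:

```python
def at_risk(now: int, last_drain_time: int, chunkspan: int, numchunks: int) -> bool:
    # data queued since last_drain_time is only safe while the tank's
    # ring buffer (numchunks * chunkspan seconds) still covers it
    return (now - last_drain_time) > numchunks * chunkspan

# 10-minute chunks, 2 chunks kept in memory, queue stuck for 30 minutes
print(at_risk(now=10_000, last_drain_time=10_000 - 1800, chunkspan=600, numchunks=2))
```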