Part of playbench.

With the new analyzer enabled, `SELECT count(1) FROM checks WHERE test_name IS NOT NULL` no longer uses the implicit `_minmax_count_projection` and falls back to a full scan: 26 seconds over 3.12 billion rows instead of 0.004 seconds. Note that `test_name` is a non-Nullable `LowCardinality(String)`, so the `IS NOT NULL` filter is trivially true and the old analyzer can answer the count from part metadata.
```
clickhouse-cloud :) EXPLAIN SELECT count(1) FROM checks WHERE test_name IS NOT NULL
                    SETTINGS allow_experimental_analyzer = 0, allow_experimental_parallel_reading_from_replicas = 0

Query id: 499bc5b7-98a1-40fa-b640-e2baa3ca7f26

   ┌─explain─────────────────────────────────────────────────┐
1. │ Expression ((Projection + Before ORDER BY))             │
2. │   Aggregating                                           │
3. │     Expression                                          │
4. │       ReadFromPreparedSource (_minmax_count_projection) │
   └─────────────────────────────────────────────────────────┘

4 rows in set. Elapsed: 0.006 sec.

clickhouse-cloud :) EXPLAIN SELECT count(1) FROM checks WHERE test_name IS NOT NULL
                    SETTINGS allow_experimental_analyzer = 1, allow_experimental_parallel_reading_from_replicas = 0

Query id: 62bf5a49-b694-419e-a590-9e8e67748f59

   ┌─explain────────────────────────────────────────────────────────────┐
1. │ Expression ((Project names + Projection))                          │
2. │   Aggregating                                                      │
3. │     Expression (Before GROUP BY)                                   │
4. │       Filter ((WHERE + Change column names to column identifiers)) │
5. │         ReadFromMergeTree (checks.checks)                          │
   └────────────────────────────────────────────────────────────────────┘

5 rows in set. Elapsed: 0.002 sec.

clickhouse-cloud :) SELECT count(1) FROM checks WHERE test_name IS NOT NULL
                    SETTINGS allow_experimental_analyzer = 1, allow_experimental_parallel_reading_from_replicas = 0

Query id: 0fc52f3d-75a3-49a6-96c7-1cfe8c5744a7

   ┌───count(1)─┐
1. │ 3121567112 │ -- 3.12 billion
   └────────────┘

1 row in set. Elapsed: 26.163 sec. Processed 3.12 billion rows, 6.24 GB (119.31 million rows/s., 238.63 MB/s.)
Peak memory usage: 806.02 MiB.

clickhouse-cloud :) SELECT count(1) FROM checks WHERE test_name IS NOT NULL
                    SETTINGS allow_experimental_analyzer = 0, allow_experimental_parallel_reading_from_replicas = 0

Query id: 696750a6-f29e-461d-b6f2-846a08533dd5

   ┌────count()─┐
1. │ 3121567112 │ -- 3.12 billion
   └────────────┘

1 row in set. Elapsed: 0.004 sec.
```

Table definition:

```sql
SHOW CREATE TABLE checks

CREATE TABLE checks.checks
(
    `pull_request_number` UInt32,
    `commit_sha` LowCardinality(String),
    `check_name` LowCardinality(String),
    `check_status` LowCardinality(String),
    `check_duration_ms` UInt64,
    `check_start_time` DateTime,
    `test_name` LowCardinality(String),
    `test_status` LowCardinality(String),
    `test_duration_ms` UInt64,
    `report_url` String,
    `pull_request_url` String,
    `commit_url` String,
    `task_url` String,
    `base_ref` String,
    `base_repo` String,
    `head_ref` String,
    `head_repo` String,
    `test_context_raw` String,
    `instance_type` LowCardinality(String),
    `instance_id` String,
    `date` Date MATERIALIZED toDate(check_start_time)
)
ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
PRIMARY KEY (date, pull_request_number, commit_sha, check_name, test_name, check_start_time)
ORDER BY (date, pull_request_number, commit_sha, check_name, test_name, check_start_time)
SETTINGS index_granularity = 8192
```
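The same plan difference should be reproducible without the Cloud table. A minimal sketch, assuming an ordinary `MergeTree` behaves like `SharedMergeTree` here; the `repro` table, its data, and its schema are hypothetical, not the real `checks.checks`:

```sql
-- Hypothetical minimal table: one non-Nullable LowCardinality column,
-- so `name IS NOT NULL` is trivially true for every row.
CREATE TABLE repro
(
    `d` Date,
    `name` LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY d;

INSERT INTO repro
SELECT toDate('2024-01-01') + number % 365, toString(number)
FROM numbers(1000000);

-- Old analyzer: expected to show ReadFromPreparedSource (_minmax_count_projection),
-- i.e. the count is answered from part-level metadata.
EXPLAIN SELECT count(1) FROM repro WHERE name IS NOT NULL
SETTINGS allow_experimental_analyzer = 0;

-- New analyzer: expected to show Filter + ReadFromMergeTree,
-- i.e. a full scan despite the always-true predicate.
EXPLAIN SELECT count(1) FROM repro WHERE name IS NOT NULL
SETTINGS allow_experimental_analyzer = 1;
```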