## Query Execution Modes (Broker vs Historical)
Broker execution is the default mode. In this mode the DruidQuery is executed against the Druid Broker, and the DruidRDD is set up with one Partition per Query Interval.
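As an illustration only, here is a minimal sketch of the default path, assuming a Druid datasource table defined with the Sparkline datasource provider. The table, column, and host names are hypothetical; see the Druid Datasource Options page for the actual option list.

```scala
// Hypothetical datasource definition (names and options are illustrative).
sqlContext.sql(
  """CREATE TABLE orderLineItemPartSupplier
    |USING org.sparklinedata.druid
    |OPTIONS (sourceDataframe "orderLineItemPartSupplierBase",
    |         timeDimensionColumn "l_shipdate",
    |         druidDatasource "tpch",
    |         druidHost "localhost")""".stripMargin)

// By default the rewritten DruidQuery is sent to the Broker; the resulting
// DruidRDD has one Partition per Query Interval.
sqlContext.sql(
  "SELECT l_returnflag, count(*) FROM orderLineItemPartSupplier GROUP BY l_returnflag"
).show()
```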
Historical execution is turned on by setting the queryHistoricalServer parameter. In this mode a DruidRDD Partition is set up for each Historical server. When multiple Historical servers are serving a Segment, the Segment is assigned to the Historical server that is handling the fewest segments, so that the load is spread across the Historical servers.
Currently this mode is not available when the Query contains:
- A Sort/Limit Specification
- A Having Filter
- Any JavaScript Aggregations
- Any Post Aggregation Specifications
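A hedged sketch of turning this mode on follows, assuming queryHistoricalServer is passed as a datasource option; check the Druid Datasource Options page for the exact option name and placement. The rest of the definition mirrors the hypothetical example above.

```scala
// Hypothetical: the same datasource, but with queryHistoricalServer enabled, so the
// DruidRDD gets one Partition per Historical server instead of per Query Interval.
sqlContext.sql(
  """CREATE TABLE orderLineItemPartSupplier_hist
    |USING org.sparklinedata.druid
    |OPTIONS (sourceDataframe "orderLineItemPartSupplierBase",
    |         timeDimensionColumn "l_shipdate",
    |         druidDatasource "tpch",
    |         druidHost "localhost",
    |         queryHistoricalServer "true")""".stripMargin)

// Queries against this table that avoid the restrictions listed above (sort/limit,
// having, JavaScript aggregations, post aggregations) can run directly against the
// Historical servers.
```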