Showing 268 changed files with 27,852 additions and 0 deletions.
@@ -0,0 +1,26 @@
name: Deploy Docusaurus to GitHub Pages

on:
  push:
    branches:
      - main # Set this to your default branch

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'

      - name: Install and Build
        run: |
          npm install
          npm run build
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./build
@@ -0,0 +1,20 @@
# Dependencies
/node_modules

# Production
/build

# Generated files
.docusaurus
.cache-loader

# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
@@ -0,0 +1,41 @@
# Website

This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.

### Installation

```
$ yarn
```

### Local Development

```
$ yarn start
```

This command starts a local development server and opens a browser window. Most changes are reflected live without restarting the server.

### Build

```
$ yarn build
```

This command generates static content into the `build` directory, which can be served by any static content hosting service.

### Deployment

Using SSH:

```
$ USE_SSH=true yarn deploy
```

Not using SSH:

```
$ GIT_USER=<Your GitHub username> yarn deploy
```

If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
@@ -0,0 +1,3 @@
module.exports = {
  presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};
@@ -0,0 +1,6 @@
---
id: excel
title: Excel
slug: /integration-examples/excel
sidebar_label: Excel
---
@@ -0,0 +1,10 @@
---
id: apis-integrations
title: APIs & integrations
slug: /apis-integrations
sidebar_label: APIs & integrations
sidebar_position: 11
---
import DocCardList from '@theme/DocCardList';

<DocCardList />
@@ -0,0 +1,6 @@
---
id: power-bi
title: Power BI
slug: /integration-examples/power-bi
sidebar_label: Power BI
---
@@ -0,0 +1,58 @@
---
id: database-design
title: Database Structure Design
sidebar_label: Database Design
slug: /architecture-and-design/database-design
---

## Database Structure

The Synmetrix database follows the relational model and includes the following tables:

1. **public.users**: Stores information about registered users, including the user ID (id), display name (display_name), avatar URL (avatar_url), and the creation and last update times of each record.

2. **auth.account_providers**: Maps user accounts to their authentication providers.

3. **auth.accounts**: Stores account data, including unique identifiers, email addresses, passwords, and related fields.

4. **auth.providers**: Lists the available authentication providers.

5. **auth.refresh_tokens**: Holds refresh tokens for each user account.

6. **auth.roles**: Manages user roles.

7. **auth.account_roles**: Assigns roles to accounts.

8. **public.teams**: Stores information about user teams.

9. **public.datasources**: Describes the data sources used by users.

10. **public.dataschemas**: Describes the data models that define business metrics for data sources.

11. **public.explorations**: Records exploration tasks performed by users.

12. **public.members**: Stores information about team members.

13. **public.team_roles**: Manages user roles within teams.

14. **public.member_roles**: Assigns roles to team members.

15. **public.reports**: Defines the structure and schedule of metric-based reports.

16. **public.sql_credentials**: Manages the SQL credentials used to access business metrics through the SQL interface.

17. **public.alerts**: Stores alerts created by users.

## Database Architecture Description

![Database Architecture Description](/docs/data/db.png)

The database architecture, including the relationships between tables, primary and foreign keys, and indexes, is expressed in [Database Markup Language (DBML)](https://github.com/mlcraft-io/mlcraft/blob/main/docs/database/mlcraft.dblm).

This structure gives the system flexibility and scalability, making it straightforward to manage users, teams, data sources, reports, and other system entities. Each table serves a specific purpose and can be extended or modified as system requirements evolve.

:::note
For the complete DBML representation of the database architecture, refer to [Database Markup Language (DBML)](https://github.com/mlcraft-io/mlcraft/blob/main/docs/database/mlcraft.dblm).
:::
@@ -0,0 +1,12 @@
---
id: architecture-and-design
title: Architecture and Design
slug: /architecture-and-design
sidebar_label: Architecture and design
sidebar_position: 5
---
![Synmetrix System Architecture](/docs/data/architecture2.png)

import DocCardList from '@theme/DocCardList';

<DocCardList />
@@ -0,0 +1,55 @@
---
id: system-architecture
title: System Architecture
slug: /architecture-and-design/system-architecture
sidebar_label: System Architecture
---

# Synmetrix System Architecture

The Synmetrix architecture was designed around three core requirements: scalability, reliability, and flexibility. It is built from the following key components:

## [Hasura](https://hasura.io/)

Hasura connects to the database and exposes a GraphQL API to external services and applications. It simplifies database access by providing intuitive, powerful tools for quickly building GraphQL queries and mutations, which improves performance and streamlines data-related workflows.

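For illustration only, here is what a client call against such a GraphQL API could look like. This is a sketch, not code from Synmetrix: the endpoint URL, admin secret, and the exact generated field names (`users`, `created_at`) are assumptions, though the selected columns (id, display_name, avatar_url) follow the `public.users` table described in the database design section.

```javascript
// Sketch: querying a Hasura GraphQL endpoint from Node.js (Node 18+ for
// the global fetch). Endpoint, secret, and field names are placeholders.
const query = `
  query RecentUsers($limit: Int!) {
    users(limit: $limit, order_by: { created_at: desc }) {
      id
      display_name
      avatar_url
    }
  }
`;

async function fetchRecentUsers(endpoint, adminSecret, limit = 10) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Standard Hasura header for admin access; real apps would use a JWT.
      "x-hasura-admin-secret": adminSecret,
    },
    body: JSON.stringify({ query, variables: { limit } }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data.users;
}
```

The function is not invoked here, since it requires a live Hasura endpoint; it only shows the shape of the request and response handling.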
## [PostgreSQL](https://www.postgresql.org/)

PostgreSQL serves as the primary database management system, chosen for its reliability, high performance, and flexibility. It handles large data volumes efficiently while remaining stable.

## [Node.js](https://nodejs.org/)

Node.js is the server-side platform for the application backend. Its efficient handling of asynchronous operations and events makes it well suited to building scalable networked applications quickly.

## [React.js](https://reactjs.org/)

React.js is the library used for the user interface. Its component-based architecture delivers high performance and simplifies development.

## [Cube.js](https://cube.dev/)

Cube.js is an open-source platform for building business analytics applications in JavaScript. Synmetrix uses Cube.js to manage business metrics, providing efficient tools for data processing.

## [CubeStore](https://cubestore.dev/)

CubeStore is a distributed database optimized for analytical queries and integrated with Cube.js. It processes large data volumes quickly and efficiently.

## [Redis](https://redis.io/)

Redis is a high-performance key-value store. It supports a variety of data structures, including strings, lists, sets, and hashes.

## [Docker](https://www.docker.com/)

Docker provides containerization, simplifying deployment and ensuring the application behaves consistently in any environment.

## [Docker Swarm](https://docs.docker.com/swarm/)

Docker Swarm orchestrates and manages the Docker containers, allowing applications to be managed and scaled across multiple servers.

## [Ubuntu](https://ubuntu.com/)

Ubuntu is the operating system on the servers hosting all services; it is stable, reliable, and well suited to server environments.

## Interactions between architecture components

![Interactions between architecture components](/docs/data/architecture.png)

Synmetrix is also designed as a microservices system: each microservice performs a specific function and scales independently of the others. This allows individual components to be updated and modernized independently, simplifying development and maintenance of the system as a whole.
docs/caching/getting-started-with-pre-aggregations/index.mdx (150 additions)

@@ -0,0 +1,150 @@
---
id: getting-started-with-pre-aggregations
title: Getting started with pre-aggregations
sidebar_label: Getting started with pre-aggregations
slug: /caching/getting-started-with-pre-aggregations
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

Early in an analytical application's lifecycle, when queries execute over a smaller dataset, the application works well and delivers responses within acceptable thresholds. As the dataset grows, however, the time-to-response from a user's perspective can suffer heavily. This is true of both application databases and purpose-built data warehousing solutions.

This leaves us with a chicken-and-egg problem: application databases can deliver low-latency responses over small-to-large datasets but struggle with massive analytical ones, while data warehousing solutions _usually_ make no guarantees except to deliver a response, which means latency can vary wildly from query to query.
| Database Type | Low Latency? | Massive Datasets? |
| ------------------------------ | ------------ | ----------------- |
| Application (Postgres/MySQL) | ✅ | ❌ |
| Analytical (BigQuery/Redshift) | ❌ | ✅ |
Cube provides a solution to this problem: pre-aggregations. In plain terms, a pre-aggregation is a condensed version of the source data. It specifies attributes from the source, which Cube uses to condense (or crunch) the data. This simple yet powerful optimization can reduce the size of the dataset by several orders of magnitude and ensures that subsequent queries can be served from the same condensed dataset whenever their attributes match.

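To make "condensing" concrete, the rollup can be pictured as a plain group-by: many source rows collapse into one row per distinct combination of the chosen attributes. The toy sketch below is not Cube's implementation, only an illustration of the idea:

```javascript
// Toy illustration of what a pre-aggregation does conceptually:
// collapse raw rows into one row per distinct attribute combination,
// accumulating a count measure along the way.
function preAggregate(rows, dimensions) {
  const buckets = new Map();
  for (const row of rows) {
    const key = dimensions.map((d) => row[d]).join("|");
    const bucket =
      buckets.get(key) ??
      Object.fromEntries(
        dimensions.map((d) => [d, row[d]]).concat([["count", 0]])
      );
    bucket.count += 1;
    buckets.set(key, bucket);
  }
  return [...buckets.values()];
}

// Five raw orders (mirroring the sample data later in this page)
// condense to three rows: completed ×3, shipped ×1, processing ×1.
const orders = [
  { id: 1, status: "completed" },
  { id: 2, status: "completed" },
  { id: 3, status: "shipped" },
  { id: 4, status: "processing" },
  { id: 5, status: "completed" },
];
console.log(preAggregate(orders, ["status"]));
```

Any later query that only asks for `status` (and the `count` measure) can be answered from the three condensed rows instead of the raw table.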
## Pre-Aggregations without Time Dimension

To illustrate pre-aggregations with an example, let's use a sample e-commerce database. We have a data model representing all our `orders`:

**YAML**

```yaml
cubes:
  - name: orders
    sql_table: orders

    measures:
      - name: count
        type: count

    dimensions:
      - name: id
        sql: id
        type: number
        primary_key: true

      - name: status
        sql: status
        type: string

      - name: completed_at
        sql: completed_at
        type: time
```
**JavaScript**

```javascript
cube(`orders`, {
  sql_table: `orders`,

  measures: {
    count: {
      type: `count`,
    },
  },

  dimensions: {
    id: {
      sql: `id`,
      type: `number`,
      primary_key: true,
    },

    status: {
      sql: `status`,
      type: `string`,
    },

    completed_at: {
      sql: `completed_at`,
      type: `time`,
    },
  },
});
```
Some sample data from this table might look like:

| **id** | **status** | **completed_at** |
| ------ | ---------- | ----------------------- |
| 1 | completed | 2021-02-15T12:21:11.290 |
| 2 | completed | 2021-02-25T18:15:12.369 |
| 3 | shipped | 2021-03-15T20:40:57.404 |
| 4 | processing | 2021-03-13T10:30:21.360 |
| 5 | completed | 2021-03-10T18:25:32.109 |

Our first requirement is to populate a dropdown in our front-end application showing all possible statuses. The Cube query to retrieve this information might look something like:

**JSON**

```json
{
  "dimensions": ["orders.status"]
}
```
In that case, we can add the following pre-aggregation to the `orders` cube:

**YAML**

```yaml
cubes:
  - name: orders
    # ...

    pre_aggregations:
      - name: order_statuses
        dimensions:
          - status
```

**JavaScript**

```javascript
cube(`orders`, {
  // ...

  pre_aggregations: {
    order_statuses: {
      dimensions: [status],
    },
  },
});
```
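With the pre-aggregation defined, the dropdown query shown earlier can be sent to Cube's REST API unchanged; because it references only the `status` dimension, it matches the rollup. A minimal sketch follows — the host and API token are placeholders, while `/cubejs-api/v1/load` is Cube's standard REST query endpoint:

```javascript
// Sketch: fetching all order statuses from Cube over REST (Node 18+).
// Host and token are placeholders for a real deployment.
const statusQuery = { dimensions: ["orders.status"] };

async function loadOrderStatuses(host, apiToken) {
  const res = await fetch(`${host}/cubejs-api/v1/load`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: apiToken,
    },
    body: JSON.stringify({ query: statusQuery }),
  });
  const { data } = await res.json();
  // Each result row looks like { "orders.status": "completed" }.
  return data.map((row) => row["orders.status"]);
}
```

The function is not invoked here, since it needs a running Cube instance; it shows how the JSON query above travels to the API.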