This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Documentation update (#88)
* Update INSTALL.md
Updated the install doc to add the StorageClass part for the deployment on a generic Kubernetes cluster
* Update installation and FAQ with Kubernetes tips.
* describe how to add a user
* Feature/kafka (#87)
* Feature/documentation for #43 #78 (#85)
* add doc traefik
* add TSimulus & reverse proxy doc #78, #43
banzo authored Dec 20, 2019
1 parent 62ee617 commit dc250e7
Showing 59 changed files with 18,746 additions and 12 deletions.
8 changes: 7 additions & 1 deletion INSTALL.md
Original file line number Diff line number Diff line change
@@ -207,4 +207,10 @@ See [.gitlab-ci.sample.yml](.gitlab-ci.sample.yml) for an example CI setup with

See the [user management documentation](doc/USERMANAGEMENT.md) for information on how to configure user identification and authorization (LDAP, RBAC, ...).

See the [logs management documentation](doc/LOGGING.md) for information on how to configure logging
See the [logs management documentation](doc/LOGGING.md) for information on how to configure logging.

See the [reverse proxy documentation](doc/REVERSEPROXY.md) for information on how to configure the Traefik reverse proxy with FADI.

See the [security documentation](doc/SECURITY.md) for information on how to configure SSL.

See the [TSimulus documentation](doc/TSIMULUS.md) for information on how to simulate sensors and generate realistic data with [TSimulus](https://github.com/cetic/TSimulus).
4 changes: 2 additions & 2 deletions README.md
@@ -18,9 +18,9 @@ Anywhere you can run [Kubernetes](https://kubernetes.io/), you should be able to
## Quick start

1. [Install the framework on your workstation](INSTALL.md)
2. Try [a simple use case](USERGUIDE.md)
2. Try [a simple use case](USERGUIDE.md) (or more [advanced usage examples](examples/README.md))

You can find a more detailed explanation of FADI in the [presentation slideshow](https://fadi.presentations.cetic.be)
You can find a more detailed explanation of FADI in the [presentation slideshow](https://fadi.presentations.cetic.be) and in the [documentation section](doc/README.md)

## FADI Helm Chart

2 changes: 2 additions & 0 deletions USERGUIDE.md
@@ -365,3 +365,5 @@ For more information on how to use Superset, see the [official Jupyter documenta
In this use case, we have demonstrated a simple configuration for FADI, where we use various services to ingest, store, analyse, explore and provide dashboards and alerts

You can find the various resources for this sample use case (Nifi flowfile, Grafana dashboards, ...) in the [examples folder](examples/basic)

The examples section contains other, more specific examples (e.g. [Kafka streaming ingestion](examples/kafka/README.md))
13 changes: 6 additions & 7 deletions doc/LOGGING.md
@@ -2,25 +2,25 @@ Logs management
==========

<p align="left">
<a href="https://www.elastic.co" alt="pgAdmin">
<img src="/doc/images/logos/elk.png" align="center" alt"ELK logo" width="200px" />
<a href="https://www.elastic.co" alt="elk">
<img src="/doc/images/logos/elk.png" align="center" alt="ELK logo" width="200px" />
</a>
</p>

**[Elastic Stack](https://www.elastic.co)** is a group of open source products from Elastic designed to help users take data from any type of source and in any format and search, analyze, and visualize that data in real time. The product group is composed of: **Beats**, **Logstash**, **Elasticsearch** and **Kibana**.
Despite each one of these four technologies being a separate project, they have been built to work together:

* the process starts with **[Beats](https://www.elastic.co/products/beats)**, which ships the logs from all services to
* **[Logstash](https://www.elastic.co/products/logstash)**, which parse, filter or/and transform the logs before storing them in
* **[Logstash](https://www.elastic.co/products/logstash)**, which parses, filters and/or transforms the logs before storing them in
* **[Elasticsearch](https://www.elastic.co/products/elasticsearch)** for indexing, which is connected to
* **[Kibana](https://www.elastic.co/products/kibana)** that provides visualisation of the various services' logs in a central web application interface

![Elastic-stack](/doc/images/installation/elastic_stack.png)
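The shipping step of the pipeline above can be sketched with a minimal Filebeat configuration (a sketch only — the log path and the `fadi-logstash` host name are assumptions; in FADI the actual settings come from the Helm chart values):

```
filebeat.inputs:
- type: log
  paths:
    - /var/log/containers/*.log   # assumption: container log location on the node
output.logstash:
  hosts: ["fadi-logstash:5044"]   # assumption: Logstash service name, default Beats port
```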

To access the **Kibana** web interface, use this command:
To access the **Kibana** web interface, you have to go through the nginx-ldapauth-proxy; use this command:

```
minikube service fadi-kibana
minikube service fadi-nginx-ldapauth-proxy
```

The next step is to **define your index pattern:** Index patterns tell Kibana which Elasticsearch indices you want to explore. An index pattern can match the name of a single index, or include a wildcard (`*`) to match multiple indices, for example, in our case the index we are using is `filebeat*` ([ref](https://www.elastic.co/guide/en/beats/filebeat/current/index.html)).
@@ -29,7 +29,6 @@ To create the index pattern and monitor the logs, follow these simple steps:
1. In Kibana, open **Management** and then click **Index Patterns**.
2. If this is your first index pattern, the **Create index pattern** page opens automatically. Otherwise, click **Create index pattern**.
3. Enter `filebeat*` in the Index pattern field.

![index_pattern](/doc/images/installation/index_pattern.png)

4. Click **Next step**.
@@ -42,4 +41,4 @@

![Kibana Logs](/doc/images/installation/kibana_logs.png)

For more details you can always visit the [Elastic-stack official documentation](https://www.elastic.co/guide/index.html).
For more details you can always visit the [Elastic-stack official documentation](https://www.elastic.co/guide/index.html).
10 changes: 10 additions & 0 deletions doc/README.md
@@ -0,0 +1,10 @@
FADI Documentation
==========

* [Services logs management](LOGGING.md) - configure and setup a centralised logging system
* [Users management](USERMANAGEMENT.md) - user identification and authorization (LDAP, RBAC, ...)
* [Reverse proxy](REVERSEPROXY.md) - Traefik reverse proxy configuration
* [Security](SECURITY.md) - SSL setup
* [TSimulus](TSIMULUS.md) - how to simulate sensors and generate realistic data with [TSimulus](https://github.com/cetic/TSimulus)

For tutorials and examples, see the [examples section](examples/README.md)
84 changes: 84 additions & 0 deletions doc/REVERSEPROXY.md
@@ -0,0 +1,84 @@
Reverse Proxy
==========

<p align="left">
<a href="https://traefik.io/" alt="traefik">
<img src="/doc/images/logos/traefik-logo.png" width="200px" />
</a>
</p>

* [1. Create the Traefik reverse proxy](#1-create-the-traefik-reverse-proxy)
* [2. Configure the various services to use Traefik](#2-configure-the-various-services-to-use-traefik)

This page provides information on how to configure FADI with the [Traefik](https://traefik.io/) Reverse Proxy.

> Traefik is an open-source reverse proxy and load balancer for HTTP and TCP-based applications that is easy, dynamic, automatic, fast, full-featured, production proven, provides metrics, and integrates with every major cluster technology... No wonder it's so popular!
Note: other reverse proxies than Traefik can be used with FADI; check the list [here](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/).

## 1. Create the Traefik reverse proxy

To create the Traefik reverse proxy, you will need to add the `stable` Helm repository:

```
helm repo add stable https://kubernetes-charts.storage.googleapis.com
```

You can find information about this Helm chart [here](https://github.com/helm/charts/tree/master/stable/traefik), as it is hosted in the stable Helm repository.

First, you will need to create a `clusterrole` for Traefik:

```
kubectl get clusterrole traefik-ingress-controller 2> /dev/null || kubectl create -f ./traefik/rbac-config.yaml
```

Take a look at the [sample file](/helm/traefik/rbac-config.yaml).

Then, you can install Traefik with Helm (for further information, you can follow this [tutorial](https://docs.traefik.io/v1.3/user-guide/kubernetes/#deploy-trfik-using-helm-chart)):

```
helm upgrade --install traefik stable/traefik -f ./traefik/values.yaml --namespace kube-system --tiller-namespace tiller
```

The values file can be found [here](/helm/traefik/values.yaml).

```
loadBalancerIP: "yourLoadBalancerIP"
ssl:
  enabled: true
dashboard:
  enabled: true
  domain: <yourdomain>
  serviceType: NodePort
  ingress:
    annotations: {kubernetes.io/ingress.class: traefik}
    path: /
```

See the [default values file](https://github.com/helm/charts/blob/master/stable/traefik/values.yaml) from the official repository for more configuration options.


## 2. Configure the various services to use Traefik

You will need to update ingress definitions for each service you want to expose, behind your domain name.

See https://docs.traefik.io/providers/kubernetes-ingress/ for the documentation.

Update the FADI `values.yaml` file. You can set all the service types to `ClusterIP` as all services are now exposed through an Ingress.

For instance, for Grafana:
```
grafana:
  enabled: true
  service:
    type: ClusterIP
  ingress:
    enabled: true
    annotations: {kubernetes.io/ingress.class: traefik}
    path: /
    hosts: [grafana.yourdomain]
```

You should now be able to access Grafana through the domain name you have chosen: `http(s)://grafana.yourdomain`
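The same pattern applies to the other services you want to expose; for instance, a sketch for Superset (an assumption — check the FADI values file to confirm the subchart exposes the same `service` and `ingress` keys):

```
superset:
  service:
    type: ClusterIP
  ingress:
    enabled: true
    annotations: {kubernetes.io/ingress.class: traefik}
    path: /
    hosts: [superset.yourdomain]
```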

Next you will also want to configure SSL access to your services. For that, have a look at the [security documentation](/doc/SECURITY.md).
7 changes: 7 additions & 0 deletions doc/SECURITY.md
@@ -0,0 +1,7 @@
Security
==========

TODO

* [cert-manager](https://cert-manager.io/docs/installation/kubernetes/).
* ...
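As a starting point for the cert-manager item above, a `ClusterIssuer` for Let's Encrypt using the HTTP-01 challenge through the Traefik ingress could look like this (a sketch only — the email address is a placeholder and the `apiVersion` depends on your cert-manager release):

```
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdomain        # placeholder: replace with a real address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik
```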
68 changes: 68 additions & 0 deletions doc/TSIMULUS.md
@@ -0,0 +1,68 @@
TSimulus: Sensors simulation for FADI
==========

<p align="left">
<a href="https://tsimulus.readthedocs.io/en/latest" alt="tsimulus">
<img src="/doc/images/logos/TSimulus-logo.png" width="200px" />
</a>
</p>

* [0. Context](#0-context)
* [1. Setup TSimulus](#1-setup-tsimulus)
* [2. How to use TSimulus?](#2-how-to-use-tsimulus)

## 0. Context

The TSimulus library is used to simulate various sensors from industrial partners in the context of [many research projects](https://www.cetic.be/FADI). It makes it possible to generate enough data to test the entire FADI platform, which is useful when real-time data streams are lacking: TSimulus lets you start working on the analysis part before the real data from the partners is available.

This section explains how to setup TSimulus for FADI and gives the essential links to the documentation on how to use it.

## 1. Setup TSimulus

The **TSimulus as a Service** project aims at building a REST API in front of the [TSimulus](https://github.com/cetic/TSimulus) framework, and a set of configurable websocket routes to consume the TSimulus stream.

The project is structured as a [sbt multiproject](https://www.scala-sbt.org/1.x/docs/Multi-Project.html): each part is runnable standalone, and the top-level project orchestrates the deployment and coordination of all parts.

For more information on the implementation, take a look at the documentation of [TSimulus as a Service](https://github.com/cetic/tsimulus-saas).

To install TSimulus with FADI, you will need to modify the [values.yaml](/helm/values.yaml) file slightly. This will deploy all the TSimulus services (the TSimulus microservice and the [Swagger User Interface](https://swagger.io/tools/swagger-ui/)) on your Kubernetes cluster.

First of all, update your [values.yaml](/helm/values.yaml) by activating the TSimulus services:

```
tsaas:
  enabled: true
  ingress:
    enabled: true
    hosts: [api-tsimulus.yourdomain]
swaggerui:
  enabled: true
  swaggerui:
    jsonUrl: https://raw.githubusercontent.com/cetic/tsimulus-saas/master/oas/api-doc/openapi.json
    server:
      url: http://api-tsimulus.yourdomain
      description: "TSIMULUS API"
  ingress:
    enabled: true
    hosts: [swagger-tsimulus.yourdomain]
```

You can also set up the Ingress parts to use a reverse proxy. See the [previous section](REVERSEPROXY.md).

Then, run the [deploy.sh](/helm/deploy.sh) script to take the modifications into account:

```
cd helm
./deploy.sh
```

You should now be able to access the Swagger User Interface (on a minikube setup: `minikube service fadi-swaggerui`):

![](/doc/images/installation/tsaas-swaggerui.png)

## 2. How to use TSimulus?

* See the [TSimulus Documentation](https://tsimulus.readthedocs.io/en/latest/).
* See the [TSimulus as a Service Documentation](https://github.com/cetic/tsimulus-saas).
* (TODO) See the use cases.
76 changes: 74 additions & 2 deletions doc/USERMANAGEMENT.md
@@ -8,6 +8,7 @@ User Management
* [Superset](#superset)
* [PostgreSQL](#postgresql)
* [3. Manage your LDAP server](#3-manage-your-ldap-server)
* [Adding a user](#adding-a-user)


This page provides information on how to configure FADI user authentication and authorization (LDAP, RBAC, ...).
@@ -81,12 +82,10 @@ host database user address auth-method [auth-options]

For example, to use LDAP authentication for local users, your configuration should look something like this :


```
local all all ldap ldapserver=example.com ldapport=389 [other-ldap-options]
```
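Besides the simple-bind form above, PostgreSQL also supports a search+bind mode, where the server first looks the user up in the directory before binding; a sketch (hypothetical host and base DN — adapt to your LDAP tree):

```
host all all 0.0.0.0/0 ldap ldapserver=example.com ldapbasedn="dc=ldap,dc=cetic,dc=be" ldapsearchattribute=uid
```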


For more information about how to add LDAP authentication to PostgreSQL: [LDAP authentication in PostgreSQL](https://www.postgresql.org/docs/11/auth-ldap.html)

For more information about pg-ldap-sync: [Use LDAP permissions in PostgreSQL](https://github.com/larskanis/pg-ldap-sync)
@@ -114,3 +113,76 @@ The first entry that will be created is for the administrator and the password i
* Password: `password1`

For more information on how to use phpLDAPadmin, see the [phpLDAPadmin documentation](http://phpldapadmin.sourceforge.net/function-ref/1.2/)

### Adding a user

This section provides an example on how to add a user through phpLDAPadmin and access the Superset, Grafana and JupyterHub services.

#### 1. Connect to phpLDAPadmin

<a href="http://phpldapadmin.sourceforge.net/wiki/index.php/Main_Page" alt="phpLDAPadmin"><img src="images/logos/phpldapadmin.jpg" width="100px" /></a>

Access your phpLDAPadmin service and connect using the admin Login DN & password, the default Login DN & password are:

* Login DN: `cn=admin,dc=ldap,dc=cetic,dc=be`
* Password: `password1`

<img src="images/installation/phpldapadmin.gif" />

#### 2. Add the user

To add users, there are two ways: using a template or manually.

#### Import the user using a template

The template below adds a user called John Doe:

```
dn: cn=John,cn=admin,dc=ldap,dc=cetic,dc=be
cn: John
givenname: John
mail: john@mail.com
objectclass: inetOrgPerson
objectclass: top
sn: Doe
uid: John Doe
userpassword: Johnpassword
```

Change the user name and other info (mail, etc.), then copy/paste the result in the import field. Here is an example of a modified template for a user called `Luke Skywalker`:

```
dn: cn=Luke,cn=admin,dc=ldap,dc=cetic,dc=be
cn: Luke
givenname: Luke
mail: luke.skywalker@mail.com
objectclass: inetOrgPerson
objectclass: top
sn: Skywalker
uid: Luke Skywalker
userpassword: ThereIsNoTry
```

Now you can go to `import`, paste that template and click `proceed` and the user will be added.

<img src="images/installation/Luke.gif" alt="Add a user"/>
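If you prefer the command line to the phpLDAPadmin import screen, the same template can be saved to a file and imported with the standard OpenLDAP client tools (a sketch — assumes `ldap-utils` is installed and the server is reachable; `-W` prompts for the admin password):

```
# Save the LDIF template to a file
cat > luke.ldif <<'EOF'
dn: cn=Luke,cn=admin,dc=ldap,dc=cetic,dc=be
cn: Luke
givenname: Luke
mail: luke.skywalker@mail.com
objectclass: inetOrgPerson
objectclass: top
sn: Skywalker
uid: Luke Skywalker
userpassword: ThereIsNoTry
EOF

# Import it (commented out here because it needs a live LDAP server):
# ldapadd -x -H ldap://localhost -D "cn=admin,dc=ldap,dc=cetic,dc=be" -W -f luke.ldif
```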

#### Add the user manually

You can add a user manually through phpLDAPadmin: after connecting, go to `⭐️Create new entry here`:

<img src="images/installation/Create_new.gif" alt="Create user"/>

You can for example create a user in the default admin group `cn=admin,dc=ldap,dc=cetic,dc=be`, or create a new group in which you can create new users.

In this example we are going to create a simple user under the default admin user (which is also a group).

When you click on `⭐️Create new entry here`, a new window called `Select a template for the creation process` will show up with all the different entries you can create:

<img src="images/installation/Generic_User_Account.png" alt="Create a new user"/>

Go to `Generic: User Account` and a list of fields will show up. Enter the information about the user you want to create and click `Create Object`.




Binary file added doc/images/installation/Create_new.gif
Binary file added doc/images/installation/Generic_User_Account.png
Binary file added doc/images/installation/Luke.gif
Binary file added doc/images/installation/tsaas-swaggerui.png
Binary file added doc/images/logos/TSimulus-logo.png
Binary file added doc/images/logos/traefik-logo.png
7 changes: 7 additions & 0 deletions examples/README.md
@@ -0,0 +1,7 @@
FADI examples
=========

This section contains various usage examples for FADI:

* [basic example](/USERGUIDE.md) with batch ingestion
* [streaming ingestion](kafka/README.md) with the help of the [Apache Kafka](https://kafka.apache.org) message broker
