This is a toy cluster with Hadoop and HBase nodes, created for experiments.
It includes the following services:
- Apache Zookeeper 3.9.0
- Apache Hadoop 3.3.6
- Azkaban 4.0.0
- Apache Spark 3.5.1
- Apache HBase 2.6.0 (Reference Guide exists only for version 2.4)
Just execute:
$ ./start.sh
The current state of all nodes is saved in the data directory.
A logs directory is created automatically and contains the logs for all services.
Data can be shared between the host machine and the cluster nodes through the shared directory.
To stop:
$ ./stop.sh
Note: to clean up all working data, just execute the $ ./clean.sh script.
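After start.sh finishes, a quick sanity check is to confirm the working directories described above actually exist. A minimal sketch (the directory names data, logs, and shared come straight from the notes above):

```shell
# Check that the working directories from the quickstart exist
# in the current (repository root) directory.
for d in data logs shared; do
  if [ -d "$d" ]; then
    echo "present: $d"
  else
    echo "missing: $d"
  fi
done
```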
toy-master1:
- HDFS NameNode
- YARN ResourceManager
- MapReduce JobHistory Server
- Azkaban (username/password is azkaban/azkaban)
- Spark Web UI
- Spark History Server
- HBase Master Web UI
- HBase Thrift2 Web UI
- HBase REST Web UI
toy-worker1:
- YARN NodeManager
- Spark Web UI
- HBase RegionServer Web UI
toy-worker2:
- YARN NodeManager
- Spark Web UI
- HBase RegionServer Web UI
toy-worker3:
- YARN NodeManager
- Spark Web UI
- HBase RegionServer Web UI
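With the cluster up, the web UIs listed above can be polled from the host. A minimal sketch, assuming the node hostnames resolve from the host machine and that the services listen on the stock Hadoop 3 / HBase 2 / Spark default ports (an assumption; adjust if the cluster maps ports differently):

```shell
#!/usr/bin/env sh
# Poll each web UI listed above. Hostnames are the node names from this
# README; the ports are the upstream defaults (assumption -- verify
# against the cluster's actual configuration).
check_ui() {
  host=$1; port=$2; label=$3
  if curl -sf -o /dev/null --max-time 5 "http://$host:$port/"; then
    echo "OK   $label ($host:$port)"
  else
    echo "DOWN $label ($host:$port)"
  fi
}

check_ui toy-master1 9870  "HDFS NameNode UI"
check_ui toy-master1 8088  "YARN ResourceManager UI"
check_ui toy-master1 19888 "MapReduce JobHistory UI"
check_ui toy-master1 16010 "HBase Master UI"
check_ui toy-master1 18080 "Spark History Server UI"
for w in 1 2 3; do
  check_ui "toy-worker$w" 8042  "YARN NodeManager UI"
  check_ui "toy-worker$w" 16030 "HBase RegionServer UI"
done
```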
- Go to the examples folder.
- Deploy the examples:
$ ./deploy.sh
Note: this script builds the examples and installs them into the local Maven cache (~/.m2/repository), then downloads them and pushes them to the cluster.
- Run an example:
$ ./mapreduce.sh
or
$ ./hbase.sh
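For reference, the classic WordCount computation that the MapReduce example demonstrates can be imitated with plain shell tools, which is handy for sanity-checking the cluster's output on small inputs (the sample file and text here are made up for illustration):

```shell
# Local imitation of WordCount: split the input on whitespace, count
# occurrences, and print "word<TAB>count" pairs like the job's output.
printf 'hello world\nhello hbase\n' > /tmp/wc_demo.txt
tr -s '[:space:]' '\n' < /tmp/wc_demo.txt | sort | uniq -c | awk '{print $2 "\t" $1}'
```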
- Go to the examples folder.
- Deploy the examples:
$ ./deploy.sh
Note: this script builds the examples and installs them into the local Maven cache (~/.m2/repository), then downloads them and pushes them to the cluster.
- Upload the input data:
$ ./wc_input.sh
- Go to the azkaban directory and create a ZIP archive:
$ ./azkaban-arch.sh
It will generate azkaban-jobs.zip.
- Go to Azkaban and log in (username/password is azkaban/azkaban).
- Click the "Create Project" button.
- Enter a "Name" and "Description", then click the "Create Project" button.
- Click "Upload" and select the generated ZIP file, then click the "Upload" button.
- Click "Execute Flow". Update the "Flow Parameters" if needed (all parameters can be seen in the shared.properties file).
- Click the "Execute" button, then press the "Continue" button.
Note: to see the "Job Logs", click "Job List" and follow the "Details" link in the last column of the table.
- Download the output data:
$ ./wc_output.sh
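For context, the ZIP that azkaban-arch.sh builds contains Azkaban flow definitions. In Azkaban a minimal command-type job file looks like this (the file name and command here are hypothetical; the real definitions live in the azkaban directory of this repo):

```properties
# wordcount.job -- hypothetical minimal Azkaban job definition
type=command
command=echo "run the wordcount step here"
```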
Distributed under the MIT License.