
Benchmark results: HerdDB 0.12.0 (2019 hardware)


For information on YCSB and how to use it to run these benchmarks, see this page.

Tools and system used

This bench compares the throughput of HerdDB 0.10.0, HerdDB 0.12.0 and MySQL 5.7.24 on two different disk subsystems (SSD and HDD). To keep results repeatable, each bench was run multiple times on the same "cloud" machine (CentOS 7 with 64GB of RAM and 16 virtual CPUs; SSD rated for 2300 IOPS, HDD rated for 500 IOPS peak), starting each time from an empty table. The data set fits entirely in memory for both databases (recordcount and operationcount both set to 1000000).
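For reference, a typical load/run pair for one of these benches looks like the sketch below. Paths, credentials and the HerdDB JDBC settings are illustrative assumptions, not the exact commands used for these results.

```sh
# Load phase: insert 1,000,000 records with 200 client threads (YCSB JDBC binding).
bin/ycsb load jdbc -s -threads 200 \
  -cp /path/to/herddb-jdbc.jar \
  -P workloads/workloada \
  -p recordcount=1000000 \
  -p db.driver=herddb.jdbc.Driver \
  -p db.url="jdbc:herddb:server:localhost:7000" \
  -p db.user=sa -p db.passwd=hdb   # default HerdDB credentials, an assumption

# Run phase: execute 1,000,000 operations of the selected workload.
bin/ycsb run jdbc -s -threads 200 \
  -cp /path/to/herddb-jdbc.jar \
  -P workloads/workloada \
  -p operationcount=1000000 \
  -p db.driver=herddb.jdbc.Driver \
  -p db.url="jdbc:herddb:server:localhost:7000" \
  -p db.user=sa -p db.passwd=hdb
```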

This bench has been used as a tool during the development of HerdDB; a more rigorous analysis on real-world data would be needed to relate these results to production scenarios. For example, the access pattern of your application may differ heavily from the ones used in these benches.

You will find the my.cnf and the other configuration files used for HerdDB, YCSB and MySQL in the "Configuration files" section below.

Results

Data saved on SSD (throughput in ops/sec, 200 client threads; "% ratio" is HerdDB 0.12.0 throughput as a percentage of MySQL):

| Workload | HerdDB 0.10.0 | HerdDB 0.12.0 | MySQL 5.7.24 | % ratio | Notes |
|----------|--------------:|--------------:|-------------:|--------:|-------|
| load phase SSD | 6674 | 6798 | 1406 | 483 | 100% INSERT |
| workloadA SSD | 4205 | 4211 | 3221 | 131 | 50% READ - 50% UPDATE |
| workloadB SSD | 23790 | 23021 | 29182 | 79 | 95% READ - 5% UPDATE |
| workloadC SSD | 48441 | 42193 | 47236 | 89 | 100% READ |
| workloadD SSD | 38215 | 33882 | 23552 | 144 | 95% READ (latest rows) - 5% INSERT |
| workloadE SSD | 12969 | 12652 | 15814 | 80 | 95% SCAN (short range of records) - 5% INSERT |
| workloadF SSD | 4006 | 3936 | 3193 | 123 | 50% READ - 50% READ-MODIFY-WRITE |
| workloadems1 SSD | 7150 | 7348 | 1680 | 437 | 32% INSERT - 64% UPDATE - 4% READ |

*(chart: SSD throughput comparison by workload)*

Data saved on HDD (throughput in ops/sec, 200 client threads; "% ratio" is HerdDB 0.12.0 throughput as a percentage of MySQL):

| Workload | HerdDB 0.10.0 | HerdDB 0.12.0 | MySQL 5.7.24 | % ratio | Notes |
|----------|--------------:|--------------:|-------------:|--------:|-------|
| load phase HDD | 1789 | 2163 | 398 | 543 | 100% INSERT |
| workloadA HDD | 1072 | 1818 | 939 | 194 | 50% READ - 50% UPDATE |
| workloadB HDD | 8968 | 8915 | 10496 | 85 | 95% READ - 5% UPDATE |
| workloadC HDD | 49180 | 42773 | 49224 | 87 | 100% READ |
| workloadD HDD | 12760 | 14918 | 6850 | 218 | 95% READ (latest rows) - 5% INSERT |
| workloadE HDD | 7724 | 7779 | 5982 | 130 | 95% SCAN (short range of records) - 5% INSERT |
| workloadF HDD | 1336 | 1564 | 970 | 161 | 50% READ - 50% READ-MODIFY-WRITE |
| workloadems1 HDD | 1802 | 2270 | 496 | 458 | 32% INSERT - 64% UPDATE - 4% READ |

*(chart: HDD throughput comparison by workload)*

Configuration files

YCSB ems1 custom workload

workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=1000000
operationcount=1000000
insertstart=0
fieldcount=10
fieldlength=100
readallfields=true
writeallfields=false
fieldlengthdistribution=constant
readproportion=0.02
updateproportion=0.64
insertproportion=0.32
readmodifywriteproportion=0.02
scanproportion=0
maxscanlength=1000
scanlengthdistribution=uniform
insertorder=ordered
requestdistribution=latest
hotspotdatafraction=0.2
hotspotopnfraction=0.8
table=usertable
histogram.buckets=1000
timeseries.granularity=1000
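The run phase can then point at this definition. A minimal sketch, assuming the file above is saved as workloads/ems1 and the MySQL instance from the my.cnf below (port 3307); driver jar path and credentials are placeholders:

```sh
# Run the custom ems1 workload with 200 client threads against MySQL.
bin/ycsb run jdbc -s -threads 200 \
  -cp /path/to/mysql-connector-java.jar \
  -P workloads/ems1 \
  -p db.driver=com.mysql.jdbc.Driver \
  -p db.url="jdbc:mysql://127.0.0.1:3307/ycsb" \
  -p db.user=ycsb -p db.passwd=ycsb
```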

MySQL my.cnf

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[client]
port    = 3307
socket = /data2/mysql-5.7/data/mysqld.sock
default-character-set=utf8mb4


[mysqld]

# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# server_id = .....
# socket = .....

# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
log_warnings=3
port = 3307
socket = /data2/mysql-5.7/data/mysqld.sock
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
lower_case_table_names=1
back_log = 1000
max_connections = 1000
max_connect_errors = 10
table_open_cache = 8192
max_allowed_packet = 16M
max_heap_table_size = 512M
read_buffer_size = 8M
read_rnd_buffer_size = 64M
sort_buffer_size = 32M
join_buffer_size = 32M
thread_cache_size = 200
#thread_concurrency = 32
query_cache_size = 256M
query_cache_limit = 8M
ft_min_word_len = 4
default-storage-engine = InnoDB
thread_stack = 512K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 1G

innodb_buffer_pool_size = 8G
#innodb_buffer_pool_size = 300MB
innodb_data_file_path = ibdata1:10M:autoextend
innodb_write_io_threads = 16
innodb_read_io_threads = 16

innodb_thread_concurrency = 32
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 32M
innodb_log_file_size = 512M
innodb_log_files_in_group = 4
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 50

explicit_defaults_for_timestamp=1
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
innodb_read_ahead_threshold=0
innodb_io_capacity=20000

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES


[mysqldump]
# Do not buffer the whole result set in memory before writing it to
# file. Required for dumping very large tables
quick
max_allowed_packet = 16M

[mysqld_safe]
# Increase the amount of open files allowed per process. Warning: Make
# sure you have set the global system limit high enough! The high value
# is required for a large number of opened tables
open-files-limit = 16384
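The YCSB JDBC binding expects a usertable with a key column and one value column per configured field (fieldcount=10 above). A minimal sketch of the DDL; the exact schema used for these runs is not reported, and the column types here are an assumption:

```sh
# Create the YCSB database and table on the MySQL instance configured above (port 3307).
# Column names follow the YCSB JDBC binding convention; VARCHAR/TEXT types are assumptions.
mysql --port=3307 --protocol=TCP -u root -p -e "
  CREATE DATABASE IF NOT EXISTS ycsb;
  CREATE TABLE ycsb.usertable (
    YCSB_KEY VARCHAR(255) PRIMARY KEY,
    FIELD0 TEXT, FIELD1 TEXT, FIELD2 TEXT, FIELD3 TEXT, FIELD4 TEXT,
    FIELD5 TEXT, FIELD6 TEXT, FIELD7 TEXT, FIELD8 TEXT, FIELD9 TEXT
  );"
```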

HerdDB setenv.sh


# Basic Environment and Java variables

JAVA_HOME="/usr/java/current12"
JDK_JAVA_OPTIONS="--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED"
JAVA_OPTS="-XX:+UseG1GC -Duser.language=en -Xmx8g -Xms8g -XX:+AggressiveOpts -Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=4g  -Djava.util.logging.config.file=conf/logging.properties"

if [ -z "$JAVA_HOME" ]; then
  JAVA_PATH=`which java 2>/dev/null`
  if [ "x$JAVA_PATH" != "x" ]; then
    JAVA_BIN=`dirname $JAVA_PATH 2>/dev/null`
    JAVA_HOME=`dirname $JAVA_BIN 2>/dev/null`
  fi
  if [ -z "$JAVA_HOME" ]; then
    echo "JAVA_HOME environment variable is not defined and is needed to run this program"
    exit 1
  fi
fi
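With this environment in place, a standalone server can be started through the service script shipped in the HerdDB binary distribution. A minimal sketch; the directory name is an assumption:

```sh
# Start a standalone HerdDB server; setenv.sh is picked up by the service script.
cd herddb-services-0.12.0    # unpacked binary distribution (name is an assumption)
bin/service server start
```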

HerdDB server.properties

# standalone|cluster
server.mode=standalone

# listening endpoint for client and server-to-server communications
# if you leave host empty a discovery of the local name will be attempted
server.host=
# if you leave the port at zero a random port will be used at every boot
server.port=7000

# id of the node in the cluster
# when empty the system will choose a random id and write it to server.data.dir/nodeid file
server.node.id=

# main directory, other directories by default are relative to this directory
server.base.dir=dbdata

# directory to use to store data
server.data.dir=data

# temporary directory for swap
server.tmp.dir=tmp

# metadata directory. in cluster mode metadata are stored on ZooKeeper, not in this directory
server.metadata.dir=metadata

#####################################
# standalone mode txlog configuration
#####################################

# this directory contains the transaction log
server.log.dir=txlog

# max number of unflushed operations
# txlog.maxsyncbatchsize=10000
# max number of unflushed bytes
# txlog.maxsyncbatchbytes=524288
# max time (in ms) to wait before acknowledging a write
# txlog.synctimeout=1
# background fsync task period, in seconds, 0 to disable
# txlog.deferredsyncperiod=0
# max txlog file size
# txlog.maxfilesize=67108864

# force fsync on txlog (only standalone) and on data pages (standalone and cluster)
#requirefsync=true

# use O_DIRECT to write to the txlog. It may speed up writes by up to 50x, but
# this depends on the underlying FS and on the Linux kernel version and configuration
# txlog.use_o_direct=false

# use O_DIRECT to read/write data pages
# page.use_o_direct=false

# use O_DIRECT to read/write index pages
# index.use_o_direct=false

# SSL configuration
# if no file is configured a self-signed certificate will be generated at every boot
server.ssl=false

# Enable/Disable network acceptor
# server.network.enabled=true
# for server.mode=cluster
server.zookeeper.address=localhost:2181
server.zookeeper.session.timeout=40000
server.zookeeper.path=/herd


# bookkeeper client parameters
server.bookkeeper.ensemble.size=1
server.bookkeeper.write.quorum.size=1
server.bookkeeper.ack.quorum.size=1

# bookkeeper client parameters. for a fault-tolerant system you should have at least 3 bookies and configure these values
#server.bookkeeper.ensemble.size=3
#server.bookkeeper.write.quorum.size=2
#server.bookkeeper.ack.quorum.size=2

# retention period, in milliseconds, of bookkeeper ledgers
server.bookkeeper.ledgers.retention.period=34560000

# max time to wait before forcing sync to follower nodes, set 0 to disable this feature (if you do not have followers at all)
server.bookkeeper.max.idle.time=10000

# start a bookie inside the same JVM (if the server is started in cluster mode)
server.bookkeeper.start=true
# if you leave the port at zero a random port will be used and then persisted to the bookie_port file
# bookkeeper uses local hostname and this port to identify bookies
server.bookkeeper.port=-1

# max "logical" size in bytes of a data page. Defaults to 1MB
#server.memory.page.size=

# period of time in milliseconds between forced checkpoints. Defaults to 15 minutes
#server.checkpoint.period=

# Maximum target time in milliseconds to spend during standard checkpoint operations. Checkpoint duration
# could be longer than this to complete the flush of pages. If set to -1 checkpoints won't have a time limit.
# Be aware that setting this parameter to small values could hurt performance in the long run by
# increasing page pollution with dirty, unreclaimed records: in many cases it is safer to configure a
# wider dirty page threshold.
#server.checkpoint.duration=

# Maximum target time in milliseconds to spend during standard checkpoint operations on compacting
# smaller pages. It should be less than the maximum checkpoint duration configured by
# "server.checkpoint.duration". If set to -1 checkpoints won't have a time limit. Regardless of its
# value, at least one page will be compacted at each checkpoint.
#server.checkpoint.compaction=

# Maximum dirty bytes percentage at which a page will be considered for rebuild during a checkpoint.
# This value must be between 0 and 1.0.
#server.checkpoint.page.dirty.max.threshold=

# Minimum byte fill percentage at which a page will be considered for rebuild during a checkpoint.
# This value must be between 0 and 1.0.
#server.checkpoint.page.fill.min.threshold=

# option to halt the JVM in case of error during the boot of a tablespace
# use this option to debug boot problems
server.halt.on.tablespace.boot.error=true

# users file; make sure its UNIX permissions are set properly
server.users.file=../conf/users
# overall limit on memory usage. It defaults to the maximum heap size configured for the JVM
#server.memory.max.limit=

# maximum amount of memory (in bytes) used for data. Defaults to 30% of server.memory.max.limit
#server.memory.data.limit=
server.memory.data.limit=4294967296
# maximum amount of memory (in bytes) used for primary indexes. Defaults to 20% of server.memory.max.limit
#server.memory.pk.limit=
server.memory.pk.limit=2147483648
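# note: the two explicit limits above are 4294967296 bytes = 4 GiB for data
# and 2147483648 bytes = 2 GiB for primary key indexes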
# enable/disable JMX
#server.jmx.enable=true

# web ui
#http.enabled=true

# bind address, defaults to the same host as the main API connector
#http.host=

# http server TCP port
#http.port=9845


# Thread pools (if you find more sensible defaults please create an issue for discussion)

# size of internal Netty EventLoopGroups (boss and workers)
server.network.thread.workers=16

# threads for handling requests (mostly SQL parsing + data access/locks + reads)
server.network.thread.callback.workers=64

# threads for executing the work that happens after writing to the log (mostly DML)
server.async.thread.workers=64