diff --git a/README.md b/README.md index e22fb78ebc..ef168f48de 100644 --- a/README.md +++ b/README.md @@ -32,7 +32,7 @@ You can try out the demo instance: http://demo.redash.io/ (login with any Google ## Getting Started -* [Setting up re:dash instance](http://redash.io/deployment/setup.html) (includes links to ready made AWS/GCE images). +* [Setting up a re:dash instance](http://docs.redash.io/en/latest/setup.html). * [Documentation](http://docs.redash.io). diff --git a/docs/dev/vagrant.rst b/docs/dev/vagrant.rst index 1b788d8c98..755bc554d8 100644 --- a/docs/dev/vagrant.rst +++ b/docs/dev/vagrant.rst @@ -4,9 +4,8 @@ Setting up development environment (using Vagrant) To simplify contribution there is a `Vagrant box `__ available with all the needed software to run re:dash for development (use it only for -development, for demo purposes there is -`redash/demo `__ box and the -AWS/GCE images). +development; for demo purposes, use the +`Redash website `__). To get started with this box: diff --git a/docs/index.rst b/docs/index.rst index 816f020d95..866fa5e2b7 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -9,7 +9,7 @@ Open Source Data Collaboration and Visualization Platform Prior to **re:dash**, we tried to use traditional BI suites and discovered a set of bloated, technically challenged and slow tools/flows. What we were looking for was a more hacker'ish way to look at data, so we built one. **re:dash** was built to allow fast and easy access to billions of records, that we process and collect using Amazon Redshift ("petabyte scale data warehouse" that "speaks" PostgreSQL). -Today **_re:dash_** has support for querying multiple databases, including: Redshift, Google BigQuery,Google Spreadsheets, PostgreSQL, MySQL, Graphite and custom scripts. +Today **_re:dash_** supports querying multiple databases, including: Redshift, Google BigQuery, Google Spreadsheets, PostgreSQL, MySQL, Graphite and custom scripts.
Features ######## diff --git a/docs/settings.rst b/docs/settings.rst index 89af350de5..83476388da 100644 --- a/docs/settings.rst +++ b/docs/settings.rst @@ -1,7 +1,7 @@ Settings ######## -Much of the functionality of re:dash can be changes with settings. Settings are read by `/redash/settings.py` from environment variables which (for most installs) can be set in `/opt/redash/current/.env` +Much of the functionality of re:dash can be changed with settings. Settings are read by `/redash/settings.py` from environment variables which (for most installs) can be set in `/opt/redash/current/.env` The follow is a list of settings and what they control: diff --git a/docs/setup.rst b/docs/setup.rst index 2a25cc62b7..8e341faacb 100644 --- a/docs/setup.rst +++ b/docs/setup.rst @@ -6,84 +6,27 @@ script `__ -- us-west-1: `ami-269feb46 `__ -- us-west-2: `ami-435fba23 `__ -- eu-west-1: `ami-b4c277c7 `__ -- eu-central-1: `ami-07ced76b `__ -- sa-east-1: `ami-6e2eaf02 `__ -- ap-northeast-1: `ami-aa5a64c4 `__ -- ap-southeast-1: `ami-1c45897f `__ -- ap-southeast-2: `ami-42b79221 `__ - -(the above AMIs are of version: 0.9.1) - -When launching the instance make sure to use a security group, that **only** allows incoming traffic on: port 22 (SSH), 80 (HTTP) and 443 (HTTPS). - -Now proceed to `"Setup" <#setup>`__. - -Google Compute Engine ---------------------- - -First, you need to add the images to your account: - -.. code:: bash - - $ gcloud compute images create "redash-091-b1377" --source-uri gs://redash-images/redash.0.9.1.b1377.tar.gz - -Next you need to launch an instance using this image (n1-standard-1 -instance type is recommended). If you plan using re:dash with BigQuery, -you can use a dedicated image which comes with BigQuery preconfigured -(using instance permissions): - -.. code:: bash - - $ gcloud compute images create "redash-091-b1377-bq" --source-uri gs://redash-images/redash.0.9.1.b1377-bq.tar.gz - -Note that you need to launch this instance with BigQuery access: - -.. 
code:: bash - - $ gcloud compute instances create --image redash-091-b1377-bq --scopes storage-ro,bigquery - -(the same can be done from the web interface, just make sure to enable -BigQuery access) - -Now proceed to `"Setup" <#setup>`__. - - -Other ------ - Download the provision script and run it on your machine. Note that: 1. You need to run the script as root. 2. It was tested only on Ubuntu 12.04, Ubuntu 14.04 and Debian Wheezy. -3. It's designed to run on a "clean" machine. If you're running this script on a machine that is used for other purposes, you might want to tweak it to your needs (like removing the ``apt-get dist-upgrade`` call at the beginning of it). +3. It's designed to run on a "clean" machine. If you're running this script on a + machine that is used for other purposes, you might want to tweak it to your + needs (like removing the ``apt-get dist-upgrade`` call at the beginning of it). Setup ===== -Once you created the instance with either the image or the script, you +Once you've created the instance with the script, you should have a running re:dash instance with everything you need to get started. You can now login to it with the user "admin" (password: "admin"). But to make it useful, there are a few more steps that you need to manually do to complete the setup: -First ssh to your instance and change directory to ``/opt/redash``. If -you're using the GCE image, switch to root (``sudo su``). +First, ssh to your instance and change directory to ``/opt/redash``. Users & Google Authentication setup ----------------------------------- @@ -110,7 +53,8 @@ file. export REDASH_GOOGLE_CLIENT_SECRET="" -4. Configure the domain(s) you want to allow to use with Google Apps, by running the command: +4. Configure the domain(s) you want to allow to use with Google Apps, by + running the command: .. code:: @@ -134,8 +78,8 @@ If you're passing multiple domains, separate them with commas.
Datasources ----------- -To make re:dash truly useful, you need to setup your data sources in it. Browse to ``/data_sources`` on your instance, -to create new data source connection. +To make re:dash truly useful, you need to set up your data sources in it. Browse +to ``/data_sources`` on your instance to create a new data source connection. See :doc:`documentation ` for the different options. Your instance comes ready with dependencies needed to setup supported sources. @@ -143,8 +87,9 @@ Mail Configuration ------------------ -For the system to be able to send emails (for example when alerts trigger), you need to set the mail server to use and the -host name of your re:dash server. If you're using one of our images, you can do this by editing the `.env` file: +For the system to be able to send emails (for example when alerts trigger), you +need to set the mail server to use and the host name of your re:dash server. If +you're using one of our images, you can do this by editing the `.env` file: .. code:: @@ -161,16 +106,19 @@ host name of your re:dash server. If you're using one of our images, you can do export REDASH_HOST="" # base address of your re:dash instance, for example: "https://demo.redash.io" - Note that not all values are required, as there are default values. -- It's recommended to use some mail service, like `Amazon SES `__, `Mailgun `__ - or `Mandrill `__ to send emails to ensure deliverability. +- It's recommended to use a mail service, such as + `Amazon SES `__, + `Mailgun `__, or `SparkPost `__ + to send emails to ensure deliverability. -To test email configuration, you can run `bin/run ./manage.py send_test_mail` (from `/opt/redash/current`). +To test email configuration, you can run `bin/run ./manage.py send_test_mail` +(from `/opt/redash/current`). How to upgrade?
--------------- -It's recommended to upgrade once in a while your re:dash instance to -benefit from bug fixes and new features. See :doc:`here ` for full upgrade +It's recommended to upgrade your re:dash instance once in a while to benefit +from bug fixes and new features. See :doc:`here ` for full upgrade instructions (including Fabric script). Notes diff --git a/setup/amazon_linux/README.md b/setup/amazon_linux/README.md deleted file mode 100644 index d30254ea4e..0000000000 --- a/setup/amazon_linux/README.md +++ /dev/null @@ -1 +0,0 @@ -Bootstrap script for Amazon Linux AMI. *Not supported*, we recommend to use the Docker images instead. diff --git a/setup/amazon_linux/bootstrap.sh b/setup/amazon_linux/bootstrap.sh deleted file mode 100755 index 31b403ef74..0000000000 --- a/setup/amazon_linux/bootstrap.sh +++ /dev/null @@ -1,216 +0,0 @@ -#!/bin/bash -set -eu - -REDASH_BASE_PATH=/opt/redash -FILES_BASE_URL=https://raw.githubusercontent.com/getredash/redash/master/setup/amazon_linux/files/ -# Verify running as root: -if [ "$(id -u)" != "0" ]; then - if [ $# -ne 0 ]; then - echo "Failed running with sudo. Exiting." 1>&2 - exit 1 - fi - echo "This script must be run as root. Trying to run with sudo." - sudo bash $0 --with-sudo - exit 0 -fi - -# Base packages -yum update -yum install -y python-pip python-devel nginx curl -yes | yum groupinstall -y "Development Tools" -yum install -y libffi-devel openssl-devel - -# redash user -# TODO: check user doesn't exist yet? -if [-x $(adduser --system --no-create-home --comment "" redash)]; then - echo "redash user have already registered." -fi -add_service() { - service_name=$1 - service_command="/etc/init.d/$service_name" - - echo "Adding service: $service_name (/etc/init.d/$service_name)." - chmod +x $service_command - - if command -v chkconfig >/dev/null 2>&1; then - # we're chkconfig, so lets add to chkconfig and put in runlevel 345 - chkconfig --add $service_name && echo "Successfully added to chkconfig!"
- chkconfig --level 345 $service_name on && echo "Successfully added to runlevels 345!" - elif command -v update-rc.d >/dev/null 2>&1; then - #if we're not a chkconfig box assume we're able to use update-rc.d - update-rc.d $service_name defaults && echo "Success!" - else - echo "No supported init tool found." - fi - - $service_command start -} - -# PostgreSQL -pg_available=0 -psql --version || pg_available=$? -if [ $pg_available -ne 0 ]; then - # wget $FILES_BASE_URL"postgres_apt.sh" -O /tmp/postgres_apt.sh - # bash /tmp/postgres_apt.sh - yum update - yum -y install postgresql93-server postgresql93-devel - service postgresql93 initdb - add_service "postgresql93" -fi - - - -# Redis -redis_available=0 -redis-cli --version || redis_available=$? -if [ $redis_available -ne 0 ]; then - wget http://download.redis.io/releases/redis-2.8.17.tar.gz - tar xzf redis-2.8.17.tar.gz - rm redis-2.8.17.tar.gz - cd redis-2.8.17 - make - make install - - # Setup process init & configuration - - REDIS_PORT=6379 - REDIS_CONFIG_FILE="/etc/redis/$REDIS_PORT.conf" - REDIS_LOG_FILE="/var/log/redis_$REDIS_PORT.log" - REDIS_DATA_DIR="/var/lib/redis/$REDIS_PORT" - - mkdir -p `dirname "$REDIS_CONFIG_FILE"` || die "Could not create redis config directory" - mkdir -p `dirname "$REDIS_LOG_FILE"` || die "Could not create redis log dir" - mkdir -p "$REDIS_DATA_DIR" || die "Could not create redis data directory" - - wget -O /etc/init.d/redis_6379 $FILES_BASE_URL"redis_init" - wget -O $REDIS_CONFIG_FILE $FILES_BASE_URL"redis.conf" - - add_service "redis_$REDIS_PORT" - - cd .. - rm -rf redis-2.8.17 -fi - - -if [ ! -d "$REDASH_BASE_PATH" ]; then - sudo mkdir /opt/redash - sudo chown redash /opt/redash - sudo -u redash mkdir /opt/redash/logs -fi - -# Default config file -if [ ! 
-f "/opt/redash/.env" ]; then - sudo -u redash wget $FILES_BASE_URL"env" -O /opt/redash/.env -fi - -# Install latest version -REDASH_VERSION=${REDASH_VERSION-0.9.1.b1377} -LATEST_URL="https://github.com/getredash/redash/releases/download/v${REDASH_VERSION}/redash.$REDASH_VERSION.tar.gz" -VERSION_DIR="/opt/redash/redash.$REDASH_VERSION" -REDASH_TARBALL=/tmp/redash.tar.gz -REDASH_TARBALL=/tmp/redash.tar.gz - -if [ ! -d "$VERSION_DIR" ]; then - sudo -u redash wget $LATEST_URL -O $REDASH_TARBALL - sudo -u redash mkdir $VERSION_DIR - sudo -u redash tar -C $VERSION_DIR -xvf $REDASH_TARBALL - ln -nfs $VERSION_DIR /opt/redash/current - ln -nfs /opt/redash/.env /opt/redash/current/.env - - cd /opt/redash/current - - # TODO: venv? - pip install -r requirements.txt -fi - -# InfluxDB dependencies: -pip install influxdb==2.6.0 - -# BigQuery dependencies: -pip install google-api-python-client==1.2 pyOpenSSL==0.14 oauth2client==1.2 - -# MySQL dependencies: -yum install -y mysql-devel -pip install MySQL-python==1.2.5 - -# Microsoft SQL Server dependencies (`sudo` required): -sudo apt-get install -y freetds-dev -sudo pip install pymssql==2.1.1 - -# Mongo dependencies: -pip install pymongo==2.7.2 - -# Setup supervisord + sysv init startup script -sudo -u redash mkdir -p /opt/redash/supervisord -pip install supervisor==3.1.2 # TODO: move to requirements.txt - -# Create database / tables -pg_user_exists=0 -sudo -u postgres psql postgres -tAc "SELECT 1 FROM pg_roles WHERE rolname='redash'" | grep -q 1 || pg_user_exists=$? -if [ $pg_user_exists -ne 0 ]; then - echo "Creating redash postgres user & database." 
- sudo -u postgres createuser redash --no-superuser --no-createdb --no-createrole - sudo -u postgres createdb redash --owner=redash - - cd /opt/redash/current - sudo -u redash bin/run ./manage.py database create_tables -fi - -# Create default admin user -cd /opt/redash/current -# TODO: make sure user created only once -# TODO: generate temp password and print to screen -sudo -u redash bin/run ./manage.py users create --admin --password admin "Admin" "admin" - -# Create re:dash read only pg user & setup data source -pg_user_exists=0 -sudo -u postgres psql postgres -tAc "SELECT 1 FROM pg_roles WHERE rolname='redash_reader'" | grep -q 1 || pg_user_exists=$? -if [ $pg_user_exists -ne 0 ]; then - echo "Creating redash reader postgres user." - - sudo yum install -y expect - - REDASH_READER_PASSWORD=$(mkpasswd) - sudo -u postgres psql -c "CREATE ROLE redash_reader WITH PASSWORD '$REDASH_READER_PASSWORD' NOCREATEROLE NOCREATEDB NOSUPERUSER LOGIN" - sudo -u redash psql -c "grant select(id,name,type) ON data_sources to redash_reader;" redash - sudo -u redash psql -c "grant select on events, queries, dashboards, widgets, visualizations, query_results to redash_reader;" redash - - cd /opt/redash/current - sudo -u redash bin/run ./manage.py ds new -n "re:dash metadata" -t "pg" -o "{\"user\": \"redash_reader\", \"password\": \"$REDASH_READER_PASSWORD\", \"host\": \"localhost\", \"dbname\": \"redash\"}" -fi - - -# Get supervisord startup script -sudo -u redash wget -O /opt/redash/supervisord/supervisord.conf $FILES_BASE_URL"supervisord.conf" - -# install start-stop-daemon -wget http://developer.axis.com/download/distribution/apps-sys-utils-start-stop-daemon-IR1_9_18-2.tar.gz -tar xvzf apps-sys-utils-start-stop-daemon-IR1_9_18-2.tar.gz -cd apps/sys-utils/start-stop-daemon-IR1_9_18-2/ -gcc start-stop-daemon.c -o start-stop-daemon -cp start-stop-daemon /sbin/ - -wget -O /etc/init.d/redash_supervisord $FILES_BASE_URL"redash_supervisord_init" -add_service "redash_supervisord" - -# 
Nginx setup -if [ -d "/etc/nginx/conf.d" ]; then - echo "/etc/nginx/conf.d exists." - wget -O /etc/nginx/conf.d/redash.conf $FILES_BASE_URL"nginx_redash_site" -elif [ -d "/etc/nginx/sites-available" ]; then - echo "/etc/nginx/sites-available exists." - wget -O /etc/nginx/sites-available/redash $FILES_BASE_URL"nginx_redash_site" - ln -nfs /etc/nginx/sites-available/redash /etc/nginx/conf.d/redash.conf -else - mkdir /etc/nginx/sites-available - if [ $? -ne 0 ] ; then - echo "create /etc/nginx/sites-available ok" - wget -O /etc/nginx/sites-available/redash $FILES_BASE_URL"nginx_redash_site" - ln -nfs /etc/nginx/sites-available/redash /etc/nginx/conf.d/redash.conf - else - echo "ERROR: create /etc/nginx/sites-available failed" - exit - fi -fi - -service nginx restart diff --git a/setup/amazon_linux/files/env b/setup/amazon_linux/files/env deleted file mode 100644 index 7d468f86c3..0000000000 --- a/setup/amazon_linux/files/env +++ /dev/null @@ -1,6 +0,0 @@ -export REDASH_STATIC_ASSETS_PATH="../rd_ui/dist/" -export REDASH_LOG_LEVEL="INFO" -export REDASH_REDIS_URL=redis://localhost:6379/1 -export REDASH_DATABASE_URL="postgresql://redash" -export REDASH_COOKIE_SECRET=veryverysecret -export REDASH_GOOGLE_APPS_DOMAIN= diff --git a/setup/amazon_linux/files/nginx_redash_site b/setup/amazon_linux/files/nginx_redash_site deleted file mode 100644 index 19c21c0637..0000000000 --- a/setup/amazon_linux/files/nginx_redash_site +++ /dev/null @@ -1,20 +0,0 @@ -upstream rd_servers { - server 127.0.0.1:5000; -} - -server { - listen 80 default; - - access_log /var/log/nginx/rd.access.log; - - gzip on; - gzip_types *; - gzip_proxied any; - - location / { - proxy_set_header Host $http_host; - proxy_set_header X-Real-IP $remote_addr; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_pass http://rd_servers; - } -} \ No newline at end of file diff --git a/setup/amazon_linux/files/postgres_apt.sh b/setup/amazon_linux/files/postgres_apt.sh deleted file mode 100644 index 
35018d94ed..0000000000 --- a/setup/amazon_linux/files/postgres_apt.sh +++ /dev/null @@ -1,162 +0,0 @@ -#!/bin/sh - -# script to add apt.postgresql.org to sources.list - -# from command line -CODENAME="$1" -# lsb_release is the best interface, but not always available -if [ -z "$CODENAME" ]; then - CODENAME=$(lsb_release -cs 2>/dev/null) -fi -# parse os-release (unreliable, does not work on Ubuntu) -if [ -z "$CODENAME" -a -f /etc/os-release ]; then - . /etc/os-release - # Debian: VERSION="7.0 (wheezy)" - # Ubuntu: VERSION="13.04, Raring Ringtail" - CODENAME=$(echo $VERSION | sed -ne 's/.*(\(.*\)).*/\1/') -fi -# guess from sources.list -if [ -z "$CODENAME" ]; then - CODENAME=$(grep '^deb ' /etc/apt/sources.list | head -n1 | awk '{ print $3 }') -fi -# complain if no result yet -if [ -z "$CODENAME" ]; then - cat < /etc/apt/sources.list.d/pgdg.list < - -# Do NOT "set -e" - -# PATH should only include /usr/* if it runs after the mountnfs.sh script -PATH=/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin -NAME=supervisord -DESC="process supervisor" -DAEMON=/usr/local/bin/$NAME -DAEMON_ARGS="--configuration /opt/redash/supervisord/supervisord.conf " -PIDFILE=/opt/redash/supervisord/supervisord.pid -SCRIPTNAME=/etc/init.d/redash_supervisord -USER=redash - -# Exit if the package is not installed -[ -x "$DAEMON" ] || exit 0 - -# Read configuration variable file if it is present -[ -r /etc/default/$NAME ] && . /etc/default/$NAME - -. 
/etc/rc.d/init.d/functions - -# -# Function that starts the daemon/service -# - -do_start() -{ - # Return - # 0 if daemon has been started - # 1 if daemon was already running - # 2 if daemon could not be started - start-stop-daemon --start --quiet --pidfile $PIDFILE --user $USER --chuid $USER --exec $DAEMON --test > /dev/null \ - || return 1 - start-stop-daemon --start --quiet --pidfile $PIDFILE --user $USER --chuid $USER --exec $DAEMON -- \ - $DAEMON_ARGS \ - || return 2 - # Add code here, if necessary, that waits for the process to be ready - # to handle requests from services started subsequently which depend - # on this one. As a last resort, sleep for some time. -} - -# -# Function that stops the daemon/service -# - -do_stop() -{ - # Return - # 0 if daemon has been stopped - # 1 if daemon was already stopped - # 2 if daemon could not be stopped - # other if a failure occurred - start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --user $USER --chuid $USER --name $NAME - RETVAL="$?" - [ "$RETVAL" = 2 ] && return 2 - # Wait for children to finish too if this is a daemon that forks - # and if the daemon is only ever run from this initscript. - # If the above conditions are not satisfied then add some other code - # that waits for the process to drop all resources that could be - # needed by services started subsequently. A last resort is to - # sleep for some time. - start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --user $USER --chuid $USER --exec $DAEMON - [ "$?" = 2 ] && return 2 - # Many daemons don't delete their pidfiles when they exit. - rm -f $PIDFILE - return "$RETVAL" -} - -case "$1" in - start) - [ "$VERBOSE" != no ] && echo "Starting $DESC" "$NAME" - do_start - case "$?" in - 0|1) [ "$VERBOSE" != no ] && echo 0 ;; - 2) [ "$VERBOSE" != no ] && echo 1 ;; - esac - ;; - stop) - [ "$VERBOSE" != no ] && echo "Stopping $DESC" "$NAME" - do_stop - case "$?" 
in - 0|1) [ "$VERBOSE" != no ] && echo 0 ;; - 2) [ "$VERBOSE" != no ] && echo 1 ;; - esac - ;; - status) - status -p "$STASH_PID" stash - # status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $? - ;; - restart) - echo "Restarting $DESC" "$NAME" - do_stop - case "$?" in - 0|1) - do_start - case "$?" in - 0) echo 0 ;; - 1) echo 1 ;; # Old process is still running - *) echo 1 ;; # Failed to start - esac - ;; - *) - # Failed to stop - echo 1 - ;; - esac - ;; - *) - echo "Usage: $SCRIPTNAME {start|stop|status|restart}" >&2 - exit 3 - ;; -esac - diff --git a/setup/amazon_linux/files/redis.conf b/setup/amazon_linux/files/redis.conf deleted file mode 100644 index 5880a69895..0000000000 --- a/setup/amazon_linux/files/redis.conf +++ /dev/null @@ -1,785 +0,0 @@ -## Generated by install_server.sh ## -# Redis configuration file example - -# Note on units: when memory size is needed, it is possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -################################## INCLUDES ################################### - -# Include one or more other config files here. This is useful if you -# have a standard template that goes to all Redis server but also need -# to customize a few per-server settings. Include files can include -# other files, so use this wisely. -# -# Notice option "include" won't be rewritten by command "CONFIG REWRITE" -# from admin or Redis Sentinel. Since Redis always uses the last processed -# line as value of a configuration directive, you'd better put includes -# at the beginning of this file to avoid overwriting config change at runtime. -# -# If instead you are interested in using includes to override configuration -# options, it is better to use include as the last line. 
-# -# include /path/to/local.conf -# include /path/to/other.conf - -################################ GENERAL ##################################### - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize yes - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -pidfile /var/run/redis_6379.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6379 - -# TCP listen() backlog. -# -# In high requests-per-second environments you need an high backlog in order -# to avoid slow clients connections issues. Note that the Linux kernel -# will silently truncate it to the value of /proc/sys/net/core/somaxconn so -# make sure to raise both the value of somaxconn and tcp_max_syn_backlog -# in order to get the desired effect. -tcp-backlog 511 - -# By default Redis listens for connections from all the network interfaces -# available on the server. It is possible to listen to just one or multiple -# interfaces using the "bind" configuration directive, followed by one or -# more IP addresses. -# -# Examples: -# -# bind 192.168.1.100 10.0.0.1 -# bind 127.0.0.1 - -# Specify the path for the Unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 700 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. 
-# -# On Linux, the specified value (in seconds) is the period used to send ACKs. -# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 0 - -# Specify the server verbosity level. -# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel notice - -# Specify the log file name. Also the empty string can be used to force -# Redis to log on the standard output. Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile /var/log/redis_6379.log - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. -# syslog-enabled no - -# Specify the syslog identity. -# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################ -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving at all commenting all the "save" lines. 
-# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -save 900 1 -save 300 10 -save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in a hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# disaster will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usual even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. -rdbchecksum yes - -# The filename where to dump the DB -dbfilename dump.rdb - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. -# -# The Append Only File will also be created inside this directory. 
-# -# Note that you must specify a directory here, not a file name. -dir /var/lib/redis/6379 - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. A few things to understand ASAP about Redis replication. -# -# 1) Redis replication is asynchronous, but you can configure a master to -# stop accepting writes if it appears to be not connected with at least -# a given number of slaves. -# 2) Redis slaves are able to perform a partial resynchronization with the -# master if the replication link is lost for a relatively small amount of -# time. You may want to configure the replication backlog size (see the next -# sections of this file) with a sensible value depending on your needs. -# 3) Replication is automatic and does not need user intervention. After a -# network partition slaves automatically try to reconnect to masters -# and resynchronize with them. -# -# slaveof - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. 
Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets the replication timeout for: -# -# 1) Bulk transfer I/O during SYNC, from the point of view of slave. -# 2) Master timeout from the point of view of slaves (data, pings). -# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. -# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. 
-# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# Set the replication backlog size. The backlog is a buffer that accumulates -# slave data when slaves are disconnected for some time, so that when a slave -# wants to reconnect again, often a full resync is not needed, but a partial -# resync is enough, just passing the portion of data the slave missed while -# disconnected. -# -# The biggest the replication backlog, the longer the time the slave can be -# disconnected and later be able to perform a partial resynchronization. -# -# The backlog is only allocated once there is at least a slave connected. -# -# repl-backlog-size 1mb - -# After a master has no longer connected slaves for some time, the backlog -# will be freed. The following option configures the amount of seconds that -# need to elapse, starting from the time the last slave disconnected, for -# the backlog buffer to be freed. -# -# A value of 0 means to never release the backlog. -# -# repl-backlog-ttl 3600 - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. -# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one with priority 10, that is the lowest. -# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -# It is possible for a master to stop accepting writes if there are less than -# N slaves connected, having a lag less or equal than M seconds. 
-# -# The N slaves need to be in "online" state. -# -# The lag in seconds, that must be <= the specified value, is calculated from -# the last ping received from the slave, that is usually sent every second. -# -# This option does not GUARANTEES that N replicas will accept the write, but -# will limit the window of exposure for lost writes in case not enough slaves -# are available, to the specified number of seconds. -# -# For example to require at least 3 slaves with a lag <= 10 seconds use: -# -# min-slaves-to-write 3 -# min-slaves-max-lag 10 -# -# Setting one or the other to 0 disables the feature. -# -# By default min-slaves-to-write is set to 0 (feature disabled) and -# min-slaves-max-lag is set to 10. - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -# requirepass foobared - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. 
-# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). -# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# according to the eviction policy selected (see maxmemory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. -# -# This option is usually useful when using Redis as an LRU cache, or to set -# a hard memory limit for an instance (using the 'noeviction' policy). 
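The maxmemory eviction loop described above (evict keys by policy once the limit is hit) and the approximated, sample-based LRU that Redis uses can be sketched in pure Python. All names and sizes here are invented for illustration, and a key count stands in for a byte-based limit:

```python
import random

def evict_one(store, last_used, samples=3):
    """Pick `samples` random keys and evict the least recently used one,
    mimicking Redis' approximated allkeys-lru with maxmemory-samples."""
    candidates = random.sample(list(store), min(samples, len(store)))
    victim = min(candidates, key=lambda k: last_used[k])
    del store[victim]
    del last_used[victim]
    return victim

store, last_used = {}, {}
maxkeys = 5  # stand-in for a byte-based maxmemory limit
for i in range(8):
    if len(store) >= maxkeys:
        evict_one(store, last_used)   # make room before writing, like SET under maxmemory
    store[f"key:{i}"] = i
    last_used[f"key:{i}"] = i         # monotonically increasing "last access" clock

print(len(store))  # prints 5 -- the store never exceeds the limit
```

Because only a random sample is inspected, the evicted key is not guaranteed to be the global LRU — exactly the approximation the config file warns about.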
-# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). -# -# maxmemory <bytes> - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key accordingly to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are not suitable keys for eviction. -# -# At the date of writing this commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check.
For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. - -appendonly no - -# The name of the append only file (default: "appendonly.aof") - -appendfilename "appendonly.aof" - -# The fsync() call tells the Operating System to actually write data on disk -# instead to wait for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log . Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. 
It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. -# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. - -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. 
-# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -# An AOF file may be found to be truncated at the end during the Redis -# startup process, when the AOF data gets loaded back into memory. -# This may happen when the system where Redis is running -# crashes, especially when an ext4 filesystem is mounted without the -# data=ordered option (however this can't happen when Redis itself -# crashes or aborts but the operating system still works correctly). -# -# Redis can either exit with an error when this happens, or load as much -# data as possible (the default now) and start if the AOF file is found -# to be truncated at the end. The following option controls this behavior. -# -# If aof-load-truncated is set to yes, a truncated AOF file is loaded and -# the Redis server starts emitting a log to inform the user of the event. -# Otherwise if the option is set to no, the server aborts with an error -# and refuses to start. When the option is set to no, the user requires -# to fix the AOF file using the "redis-check-aof" utility before to restart -# the server. -# -# Note that if the AOF file will be found to be corrupted in the middle -# the server will still exit with an error. This option only applies when -# Redis will try to read more data from the AOF file but not enough bytes -# will be found. 
-aof-load-truncated yes - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. -# -# When a long running script exceed the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write commands was -# already issue by the script but the user don't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). -# -# You can configure the slow log with two parameters: one tells Redis -# what is the execution time, in microseconds, to exceed in order for the -# command to get logged, and the other parameter is the length of the -# slow log. When a new command is logged the oldest one is removed from the -# queue of logged commands. - -# The following time is expressed in microseconds, so 1000000 is equivalent -# to one second. Note that a negative number disables the slow log, while -# a value of zero forces the logging of every command. -slowlog-log-slower-than 10000 - -# There is no limit to this length. Just be aware that it will consume memory. 
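The two slow log parameters described above amount to a threshold plus a bounded queue that drops its oldest entry. A small Python sketch (class and variable names invented; durations in microseconds, as in Redis):

```python
from collections import deque

class SlowLog:
    """Toy model of Redis' slow log: record commands whose execution time
    exceeds a threshold, keeping only the most recent `max_len` entries."""

    def __init__(self, slower_than_us=10_000, max_len=128):
        self.slower_than_us = slower_than_us  # cf. slowlog-log-slower-than
        self.entries = deque(maxlen=max_len)  # cf. slowlog-max-len: oldest dropped

    def record(self, command, duration_us):
        # A negative threshold disables logging; zero logs every command.
        if self.slower_than_us < 0:
            return
        if duration_us >= self.slower_than_us:
            self.entries.append((command, duration_us))

    def reset(self):
        # Rough equivalent of SLOWLOG RESET: reclaim the memory used by entries.
        self.entries.clear()

log = SlowLog(slower_than_us=10_000, max_len=2)
log.record("GET fast", 120)            # below threshold: not logged
log.record("KEYS *", 250_000)          # logged
log.record("SORT big", 80_000)         # logged
log.record("LRANGE big 0 -1", 30_000)  # logged, evicting the oldest entry
print([cmd for cmd, _ in log.entries])  # ['SORT big', 'LRANGE big 0 -1']
```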
-# You can reclaim memory used by the slow log with SLOWLOG RESET. -slowlog-max-len 128 - -################################ LATENCY MONITOR ############################## - -# The Redis latency monitoring subsystem samples different operations -# at runtime in order to collect data related to possible sources of -# latency of a Redis instance. -# -# Via the LATENCY command this information is available to the user that can -# print graphs and obtain reports. -# -# The system only logs operations that were performed in a time equal or -# greater than the amount of milliseconds specified via the -# latency-monitor-threshold configuration directive. When its value is set -# to zero, the latency monitor is turned off. -# -# By default latency monitoring is disabled since it is mostly not needed -# if you don't have latency issues, and collecting data has a performance -# impact, that while very small, can be measured under big load. Latency -# monitoring can easily be enabled at runtime using the command -# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed. -latency-monitor-threshold 0 - -############################# Event notification ############################## - -# Redis can notify Pub/Sub clients about events happening in the key space. -# This feature is documented at http://redis.io/topics/notifications -# -# For instance if keyspace events notification is enabled, and a client -# performs a DEL operation on key "foo" stored in the Database 0, two -# messages will be published via Pub/Sub: -# -# PUBLISH __keyspace@0__:foo del -# PUBLISH __keyevent@0__:del foo -# -# It is possible to select the events that Redis will notify among a set -# of classes. Every class is identified by a single character: -# -# K Keyspace events, published with __keyspace@<db>__ prefix. -# E Keyevent events, published with __keyevent@<db>__ prefix. -# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
-# $ String commands -# l List commands -# s Set commands -# h Hash commands -# z Sorted set commands -# x Expired events (events generated every time a key expires) -# e Evicted events (events generated when a key is evicted for maxmemory) -# A Alias for g$lshzxe, so that the "AKE" string means all the events. -# -# The "notify-keyspace-events" takes as argument a string that is composed -# by zero or multiple characters. The empty string means that notifications -# are disabled at all. -# -# Example: to enable list and generic events, from the point of view of the -# event name, use: -# -# notify-keyspace-events Elg -# -# Example 2: to get the stream of the expired keys subscribing to channel -# name __keyevent@0__:expired use: -# -# notify-keyspace-events Ex -# -# By default all notifications are disabled because most users don't need -# this feature and the feature has some overhead. Note that if you don't -# specify at least one of K or E, no events will be delivered. -notify-keyspace-events "" - -############################### ADVANCED CONFIG ############################### - -# Hashes are encoded using a memory efficient data structure when they have a -# small number of entries, and the biggest entry does not exceed a given -# threshold. These thresholds can be configured using the following directives. -hash-max-ziplist-entries 512 -hash-max-ziplist-value 64 - -# Similarly to hashes, small lists are also encoded in a special way in order -# to save a lot of space. The special representation is only used when -# you are under the following limits: -list-max-ziplist-entries 512 -list-max-ziplist-value 64 - -# Sets have a special encoding in just one case: when a set is composed -# of just strings that happens to be integers in radix 10 in the range -# of 64 bit signed integers. -# The following configuration setting sets the limit in the size of the -# set in order to use this special memory saving encoding. 
-set-max-intset-entries 512 - -# Similarly to hashes and lists, sorted sets are also specially encoded in -# order to save a lot of space. This encoding is only used when the length and -# elements of a sorted set are below the following limits: -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 - -# HyperLogLog sparse representation bytes limit. The limit includes the -# 16 bytes header. When an HyperLogLog using the sparse representation crosses -# this limit, it is converted into the dense representation. -# -# A value greater than 16000 is totally useless, since at that point the -# dense representation is more memory efficient. -# -# The suggested value is ~ 3000 in order to have the benefits of -# the space efficient encoding without slowing down too much PFADD, -# which is O(N) with the sparse encoding. The value can be raised to -# ~ 10000 when CPU is not a concern, but space is, and the data set is -# composed of many HyperLogLogs with cardinality in the 0 - 15000 range. -hll-sparse-max-bytes 3000 - -# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in -# order to help rehashing the main Redis hash table (the one mapping top-level -# keys to values). The hash table implementation Redis uses (see dict.c) -# performs a lazy rehashing: the more operation you run into a hash table -# that is rehashing, the more rehashing "steps" are performed, so if the -# server is idle the rehashing is never complete and some more memory is used -# by the hash table. -# -# The default is to use this millisecond 10 times every second in order to -# active rehashing the main dictionaries, freeing memory when possible. -# -# If unsure: -# use "activerehashing no" if you have hard latency requirements and it is -# not a good thing in your environment that Redis can reply form time to time -# to queries with 2 milliseconds delay. -# -# use "activerehashing yes" if you don't have such hard requirements but -# want to free memory asap when possible. 
-activerehashing yes - -# The client output buffer limits can be used to force disconnection of clients -# that are not reading data from the server fast enough for some reason (a -# common reason is that a Pub/Sub client can't consume messages as fast as the -# publisher can produce them). -# -# The limit can be set differently for the three different classes of clients: -# -# normal -> normal clients including MONITOR clients -# slave -> slave clients -# pubsub -> clients subscribed to at least one pubsub channel or pattern -# -# The syntax of every client-output-buffer-limit directive is the following: -# -# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds> -# -# A client is immediately disconnected once the hard limit is reached, or if -# the soft limit is reached and remains reached for the specified number of -# seconds (continuously). -# So for instance if the hard limit is 32 megabytes and the soft limit is -# 16 megabytes / 10 seconds, the client will get disconnected immediately -# if the size of the output buffers reach 32 megabytes, but will also get -# disconnected if the client reaches 16 megabytes and continuously overcomes -# the limit for 10 seconds. -# -# By default normal clients are not limited because they don't receive data -# without asking (in a push way), but just after a request, so only -# asynchronous clients may create a scenario where data is requested faster -# than it can read. -# -# Instead there is a default limit for pubsub and slave clients, since -# subscribers and slaves receive data in a push fashion. -# -# Both the hard or the soft limit can be disabled by setting them to zero. -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 - -# Redis calls an internal function to perform many background tasks, like -# closing connections of clients in timeout, purging expired keys that are -# never requested, and so forth.
-# -# Not all tasks are performed with the same frequency, but Redis checks for -# tasks to perform accordingly to the specified "hz" value. -# -# By default "hz" is set to 10. Raising the value will use more CPU when -# Redis is idle, but at the same time will make Redis more responsive when -# there are many keys expiring at the same time, and timeouts may be -# handled with more precision. -# -# The range is between 1 and 500, however a value over 100 is usually not -# a good idea. Most users should use the default of 10 and raise this up to -# 100 only in environments where very low latency is required. -hz 10 - -# When a child rewrites the AOF file, if the following option is enabled -# the file will be fsync-ed every 32 MB of data generated. This is useful -# in order to commit the file to the disk more incrementally and avoid -# big latency spikes. -aof-rewrite-incremental-fsync yes diff --git a/setup/amazon_linux/files/redis_init b/setup/amazon_linux/files/redis_init deleted file mode 100644 index e20d856aaf..0000000000 --- a/setup/amazon_linux/files/redis_init +++ /dev/null @@ -1,66 +0,0 @@ -#!/bin/sh - -EXEC=/usr/local/bin/redis-server -CLIEXEC=/usr/local/bin/redis-cli -PIDFILE=/var/run/redis_6379.pid -CONF="/etc/redis/6379.conf" -REDISPORT="6379" -############### -# SysV Init Information -# chkconfig: - 58 74 -# description: redis_6379 is the redis daemon. -### BEGIN INIT INFO -# Provides: redis_6379 -# Required-Start: $network $local_fs $remote_fs -# Required-Stop: $network $local_fs $remote_fs -# Default-Start: 2 3 4 5 -# Default-Stop: 0 1 6 -# Should-Start: $syslog $named -# Should-Stop: $syslog $named -# Short-Description: start and stop redis_6379 -# Description: Redis daemon -### END INIT INFO - - -case "$1" in - start) - if [ -f $PIDFILE ] - then - echo "$PIDFILE exists, process is already running or crashed" - else - echo "Starting Redis server..." - $EXEC $CONF - fi - ;; - stop) - if [ ! 
-f $PIDFILE ] - then - echo "$PIDFILE does not exist, process is not running" - else - PID=$(cat $PIDFILE) - echo "Stopping ..." - $CLIEXEC -p $REDISPORT shutdown - while [ -x /proc/${PID} ] - do - echo "Waiting for Redis to shutdown ..." - sleep 1 - done - echo "Redis stopped" - fi - ;; - status) - if [ ! -f $PIDFILE ] - then - echo 'Redis is not running' - else - echo "Redis is running ($(<$PIDFILE))" - fi - ;; - restart) - $0 stop - $0 start - ;; - *) - echo "Please use start, stop, restart or status as first argument" - ;; -esac diff --git a/setup/amazon_linux/files/supervisord.conf b/setup/amazon_linux/files/supervisord.conf deleted file mode 100644 index 3adabacdc2..0000000000 --- a/setup/amazon_linux/files/supervisord.conf +++ /dev/null @@ -1,31 +0,0 @@ - -[supervisord] -nodaemon=false -logfile=/opt/redash/logs/supervisord.log -pidfile=/opt/redash/supervisord/supervisord.pid -directory=/opt/redash/current - -[inet_http_server] -port = 127.0.0.1:9001 - -[rpcinterface:supervisor] -supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface - -[program:redash_server] -command=/opt/redash/current/bin/run /usr/local/bin/gunicorn -b 127.0.0.1:5000 --name redash -w 4 redash.wsgi:app -process_name=redash_server -numprocs=1 -priority=999 -autostart=true -autorestart=true -stdout_logfile=/opt/redash/logs/api.log -stderr_logfile=/opt/redash/logs/api_error.log - -[program:redash_celery] -command=/opt/redash/current/bin/run /usr/local/bin/celery worker --app=redash.worker --beat -Qqueries,celery,scheduled_queries -process_name=redash_celery -numprocs=1 -priority=999 -autostart=true -autorestart=true -stdout_logfile=/opt/redash/logs/celery.log diff --git a/setup/packer.json b/setup/packer.json deleted file mode 100644 index 374d3e14c5..0000000000 --- a/setup/packer.json +++ /dev/null @@ -1,29 +0,0 @@ -{ - "variables": { - "aws_access_key": "", - "aws_secret_key": "", - "redash_version": "0.7.1.b1015", - "image_version": "071b1015" - }, - "builders": [ 
- { - "name": "redash-eu-west-1", - "type": "amazon-ebs", - "access_key": "{{user `aws_access_key`}}", - "secret_key": "{{user `aws_secret_key`}}", - "region": "eu-west-1", - "source_ami": "ami-63a19214", - "instance_type": "t2.micro", - "ssh_username": "ubuntu", - "ami_name": "redash-{{user `image_version`}}-eu-west-1" - } - ], - "provisioners": [ - { - "type": "shell", - "script": "ubuntu/bootstrap.sh", - "execute_command": "{{ .Vars }} sudo -E -S bash '{{ .Path }}'", - "environment_vars": ["REDASH_VERSION={{user `redash_version`}}"] - } - ] -}