improve user configuration; update readme
juanvallejo committed Aug 18, 2017
1 parent 70e2725 commit a821120
Showing 3 changed files with 144 additions and 65 deletions.
51 changes: 37 additions & 14 deletions images/installer/README_INVENTORY_GENERATOR.md
@@ -1,21 +1,47 @@
Dynamic Inventory Generation
============================

Script within the origin-ansible image that creates
a container capable of connecting to an OpenShift
master and dynamically creating an inventory file
from its environment.
Script within the openshift-ansible image that can dynamically
generate an Ansible inventory file from an existing cluster.

### Configure

A user configuration file provides additional details used when generating the inventory file.
The default location of this file is `root/etc/inventory-generator-config.yaml`. The
following configuration values are either required or fall back to the defaults given below when omitted:

- `openshift_cluster_user` (required):
  - username of an account able to list nodes in the cluster
  - used to query the cluster with `oc` for additional node information

- `master_config_path` (required):
  - where to look for the bind-mounted `master-config.yaml` file in the container
  - if omitted or set to `null`, defaults to `/opt/app-root/src/master-config.yaml`

- `admin_kubeconfig_path` (required):
  - where to look for the bind-mounted `admin.kubeconfig` file in the container
  - if omitted or set to `null`, defaults to `/opt/app-root/src/.kube/config`

- `ansible_ssh_user`:
  - the ssh user Ansible uses when running the specified `PLAYBOOK_FILE` (see `README_CONTAINER_IMAGE.md` for additional information on this environment variable)
  - if omitted, defaults to `ec2-user`

- `ansible_become_user`:
  - a user to "become" on the remote host, used for privilege escalation
  - if a non-null value is given, `ansible_become` is implicitly set to `yes` in the resulting inventory file

See the supplied sample user configuration file in `root/etc/inventory-generator-config.yaml` for additional optional inventory variables that may be specified.
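For reference, a minimal user configuration exercising the values above might look like the following sketch (the account name is illustrative; the paths shown are the documented defaults):

```yaml
# minimal sketch of root/etc/inventory-generator-config.yaml
openshift_cluster_user: cluster-lister                    # account able to list nodes
master_config_path: /opt/app-root/src/master-config.yaml  # documented default
admin_kubeconfig_path: /opt/app-root/src/.kube/config     # documented default
ansible_ssh_user: ec2-user
ansible_become_user: root   # non-null, so ansible_become=yes is implied
```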

### Build

`docker build --rm -t openshift/origin-ansible -f images/installer/Dockerfile .`

### Run

Given a master node's `master-config.yaml` file and its `admin.kubeconfig` file, the command below will:
Given a master node's `master-config.yaml` file, a user configuration file (see "Configure" section), and an `admin.kubeconfig` file, the command below will:

1. Connect to the host using the bind-mounted `id_rsa` file (or anything else used to access your remote machine via ssh)
2. Generate an inventory file based on the current configuration and environment of the existing OpenShift deployment on the remote host
1. Use `oc` to query the host about additional node information (using the supplied `kubeconfig` file and `openshift_cluster_user` value)
2. Generate an inventory file based on information retrieved from `oc get nodes` and the given `master-config.yaml` file.
3. Run the specified [openshift-ansible](https://github.com/openshift/openshift-ansible) `health.yml` playbook using the inventory file generated in the previous step

```
@@ -25,17 +51,15 @@ docker run -u `id -u` \
-v /tmp/origin/master/admin.kubeconfig:/opt/app-root/src/.kube/config:Z \
-v /tmp/aws/master-config.yaml:/opt/app-root/src/master-config.yaml:Z \
-e OPTS="-v --become --become-user root" \
-e PLAYBOOK_FILE=playbooks/byo/config.yml \
-e PLAYBOOK_FILE=playbooks/byo/openshift-checks/health.yml \
-e GENERATE_INVENTORY=true \
-e USER=`whoami` \
openshift/origin-ansible
```
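Conceptually, step 2 above reduces the node data retrieved via `oc` to named host groups and renders them as an Ansible INI inventory. The following is a simplified, self-contained sketch of that rendering idea only; the real `generate` script also handles host aliases, etcd hosts, and many more `[OSEv3:vars]` entries, and the hosts and variables below are illustrative:

```python
# Simplified sketch of the inventory-rendering step performed by the
# `generate` script: group hosts, then emit Ansible INI inventory text.
# Illustrative only -- the real script derives hosts from `oc get nodes`
# and master-config.yaml.

def write_inventory(host_groups, inventory_vars):
    """Render host groups and shared vars as Ansible INI inventory text.

    host_groups: dict mapping group name -> list of host lines
    inventory_vars: dict of [OSEv3:vars] entries
    """
    lines = ["[OSEv3:children]"]
    lines.extend(sorted(host_groups))          # declare each group as a child
    lines.append("")
    lines.append("[OSEv3:vars]")
    for key, value in sorted(inventory_vars.items()):
        lines.append("{}={}".format(key, value))
    lines.append("")
    for group in sorted(host_groups):          # one [group] section per group
        lines.append("[{}]".format(group))
        lines.extend(host_groups[group])
        lines.append("")
    return "\n".join(lines)


inventory = write_inventory(
    {"masters": ["10.0.0.1 openshift_ip=10.0.0.1"],
     "nodes": ["10.0.0.1 openshift_ip=10.0.0.1",
               "10.0.0.2 openshift_ip=10.0.0.2"]},
    {"ansible_ssh_user": "ec2-user", "ansible_become": "yes"},
)
print(inventory)
```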

### Configure

To include additional inventory variables in the final generated inventory file,
create or edit the `root/etc/inventory-generator-config.yaml` file.
**Note** In the command above, setting the `GENERATE_INVENTORY` environment variable causes the inventory file to be generated at a known location inside the container.
An `INVENTORY_FILE` variable (or any other inventory location) does not need to be supplied when generating an inventory.

### Debug

@@ -63,5 +87,4 @@ bash-4.2$ less generated_hosts

### Notes

For now, the `/usr/local/bin/generate` script will look for the `master-config.yaml` file in the
home directory in the container (`/opt/app-root/src`).
See `README_CONTAINER_IMAGE.md` for additional information about this image.
18 changes: 14 additions & 4 deletions images/installer/root/etc/inventory-generator-config.yaml
@@ -1,13 +1,23 @@
---
# meta config
master_config_path: null # defaults to /opt/app-root/src/master-config.yaml
admin_kubeconfig_path: null # defaults to /opt/app-root/src/.kube/config

# default user configuration
common_host_alias: openshiftdevel # host alias as defined in ~/.ssh/config
ansible_ssh_user: ec2-user
ansible_become: "yes"
ansible_become_user: "root"

# openshift-ansible inventory vars
openshift_uninstall_images: false
openshift_install_examples: true
openshift_deployment_type: origin

# openshift cluster-admin credentials
openshift_cluster_user: cluster-node-viewer
openshift_cluster_pass: ""
openshift_release: 3.6
openshift_image_tag: v3.6.0
openshift_hosted_logging_deploy: null # defaults to "true" if loggingPublicURL is set in master-config.yaml
openshift_logging_image_version: v3.6.0
openshift_disable_check: disk_availability,docker_storage,memory_availability

# openshift cluster-viewer info
openshift_cluster_user: cluster-lister # name of user or service account able to list nodes in a cluster
140 changes: 93 additions & 47 deletions images/installer/root/usr/local/bin/generate
@@ -14,19 +14,22 @@ import subprocess
import sys
import yaml

DEFAULT_USER_CONFIG_PATH = '/etc/inventory-generator-config.yaml'

try:
HOME = os.environ['HOME']
except KeyError:
print 'A required environment variable "$HOME" has not been set'
exit(1)

MASTER_CONFIG = 'master-config.yaml'
DEFAULT_USER_CONFIG_PATH = '/etc/inventory-generator-config.yaml'
DEFAULT_MASTER_CONFIG_PATH = HOME + '/master-config.yaml'
DEFAULT_ADMIN_KUBECONFIG_PATH = HOME + '/.kube/config'

INVY_FULL_PATH = HOME + '/generated_hosts'
USE_STDOUT = True

INVY = 'generated_hosts'
if len(sys.argv) > 1:
INVY = sys.argv[1]
INVY_FULL_PATH = sys.argv[1]
USE_STDOUT = False


class OpenShiftClientError(Exception):
@@ -46,8 +49,9 @@ class InvalidHostGroup(Exception):

class OpenShiftClient:
oc = None
kubeconfig = None

def __init__(self):
def __init__(self, kubeconfig=DEFAULT_ADMIN_KUBECONFIG_PATH):
"""Find and store path to oc binary"""
# https://github.com/openshift/openshift-ansible/issues/3410
# oc can be in /usr/local/bin in some cases, but that may not
@@ -72,6 +76,7 @@ class OpenShiftClient:
raise OpenShiftClientError('Unable to locate `oc` binary. Not present in PATH.')

self.oc = oc_binary
self.kubeconfig = kubeconfig

def call(self, cmd_str):
"""Execute a remote call using `oc`"""
@@ -84,17 +89,22 @@
raise OpenShiftClientError('[rc {}] {}\n{}'.format(err.returncode, ' '.join(err.cmd), err.output))
return out

def login(self, host, user, password):
"""Login using `oc` to the specified host"""
call_cmd = 'login {host} -u {u} -p {p} --insecure-skip-tls-verify'
return self.call(call_cmd.format(host=host, u=user, p=password))
def login(self, host, user):
"""Login using `oc` to the specified host.
Expects an admin.kubeconfig file to have been provided, and that
the kubeconfig file has a context stanza for the given user.
Although a password is not used, a placeholder is automatically
specified in order to prevent this script from "hanging" in the
event that no valid user stanza exists in the provided kubeconfig."""
call_cmd = 'login {host} -u {u} -p none --config {c}'
return self.call(call_cmd.format(host=host, u=user, c=self.kubeconfig))

def get_nodes(self):
"""Retrieve remote node information as a yaml object"""
return self.call('get nodes -o yaml')


class HostGroup():
class HostGroup:
groupname = ""
hosts = list()

@@ -129,7 +139,7 @@ class HostGroup():
return infos


class Host():
class Host:
group = "masters"
alias = ""
hostname = ""
@@ -146,7 +156,7 @@ class Host():
return self.group

def get_openshift_hostname(self):
return self.host_alias
return self.hostname

def host_alias(self, hostalias):
"""Set an alias for this host."""
@@ -171,6 +181,10 @@ class Host():
info = ""
if self.alias:
info += self.alias + " "
elif self.hostname:
info += self.hostname + " "
elif self.ip_addr:
info += self.ip_addr + " "
if self.ip_addr:
info += "openshift_ip=" + self.ip_addr + " "
if self.public_ip_addr:
@@ -204,10 +218,29 @@ def main():
if not USER_CONFIG:
USER_CONFIG = DEFAULT_USER_CONFIG_PATH

# read user configuration
try:
config_file_obj = open(USER_CONFIG, 'r')
raw_config_file = config_file_obj.read()
user_config = yaml.load(raw_config_file)
if not user_config:
user_config = dict()
except IOError as err:
print "Unable to find or read user configuration file '{}': {}".format(USER_CONFIG, err)
exit(1)

master_config_path = user_config.get('master_config_path', DEFAULT_MASTER_CONFIG_PATH)
if not master_config_path:
master_config_path = DEFAULT_MASTER_CONFIG_PATH

admin_kubeconfig_path = user_config.get('admin_kubeconfig_path', DEFAULT_ADMIN_KUBECONFIG_PATH)
if not admin_kubeconfig_path:
admin_kubeconfig_path = DEFAULT_ADMIN_KUBECONFIG_PATH

try:
file_obj = open(HOME + '/' + MASTER_CONFIG, 'r')
file_obj = open(master_config_path, 'r')
except IOError as err:
print "Unable to find or read host master configuration file '{}': {}".format(MASTER_CONFIG, err)
print "Unable to find or read host master configuration file '{}': {}".format(master_config_path, err)
exit(1)

raw_text = file_obj.read()
@@ -221,26 +254,20 @@
# cluster information for inventory file
file_obj.close()

try:
config_file_obj = open(USER_CONFIG, 'r')
raw_config_file = config_file_obj.read()
user_config = yaml.load(raw_config_file)
except IOError as err:
print "Unable to find or read user configuration file '{}': {}".format(USER_CONFIG, err)
exit(1)

# set inventory values based on user configuration
common_host_alias = user_config.get('common_host_alias', 'openshiftdevel')
ansible_ssh_user = user_config.get('ansible_ssh_user', 'ec2-user')
ansible_become_user = user_config.get('ansible_become_user')

openshift_uninstall_images = user_config.get('openshift_uninstall_images', False)
openshift_install_examples = user_config.get('openshift_install_examples', True)
openshift_deployment_type = user_config.get('openshift_deployment_type', 'origin')
openshift_cluster_user = user_config.get('openshift_cluster_user', 'developer')
openshift_cluster_pass = user_config.get('openshift_cluster_pass', 'fakepass')

# default value for cluster-viewer is blank in config; handle this case to avoid an `oc login` flag error
if not len(openshift_cluster_pass):
openshift_cluster_pass = 'fakepass'
openshift_release = user_config.get('openshift_release')
openshift_image_tag = user_config.get('openshift_image_tag')
openshift_logging_image_version = user_config.get('openshift_logging_image_version')
openshift_disable_check = user_config.get('openshift_disable_check')

openshift_cluster_user = user_config.get('openshift_cluster_user', 'developer')

# extract host config info from parsed yaml file
asset_config = y.get("assetConfig")
@@ -250,30 +277,32 @@
# if master_config is missing, error out; we expect to be running on a master to be able to
# gather enough information to generate the rest of the inventory file.
if not master_config:
print "'kubernetesMasterConfig' missing from '{}'; unable to gather all necessary host information...".format(MASTER_CONFIG)
msg = "'kubernetesMasterConfig' missing from '{}'; unable to gather all necessary host information..."
print msg.format(master_config_path)
exit(1)

master_public_url = y.get("masterPublicURL")
if not master_public_url:
print "'kubernetesMasterConfig.masterPublicURL' missing from '{}'; Unable to connect to master host...".format(MASTER_CONFIG)
msg = "'kubernetesMasterConfig.masterPublicURL' missing from '{}'; Unable to connect to master host..."
print msg.format(master_config_path)
exit(1)

# connect to remote host using `oc login...` and extract all possible node information
oc = OpenShiftClient()
oc.login(master_public_url, openshift_cluster_user, openshift_cluster_pass)
oc = OpenShiftClient(admin_kubeconfig_path)
oc.login(master_public_url, openshift_cluster_user)
nodes_config = yaml.load(oc.get_nodes())

# contains host types (e.g. masters, nodes, etcd)
host_groups = dict()
openshift_hosted_logging_deploy = False
ansible_become = "yes"
is_etcd_deployed = master_config.get("storage-backend", "") in ["etcd3", "etcd2", "etcd"]

if asset_config and asset_config.get('loggingPublicURL'):
openshift_hosted_logging_deploy = True

openshift_hosted_logging_deploy = user_config.get("openshift_hosted_logging_deploy", openshift_hosted_logging_deploy)

m = Host("masters")
m.host_alias(common_host_alias)
m.address(master_config["masterIP"])
m.public_host_name(master_public_url)
host_groups["masters"] = HostGroup([m])
@@ -285,7 +314,6 @@ def main():
continue

n = Host("nodes")
n.host_alias(common_host_alias)

address = ""
internal_hostname = ""
@@ -306,30 +334,48 @@
etcd_hosts = list()
for url in etcd_config.get("urls", []):
e = Host("etcd")
e.host_alias(common_host_alias)
e.host_name(url)
etcd_hosts.append(e)

host_groups["etcd"] = HostGroup(etcd_hosts)

# open new inventory file for writing
try:
inv_file_obj = open(HOME + '/' + INVY, 'w+')
except IOError as err:
print "Unable to create or open generated inventory file: {}".format(err)
exit(1)
if USE_STDOUT:
inv_file_obj = sys.stdout
else:
try:
inv_file_obj = open(INVY_FULL_PATH, 'w+')
except IOError as err:
print "Unable to create or open generated inventory file: {}".format(err)
exit(1)

inv_file_obj.write("[OSEv3:children]\n")
for group in host_groups:
inv_file_obj.write("{}\n".format(group))
inv_file_obj.write("\n")

inv_file_obj.write("[OSEv3:vars]\n")
inv_file_obj.write("ansible_ssh_user={}\n".format(ansible_ssh_user))
inv_file_obj.write("ansible_become={}\n".format(ansible_become))
inv_file_obj.write("openshift_uninstall_images={}\n".format(str(openshift_uninstall_images)))
inv_file_obj.write("openshift_deployment_type={}\n".format(openshift_deployment_type))
inv_file_obj.write("openshift_install_examples={}\n".format(str(openshift_install_examples)))
if ansible_ssh_user:
inv_file_obj.write("ansible_ssh_user={}\n".format(ansible_ssh_user))
if ansible_become_user:
inv_file_obj.write("ansible_become_user={}\n".format(ansible_become_user))
inv_file_obj.write("ansible_become=yes\n")

if openshift_uninstall_images:
inv_file_obj.write("openshift_uninstall_images={}\n".format(str(openshift_uninstall_images)))
if openshift_deployment_type:
inv_file_obj.write("openshift_deployment_type={}\n".format(openshift_deployment_type))
if openshift_install_examples:
inv_file_obj.write("openshift_install_examples={}\n".format(str(openshift_install_examples)))

if openshift_release:
inv_file_obj.write("openshift_release={}\n".format(str(openshift_release)))
if openshift_image_tag:
inv_file_obj.write("openshift_image_tag={}\n".format(str(openshift_image_tag)))
if openshift_logging_image_version:
inv_file_obj.write("openshift_logging_image_version={}\n".format(str(openshift_logging_image_version)))
if openshift_disable_check:
inv_file_obj.write("openshift_disable_check={}\n".format(str(openshift_disable_check)))
inv_file_obj.write("\n")

inv_file_obj.write("openshift_hosted_logging_deploy={}\n".format(str(openshift_hosted_logging_deploy)))
