SUMMARY
The module azure.azcollection.azure_rm_aks automatically creates a VMSS but ignores the configured feature flag for autoscaling.
COMPONENT NAME
azure.azcollection.azure_rm_aks
ANSIBLE VERSION

```
ansible 2.10.1
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.6 (default, Sep 24 2020, 21:54:23) [GCC 8.3.0]
```
OS / ENVIRONMENT

```
PYTHON_VERSION=3.8.6
AZURE_TENANT=f69b6fef-1cad-49e3-a2f3-87d471e2272b
LANG=C.UTF-8
AZURE_SUBSCRIPTION_ID=SECRET
AZURE_DEFAULTS_LOCATION=westeurope
AZURE_CLIENT_ID=SECRET
PYTHON_PIP_VERSION=20.2.3
PYTHON_GET_PIP_SHA256=6e0bb0a2c2533361d7f297ed547237caf1b7507f197835974c0dd7eba998c53c
AZURE_SECRET=SECRET
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/fa7dc83944936bf09a0e4cb5d5ec852c0d256599/get-pip.py
PATH=/usr/local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Linux b8b1041ff1cf 5.8.13-200.fc32.x86_64 #1 SMP Thu Oct 1 21:49:42 UTC 2020 x86_64 GNU/Linux
```
STEPS TO REPRODUCE
Run the playbook below. The result is an AKS cluster in Azure including a VMSS with autoscaling=OFF.

```yaml
- name: Create a managed Azure Container Services (AKS)
  azure.azcollection.azure_rm_aks:
    state: present
    name: "{{ aks_name }}"
    location: "{{ location }}"
    resource_group: "{{ resource_group }}"
    node_resource_group: "{{ node_resource_group }}"
    dns_prefix: "{{ dns_prefix }}"
    kubernetes_version: "{{ aks_version }}"
    # addon:
    #   http_application_routing:
    #     enabled: "{{ addons.http_application_routing.enabled }}"
    network_profile:
      service_cidr: "{{ network.service_cidr }}"
      dns_service_ip: "{{ network.dns_service_ip }}"
      docker_bridge_cidr: "{{ network.docker_bridge_cidr }}"
      pod_cidr: "{{ network.pod_cidr }}"
      network_plugin: "{{ network.network_plugin }}"
      network_policy: "{{ network.network_policy }}"
    linux_profile:
      admin_username: "{{ admin_username }}"
      ssh_key: "{{ ssh_key }}"
    service_principal:
      client_id: "{{ service_principal.client_id }}"
      client_secret: "{{ service_principal.client_secret }}"
    agent_pool_profiles:
      - name: "{{ agent_pool_profiles[0].name }}"
        count: "{{ agent_pool_profiles[0].count }}"
        min_count: "{{ agent_pool_profiles[0].min_count }}"
        max_count: "{{ agent_pool_profiles[0].max_count }}"
        enable_auto_scaling: yes  # "{{ agent_pool_profiles[0].enable_auto_scaling }}"
        vm_size: "{{ agent_pool_profiles[0].vm_size }}"
        vnet_subnet_id: "{{ hostvars.localhost['AKSSubnet'] }}"
        type: "{{ agent_pool_profiles[0].type }}"
    enable_rbac: "{{ rbac }}"
    tags:
      environment: testing
  register: aks

- name: "AKS Cluster: {{ aks.name }}"
  debug:
    msg: "{{ aks }}"
```
EXPECTED RESULTS
An AKS cluster in Azure including a VMSS with autoscaling=ON.
ACTUAL RESULTS
Ansible shows autoscaling enabled, but the Azure Portal shows it OFF.

```
TASK [aks : AKS Cluster: testingAKS] *********************************************************
ok: [localhost] => {
    "msg": {
        "aad_profile": {},
        "addon": {
            "KubeDashboard": {
                "enabled": false
            }
        },
        "agent_pool_profiles": [
            {
                "count": 3,
                "enable_auto_scaling": true,
                "max_count": 9,
                "min_count": 3,
                "name": "akspoolmin",
                "os_disk_size_gb": 128,
                "os_type": "Linux",
                "type": "VirtualMachineScaleSets",
                "vm_size": "Standard_B2s",
                "vnet_subnet_id": "/subscriptions/XXX/resourceGroups/VPC/providers/Microsoft.Network/virtualNetworks/testingVNetKube/subnets/AKSSubnet"
            }
        ],
        "changed": true,
        "dns_prefix": "aks",
        "enable_rbac": true,
        "failed": false,
        "fqdn": "aks-XXX.hcp.westeurope.azmk8s.io",
        "id": "/subscriptions/XXX/resourcegroups/AKS/providers/Microsoft.ContainerService/managedClusters/testingAKS",
        "kube_config": "apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: XXX\n server: https://aks-XXX.hcp.westeurope.azmk8s.io:443\n name: testingAKS\ncontexts:\n- context:\n cluster: testingAKS\n user: clusterUser_AKS_testingAKS\n name: testingAKS\ncurrent-context: testingAKS\nkind: Config\npreferences: {}\nusers:\n- name: clusterUser_AKS_testingAKS\n user:\n client-certificate-data: XXX\n client-key-data: XXX\n token: XXX\n",
        "kubernetes_version": "1.18.8",
        "linux_profile": {
            "admin_username": "ansible",
            "ssh_key": "XXX"
        },
        "location": "westeurope",
        "name": "testingAKS",
        "network_profile": {
            "dns_service_ip": "10.8.0.10",
            "docker_bridge_cidr": "172.17.0.1/16",
            "load_balancer_sku": "Basic",
            "network_plugin": "kubenet",
            "network_policy": "calico",
            "pod_cidr": "10.244.0.0/16",
            "service_cidr": "10.8.0.0/16"
        },
        "node_resource_group": "aksnodepool",
        "provisioning_state": "Succeeded",
        "service_principal_profile": {
            "client_id": "XXX"
        },
        "tags": {
            "environment": "testing"
        },
        "type": "Microsoft.ContainerService/ManagedClusters",
        "warnings": [
            "Azure API profile latest does not define an entry for ContainerServiceClient",
            "Azure API profile latest does not define an entry for ContainerServiceClient"
        ]
    }
}
```
@david-freistrom Thank you for reporting this to us. We will look into it. Thank you very much!
@david-freistrom I have tested on a local machine. Autoscaling is disabled or enabled according to what you set in the parameters.
Enable enable_auto_scaling:

```yaml
- name: Create a aks to enable enable_auto_scaling
  azure_rm_aks:
    name: "aks{{ rpfx }}"
    resource_group: "{{ resource_group }}"
    location: eastus
    dns_prefix: "aks{{ rpfx }}"
    kubernetes_version: "{{ versions.azure_aks_versions[0] }}"
    service_principal:
      client_id: "{{ azure_client_id }}"
      client_secret: "{{ azure_secret }}"
    linux_profile:
      admin_username: azureuser
      ssh_key: ssh-rsa ***********
    agent_pool_profiles:
      - name: default
        count: 1
        vm_size: Standard_B2s
        type: VirtualMachineScaleSets
        mode: System
        enable_auto_scaling: True
        max_count: 6
        min_count: 1
        max_pods: 42
        availability_zones:
          - 1
          - 2
    node_resource_group: "node{{ noderpfx }}"
    enable_rbac: yes
    network_profile:
      load_balancer_sku: standard
```
Disable enable_auto_scaling:

```yaml
- name: Create a aks to disable enable_auto_scaling
  azure_rm_aks:
    name: "aks{{ rpfx }}"
    resource_group: "{{ resource_group }}"
    location: eastus
    dns_prefix: "aks{{ rpfx }}"
    kubernetes_version: "{{ versions.azure_aks_versions[0] }}"
    service_principal:
      client_id: "{{ azure_client_id }}"
      client_secret: "{{ azure_secret }}"
    linux_profile:
      admin_username: azureuser
      ssh_key: ssh-rsa ***********
    agent_pool_profiles:
      - name: default
        count: 1
        vm_size: Standard_B2s
        type: VirtualMachineScaleSets
        mode: System
        enable_auto_scaling: False
        max_pods: 42
        availability_zones:
          - 1
          - 2
    node_resource_group: "node{{ noderpfx }}"
    enable_rbac: yes
    network_profile:
      load_balancer_sku: standard
```
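One difference between these working examples and the original playbook is worth ruling out (this is not confirmed as the root cause in the thread, only a common Ansible pitfall): the maintainer passes a literal boolean (`enable_auto_scaling: True`), while the reporter's commented-out variant templates it as `"{{ agent_pool_profiles[0].enable_auto_scaling }}"`, which Jinja2 can deliver to the module as a *string*. A naive truthiness check then misreads it, as this illustrative Python sketch shows (`to_bool` is a hypothetical helper, not the module's actual code):

```python
def to_bool(value) -> bool:
    """Coerce Ansible-style boolean values, including templated strings."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "on", "1")

# The trap: every non-empty string is truthy in Python, even "False".
assert bool("False") is True

# Explicit coercion handles templated string values correctly.
assert to_bool("False") is False
assert to_bool("yes") is True
assert to_bool(True) is True
```

If the templated form misbehaves while the literal boolean works, that points at type handling in the templating layer rather than the Azure API call itself.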