
Grains/pillar not updated for docker_image #53401

Closed
fignew opened this issue Jun 7, 2019 · 16 comments


fignew commented Jun 7, 2019

Description of Issue

When running salt.states.docker_image.present, the SLS rendered for the container being built sees the host's pillar and grains instead of the container's. This is mostly an issue for detecting whether the state is being run in a container via grains.virtual_subtype.

Setup

{% if grains.virtual_subtype is defined and grains.virtual_subtype == "Docker" %}
it-works-as-expected:
  test.succeed_without_changes:
    - name: "Running inside of a docker container"
{% else %}
it-doesnt-work:
  test.fail_without_changes:
    - name: "Detected as not running inside of docker even though it is"
{% endif %}

Steps to Reproduce Issue

docker-doesnt-work-as-expected:
  docker_image.present:
    - name: pleasefix
    - tag: lovesalt
    - sls: theabovestate
    - base: fedora

Versions Report

Salt Version:
           Salt: 2019.2.0
 
Dependency Versions:
           cffi: 1.11.5
       cherrypy: Not Installed
       dateutil: 2.8.0
      docker-py: 3.5.0
          gitdb: Not Installed
      gitpython: Not Installed
          ioflo: Not Installed
         Jinja2: 2.10.1
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.30.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.6.1
   mysql-python: Not Installed
      pycparser: 2.14
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.16 (default, Apr 30 2019, 15:54:43)
   python-gnupg: Not Installed
         PyYAML: 5.1
          PyZMQ: 17.0.0
           RAET: Not Installed
          smmap: Not Installed
        timelib: Not Installed
        Tornado: 5.0.2
            ZMQ: 4.3.1
 
System Versions:
           dist: fedora 30 Thirty
         locale: UTF-8
        machine: x86_64
        release: 5.0.17-300.fc30.x86_64
         system: Linux
        version: Fedora 30 Thirty

gecube commented Jun 7, 2019

A couple of concerns:

  1. I'm not sure that building Docker images (or, more broadly, any OCI-compliant image) with Salt is a good idea. I believe all images should be pre-built; my view is that Salt can instead be used inside Docker to preconfigure the environment.
  2. Could you show your Dockerfile? Let's try to debug.

I've just tested with registry.gitlab.com/gecube/salt-ssh:latest, running commands INSIDE the container:

root@5f426a4e0dd9:/# salt-call --version
salt-call 2019.2.0 (Fluorine)
root@5f426a4e0dd9:/# salt-call --local cmd.run "uname -a"
local:
    Linux 5f426a4e0dd9 5.1.4-1-default #1 SMP Wed May 22 11:11:40 UTC 2019 (0739fa4) x86_64 GNU/Linux
root@5f426a4e0dd9:/# salt-call --local grains.items
local:
    ----------
...
    kernel:
        Linux
    kernelrelease:
        5.1.4-1-default
    kernelversion:
        #1 SMP Wed May 22 11:11:40 UTC 2019 (0739fa4)
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            en_US
        detectedencoding:
            UTF-8
        timezone:
            UTC
    localhost:
        5f426a4e0dd9
    lsb_distrib_codename:
        stretch
    lsb_distrib_id:
        Debian GNU/Linux
    lsb_distrib_release:
        9
    machine_id:
        8b6f0bc66c81d2b9ef76a3fc9b52b452
    manufacturer:
        LENOVO
    master:
        salt
    mem_total:
        15481
    nodename:
        5f426a4e0dd9
    num_cpus:
        12
    num_gpus:
        0
    os:
        Debian
    os_family:
        Debian
    osarch:
        amd64
    oscodename:
        stretch
    osfinger:
        Debian-9
    osfullname:
        Debian GNU/Linux
    osmajorrelease:
        9
    osrelease:
        9
    osrelease_info:
        - 9
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    pid:
        93
    productname:
        20MF000TRT
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python2
    pythonpath:
        - /usr/bin
        - /usr/lib/python2.7
        - /usr/lib/python2.7/plat-x86_64-linux-gnu
        - /usr/lib/python2.7/lib-tk
        - /usr/lib/python2.7/lib-old
        - /usr/lib/python2.7/lib-dynload
        - /usr/local/lib/python2.7/dist-packages
        - /usr/lib/python2.7/dist-packages
    pythonversion:
        - 2
        - 7
        - 13
        - final
        - 0
    saltpath:
        /usr/lib/python2.7/dist-packages/salt
    saltversion:
        2019.2.0
    saltversioninfo:
        - 2019
        - 2
        - 0
        - 0
    serialnumber:
        R90TBKNN
    server_id:
        1716638644
    shell:
        /bin/sh
    swap_total:
        15483
    uid:
        0
    username:
        root
    uuid:
        5f4b154c-2223-11b2-a85c-f3866972cbac
    virtual:
        physical
    virtual_subtype:
        Docker
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.2.1


fignew commented Jun 7, 2019

Hi gecube, we're talking about two different things. I'm using salt.states.docker_image.present (the sls option) to build Docker images based on upstream images with Salt states pre-applied. There is a Dockerfile for the upstream container image, but it isn't involved in this step.

This issue also occurs with the docker execution module:
salt dockerhost docker.sls containername mods=sls.using.grains

You get the host's grains instead of the container's grains.
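
For illustration, a minimal grain-dependent SLS of the kind that triggers this (the file name and state ID are hypothetical; any Jinja that reads grains misbehaves the same way):

# sls/using/grains.sls (hypothetical): rendered with the Docker
# host's grains, so 'host' comes out as the host's hostname rather
# than the container's.
show-render-host:
  test.show_notification:
    - text: "Rendered for host: {{ grains['host'] }}"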


Ch3LL commented Jun 11, 2019

I'm able to replicate this. I also added this to the state:

echo {{ salt['grains.item']('host')['host'] }}:
  cmd.run

and it returned the hostname from my laptop and not the container when running:

salt-call --local docker.sls container_name mods=test -ldebug

@Ch3LL Ch3LL added Bug broken, incorrect, or confusing behavior severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a work around P3 Priority 3 labels Jun 11, 2019
@Ch3LL Ch3LL added this to the Approved milestone Jun 11, 2019
@doesitblend doesitblend added the ZD The issue is related to a Zendesk customer support ticket. label Jun 27, 2019
doesitblend (Collaborator) commented:

ZD-3884


fignew commented Jun 28, 2019

A related issue is not being able to set pillar values in docker_image.present as the docs describe:

docker-doesnt-work-as-expected:
  docker_image.present:
    - name: pleasefix
    - tag: lovesalt
    - sls: theabovestate
    - base: fedora
    - pillar: {'somecustompillar': 'doesntwork'}
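
For context, a sketch of what theabovestate could do with that pillar value (hypothetical state ID; under the bug, the lookup falls back to the default because the custom pillar never reaches the render):

# Falls back to 'MISSING' under the bug, since the pillar passed to
# docker_image.present is not available at render time.
show-custom-pillar:
  test.show_notification:
    - text: "somecustompillar is: {{ salt['pillar.get']('somecustompillar', 'MISSING') }}"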


bryceml commented Jul 18, 2019

I ran it with -l debug, and it prints out grains.items as a step:

[INFO    ] Executing command 'docker exec 0106d05fdcb67534691ce83c425ab7de681c4144a00b2d3efd677b3acd186883 env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/opt/bin:/usr/local/bin:/usr/local/sbin  python2 /tmp/salt.docker.10f253/salt-call --metadata --local --log-file /tmp/salt.docker.10f253/log --cachedir /tmp/salt.docker.10f253/cache --out json -l quiet -- grains.items' in directory '/root'
[DEBUG   ] stdout: {
    "local": {
        "fun_args": [], 
        "jid": "20190718230605996142", 
        "return": {
            "biosversion": "4.2.amazon", 
            "kernel": "Linux", 
            "domain": "", 
            "disks": [
                "loop1", 
                "loop6", 
                "loop4", 
                "loop2", 
                "loop0", 
                "loop7", 
                "loop5", 
                "loop3"
            ], 
            "uid": 0, 
            "kernelrelease": "4.15.0-1043-aws", 
            "pythonpath": [
                "/tmp/salt.docker.10f253/py2", 
                "/tmp/salt.docker.10f253/pyall", 
                "/tmp/salt.docker.10f253", 
                "/usr/lib/python2.7", 
                "/usr/lib/python2.7/plat-x86_64-linux-gnu", 
                "/usr/lib/python2.7/lib-tk", 
                "/usr/lib/python2.7/lib-old", 
                "/usr/lib/python2.7/lib-dynload", 
                "/usr/local/lib/python2.7/dist-packages", 
                "/usr/lib/python2.7/dist-packages"
            ], 
            "serialnumber": "ec23a746-c47c-840b-fccb-6161a947e730", 
            "pid": 46, 
            "fqdns": [], 
            "ip_interfaces": {
                "lo": [
                    "127.0.0.1"
                ], 
                "eth0": [
                    "172.17.0.2"
                ]
            }, 
            "virtual_hv_features_list": [
                "writable_page_tables", 
                "auto_translated_physmap", 
                "hvm_callback_vector", 
                "hvm_safe_pvclock", 
                "hvm_pirqs"
            ], 
            "groupname": "root", 
            "shell": "/bin/sh", 
            "mem_total": 983, 
            "saltversioninfo": [
                2019, 
                2, 
                0, 
                0
            ], 
            "zfs_support": false, 
            "SSDs": [
                "xvda"
            ], 
            "mdadm": [], 
            "id": "0106d05fdcb6", 
            "osrelease": "10", 
            "ps": "ps -efHww", 
            "locale_info": {
                "detectedencoding": "ascii", 
                "defaultlanguage": null, 
                "defaultencoding": null
            }, 
            "ip_gw": true, 
            "cpuarch": "x86_64", 
            "ip6_interfaces": {
                "lo": [], 
                "eth0": []
            }, 
            "num_cpus": 1, 
            "hwaddr_interfaces": {
                "lo": "00:00:00:00:00:00", 
                "eth0": "02:42:ac:11:00:02"
            }, 
            "init": "unknown", 
            "ip4_interfaces": {
                "lo": [
                    "127.0.0.1"
                ], 
                "eth0": [
                    "172.17.0.2"
                ]
            }, 
            "osfullname": "Debian GNU/Linux", 
            "gid": 0, 
            "master": "salt", 
            "virtual_subtype": "Docker", 
            "dns": {
                "domain": "", 
                "sortlist": [], 
                "nameservers": [
                    "10.12.64.2"
                ], 
                "ip4_nameservers": [
                    "10.12.64.2"
                ], 
                "search": [
                    "nonprod.pdx.hub.aws.saltstack.net"
                ], 
                "ip6_nameservers": [], 
                "options": []
            }, 
            "ipv6": [], 
            "osarch": "amd64", 
            "cpu_flags": [
                "fpu", 
                "vme", 
                "de", 
                "pse", 
                "tsc", 
                "msr", 
                "pae", 
                "mce", 
                "cx8", 
                "apic", 
                "sep", 
                "mtrr", 
                "pge", 
                "mca", 
                "cmov", 
                "pat", 
                "pse36", 
                "clflush", 
                "mmx", 
                "fxsr", 
                "sse", 
                "sse2", 
                "ht", 
                "syscall", 
                "nx", 
                "rdtscp", 
                "lm", 
                "constant_tsc", 
                "rep_good", 
                "nopl", 
                "xtopology", 
                "cpuid", 
                "pni", 
                "pclmulqdq", 
                "ssse3", 
                "fma", 
                "cx16", 
                "pcid", 
                "sse4_1", 
                "sse4_2", 
                "x2apic", 
                "movbe", 
                "popcnt", 
                "tsc_deadline_timer", 
                "aes", 
                "xsave", 
                "avx", 
                "f16c", 
                "rdrand", 
                "hypervisor", 
                "lahf_lm", 
                "abm", 
                "cpuid_fault", 
                "invpcid_single", 
                "pti", 
                "fsgsbase", 
                "bmi1", 
                "avx2", 
                "smep", 
                "bmi2", 
                "erms", 
                "invpcid", 
                "xsaveopt"
            ], 
            "localhost": "0106d05fdcb6", 
            "ipv4": [
                "127.0.0.1", 
                "172.17.0.2"
            ], 
            "username": "root", 
            "fqdn_ip4": [
                "172.17.0.2"
            ], 
            "fqdn_ip6": [], 
            "nodename": "0106d05fdcb6", 
            "saltversion": "2019.2.0", 
            "lsb_distrib_release": "10", 
            "ip6_gw": false, 
            "saltpath": "/tmp/salt.docker.10f253/pyall/salt", 
            "pythonversion": [
                2, 
                7, 
                16, 
                "final", 
                0
            ], 
            "zfs_feature_flags": false, 
            "osmajorrelease": 10, 
            "os_family": "Debian", 
            "oscodename": "buster", 
            "virtual_hv_features": "00000705", 
            "osfinger": "Debian-10", 
            "biosreleasedate": "08/24/2006", 
            "virtual_hv_version": "4.2.amazon", 
            "manufacturer": "Xen", 
            "kernelversion": "#45-Ubuntu SMP Mon Jun 24 14:07:03 UTC 2019", 
            "uuid": "ec23a746-c47c-840b-fccb-6161a947e730", 
            "ip4_gw": "172.17.0.1", 
            "virtual_hv_version_info": [
                "4", 
                "2", 
                ".amazon"
            ], 
            "num_gpus": 0, 
            "virtual": "xen", 
            "server_id": 1260888005, 
            "cpu_model": "Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz", 
            "fqdn": "0106d05fdcb6", 
            "pythonexecutable": "/usr/bin/python2", 
            "productname": "HVM domU", 
            "host": "0106d05fdcb6", 
            "swap_total": 0, 
            "lsb_distrib_codename": "buster", 
            "osrelease_info": [
                10
            ], 
            "lsb_distrib_id": "Debian GNU/Linux", 
            "gpus": [], 
            "path": "/bin:/usr/bin:/sbin:/usr/sbin:/opt/bin:/usr/local/bin:/usr/local/sbin", 
            "machine_id": "e89ed93837262f1d6b20cd9e97e84170", 
            "os": "Debian"
        }, 
        "retcode": 0, 
        "fun": "grains.items", 
        "id": "0106d05fdcb6", 
        "out": "nested"
    }
}

Notice:

"virtual_subtype": "Docker", 

So it looks like grains work inside the container, just not when Jinja is using them.

I had

echo {{ grains.virtual_subtype }}:
  cmd.run

as the SLS, and it printed:

[DEBUG   ] stdout: {
    "local": {
        "fun_args": [
            "/tmp/salt.docker.8ecd38/salt_state.tgz", 
            "d889b89c305b54b376df1048c1facfe8b11b03273091ccc052dff2ea75f2701a", 
            "sha256"
        ], 
        "jid": "20190718231247492935", 
        "return": {
            "cmd_|-echo Xen HVM DomU_|-echo Xen HVM DomU_|-run": {
                "comment": "Command \"echo Xen HVM DomU\" run", 
                "name": "echo Xen HVM DomU", 
                "start_time": "23:12:47.602060", 
                "result": true, 
                "duration": 540.063, 
                "__run_num__": 0, 
                "__sls__": "theabovestate", 
                "changes": {
                    "pid": 179, 
                    "retcode": 0, 
                    "stderr": "", 
                    "stdout": "Xen HVM DomU"
                }, 
                "__id__": "echo Xen HVM DomU"
            }
        }, 
        "retcode": 0, 
        "fun": "state.pkg", 
        "id": "06da48bde2f5", 
        "out": "highstate"
    }
}


xeacott commented Jul 19, 2019

I think this is an issue with Jinja rendering...

Running salt-call --local state.apply test -l debug with the following test.sls:

docker-doesnt-work-as-expected:
  docker_image.present:
    - name: pleasefix
    - tag: lovesalt
    - sls: theabovestate
    - base: centos

and this theabovestate.sls:

echo {{ salt['grains.items'] }}:
  cmd.run

{% if grains.virtual_subtype is defined and grains.virtual_subtype == "Docker" %}
it-works-as-expected:
  test.succeed_without_changes:
    - name: "Running inside of a docker container"
{% else %}
it-doesnt-work:
  test.fail_without_changes:
    - name: "Detected as not running inside of docker even though it is"
{% endif %}

I see the following (snipped):

            "osfullname": "CentOS Linux", 
            "gid": 0, 
            "master": "salt", 
            "virtual_subtype": "Docker", 
            "dns": {
[DEBUG   ] Rendered data from file: /var/cache/salt/minion/files/base/theabovestate.sls:
echo <function items at 0x7f1ffdd55410>:
  cmd.run


it-doesnt-work:
  test.fail_without_changes:
    - name: "Detected as not running inside of docker even though it is"

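Incidentally, echo <function items at 0x7f1ffdd55410> shows that the Jinja above references the function object without calling it. The called form, sketched below, would render the return value instead, though under this bug it would still show the host's grains:

# Note the added parentheses: salt['grains.items']() is called, so
# Jinja renders the returned dict rather than the function object.
echo {{ salt['grains.items']() }}:
  cmd.run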

xeacott commented Aug 2, 2019

@terminalmage Hey Erik, are you the original author of this file?

terminalmage (Contributor) commented:

I wrote most of that code, but the code to build an image using SLS files was contributed by someone else and I'm not very familiar with it.


waynew commented Aug 7, 2019

I'm not entirely sure whether this is a bug (it seems like it might be), but the problem here is definitely that the SLS file is not being rendered on the minion in question: it's being rendered outside the minion when the thin.tar.gz is being created. I think that's the typical scenario when generating the thin client, but I'm not sure what the fix is here.

You can actually see that the correct grain is shown within the Docker container:

Grain is being set:
  module.run:
    - name: grains.get
    - key: virtual_subtype

If you run with -ldebug you'll see this in your logs:

                "comment": "Module function grains.get executed",
                "name": "grains.get",
                "start_time": "19:15:24.368647",
                "result": true,
                "duration": 2.547,
                "__run_num__": 1,
                "__sls__": "blarp",
                "changes": {
                    "ret": "Docker"
                },

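Putting the two together, a sketch contrasting render time and run time in one SLS (state IDs are hypothetical): the Jinja expression is evaluated with the host's grains when the state tarball is built, while module.run executes inside the container:

# Rendered outside the container, so this reports the host's value.
render-time-value:
  test.show_notification:
    - text: "render-time virtual_subtype: {{ grains.get('virtual_subtype', 'undefined') }}"

# Executed inside the container, so this returns "Docker".
run-time-value:
  module.run:
    - name: grains.get
    - key: virtual_subtype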

waynew commented Aug 8, 2019

Does this change fix things for everyone?


fignew commented Aug 13, 2019

Wow! Yes, that works swimmingly! So nice to have this working. Thank you Wayne.

@xeacott xeacott added the fixed-pls-verify fix is linked, bug author to confirm fix label Sep 9, 2019
@KChandrashekhar KChandrashekhar assigned waynew and unassigned xeacott Nov 14, 2019

clallen commented Dec 4, 2019

I'm still seeing this issue in 2019.2.2, after applying commit ea21006.

Here are my SLS files.
sls_build/init.sls:

Build image with SLS:
  docker_image.present:
    - name: sls_build
    - sls: sls_build.image_build
    - tag: latest
    - base: 'centos:7'
    - pillar:
        symlinks:
          - /usr/local/bin/gcc
          - /usr/local/bin/cc
        ccache: /usr/bin/ccache

sls_build/image_build.sls:

Install ccache:
  pkg.installed:
    - name: ccache

{% for symlink in pillar['symlinks'] %}
Create symlink {{symlink}}:
  file.symlink:
    - name: {{symlink}}
    - target: {{pillar['ccache']}}
    - makedirs: True
    - force: True
{% endfor %}

And the output when running salt-call state.sls sls_build:

[WARNING ] The following arguments were ignored because they are not recognized by docker-py: [u'pillar', u'saltenv']
[ERROR   ] Failed to decode stdout from command cat "/var/cache/salt/minion/thin/thin.tgz" | docker exec -i a486fa2b353afd1c5e868895573a240d4e38968cc635ab8eb8ea96ac1985cfad env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/opt/bin:/usr/local/bin:/usr/local/sbin tee "/tmp/salt.docker.a3bd1d/thin.tgz", non-decodable characters have been replaced
[ERROR   ] Rendering exception occurred
Traceback (most recent call last):
  File "/opt/apps/unix/salt-2019.2.2/lib/python2.7/site-packages/salt/utils/templates.py", line 169, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/opt/apps/unix/salt-2019.2.2/lib/python2.7/site-packages/salt/utils/templates.py", line 404, in render_jinja_tmpl
    buf=tmplstr)
SaltRenderError: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'symlinks'
[CRITICAL] Rendering SLS 'base:sls_build.image_build' failed: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'symlinks'
[ERROR   ] Failed to decode stdout from command cat "/tmp/__salt.tmp.bqowmi" | docker exec -i a486fa2b353afd1c5e868895573a240d4e38968cc635ab8eb8ea96ac1985cfad env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/opt/bin:/usr/local/bin:/usr/local/sbin tee "/tmp/salt.docker.310b7d/salt_state.tgz", non-decodable characters have been replaced
[ERROR   ] Failed to decode stdout from command cat "/var/cache/salt/minion/thin/thin.tgz" | docker exec -i a486fa2b353afd1c5e868895573a240d4e38968cc635ab8eb8ea96ac1985cfad env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/opt/bin:/usr/local/bin:/usr/local/sbin tee "/tmp/salt.docker.6a84ac/thin.tgz", non-decodable characters have been replaced
[ERROR   ] Encountered error using SLS sls_build.image_build for building sls_build:latest: [u"Rendering SLS 'base:sls_build.image_build' failed: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'symlinks'"]
local:
----------
          ID: Build image with SLS
    Function: docker_image.present
        Name: sls_build
      Result: False
     Comment: Encountered error using SLS sls_build.image_build for building sls_build:latest: [u"Rendering SLS 'base:sls_build.image_build' failed: Jinja variable 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'symlinks'"]
     Started: 09:23:29.272101
    Duration: 39059.262 ms
     Changes:

Summary for local
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:  39.059 s

Versions report:

Salt Version:
           Salt: 2019.2.2

Dependency Versions:
           cffi: 1.6.0
       cherrypy: 17.4.2
       dateutil: 2.8.1
      docker-py: 4.1.0
          gitdb: 2.0.6
      gitpython: 2.1.14
          ioflo: Not Installed
         Jinja2: 2.10.3
        libgit2: Not Installed
        libnacl: Not Installed
       M2Crypto: 0.21.1
           Mako: Not Installed
   msgpack-pure: Not Installed
 msgpack-python: 0.6.2
   mysql-python: 1.2.5
      pycparser: 2.14
       pycrypto: 2.6.1
   pycryptodome: Not Installed
         pygit2: Not Installed
         Python: 2.7.5 (default, Apr  9 2019, 16:02:27)
   python-gnupg: 0.4.5
         PyYAML: 3.13
          PyZMQ: 18.1.0
           RAET: Not Installed
          smmap: 2.0.5
        timelib: Not Installed
        Tornado: 5.1.1
            ZMQ: 4.3.2

System Versions:
           dist: oracle 7.6
         locale: UTF-8
        machine: x86_64
        release: 4.14.35-1844.4.5.2.el7uek.x86_64
         system: Linux
        version: Oracle Linux Server 7.6


Ch3LL commented Dec 17, 2019

Bump @waynew, looks like @clallen is still having issues with the fix.


clallen commented Dec 18, 2019

It occurred to me that using salt['pillar.get']() might make a difference, so I tried that.
It still fails, but in a different way. It gets past the rendering phase and dies on a "state.pkg" call.
So I think this particular issue is fixed, as long as you use the ".get" module function instead of a direct dict index.
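
For reference, a sketch of sls_build/image_build.sls rewritten that way (same pillar keys as above; untested):

Install ccache:
  pkg.installed:
    - name: ccache

{# salt['pillar.get'] tolerates a missing key at render time #}
{% for symlink in salt['pillar.get']('symlinks', []) %}
Create symlink {{ symlink }}:
  file.symlink:
    - name: {{ symlink }}
    - target: {{ salt['pillar.get']('ccache') }}
    - makedirs: True
    - force: True
{% endfor %}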

sagetherage (Contributor) commented:

OK @clallen, I am going to close the issue, but if you run into problems with it again, we can always reopen it. Thank you!
