
Example branch #1

Closed
wants to merge 853 commits into from

Conversation

AharonMalkin

Description of PR

Summary:
BLA BLA BLA

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Back port request

  • 201911
  • 202012
  • 202205
  • 202305
  • 202311

Approach

What is the motivation for this PR?

How did you do it?

How did you verify/test it?

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

wangxin and others added 30 commits January 25, 2024 14:28
…#9452)

Approach
What is the motivation for this PR?
The upgrade_image.py script defines some bool-type arguments. However, "action" is not specified when defining these arguments.

The result is that if a user passes in an argument like below, it will still be evaluated to True, because Python by default casts the string "False" to the bool value True.

./upgrade_image.py --enable-fips False
How did you do it?
The fix is to take advantage of setuptools.distutils.util.strtobool.

With this tool, any value like below passed to this type of argument is evaluated to the integer 0:

"n", "no", "No", "NO", "False", "false", "FALSE", "FaLsE", ...
Any value like below is evaluated to the integer 1:

"y", "yes", "Yes", "YES", "True", "true", "TRUE", "TrUe", ...
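The behavior can be sketched with argparse as below (a minimal sketch; the fallback implementation is only illustrative, since distutils was removed in Python 3.12):

```python
import argparse

try:
    from distutils.util import strtobool  # what the fix uses; removed in Python 3.12+
except ImportError:
    def strtobool(val):
        # minimal stand-in mirroring distutils' accepted values
        val = val.lower()
        if val in ("y", "yes", "t", "true", "on", "1"):
            return 1
        if val in ("n", "no", "f", "false", "off", "0"):
            return 0
        raise ValueError("invalid truth value %r" % (val,))

parser = argparse.ArgumentParser()
# type=bool would turn the non-empty string "False" into True;
# using strtobool as the converter yields 0/1 as intended
parser.add_argument("--enable-fips", type=strtobool, default=0)

args = parser.parse_args(["--enable-fips", "False"])
print(args.enable_fips)  # 0
```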

Signed-off-by: Xin Wang <xiwang5@microsoft.com>

co-authorized by: jianquanye@microsoft.com
What is the motivation for this PR?
Fix the BGP update timer issue on dual ToR when setting up a BGP speaker session with dual ToR.
Interface selection on the PTF has wrong behavior: configuring IP addresses within the same subnet on different interfaces would cause connection errors. Refer to the analysis in sonic-net#8487.

How did you do it?
Configure primary/secondary IP addresses on the same physical interface.

How did you verify/test it?
Re-ran the test in the dual ToR environment.
* refactor hooks as sonic hooks to support additional types

* spytest base config update for sonic-vs

* spytest feature api hooks for linux servers

* spytest feature api hooks for poe devices

* support collect tech-support at various phases

---------

Co-authored-by: Rama Sasthri, Kristipati <rama.kristipati@broadcom.com>
What is the motivation for this PR?
In SONiC, a new peer-group (BGPSentinel) is added in the BGP configuration. This testcase verifies that an IBGP session is established between the DUT and the BGPSentinel host (simulated by exabgp in the PTF), and that exabgp on the BGPSentinel host can advertise and withdraw routes to the DUT.

How did you do it?
Using the t1-lag topology, configure routes in the PTF to ensure that the PTF and DUT can ping each other.
Add the BGPSentinel configuration on the DUT and start exabgp in the PTF to set up the IBGP connection.
Advertise a route from the PTF with a higher local-preference and the no-export community to the DUT, and check that the route on the DUT is suppressed.
Withdraw the route from the PTF, and check that the route on the DUT is advertised to EBGP peers.
How did you verify/test it?
Set up the t1-lag topology. Used the command "./run_tests.sh -m individual -a False -n vms-kvm-t1-lag -u -c bgp/test_bgp_sentinel.py -f vtestbed.yaml -i veos_vtb -e "--neighbor_type=eos"" to run the testcase. The testcase passes.
* fix tcpdump issue in bgp update timer testing
…global env (sonic-net#9440)

Description of PR
Summary:
In the setup-container stage, all packages under the user AzDevOps are reinstalled into the virtual environment of the host user. But there are a few incompatible packages that cannot be installed by the pip command of the virtual environment.

Type of change
  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)
Back port request
  • 201911
  • 202012
  • 202205
Approach
What is the motivation for this PR?
In the setup-container stage, all packages under the user AzDevOps are reinstalled into the virtual environment of the host user. But there are a few incompatible packages that cannot be installed by the pip command of the virtual environment.

How did you do it?
Filter out these incompatible packages at the re-installing step.

How did you verify/test it?
Checked the setup-container command locally.

$ ./setup-container.sh -n sonic-mgmt  -d /data
'/home/zegan/.ssh/id_rsa_docker_sonic_mgmt' -> '/tmp/tmp.Ssj2kL0x43/id_rsa'
'/home/zegan/.ssh/id_rsa_docker_sonic_mgmt.pub' -> '/tmp/tmp.Ssj2kL0x43/id_rsa.pub'
[+] Building 66.9s (27/27) FINISHED                                                                                                                                                                                                                                               docker:default
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                                                        0.1s
 => => transferring dockerfile: 2.31kB                                                                                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                                                           0.1s
 => => transferring context: 2B                                                                                                                                                                                                                                                             0.0s
 => [internal] load metadata for sonicdev-microsoft.azurecr.io:443/docker-sonic-mgmt:latest                                                                                                                                                                                                 0.0s
 => [ 1/22] FROM sonicdev-microsoft.azurecr.io:443/docker-sonic-mgmt                                                                                                                                                                                                                        0.0s
 => [internal] load build context                                                                                                                                                                                                                                                           0.1s
 => => transferring context: 3.25kB                                                                                                                                                                                                                                                         0.0s
 => CACHED [ 2/22] RUN if getent group zegan; then groupmod -o -g 1000 zegan; else groupadd -o -g 1000 zegan; fi                                                                                                                                                                            0.0s
 => CACHED [ 3/22] RUN if getent passwd zegan; then userdel zegan; fi                                                                                                                                                                                                                       0.0s
 => CACHED [ 4/22] RUN useradd -o -l -g 1000 -u 1000 -m -d /home/zegan -s /bin/bash zegan;                                                                                                                                                                                                  0.0s
 => CACHED [ 5/22] RUN if getent group docker; then groupmod -o -g 998 docker; else groupadd -o -g 998 docker; fi                                                                                                                                                                           0.0s
 => CACHED [ 6/22] RUN if [ 'zegan' != 'AzDevOps' ]; then /bin/bash -O extglob -c 'cp -a -f /var/AzDevOps/!(env-*) /home/zegan/'; for hidden_stuff in '.profile .local .ssh'; do /bin/bash -c 'cp -a -f /var/AzDevOps/$hidden_stuff /home/zegan/ || true'; done fi                          0.0s
 => CACHED [ 7/22] RUN usermod -a -G sudo zegan                                                                                                                                                                                                                                             0.0s
 => CACHED [ 8/22] RUN usermod -a -G docker zegan                                                                                                                                                                                                                                           0.0s
 => CACHED [ 9/22] RUN echo 'zegan ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/zegan                                                                                                                                                                                                           0.0s
 => CACHED [10/22] RUN chmod 0440 /etc/sudoers.d/zegan                                                                                                                                                                                                                                      0.0s
 => CACHED [11/22] RUN chown -R '1000:1000' /home/zegan                                                                                                                                                                                                                                     0.0s
 => CACHED [12/22] RUN sed -i -E 's/^#?PermitRootLogin.*$/PermitRootLogin yes/g' /etc/ssh/sshd_config                                                                                                                                                                                       0.0s
 => CACHED [13/22] RUN echo 'root:root' | chpasswd                                                                                                                                                                                                                                          0.0s
 => CACHED [14/22] RUN echo 'zegan:12345' | chpasswd                                                                                                                                                                                                                                        0.0s
 => CACHED [15/22] COPY --chown=1000:1000 id_rsa id_rsa.pub /home/zegan/.ssh/                                                                                                                                                                                                               0.0s
 => CACHED [16/22] RUN chmod 0700 /home/zegan/.ssh                                                                                                                                                                                                                                          0.0s
 => CACHED [17/22] RUN chmod 0600 /home/zegan/.ssh/id_rsa                                                                                                                                                                                                                                   0.0s
 => CACHED [18/22] RUN chmod 0644 /home/zegan/.ssh/id_rsa.pub                                                                                                                                                                                                                               0.0s
 => CACHED [19/22] RUN cat /home/zegan/.ssh/id_rsa.pub >> /home/zegan/.ssh/authorized_keys                                                                                                                                                                                                  0.0s
 => CACHED [20/22] RUN chmod 0600 /home/zegan/.ssh/authorized_keys                                                                                                                                                                                                                          0.0s
 => CACHED [21/22] WORKDIR /home/zegan                                                                                                                                                                                                                                                      0.0s
 => [22/22] RUN if [ 'zegan' != 'AzDevOps' ] && [ -d /var/AzDevOps/env-python3 ]; then /bin/bash -c 'python3 -m venv ${HOME}/env-python3'; /bin/bash -c '${HOME}/env-python3/bin/pip install pip --upgrade'; /bin/bash -c '${HOME}/env-python3/bin/pip install wheel'; /bin/bash -c '${HO  61.1s
 => exporting to image                                                                                                                                                                                                                                                                      5.6s
 => => exporting layers                                                                                                                                                                                                                                                                     5.6s
 => => writing image sha256:fd47f5fa0ac90e0f2084037cb9f7fe85e48577c3854031578e585fbcb6630c03                                                                                                                                                                                                0.0s
 => => naming to docker.io/library/docker-sonic-mgmt-zegan:master                                                                                                                                                                                                                           0.0s
b419771c024f1b8dc5e59270f24bf55df86e695e62da7fdb4c3d09d62417bc06
 * Restarting OpenBSD Secure Shell server sshd
   ...done.
******************************************************************************
EXEC: docker exec --user zegan -ti sonic-mgmt bash
SSH:  ssh -i ~/.ssh/id_rsa_docker_sonic_mgmt zegan@172.17.0.2
******************************************************************************


Signed-off-by: Ze Gan <ganze718@gmail.com>
…multi-asic. (sonic-net#8884)" (sonic-net#9470)

Reverts sonic-net#8884

In PR sonic-net#8884, in the multi-asic scenario:
```
node.copy(content=json.dumps(duts_data[node.hostname]["pre_running_config"]["asic0"], indent=4),
              dest='/etc/sonic/config_db.json', verbose=False)
```
This code overwrites the global config with the asic0 config, which causes failures. So revert this PR first; we will then use a more exact key name.

Signed-off-by: Yutong Zhang <yutongzhang@microsoft.com>
The `platform_tests/test_sensors.py` test relies on the information provided
in this config file to check for the existence of sysfs paths.

This test was introduced before the Platform API existed and did have
some purpose then. However, all SONiC platform daemons now rely on the
Platform API, which is tested by numerous tests under `platform_tests`.

There is no longer a need to hardcode sysfs paths for products.
Keeping this data here is bound to generate recurring issues in the
future and translate directly into a maintenance burden.

Some sysfs paths are just not deterministic: they depend on which
driver is loaded first, which is inherently flaky for a test
to rely on.
Description of PR
In the recovery step of the sanity check, for config_reload, we simply sleep 120 s.
In some topologies, like the multi-asic scenario, that is not enough:
the lldp services have a good chance of not being ready after 120 s, and then the sanity check fails.

Use config_reload with safe_reload to better handle this flaky issue.

Approach
What is the motivation for this PR?
Fix multi-asic lldp flaky issue.

How did you do it?
Use config_reload with safe_reload to better handle this flaky issue.
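The difference between a blind sleep and a safe reload can be sketched as below (a hypothetical simplification of the framework's wait_until helper; the real signature and readiness checks differ):

```python
import time

def wait_until(timeout, interval, condition, *args):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition(*args):
            return True
        time.sleep(interval)
    return False

# Instead of time.sleep(120), poll for readiness and stop as soon as
# the service is actually up (simulated here by a counter):
state = {"polls": 0}

def lldp_ready():
    state["polls"] += 1
    return state["polls"] >= 3  # "ready" on the third poll

assert wait_until(5, 0.01, lldp_ready)
```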

co-authorized by: jianquanye@microsoft.com
Approach
What is the motivation for this PR?
Fix the error:

inv_files = request.config.option.ansible_inventory
> ip = utilities.get_test_server_vars(inv_files, server).get('ansible_host')
E AttributeError: 'NoneType' object has no attribute 'get'

Signed-off-by: Longxiang Lyu <lolv@microsoft.com>

How did you do it?
The reason is that request.config.option.ansible_inventory returns a string, which could name multiple inventory files, like "inv_file0,inv_file1".

But InventoryManager takes inventory files as a list (https://github.com/ansible/ansible/blob/ca08261f08a5071cc5f8c73e61342f5a9581b9cd/lib/ansible/inventory/manager.py#L157-L163), so we should split the string if it contains multiple inventory files (from "inv_file0,inv_file1" to ["inv_file0", "inv_file1"]).

How did you verify/test it?
Run pretests.
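The split described above can be sketched like this (a hypothetical helper name; the actual fix lives in the test utilities):

```python
def split_inventory_option(inv_files):
    """Turn the ansible_inventory option into the list InventoryManager expects.

    pytest hands us a single string such as "inv_file0,inv_file1";
    InventoryManager wants ["inv_file0", "inv_file1"].
    """
    if isinstance(inv_files, str):
        return [f.strip() for f in inv_files.split(",") if f.strip()]
    return list(inv_files)

print(split_inventory_option("inv_file0,inv_file1"))  # ['inv_file0', 'inv_file1']
```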
qos.yml update for gb topo-t2 and longlink

Signed-off-by: Zhixin Zhu <zhixzhu@cisco.com>
Signed-off-by: Zhixin Zhu <zhixzhu@cisco.com>
Summary:
Migrate test_vxlan_decap PTF script to Python3

What is the motivation for this PR?
This is part of Python3 migration project.
This PR migrates the test_vxlan_decap PTF script to Python3.

How did you do it?
2to3 and manual code changes

How did you verify/test it?
Run with physical testbed
…9487)

Signed-off-by: Neetha John <nejo@microsoft.com>

What is the motivation for this PR?
On certain platforms with low disk space, failures are occasionally seen when running the test_fdb_mac_move case. This testcase runs in a loop, and the number of iterations depends on the completeness_level setting. A lot of syslog messages are generated per loop, and for completeness_level settings of 'confident' and above, this can fill the disk on certain platforms. This PR resets the completeness_level to basic for TD2 platforms to reduce the number of test iterations.

How did you verify/test it?
Ran the testcase multiple times against t0-backend topology and no longer see failures
What is the motivation for this PR?
Seeing failures on testbeds with different HwSKUs with different asic counts.
In a chassis testbed with different LC HwSKUs, if one of the selected LCs has 3 asics and another LC has 2 asics, we see the below failure:

drop_packets/test_drop_counters.py::test_src_ip_is_loopback_addr[rif_members] 
-------------------------------- live log call ---------------------------------
02:08:08 utilities.wait_until                     L0118 ERROR  | Exception caught while checking <lambda>:Traceback (most recent call last):
  File "/./tests/common/utilities.py", line 112, in wait_until
    check_result = condition(*args, **kwargs)
  File "/./tests/common/helpers/drop_counters/drop_counters.py", line 101, in <lambda>
    check_drops_on_dut = lambda: packets_count in get_drops_across_all_duthosts()
  File "/./tests/common/helpers/drop_counters/drop_counters.py", line 94, in get_drops_across_all_duthosts
    pkt_drops = get_pkt_drops(duthost, get_cnt_cli_cmd, asic_index)
  File "/./tests/common/helpers/drop_counters/drop_counters.py", line 34, in get_pkt_drops
    namespace = duthost.get_namespace_from_asic_id(asic_index)
  File "/./tests/common/devices/multi_asic.py", line 230, in get_namespace_from_asic_id
    raise ValueError("Invalid asic_id '{}' passed as input".format(asic_id))
ValueError: Invalid asic_id '2' passed as input
, error:Invalid asic_id '2' passed as input
The issue is that the get_pkt_drops function takes the asic_index of the selected HwSKU or selected LC.
If the randomly selected asic_index is 2, and get_pkt_drops() is called with asic_index 2 for a different LC in the testbed with 2 asics, the test fails, as asic_index 2 is not present on that LC.

How did you do it?
Fixed get_pkt_drops to go through all asic namespaces and get drop count.
This way get_pkt_drops will return packet drops for all interfaces from all namespaces.

How did you verify/test it?
Verified on packet chassis, voq chassis and single asic DUT testbed.
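The idea of the fix can be sketched as follows (a simplified stand-in for the real get_pkt_drops, which collects these counters through the duthost CLI per namespace):

```python
def get_drops_across_namespaces(per_namespace_drops):
    """Sum drop counters over every asic namespace instead of indexing one asic.

    per_namespace_drops: dict mapping namespace -> {interface: drop count}.
    Indexing a fixed asic_index raises on an LC with fewer asics; iterating
    over whatever namespaces actually exist cannot.
    """
    merged = {}
    for counters in per_namespace_drops.values():
        for iface, count in counters.items():
            merged[iface] = merged.get(iface, 0) + count
    return merged

# An LC with only 2 asics simply contributes 2 namespaces;
# no asic_id validation is needed.
drops = get_drops_across_namespaces({
    "asic0": {"Ethernet0": 5},
    "asic1": {"Ethernet0": 2, "Ethernet4": 1},
})
print(drops)  # {'Ethernet0': 7, 'Ethernet4': 1}
```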
…onic-net#9530)

What is the motivation for this PR?
This is to workaround pytest-ansible plugin issue:
Even when no extra inventory is specified, extra_inventory_manager will still be initialized.
ansible/pytest-ansible#135

As of today, pytest-ansible supports host manager and module dispatcher v29, v212, and v213. While initializing the pytest-ansible plugin, it tries to collect the default ansible configurations and command line options. These options are used for creating the host manager. When no extra_inventory is specified, the options passed to the host manager will include extra_inventory=None. However, the host manager will still try to create an extra_inventory_manager with None as the inventory source. This causes the module dispatcher to run a module on hosts matching the host pattern in both the inventory and the extra inventory, so an ansible module can be executed twice on the same host. Suppose we use the shell module to run a command like "rm <some_file>" to delete a file: the second run would fail because the file was already deleted by the first run. For more details, please refer to the GitHub issue mentioned above.

This change is only required for python3. Currently master and 202305 branch are using python3. For older branches, they are still using python2 and an older version of pytest-ansible which does not have the bug.

How did you do it?
The fix is to overwrite the pytest-ansible plugin's initialize method and remove the extra_inventory option if it is None. This will prevent the host manager from creating extra_inventory_manager with None as the inventory source.
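A hedged sketch of the idea (hypothetical names; the actual patch overrides pytest-ansible's plugin method in place):

```python
def drop_none_extra_inventory(initialize):
    """Wrap an initialize-style method so a None extra_inventory option
    never reaches the host manager, which would otherwise build an
    extra inventory manager from a None source."""
    def wrapper(self, *args, **kwargs):
        if kwargs.get("extra_inventory") is None:
            kwargs.pop("extra_inventory", None)
        return initialize(self, *args, **kwargs)
    return wrapper

# Demonstration with a stand-in for the plugin method:
captured = {}

def fake_initialize(self, **kwargs):
    captured.update(kwargs)

patched = drop_none_extra_inventory(fake_initialize)
patched(None, inventory="hosts", extra_inventory=None)
print(captured)  # {'inventory': 'hosts'}
```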

How did you verify/test it?
Ran a few tests with pytest-ansible 3.2.1 and 4.0.0.

Signed-off-by: Xin Wang <xiwang5@microsoft.com>
Description of PR
In PR sonic-net#9054, test_crm.py tries to get duthost.facts["platform_asic"], but in tests/common/devices/sonic.py, which defines duthost.facts, platform_asic may not exist in duthost.facts. This causes a KeyError in test_crm.py. In this PR, if platform_asic doesn't exist in duthost.facts, we set it to None.

What is the motivation for this PR?
In PR sonic-net#9054, test_crm.py tries to get duthost.facts["platform_asic"], but in tests/common/devices/sonic.py, which defines duthost.facts, platform_asic may not exist in duthost.facts. This causes a KeyError in test_crm.py. In this PR, if platform_asic doesn't exist in duthost.facts, we set it to None.
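The one-line pattern behind the fix (a generic sketch with an illustrative facts dict, not the exact sonic.py code):

```python
# facts dict as built by the device layer; "platform_asic" can be absent
facts = {"platform": "x86_64-kvm_x86_64-r0", "asic_type": "vs"}

# facts["platform_asic"] would raise KeyError on such hosts;
# .get() defaults the missing key to None instead
platform_asic = facts.get("platform_asic")
print(platform_asic)  # None
```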

Signed-off-by: Yutong Zhang <yutongzhang@microsoft.com>
Approach
What is the motivation for this PR?
As the probes received from one port are basically the same (except those with TLVs), we can simply cache the replies instead of generating them in each reply cycle.

Signed-off-by: Longxiang Lyu <lolv@microsoft.com>

How did you do it?
Use functools.lru_cache to cache the calls to icmp_reply.

How did you verify/test it?
With the lru_cache, the execution time of 1000 repeated calls to icmp_reply is reduced from 0.6184256076812744s to 0.0006690025329589844s.
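The caching pattern can be sketched like this (a hypothetical reply builder; the real icmp_reply constructs an actual packet):

```python
import functools

calls = {"count": 0}

@functools.lru_cache(maxsize=None)
def icmp_reply(src_mac, dst_mac):
    # stand-in for the expensive packet-construction work
    calls["count"] += 1
    return ("icmp-reply", src_mac, dst_mac)

# 1000 repeated calls with the same arguments build the reply only once;
# the remaining 999 are served from the cache
for _ in range(1000):
    icmp_reply("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02")

print(calls["count"])                # 1
print(icmp_reply.cache_info().hits)  # 999
```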

Any platform specific information?
* test_neighbor_mac_noptf is flaky because it may sometimes take longer than usual to
update the interface address, add the neighbor, and change the neighbor MAC. Make the test more
robust by retrying the validation of the neighbor in the redis DB.
…net#9480)

What is the motivation for this PR?
Prefer args.scripts to get the test modules if this param is not None for the KVM platform. pr_test_scripts.yaml is the default file to get test modules from, so if we don't pass args.scripts, the test modules are taken from pr_test_scripts.yaml.

How did you do it?
Modify test_plan.py.

How did you verify/test it?
Manually triggered two test plans: one passes args.scripts and gets its test modules from it; the other doesn't pass args.scripts and gets its test modules from pr_test_scripts.yaml. All as expected.

Signed-off-by: Chun'ang Li <chunangli@microsoft.com>
Description of PR
PR sonic-net#9312 tries to get DATAACL from the running config in the fixture recover_acl_rule. But if the param enable_data_acl is set to false when deploying the minigraph, DATAACL will not appear in the running config, which may cause a KeyError in the fixture recover_acl_rule. This PR fixes the issue.

What is the motivation for this PR?
PR sonic-net#9312 tries to get DATAACL from the running config in the fixture recover_acl_rule. But if the param enable_data_acl is set to false when deploying the minigraph, DATAACL will not appear in the running config, which may cause a KeyError in the fixture recover_acl_rule. This PR fixes the issue.

How did you do it?
Ran the TC on a testbed where enable_data_acl is false when deploying the minigraph.

Signed-off-by: Yutong Zhang <yutongzhang@microsoft.com>
Summary:
Migrate test_arpall PTF script to Python3

What is the motivation for this PR?
This is part of Python3 migration project.
This PR migrates the test_arpall PTF script to Python3.

How did you do it?
2to3 and manual code changes

How did you verify/test it?
Run with physical testbed
ridahanif96 and others added 26 commits January 25, 2024 14:28
Description of PR
This PR resolves issues in the KVM test which are failing in the advance utilities HEAD PR. The KVM Elastic tests are failing in the generic_config_updater/test_dhcp_relay.py file due to a missing mode attribute.

What is the motivation for this PR?
Modified the GCU fixtures/duthost_utilities.py to make it compatible with the HEAD PR. We added the "switchport mode" command in duthost_utilities to set mode trunk on Ethernet4 so that it allows the addition of VLAN membership on ports. Removed the added function and check "switchport mode" explicitly.


co-authorized by: jianquanye@microsoft.com
What is the motivation for this PR?
Skip below testcases on M0/Mx topology:
- bgp/test_bgp_queue.py
- generic_config_updater/test_pg_headroom_update.py

How did you do it?
Updated conditional mark file.

How did you verify/test it?
Verified on Mx testbed.

Signed-off-by: Zhijian Li <zhijianli@microsoft.com>
Approach
What is the motivation for this PR?
Due to the TYPE 7 key change for MACsec, the MACsec tests break the submodule update of sonic-swss/sonic-sairedis.

How did you do it?
Disable MACsec tests in the pr_test script.

How did you verify/test it?
Check Azp

co-authorized by: jianquanye@microsoft.com
…et#9896)

* Qos_LossyQueueTest fix_adding COUNTER MARGIN

* Adding COUNTER_MARGIN to qos PFCXonTest

* Adding COUNTER_MARGIN to qos PFCTest

* Flake8 fixes

* testQosSaiPFCXoffLimit updated with correct check

* Conditional check corrected for LossyQueueTest

* flake8 fix
## Approach
#### What is the motivation for this PR?
"show interface status" is wrong after gnmi incremental config: this command reads from APPL_DB, but we should read from CONFIG_DB.

Microsoft ADO: 25136266

#### How did you do it?
Read CONFIG_DB to get admin_status.

#### How did you verify/test it?
Run gnmi end2end test.
Description of PR
This PR resolves issues in the KVM test which are failing in the advance utilities HEAD PR. The KVM Elastic tests are failing in the generic_config_updater/test_dhcp_relay.py file due to a missing mode attribute.

What is the motivation for this PR?
Modified the GCU fixtures/duthost_utilities.py to make it compatible with the HEAD PR. We added the "switchport mode" command in duthost_utilities to set mode trunk on Ethernet4 so that it allows the addition of VLAN membership on ports. Added a function to check "switchport mode" explicitly; the check was missing earlier, due to which all ports ran the "switchport" command twice and changed mode every time.

co-authorized by: jianquanye@microsoft.com
## Description of PR

Summary:
Fixes # (issue)

### Type of change

- [x] Bug fix
- [ ] Testbed and Framework(new/improvement)
- [ ] Test case(new/improvement)


## Approach
#### What is the motivation for this PR?

The event-down-ctr test is flaky because we run the test before the monit changes are in place, which means the start delay for monit is at 5 minutes, so container_checker will not catch the event.

#### How did you do it?

Move test after monit config is changed

#### How did you verify/test it?

Manual/pipeline
Approach
What is the motivation for this PR?
Add a server-to-server normal testcase to cover east-to-west traffic verification for dualtor.

Signed-off-by: Longxiang Lyu <lolv@microsoft.com>

How did you do it?
Select two mux ports, one as the source port and one as the dest port.
Send traffic from the source port and verify traffic delivery at the dest port.

How did you verify/test it?
* MACsec profiles to accept type 7 encoded keys
* Add type 7 also to EOS configs, as the MACsec profile JSON now has type 7 keys
If the NTP server changes, the local timestamp may jump, which causes the age value in the lldpctl JSON output to turn into a negative time such as '00:-54:-15'. The lldp-syncd daemon then fails and prints an error log when parsing the age. Add this harmless syslog to the ignore list.

Signed-off-by: Zhaohui Sun <zhaohuisun@microsoft.com>
…t#10125)

Signed-off-by: Zhaohui Sun <zhaohuisun@microsoft.com>
Use GRE type 0x8949 when it is an Nvidia platform.
* Fix sub_port_interfaces issue

Co-authored-by: Chuan Wu <chuanw@nvidia.com>
…t#10074)

For Nvidia, we use the RPC image instead of swapping syncd in the QoS SAI test. We need an option to control whether swap_syncd should be executed.

Change-Id: Ideb8bd1054604d77bf26c5c44624f7d493d9eb41
…sonic-net#10052)

What I did:

Fixed the QoS test failure when the packets sent are greater than 4K in size.

Why I did it:

    Increased the socket buffer for the PTF while running the QoS test,
    as we are sending packets > 4K in some cases where HBM is involved,
    to fill the buffer faster.

How I verified:
The test passes after making this change.

Signed-off-by: Abhishek Dosi <abdosi@microsoft.com>
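The buffer increase can be sketched as below (a minimal sketch; the socket type and the 1 MiB size are illustrative, and the kernel may clamp the requested value):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default_size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Ask for a 1 MiB receive buffer so bursts of >4K packets are not
# dropped before the test process gets a chance to read them
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
new_size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()

# Linux reports back a (possibly doubled, possibly clamped) buffer size
print(new_size > 0)  # True
```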
What is the motivation for this PR?
Add step in bgp holdtimer repro to list iptables rules.

The default chain policy is to accept all packets. After I add the rule to drop all packets (including BGP packets), listing the iptables rules causes the rules to be reevaluated so that they take effect.

How did you do it?
code change

How did you verify/test it?
manual/pipeline
In the dualtor scenario, when dscp remapping is enabled and a packet with dscp 2 or 6 is received by the standby ToR, the mapping behavior differs between Nvidia and other vendors, so the test cases fail on Nvidia platforms.
Actually, this is not considered a valid use case; dscp 2 and 6 should be reserved for the remapped packets. This has been confirmed with MSFT, so skip the cases for dscp 2 and 6 on Nvidia platforms when dscp remapping is enabled.

Change-Id: Icdf19a7191d25c36bdd4cdc659e96297f924abf7
For the Nvidia SN5600, the formula to calculate the shared buffer size for the PG and queue differs from other Spectrum platforms, so update the corresponding code for it.
What is the motivation for this PR?
Add the new topology DPU to Azp.
Port existing DASH test cases to support the GNMI and protobuf changes.
How did you do it?
Enable a new job for DPU.
Replace the original interface, swssconfig, with the GNMI client.
Convert the JSON format to protobuf.
How did you verify/test it?
Check Azp
---------

Signed-off-by: Ze Gan <ganze718@gmail.com>
Co-authored-by: ganglyu <ganglv@microsoft.com>
Summary:

Revert the PRs, to get the macsec tests run
sonic-net#10101
sonic-net#10088

co-authorized by: jianquanye@microsoft.com
AharonMalkin pushed a commit that referenced this pull request Jun 13, 2024
From the OVS doc, if mod-flow is used without --strict, priority is not
used in matching.
This will cause problems for downstream set_drop when
duplicate_nic_upstream is disabled.

For example:

When set_drop is applied to the upstream NIC flow (#1), mod-flow will match both
flow #2 and flow #3, as priority is not used in the flow match.

So let's enforce strict matching for mod-flow.

Signed-off-by: Longxiang Lyu <lolv@microsoft.com>
AharonMalkin pushed a commit that referenced this pull request Jun 13, 2024
…y cases (sonic-net#12825)

Description of PR
This PR addresses the fixture setup sequence issue and the teardown out-of-sequence issue.

In convert_and_restore_config_db_to_ipv6_only, it will do a "config reload -y" during fixture setup or teardown.

For feature test cases where the config is not saved into config_db.json, this reload needs to be done before the feature fixture setup and after the feature teardown, such as: tacacs_v6, setup_streaming_telemetry, or setup_ntp.

According to https://docs.pytest.org/en/latest/reference/fixtures.html#reference-fixtures, pytest only considers the following when deciding fixture order:

- scope
- dependencies
- autouse

We shouldn't use autouse in this test module. So there are only two options to let convert_and_restore_config_db_to_ipv6_only run before the other fixtures:

1. define the other fixtures in 'function' scope.
2. define the feature fixture to request convert_and_restore_config_db_to_ipv6_only explicitly.

Using option #1 in this PR, as the new 'function' scope fixtures can be reused by other cases. Option #2 has good readability, but would limit the new fixtures to being used only by ipv6_only cases.

Summary:
Fixes sonic-net#12705


Approach
What is the motivation for this PR?
Multiple errors observed in mgmt_ipv6 were related to the fixture setup/teardown sequence.

How did you do it?
Added two 'function' scope fixtures: check_tacacs_v6_func and setup_streaming_telemetry_func,
and modified 3 test cases to use the 'function' scope fixtures:

test_ro_user_ipv6_only
test_rw_user_ipv6_only
test_telemetry_output_ipv6_only

co-authorized by: jianquanye@microsoft.com