
[Buffer Manager][201911] Reclaim unused buffer for admin-down ports #1837

Merged
merged 6 commits on Oct 26, 2021

Conversation

stephenxs
Collaborator

@stephenxs stephenxs commented Jul 26, 2021

Depends on #1787

What I did

To reclaim reserved buffer.
As the way to do this differs among vendors, the ASIC_VENDOR environment variable is passed to the swss docker container and is loaded when buffermgrd starts. After that, buffermgrd will:

  • Handle port admin down on the Mellanox platform.
    • Not apply lossless buffer PGs to an admin-down port.
    • Remove the lossless buffer PG (3-4) from a port when it is shut down.
  • Re-add the lossless buffer PG (3-4) to a port when the port is brought back up (see the sketch below).
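
For illustration only, here is a minimal, self-contained sketch of the behaviour described above. The helper name `handlePortAdminChange`, the in-memory map standing in for the CONFIG_DB BUFFER_PG table, and the profile name are assumptions made for readability; the actual logic lives in cfgmgr/buffermgr.cpp and writes to CONFIG_DB via swss-common.

```cpp
// Standalone sketch (not the actual buffermgrd code) of reclaiming the
// lossless PG 3-4 entry on admin-down and re-adding it on admin-up.
#include <iostream>
#include <map>
#include <string>

// Stand-in for BUFFER_PG entries keyed by "<port>|<pg-range>".
static std::map<std::string, std::string> bufferPgTable;

// True when buffermgrd detected a Mellanox platform (see the ASIC_VENDOR
// environment variable mentioned above).
static bool gReclaimOnAdminDown = true;

static void handlePortAdminChange(const std::string &port,
                                  bool adminUp,
                                  const std::string &losslessProfile)
{
    if (!gReclaimOnAdminDown)
        return; // other vendors keep the default behaviour

    const std::string key = port + "|3-4";
    if (adminUp)
    {
        // Port brought back up: re-add the lossless PG 3-4 entry.
        bufferPgTable[key] = losslessProfile;
    }
    else
    {
        // Port shut down: remove the lossless PG so its reserved buffer
        // is reclaimed; nothing is applied while the port stays down.
        bufferPgTable.erase(key);
    }
}

int main()
{
    handlePortAdminChange("Ethernet0", false, "pg_lossless_100000_5m_profile");
    handlePortAdminChange("Ethernet0", true,  "pg_lossless_100000_5m_profile");
    std::cout << "BUFFER_PG entries: " << bufferPgTable.size() << std::endl;
    return 0;
}
```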

Why I did it

To support reclaiming reserved buffer when a port is shut down on the Mellanox platform.

How I verified it

Regression test and vs test.

Details if related

@stephenxs stephenxs changed the title [201911][buffer manager] Reclaim unused buffer for admin-down ports [Buffer Manager][201911] Reclaim unused buffer for admin-down ports Jul 26, 2021
@liat-grozovik
Collaborator

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs stephenxs marked this pull request as ready for review September 13, 2021 07:22
@neethajohn
Contributor

Depends on #1787

@stephenxs
Collaborator Author

/azpw run

@mssonicbld
Collaborator

/AzurePipelines run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@neethajohn
Contributor

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@stephenxs stephenxs closed this Oct 13, 2021
@stephenxs stephenxs reopened this Oct 13, 2021
- Don't deploy/remove the lossless buffer PG when the port is shut down
- Re-add the buffer PG when the port is started up

Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Stephen Sun <stephens@nvidia.com>
Signed-off-by: Stephen Sun <stephens@nvidia.com>
…PG is executed on Mellanox platform only

The vendor information is passed via a docker-level environment variable when the docker container is created
Signed-off-by: Stephen Sun <stephens@nvidia.com>
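
As a hedged illustration of this commit, the sketch below shows how a daemon could pick up the vendor name that the container was created with. The variable name ASIC_VENDOR matches the description above; the value "mellanox" and everything else in the snippet are assumptions, not the actual buffermgrd code.

```cpp
// Illustrative sketch: read the docker-level environment variable once at
// startup and enable vendor-specific handling only when it matches.
#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    const char *vendor = std::getenv("ASIC_VENDOR");
    std::string asicVendor = vendor ? vendor : "";

    // Reclaiming buffer on admin-down is enabled only when the container
    // was started with the corresponding vendor value (assumed here).
    bool reclaimOnAdminDown = (asicVendor == "mellanox");

    std::cout << "ASIC_VENDOR=" << asicVendor
              << ", reclaim on admin-down: " << std::boolalpha
              << reclaimOnAdminDown << std::endl;
    return 0;
}
```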
- Remove redundant return value check in buffer manager
- Make sure the port is admin down before the vs test
- Move the code that recovers the port's admin status to the finally block
  so that the port is guaranteed to be admin down in any case.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
…FFER_QUEUE table

Originally, a single fvVector was used in doSpeedUpdateTask for both tables.
That was fine because its uses for the two tables did not overlap.
However, once the buffer-reclaiming feature is introduced, the uses for the two tables interleave,
which requires the vector to be cleared whenever it switches from one table to the other.
This makes the code hard to understand and prevents the data fetched from the redis DB from being reused.
To make the code clearer and more efficient, a dedicated fvVector is introduced for each table.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
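
A minimal sketch of the idea behind this refactoring, assuming illustrative names: instead of one shared field/value vector that must be cleared when switching between the BUFFER_PG and BUFFER_QUEUE tables, each table gets its own vector. The real code uses swss::FieldValueTuple inside doSpeedUpdateTask.

```cpp
// Sketch: dedicated field/value vectors per table, so entries built for
// one table can be kept and reused while the other table is being filled.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

using FieldValueTuple = std::pair<std::string, std::string>;

int main()
{
    std::vector<FieldValueTuple> fvVectorPg;     // fields for BUFFER_PG
    std::vector<FieldValueTuple> fvVectorQueue;  // fields for BUFFER_QUEUE

    fvVectorPg.emplace_back("profile", "pg_lossless_100000_5m_profile");
    fvVectorQueue.emplace_back("profile", "egress_lossless_profile");

    // With a single shared vector these two uses would interleave and the
    // vector would have to be cleared between them, losing data already
    // fetched from redis.
    std::cout << "PG fields: " << fvVectorPg.size()
              << ", queue fields: " << fvVectorQueue.size() << std::endl;
    return 0;
}
```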
@stephenxs stephenxs deleted the handle-admin-down branch October 26, 2021 23:46
liat-grozovik pushed a commit that referenced this pull request Nov 22, 2021
…2011)

- What I did
It ports #1837 to master to reclaim reserved buffer.
As the way to do this differs among vendors, buffermgrd will:
1. Handle port admin down on the Mellanox platform.
    - Not apply lossless buffer PGs to an admin-down port
    - Remove the lossless buffer PG (3-4) from a port when it is shut down.
2. Re-add the lossless buffer PG (3-4) to a port when the port is started up.

- Why I did it
To support reclaiming reserved buffer when a port is shut down on the Mellanox platform in the traditional buffer model.

- How I verified it
sonic-mgmt test and vs test.

Signed-off-by: Stephen Sun <stephens@nvidia.com>
EdenGri pushed a commit to EdenGri/sonic-swss that referenced this pull request Feb 28, 2022
sonic-net#1837)

What I did
This PR adds support for an option to configure the muxcable mode to standby mode. The standby mode is in addition
to the auto/active/manual modes.

The new output would look like this when the standby arg is passed on the command line:

admin@sonic:~$ sudo config muxcable mode standby Ethernet0

admin@sonic:~$ sudo config muxcable mode standby all

Added an option to set the muxcable mode to standby, in addition to the existing auto/active/manual modes.

How I did it
Added the changes in config/muxcable.py and added test cases.

How to verify it
Ran the unit tests
Signed-off-by: vaibhav-dahiya <vdahiya@microsoft.com>