
Fixing bug in get_volume_inventory #6719

Conversation

Contributor

@TSKushal TSKushal commented Jun 17, 2023

SUMMARY
Fixing issues found in GetVolumeInventory. The issues are as follows:

  • The controller name can be found directly under each member of the storage collection (/redfish/v1/Systems/1/Storage//). With the existing code, the controller name is not populated as expected.
  • The volumes collected are repeated under subsequent controllers, i.e., if Controller 1 has two volumes, the same two volumes are also displayed under Controller 2 along with Controller 2's own volumes.
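The duplicate-volumes symptom described above is exactly what results when a per-controller volume list is accumulated across loop iterations instead of being reset. This is an illustration with hypothetical names, not the module's actual code:

```python
def collect_volumes_buggy(controllers):
    """controllers: list of (name, volumes) pairs."""
    results = []
    volume_results = []  # created once, outside the loop
    for name, volumes in controllers:
        volume_results.extend(volumes)  # keeps volumes from earlier controllers
        results.append({"Controller": name, "Volumes": list(volume_results)})
    return results


def collect_volumes_fixed(controllers):
    results = []
    for name, volumes in controllers:
        volume_results = list(volumes)  # reset for every controller
        results.append({"Controller": name, "Volumes": volume_results})
    return results


data = [("Controller 1", ["MR Volume1", "MR Volume2"]),
        ("Controller 2", ["NS Volume"])]
print(collect_volumes_buggy(data)[1]["Volumes"])  # ['MR Volume1', 'MR Volume2', 'NS Volume']
print(collect_volumes_fixed(data)[1]["Volumes"])  # ['NS Volume']
```

The buggy variant reproduces the "before" output shown in this PR, where Controller 2 lists Controller 1's volumes as well.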

ISSUE TYPE

  • Bugfix Pull Request

COMPONENT NAME
redfish_utils.py

ADDITIONAL INFORMATION
Sample .yaml used

---
- hosts: localhost
  tasks:
    - name: Get Volume Inventory
      community.general.redfish_info:
        category: Systems
        command: GetVolumeInventory
        baseuri: "10.131.0.209"
        username: "admin"
        password: "HP1nvent"
      register: result

    - name: Print fetched information
      ansible.builtin.debug:
        msg: "{{ result.redfish_facts.volume.entries }}"

Output before suggested code change:

TASK [Print fetched information] ***********************************************
ok: [localhost] => {
    "msg": [
        [
            {
                "system_uri": "/redfish/v1/Systems/1/"
            },
            [
                {
                    "Controller": "Controller 1",
                    "Volumes": [
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 1599774654464,
                            "Encrypted": false,
                            "EncryptionTypes": [],
                            "Id": "238",
                            "Identifiers": [
                                {
                                    "DurableName": "600062B2127C60802C1FC5D485606669",
                                    "DurableNameFormat": "NAA"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "1"
                                }
                            ],
                            "Name": "MR Volume1",
                            "Operations": [],
                            "OptimumIOSizeBytes": 65536,
                            "RAIDType": "RAID0",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        },
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 1599774654464,
                            "Encrypted": false,
                            "EncryptionTypes": [],
                            "Id": "239",
                            "Identifiers": [
                                {
                                    "DurableName": "600062B2127C60802C1FB98B3772214B",
                                    "DurableNameFormat": "NAA"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "0"
                                }
                            ],
                            "Name": "MR Volume2",
                            "Operations": [],
                            "OptimumIOSizeBytes": 65536,
                            "RAIDType": "RAID0",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        }
                    ]
                },
                {
                    "Controller": "Controller 1",
                    "Volumes": [
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 1599774654464,
                            "Encrypted": false,
                            "EncryptionTypes": [],
                            "Id": "238",
                            "Identifiers": [
                                {
                                    "DurableName": "600062B2127C60802C1FC5D485606669",
                                    "DurableNameFormat": "NAA"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "1"
                                }
                            ],
                            "Name": "MR Volume1",
                            "Operations": [],
                            "OptimumIOSizeBytes": 65536,
                            "RAIDType": "RAID0",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        },
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 1599774654464,
                            "Encrypted": false,
                            "EncryptionTypes": [],
                            "Id": "239",
                            "Identifiers": [
                                {
                                    "DurableName": "600062B2127C60802C1FB98B3772214B",
                                    "DurableNameFormat": "NAA"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "0"
                                }
                            ],
                            "Name": "MR Volume2",
                            "Operations": [],
                            "OptimumIOSizeBytes": 65536,
                            "RAIDType": "RAID0",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        },
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 480036519936,
                            "Encrypted": false,
                            "Id": "1",
                            "Identifiers": [
                                {
                                    "DurableName": "005043e105000001",
                                    "DurableNameFormat": "EUI"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "1"
                                },
                                {
                                    "Id": "2"
                                }
                            ],
                            "Name": "NS Volume",
                            "Operations": [],
                            "OptimumIOSizeBytes": 131072,
                            "RAIDType": "RAID1",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        }
                    ]
                }
            ]
        ]
    ]
}

Output after suggested code change:

TASK [Print fetched information] ***********************************************
ok: [localhost] => {
    "msg": [
        [
            {
                "system_uri": "/redfish/v1/Systems/1/"
            },
            [
                {
                    "Controller": "HPE MR416i-o Gen11",
                    "Volumes": [
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 1599774654464,
                            "Encrypted": false,
                            "EncryptionTypes": [],
                            "Id": "238",
                            "Identifiers": [
                                {
                                    "DurableName": "600062B2127C60802C1FC5D485606669",
                                    "DurableNameFormat": "NAA"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "1"
                                }
                            ],
                            "Name": "MR Volume1",
                            "Operations": [],
                            "OptimumIOSizeBytes": 65536,
                            "RAIDType": "RAID0",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        },
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 1599774654464,
                            "Encrypted": false,
                            "EncryptionTypes": [],
                            "Id": "239",
                            "Identifiers": [
                                {
                                    "DurableName": "600062B2127C60802C1FB98B3772214B",
                                    "DurableNameFormat": "NAA"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "0"
                                }
                            ],
                            "Name": "MR Volume2",
                            "Operations": [],
                            "OptimumIOSizeBytes": 65536,
                            "RAIDType": "RAID0",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        }
                    ]
                },
                {
                    "Controller": "HPE NS204i-p Gen10+ Boot Controller",
                    "Volumes": [
                        {
                            "BlockSizeBytes": 512,
                            "CapacityBytes": 480036519936,
                            "Encrypted": false,
                            "Id": "1",
                            "Identifiers": [
                                {
                                    "DurableName": "005043e105000001",
                                    "DurableNameFormat": "EUI"
                                }
                            ],
                            "Linked_drives": [
                                {
                                    "Id": "1"
                                },
                                {
                                    "Id": "2"
                                }
                            ],
                            "Name": "NS Volume",
                            "Operations": [],
                            "OptimumIOSizeBytes": 131072,
                            "RAIDType": "RAID1",
                            "Status": {
                                "Health": "OK",
                                "State": "Enabled"
                            }
                        }
                    ]
                }
            ]
        ]
    ]
}


@ansibullbot ansibullbot added bug This issue/PR relates to a bug module_utils module_utils plugins plugin (any type) labels Jun 17, 2023

@ansibullbot ansibullbot added ci_verified Push fixes to PR branch to re-run CI needs_revision This PR fails CI tests or a maintainer has requested a review/revision of the PR labels Jun 17, 2023
@ansibullbot ansibullbot removed ci_verified Push fixes to PR branch to re-run CI needs_revision This PR fails CI tests or a maintainer has requested a review/revision of the PR labels Jun 17, 2023
@felixfontein felixfontein added check-before-release PR will be looked at again shortly before release and merged if possible. backport-6 labels Jun 18, 2023
Collaborator

@felixfontein felixfontein left a comment


Thanks for your contribution. I've added a first comment.

TSKushal and others added 2 commits June 19, 2023 12:43
…entory.yml


Agreed

Co-authored-by: Felix Fontein <felix@fontein.de>
                else:
                    sc_id = sc[0].get('Id', '1')
                    controller_name = 'Controller %s' % sc_id
            if 'Name' in data:
Contributor

This logic isn't correct; this pulls the name of the storage subsystem rather than of the storage controller.

Inside the Storage resource is a "StorageControllers" property; this property contains the set of storage controllers in the storage subsystem, which is what this controller_list is intended to contain.

Contributor Author

Actually, this is inside a for loop that iterates through each controller:

                for controller in data[u'Members']:
                    controller_list.append(controller[u'@odata.id'])
                for c in controller_list:
                    uri = self.root_uri + c
                    response = self.get_request(uri)
                    if response['ret'] is False:
                        return response
                    data = response['data']
                    controller_name = 'Controller 1'
                    if 'StorageControllers' in data:
                        sc = data['StorageControllers']
                        if sc:
                            if 'Name' in sc[0]:
                                controller_name = sc[0]['Name']
                            else:
                                sc_id = sc[0].get('Id', '1')
                                controller_name = 'Controller %s' % sc_id
                    if 'Name' in data:
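The name-resolution branch quoted above can be exercised in isolation. A minimal pure-dict sketch (no HTTP; the loop and get_request are elided), using the same key layout as the snippet:

```python
def resolve_controller_name(storage_data, default="Controller 1"):
    """Mirror of the quoted logic: prefer StorageControllers[0]['Name'],
    fall back to 'Controller <Id>', then to a fixed default."""
    controller_name = default
    sc = storage_data.get("StorageControllers")
    if sc:
        if "Name" in sc[0]:
            controller_name = sc[0]["Name"]
        else:
            controller_name = "Controller %s" % sc[0].get("Id", "1")
    return controller_name


# A StorageControllers member with only MemberId (as in one of the
# Redfish responses shown later in this thread) has no 'Name' or 'Id',
# so the fixed default is used:
print(resolve_controller_name({"StorageControllers": [{"MemberId": "0"}]}))
# -> Controller 1
```

This shows why the default naming case is hit even when StorageControllers is present but its members carry no Name property.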

@mraineri
Contributor

@TSKushal can you please show what's inside your Storage resources? Reviewing the logic in place today, I believe the behavior is correct and the issue lies with the Redfish service not providing proper names for storage controllers.

@mraineri
Contributor

Controller name can be found directly under each member of the storage API -> /redfish/v1/Systems/1/Storage//. With the existing code the controller is not being populated as expected.

This statement is not correct; the name at that level per the spec is for the Storage resource and not the individual controller (or controllers) inside that storage subsystem. Clients need to step into the controllers array to extract the name information for the controllers.

@TSKushal
Contributor Author

Controller name can be found directly under each member of the storage API -> /redfish/v1/Systems/1/Storage//. With the existing code the controller is not being populated as expected.

This statement is not correct; the name at that level per the spec is for the Storage resource and not the individual controller (or controllers) inside that storage subsystem. Clients need to step into the controllers array to extract the name information for the controllers.

Correct, I didn't mean at the /redfish/v1/Systems/1/Storage/ level.
Each member under "/redfish/v1/Systems/1/Storage/" is what I was referring to, i.e., /redfish/v1/Systems/1/Storage/

@TSKushal
Contributor Author

Redfish responses

/redfish/v1/Systems/1/Storage/

{
    "@odata.context": "/redfish/v1/$metadata#StorageCollection.StorageCollection",
    "@odata.etag": "W/\"570254F2\"",
    "@odata.id": "/redfish/v1/Systems/1/Storage/",
    "@odata.type": "#StorageCollection.StorageCollection",
    "Description": "Storage subsystems known to this system",
    "Name": "Storage",
    "Members": [
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00B000/"
        },
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00A000/"
        }
    ],
    "Members@odata.count": 2
}

/redfish/v1/Systems/1/Storage/DE00B000/

{
    "@odata.etag": "\"a9279128\"",
    "@odata.id": "/redfish/v1/Systems/1/Storage/DE00B000",
    "@odata.type": "#Storage.v1_11_0.Storage",
    "Id": "DE00B000",
    "Name": "HPE MR416i-o Gen11",
    "Actions": {
        "#Storage.ResetToDefaults": {
            "ResetType@Redfish.AllowableValues": [
                "ResetAll",
                "PreserveVolumes"
            ],
            "target": "/redfish/v1/Systems/1/Storage/DE00B000/Actions/Storage.ResetToDefaults"
        }
    },
    "Status": {
        "State": "Enabled",
        "HealthRollup": "OK"
    },
    "Drives@odata.count": 2,
    "Drives": [
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00B000/Drives/0"
        },
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00B000/Drives/1"
        }
    ],
    "Volumes": {
        "@odata.id": "/redfish/v1/Systems/1/Storage/DE00B000/Volumes"
    },
    "Controllers": {
        "@odata.id": "/redfish/v1/Systems/1/Storage/DE00B000/Controllers"
    }
}

/redfish/v1/Systems/1/Storage/DE00A000/

{
    "@odata.etag": "W/\"298afb7b\"",
    "@odata.id": "/redfish/v1/Systems/1/Storage/DE00A000",
    "@odata.type": "#Storage.v1_7_1.Storage",
    "Id": "DE00A000",
    "Name": "HPE NS204i-p Gen10+ Boot Controller",
    "Status": {
        "State": "Enabled",
        "HealthRollup": "OK"
    },
    "Drives@odata.count": 2,
    "Drives": [
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00A000/Drives/1"
        },
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00A000/Drives/2"
        }
    ],
    "Volumes": {
        "@odata.id": "/redfish/v1/Systems/1/Storage/DE00A000/Volumes"
    },
    "StorageControllers": [
        {
            "@odata.id": "/redfish/v1/Systems/1/Storage/DE00A000#/StorageControllers/0",
            "CacheSummary": {
                "TotalCacheSizeMiB": 0,
                "PersistentCacheSizeMiB": 0,
                "Status": {
                    "Health": "OK",
                    "State": "Absent"
                }
            },
            "ControllerRates": {
                "ConsistencyCheckRatePercent": 100,
                "RebuildRatePercent": 100
            },
            "FirmwareVersion": "1.2.14.1004",
            "Identifiers": [
                {
                    "DurableName": "53:48:00:d0:00:f3:35:3a",
                    "DurableNameFormat": "NAA"
                }
            ],
            "Location": {
                "PartLocation": {
                    "ServiceLabel": "Slot 1",
                    "LocationType": "Slot",
                    "LocationOrdinalValue": 1
                }
            },
            "Manufacturer": "HPE",
            "MemberId": "0",
            "Model": "HPE NS204i-p Gen10+ Boot Controller",
            "PartNumber": "P14379-001",
            "PCIeInterface": {
                "MaxPCIeType": "Gen3",
                "PCIeType": "Gen3",
                "MaxLanes": 4,
                "LanesInUse": 4
            },
            "SKU": "P12965-B21",
            "SerialNumber": "PWWVF0DSTGW0EZ",
            "Status": {
                "State": "Enabled",
                "Health": "OK"
            },
            "SupportedControllerProtocols": [
                "PCIe"
            ],
            "SupportedDeviceProtocols": [
                "NVMe"
            ],
            "SupportedRAIDTypes": [
                "RAID1"
            ]
        }
    ]
}

@mraineri
Contributor

mraineri commented Jun 19, 2023

For /redfish/v1/Systems/1/Storage/DE00A000/, the StorageControllers array does not contain names of the storage controllers. It just contains "MemberId", but no "Name" property, which is why the logic is entering the default naming case.

For /redfish/v1/Systems/1/Storage/DE00B000/, that storage resource instead contains a "Controllers" resource collection rather than "StorageControllers". This is valid, and I would propose we should modify the logic to step into "Controllers" and extract the controller names via their dedicated resources if this is present.

@TSKushal
Contributor Author

For /redfish/v1/Systems/1/Storage/DE00A000/, the StorageControllers array does not contain names of the storage controllers. It just contains "MemberId", but no "Name" property, which is why the logic is entering the default naming case.

For /redfish/v1/Systems/1/Storage/DE00B000/, that storage resource instead contains a "Controllers" resource collection rather than "StorageControllers". This is valid, and I would propose we should modify the logic to step into "Controllers" and extract the controller names via their dedicated resources if this is present.

This was going to be my initial change, but then I observed that "/redfish/v1/Systems/1/Storage/DE00A000#/StorageControllers/0" contains the same name (see the /redfish/v1/Systems/1/Storage/DE00A000/ response above).

Is there any case where the Name directly under "/redfish/v1/Systems/1/Storage/DE00A000" and the one under "/redfish/v1/Systems/1/Storage/DE00A000#/StorageControllers/0" will not be the same?

@mraineri
Contributor

I would never expect them to be the same; "Name" inside of "/redfish/v1/Systems/1/Storage/DE00A000" (or any Storage resource) is the name of the storage subsystem; this is independent of the controller name.

@TSKushal
Contributor Author

I would never expect them to be the same; "Name" inside of "/redfish/v1/Systems/1/Storage/DE00A000" (or any Storage resource) is the name of the storage subsystem; this is independent of the controller name.

Ah, alright. I think I got confused between the subsystem name and the actual controller name. Will make the changes as suggested.

@TSKushal
Contributor Author

@mraineri any updates on this?

@@ -912,7 +912,26 @@ def get_volume_inventory(self, systems_uri):
                else:
                    sc_id = sc[0].get('Id', '1')
                    controller_name = 'Controller %s' % sc_id
            if 'Controllers' in data:
Contributor

@mraineri mraineri Jun 26, 2023

I would make this an elif; if StorageControllers was found, there's no need to step into Controllers. Both will represent the same storage controllers in the storage subsystem. It's possible for a service to implement both in order to support clients that expect the older method (StorageControllers) and to support clients that expect the newer method (Controllers).
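A sketch of the suggested elif ordering (fetch_resource is a hypothetical stand-in for the module's get_request, and the Controllers handling is an assumption about how the dedicated resources would be read, not the merged code):

```python
def get_controller_name(storage_data, fetch_resource, default="Controller 1"):
    """Prefer the older StorageControllers array; only if it is absent,
    step into the newer Controllers resource collection (an elif, so
    both are never consulted for the same storage subsystem)."""
    if "StorageControllers" in storage_data:
        sc = storage_data["StorageControllers"]
        if sc and "Name" in sc[0]:
            return sc[0]["Name"]
        if sc:
            return "Controller %s" % sc[0].get("Id", "1")
    elif "Controllers" in storage_data:
        # Controllers links to a resource collection; each member is a
        # dedicated StorageController resource carrying its own Name.
        collection = fetch_resource(storage_data["Controllers"]["@odata.id"])
        members = collection.get("Members", [])
        if members:
            member = fetch_resource(members[0]["@odata.id"])
            return member.get("Name", default)
    return default
```

With a stubbed fetch_resource this resolves the DE00B000-style case, where only a Controllers collection is present, while still honoring StorageControllers when both exist.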

Contributor Author

I see your point. Changing as suggested.

@mraineri
Contributor

shipit

@felixfontein felixfontein merged commit 22efbcc into ansible-collections:main Jun 26, 2023
@patchback

patchback bot commented Jun 26, 2023

Backport to stable-6: 💚 backport PR created

✅ Backport PR branch: patchback/backports/stable-6/22efbcc627764f29ce1e75f634dacab8566d2299/pr-6719

Backported as #6791


@felixfontein felixfontein removed the check-before-release PR will be looked at again shortly before release and merged if possible. label Jun 26, 2023
patchback bot pushed a commit that referenced this pull request Jun 26, 2023
* Fixing bug in get_volume_inventory

* Adding changelog fragment

* sanity fix

* Update changelogs/fragments/6719-redfish-utils-fix-for-get-volume-inventory.yml

Agreed

Co-authored-by: Felix Fontein <felix@fontein.de>

* Updating changelog fragment

* Update changelogs/fragments/6719-redfish-utils-fix-for-get-volume-inventory.yml

Agreed

Co-authored-by: Felix Fontein <felix@fontein.de>

* Updating changes as per PR comments

* PR comment changes

---------

Co-authored-by: Kushal <t-s.kushal@hpe.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
(cherry picked from commit 22efbcc)
@patchback

patchback bot commented Jun 26, 2023

Backport to stable-7: 💚 backport PR created

✅ Backport PR branch: patchback/backports/stable-7/22efbcc627764f29ce1e75f634dacab8566d2299/pr-6719

Backported as #6792


patchback bot pushed a commit that referenced this pull request Jun 26, 2023
(same commit message as the stable-6 cherry-pick above; cherry picked from commit 22efbcc)
@felixfontein
Collaborator

@TSKushal thanks for your contribution!
@mraineri thanks for reviewing!

felixfontein pushed a commit that referenced this pull request Jun 27, 2023
…tory (#6792)

Fixing bug in get_volume_inventory (#6719)

(same commit message as above; cherry picked from commit 22efbcc)

Co-authored-by: TSKushal <44438079+TSKushal@users.noreply.github.com>
felixfontein pushed a commit that referenced this pull request Jun 27, 2023
…tory (#6791)

Fixing bug in get_volume_inventory (#6719)

(same commit message as above; cherry picked from commit 22efbcc)

Co-authored-by: TSKushal <44438079+TSKushal@users.noreply.github.com>
@TSKushal TSKushal deleted the update_get_volume_inventory branch June 29, 2023 16:05