[Conformance Test] Fix cache test case failure for auto plugin #23473

Merged
@@ -242,22 +242,34 @@ void CompileModelCacheTestBase::run() {
GTEST_FAIL() << "Can't compile network without cache for " << m_functionName << " with precision " << m_precision.get_type_name() << std::endl;
}
auto originalOutputs = get_plugin_outputs();

size_t blobCountInitial = -1;
size_t blobCountAfterwards = -1;
for (int i = 0; i < 2; i++) {
// Step 2: Load with cache. Export or import shall not throw
compiledModel = {}; // Destroy network object
inferRequest = {};
{
core->set_property(ov::cache_dir(m_cacheFolderName));
ASSERT_NO_THROW(compiledModel = core->compile_model(function, targetDevice, configuration));
if (targetDevice.find("AUTO") == std::string::npos)
// Apply check only for HW plugins
ASSERT_EQ(i != 0, compiledModel.get_property(ov::loaded_from_cache));
ASSERT_EQ(i != 0, compiledModel.get_property(ov::loaded_from_cache));
while (targetDevice.find("AUTO") != std::string::npos) {
auto exeDevices = compiledModel.get_property(ov::execution_devices);
if (exeDevices.size() == 1 && exeDevices.front().find("(CPU)") != std::string::npos)
Contributor:
Please remove the hardcoded check -> exeDevices.front().find("(CPU)")

songbell (Contributor), Mar 18, 2024:

@yangwang201911 there seems to be a problem here: if GPU compiles faster than the CPU helper, the logic below will fail.
Let's put it this way. For AUTO, first iteration: at the very end, after comparing results, destroy the compiled model so AUTO can finish its async compilation, then record the number of cache blobs.
Second iteration: compile and compare the results, destroy the compiled model, then compare the cache blob count against the previously recorded one.

Contributor (Author):
Updated.

Contributor (Author):
@songbell Updated.

continue;
break;
}
generate_inputs(targetStaticShapes.front());
ASSERT_NO_THROW(infer());
}
// cache is created and reused
ASSERT_EQ(ov::test::utils::listFilesWithExt(m_cacheFolderName, "blob").size(), 1);
if (i == 0) {
// blob size should be greater than 0 initially
blobCountInitial = ov::test::utils::listFilesWithExt(m_cacheFolderName, "blob").size();
ASSERT_GT(blobCountInitial, 0);
} else {
// cache is created and reused
blobCountAfterwards = ov::test::utils::listFilesWithExt(m_cacheFolderName, "blob").size();
ASSERT_EQ(blobCountInitial, blobCountAfterwards);
}
compare(originalOutputs, get_plugin_outputs());
}
if ((targetDevice.find("GPU") != std::string::npos)) {