Await utility in unit testing not usable for the store #7

Open
bilsou opened this issue Nov 12, 2015 · 18 comments

@bilsou

bilsou commented Nov 12, 2015

Hi there,
First of all, amazing job on this library. I've been playing around with it, and it has taught me so many things about WRL/WinRT and the whole video pipeline. In your unit tests you use a custom Await method which does exactly what is so hard to do in WinRT and C++/CX: awaiting on the UI thread.
The app I'm working on is pure C++ and C++/CX, and I tried to use this Await outside of the unit testing environment; it works really well. Unfortunately, it seems that CoWaitForMultipleHandles is completely forbidden for an app released on the store, as it doesn't get past certification. I know why they did it: CoWaitForMultipleHandles and its message pumping can be quite dangerous, because it allows reentrancy. Nevertheless, it's a shame we can't use things like PeekMessage and some other nice COM/Win32 functions.

Anyway, I've modified the way I was doing things, like this:

        MediaReaderReadResult^ readResult = nullptr;

        // Event used to block the calling thread until the async chain completes
        std::shared_ptr<Concurrency::event> completed = std::make_shared<Concurrency::event>();

        auto workItem = ref new Windows::System::Threading::WorkItemHandler(
            [completed, &readResult, this](Windows::Foundation::IAsyncAction^ workItem)
        {
            Concurrency::task<MediaReaderReadResult^> getReaderTask(m_reader->VideoStream->ReadAsync());

            getReaderTask.then([completed, &readResult, this](MediaReaderReadResult^ result)
            {
                readResult = result;

                completed->set();
            });
        });

        auto asyncAction = Windows::System::Threading::ThreadPool::RunAsync(workItem);

        // The wait below keeps readResult alive, so capturing it by reference is safe here
        completed->wait();

and it works well, but unfortunately when I use the same method on, let's say, InitializeAsync or ClearEffectsAsync, it won't work, as both need to be dispatched on the UI thread.
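For reference, the same block-until-signaled shape can be written in portable C++. This is a minimal sketch with std::promise/std::future standing in for Concurrency::event (no WinRT types; `read_blocking` and the returned value are illustrative only):

```cpp
#include <future>
#include <string>
#include <thread>

// Minimal sketch of the pattern above: run the "async" work on another
// thread and block the caller until a result is delivered. std::promise
// plays the role of completed->set(); std::future::get() plays the role
// of completed->wait().
std::string read_blocking()
{
    std::promise<std::string> completed;
    std::future<std::string> result = completed.get_future();

    std::thread worker([&completed]()
    {
        // Stand-in for the ReadAsync() continuation delivering a frame
        completed.set_value("frame");
    });

    std::string value = result.get(); // blocks like completed->wait()
    worker.join();
    return value;
}
```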

Do you know of anything similar that would allow waiting on an async call on the UI thread?

Thanks a lot and keep up the good work !
B.

@mmaitre314
Owner

Thanks! I am not aware of a good way to block on the UI thread (barring a while(true){...} loop). Is there a reason you would prefer blocking over using .then() async chains? For InitializeAsync() I believe only the first call needs to be on the UI thread (to potentially display a consent UI). One pattern to handle that is to create a dummy MediaCapture early on to handle consent on the UI thread, and then create the real MediaCapture afterward on some other thread(s). I don't believe ClearEffectsAsync() requires being called on the UI thread, but I haven't tried.
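A rough sketch of that consent-priming pattern in C++/CX (untested; `dummyCapture` and `realCapture` are illustrative names, and error handling is omitted):

```cpp
// Untested sketch of the pattern described above (C++/CX).
// Initialize a throwaway MediaCapture on the UI thread so the camera
// consent prompt can be displayed; once that completes, the real
// MediaCapture can be initialized away from the UI thread.
auto dummyCapture = ref new Windows::Media::Capture::MediaCapture();

Concurrency::create_task(dummyCapture->InitializeAsync())
    .then([]()
{
    // Consent has been handled; the real MediaCapture can now be
    // created and initialized (on a worker thread if desired).
    auto realCapture = ref new Windows::Media::Capture::MediaCapture();
    return Concurrency::create_task(realCapture->InitializeAsync());
});
```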

@bilsou
Author

bilsou commented Nov 14, 2015

You're welcome! Both your VideoEffects and MediaReader are so valuable; I learned most of the video pipeline from them.
Your idea of using a dummy camera for the first initialisation worked like a charm! I can now use my workaround with the WorkItemHandler with InitializeAsync!
As for why I can't use .then(): the C++ code is shared with iOS and Android, and I can't change this common code to handle my async calls, as the camera lifecycle is managed from there. On both of those platforms you can wait/sleep on the main thread to make it synchronous, but not on WinRT...
It seems they've added the await keyword to C++/CX in Visual Studio 2015, but again I can't use that, as I am still targeting Windows 8.1 (phone/desktop).
Last questions :) :
When using your VideoEffects on Windows Phone, I created a custom blur effect using Win2D, and it's really slow on some devices like the Nokia 530, 630, or even the 1020. One thing to keep in mind: I do have some processing of the video stream going on, which costs a lot of performance, but still, when I got rid of the blur effect the difference was really big. I think it's down to poor camera hardware not being able to handle the effect on the video stream, as there is no notable issue with devices like, say, the mid-range Lumia 640 or even the HTC 8X.
Is it possible to do some scaling using an IMFTransform, the way you do with your LumiaAnalyzer effect or with the ImageProcessor you have in MediaReader?

Sorry for those questions and thanks a lot for your help !
B.

@mmaitre314
Owner

There are a couple of ways to resize:

The perf issue you hit is likely due to color conversion. Several phones have hardware acceleration for Nv12 (camera format) -> Bgra8 (screen format), but not for Bgra8 -> Nv12. That triggers a copy to the CPU to do the conversion, which is really slow. Phones often do have Bgrx8 -> Nv12 conversion, so one workaround is to add a DirectX pixel shader doing Bgra8 -> Bgrx8 (basically dropping the opacity channel). Perf-wise the best option is to stay in Nv12 color format all the time, but that rules out Win2D. Plus, older phones like the HTC 8X do not support Nv12 pixel shaders.

@bilsou
Author

bilsou commented Nov 16, 2015

Sorry for the late reply, and thanks for answering so quickly!
My camera resolution is fixed at 640×480, and I always use this one on both phone and desktop. I do that because I render everything to a texture, which is then rendered to a swap chain/SwapChainPanel. This gives me more control over the rendering, like cropping, or drawing the camera feed in portrait or landscape without calling SetRotation on the MediaCapture itself. I found that quite tricky to do when using a custom media sink.
Anyway, that being said, I think the ImageProcessor would suit me best here. I will also try the DirectX pixel shader conversion if the ImageProcessor is not enough or does not work.
As for the blur itself, I could do without Win2D, because I'm drawing everything manually, but Win2D has so many nice features besides effects and is quite optimised.

I'll keep you posted, thanks again !

@bilsou
Author

bilsou commented Dec 9, 2015

I'm back after spending some time digging into other issues I had to fix before looking at my video effect performance problem. I only had time to try the resizing solution via the VideoDeviceController, but as I suspected, the quality is really poor. I'd really like to keep a decent resolution, no less than 640×480.
I haven't tried resizing via the ImageProcessor, nor the Bgra8 -> Bgrx8 conversion, as I wasn't sure how and where to put them in the CanvasEffect pipeline.
Could you give some hints on where to look, so I don't mess up the whole pre-existing pipeline?

Thanks !

@bilsou
Author

bilsou commented Mar 9, 2016

It's been a while, but after a lot of attempts and workarounds I decided to just apply a post-process effect on the texture I was rendering the frames to. Performance is better, so you can close the original issue.

@bilsou
Author

bilsou commented Mar 9, 2016

Almost forgot, before you close the thread: is there any reason why I would get a black texture from this bit of code in my project, when it works fine in the sample?

ComPtr<IMFMediaBuffer> buffer;
CHK(sample->GetSample()->GetBufferByIndex(0, &buffer));

// Get the MF DXGI buffer, make a copy if the buffer is not yet a DXGI one
ComPtr<IMFDXGIBuffer> bufferDxgi;
if (FAILED(buffer.As(&bufferDxgi)))
{
    auto buffer2D = As<IMF2DBuffer2>(buffer);

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = m_width;
    desc.Height = m_height;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    desc.BindFlags = 0;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX | D3D11_RESOURCE_MISC_SHARED_NTHANDLE;
    desc.ArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;

    ComPtr<ID3D11Texture2D> texture;
    CHK(As<ID3D11Device>(renderer->d3dDevice)->CreateTexture2D(&desc, nullptr, &texture));

    ComPtr<IMFMediaBuffer> bufferCopy;
    CHK(MFCreateDXGISurfaceBuffer(__uuidof(texture), texture.Get(), 0, /*fBottomUpWhenLinear*/false, &bufferCopy));
    CHK(buffer2D->Copy2DTo(As<IMF2DBuffer2>(bufferCopy).Get()));

    bufferDxgi = As<IMFDXGIBuffer>(bufferCopy);
}

ComPtr<ID3D11Texture2D> texture;
unsigned int subresource;
CHK(bufferDxgi->GetResource(IID_PPV_ARGS(&texture)));
CHK(bufferDxgi->GetSubresourceIndex(&subresource));

ComPtr<ID3D11Texture2D> textureDst;
CHK(renderer->swapChain->GetBuffer(0, IID_PPV_ARGS(&textureDst)));

ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
texture->GetDevice(&device);
device->GetImmediateContext(&context);

context->CopySubresourceRegion(textureDst.Get(), 0, 0, 0, 0, texture.Get(), subresource, nullptr);

If I comment out the

if (FAILED(buffer.As(&bufferDxgi)))

check and always force the MFCreateDXGISurfaceBuffer copy, then it works, but it's damn slow...

Any idea would be much appreciated!

@mmaitre314
Owner

Re: it's damn slow - you bet it is :-) It is most likely copying the whole frame from GPU memory to CPU memory and back to GPU memory.

Re: black texture - DX found something it did not like between the source and destination textures and refused to do the copy. It's hard to tell exactly what from the code. The best tool in that case is the DX Debug Layer (in VS: Debug > Graphics > DirectX Control panel). DX will print what went wrong in the Output window when a debugger is attached. Creating the DX device with D3D11_CREATE_DEVICE_DEBUG might also give the same info (not sure).
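As an untested, Windows-only sketch, enabling the debug layer at device creation looks roughly like this (D3D11_CREATE_DEVICE_DEBUG requires the SDK debug layers to be installed):

```cpp
// Untested sketch: create the D3D11 device with the debug layer enabled,
// so that refused copies and mismatched resources are explained in the
// Output window while a debugger is attached.
UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
HRESULT hr = D3D11CreateDevice(
    nullptr,                   // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,
    flags,
    nullptr, 0,                // default feature levels
    D3D11_SDK_VERSION,
    &device,
    nullptr,
    &context);
```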

@bilsou
Author

bilsou commented Mar 14, 2016

Yep, found the issue: I was manually creating my D3D device and, from there, my swap chain and all its DirectX resources. I can't do that if I want to query frames directly as Texture2Ds from the GPU.
To access the texture from the camera, it seems I need to retrieve the D3D device from the camera itself:

Microsoft::WRL::ComPtr<IAdvancedMediaCapture> spAdvancedMediaCapture;
Microsoft::WRL::ComPtr<IAdvancedMediaCaptureSettings> spAdvancedMediaCaptureSettings;
Microsoft::WRL::ComPtr<IMFDXGIDeviceManager> spDXGIDeviceManager;

// Assume mediaCapture is a Windows::Media::Capture::MediaCapture and initialized
if (SUCCEEDED(((IUnknown*)(mediaCapture))->QueryInterface(IID_PPV_ARGS(&spAdvancedMediaCapture))))
{
    if (SUCCEEDED(spAdvancedMediaCapture->GetAdvancedMediaCaptureSettings(&spAdvancedMediaCaptureSettings)))
    {
        spAdvancedMediaCaptureSettings->GetDirectxDeviceManager(&spDXGIDeviceManager);
    }
}

I will have to do a lot of reworking as my camera was created way after my directx resources and the swapchain.

@mmaitre314
Owner

You should be able to copy the DX texture between the two DX devices using resource sharing:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476531(v=vs.85).aspx
Basically: create a shareable texture with your display DX device, get a handle, open that texture in the camera DX device, and copy the DX texture there. The rest of your app should remain unchanged. There are some tricks around synchronization that I forget, but I should be able to pull a code sample from somewhere if you're interested.
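That sequence can be sketched as follows (untested; `displayDevice`, `cameraDevice`, `cameraContext`, `cameraFrameTex`, and `cameraSubresource` are placeholder names, and error handling is omitted):

```cpp
// 1. Create a shareable texture on the display DX device
D3D11_TEXTURE2D_DESC desc = {};
// ... fill Width/Height/Format/MipLevels/etc. to match the camera frame ...
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // keyed-mutex sharing is a variant of this
Microsoft::WRL::ComPtr<ID3D11Texture2D> sharedTex;
displayDevice->CreateTexture2D(&desc, nullptr, &sharedTex);

// 2. Get a shared handle for it
Microsoft::WRL::ComPtr<IDXGIResource> dxgiRes;
sharedTex.As(&dxgiRes);
HANDLE handle = nullptr;
dxgiRes->GetSharedHandle(&handle);

// 3. Open the same texture on the camera DX device
Microsoft::WRL::ComPtr<ID3D11Texture2D> sharedOnCamera;
cameraDevice->OpenSharedResource(handle, IID_PPV_ARGS(&sharedOnCamera));

// 4. Copy the camera frame into it on the *camera* device, so source and
//    destination belong to the same device; the display device can then
//    sample or copy from sharedTex as usual.
cameraContext->CopySubresourceRegion(sharedOnCamera.Get(), 0, 0, 0, 0,
                                     cameraFrameTex, cameraSubresource, nullptr);
```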

@bilsou
Author

bilsou commented Mar 15, 2016

void present(CONTEXT* renderer, MediaSample2D^ sample)
{
CHKNULL(sample);

// Create a shareable texture with the display DX device
D3D11_TEXTURE2D_DESC td;
renderer->renderTarget->GetDesc(&td);

ComPtr<ID3D11Texture2D> texture;
CHK(renderer->d3dDevice->CreateTexture2D(&td, nullptr, &texture));
renderer->d3dContext->CopyResource(texture.Get(), renderer->renderTarget.Get());

// Get a DXGI resource from the back buffer
ComPtr<IDXGIResource> dxgiResource;
texture.As(&dxgiResource);

// Get a handle
HANDLE sharedHandle = nullptr;
dxgiResource->GetSharedHandle(&sharedHandle);

// open that texture in the camera DX device
m_cameraDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (LPVOID*)&texture);

ComPtr<IMFMediaBuffer> buffer;
CHK(sample->GetSample()->GetBufferByIndex(0, &buffer));

// Get the MF DXGI buffer
ComPtr<IMFDXGIBuffer> dxgiBuffer;
HRESULT hr = buffer.As(&dxgiBuffer);

// Get the texture from dxgi buffer
unsigned int subresource;
ComPtr<ID3D11Texture2D> cameraTexture;
dxgiBuffer->GetResource(IID_PPV_ARGS(&cameraTexture));
CHK(dxgiBuffer->GetSubresourceIndex(&subresource));

if (cameraTexture)
{
    renderer->d3dContext->CopySubresourceRegion(texture.Get(), 0, 0, 0, 0, cameraTexture.Get(), subresource, nullptr);
}

}

I tried that in my camera's present; it seems I'm doing something wrong, as I get a

D3D11 CORRUPTION: ID3D11DeviceContext::CopySubresourceRegion: First parameter is corrupt or NULL [ MISCELLANEOUS CORRUPTION #13: CORRUPTED_PARAMETER1]

for the last line.

My renderer->renderTarget is created with

CD3D11_TEXTURE2D_DESC renderTargetDesc(
        DXGI_FORMAT_B8G8R8A8_UNORM,
        static_cast<UINT>(renderer->renderTargetSize.Width),
        static_cast<UINT>(renderer->renderTargetSize.Height),
        1,
        1,
        D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE);
    renderTargetDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX | D3D11_RESOURCE_MISC_SHARED_NTHANDLE;

so it has the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag. Am I missing something?

@bilsou
Author

bilsou commented Mar 15, 2016

Forgot to add the safety bit with the mutex :

    // open that texture in the camera DX device
    m_cameraDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (LPVOID*)&texture);

    ComPtr<IDXGIKeyedMutex> keyedMutex;
    texture.As(&keyedMutex);
    UINT acqKey = 0;
    UINT relKey = 1;
    DWORD timeout = 16;

    DWORD res = keyedMutex->AcquireSync(acqKey, timeout);

    if (res == WAIT_OBJECT_0 && texture)
    {
        ComPtr<ID3D11Device> device;
        ComPtr<ID3D11DeviceContext> context;
        texture->GetDevice(&device);
        device->GetImmediateContext(&context);

        context->CopyResource(texture.Get(), cameraTexture.Get());
        context->CopySubresourceRegion(renderer->renderTarget.Get(), 0, 0, 0, 0, texture.Get(), subresource, nullptr);
    }

    keyedMutex->ReleaseSync(relKey);

This bit does not crash, but I only get black frames.
It really seems that as soon as I try to use the DeviceContext from my original device, and not the camera one, texture.Get() is somehow corrupted.

@bilsou
Author

bilsou commented Mar 15, 2016

Coming back after trying for hours; I still don't know why my shared resource gets corrupted...

Here is my final code:

void present(CONTEXT* renderer, MediaSample2D^ sample)
{
CHKNULL(sample);

if (sample->Format != MediaSample2DFormat::Bgra8)
{
    throw ref new InvalidArgumentException(L"Only Bgra8 supported");
}

// Create a shareable texture with the display DX device
D3D11_TEXTURE2D_DESC desc = { };
desc.Width = m_width;
desc.Height = m_height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ComPtr<ID3D11Texture2D> texture;
CHK(renderer->d3dDevice->CreateTexture2D(&desc, nullptr, &texture));

// Get a DXGI resource from the back buffer
ComPtr<IDXGIResource> dxgiResource;
HRESULT hr = texture.As(&dxgiResource);
CHK(hr);

// Get a handle
HANDLE sharedHandle;
hr = dxgiResource->GetSharedHandle(&sharedHandle);
CHK(hr);

ComPtr<IMFMediaBuffer> buffer;
hr = sample->GetSample()->GetBufferByIndex(0, &buffer);
CHK(hr);

// Get the MF DXGI buffer
ComPtr<IMFDXGIBuffer> dxgiBuffer;
hr = buffer.As(&dxgiBuffer);
CHK(hr);

// open the shared texture
hr = m_cameraDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (void**)(&texture));
CHK(hr);

// Get the mutex
IDXGIKeyedMutex* keyedMutex;
hr = texture.Get()->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex);
CHK(hr);
UINT acqKey = 0;
UINT relKey = 1;
DWORD timeout = 5000;

DWORD res = keyedMutex->AcquireSync(acqKey, INFINITE);

if (res == WAIT_OBJECT_0 && texture)
{
    // Get the texture from dxgi buffer
    unsigned int subresource;
    hr = dxgiBuffer->GetResource(IID_PPV_ARGS(&texture));
    CHK(hr);
    hr = dxgiBuffer->GetSubresourceIndex(&subresource);
    CHK(hr);

    renderer->d3dContext->CopySubresourceRegion(renderer->renderTarget.Get(), 0, 0, 0, 0, texture.Get(), subresource, nullptr);
}

CHK(hr = keyedMutex->ReleaseSync(relKey));
keyedMutex->Release();  

}

Any help would be really appreciated; I've been on this issue for days now...

@mmaitre314
Owner

I found some code sample here (toward the bottom of the file): https://github.com/mmaitre314/MediaCaptureWPF/blob/5e46dbc0700e06425363fd0db2bbd01fc59ae34e/MediaCaptureWPF.Native/CapturePreviewNative.cpp
It uses DX texture sharing to go from a DX11 device (MediaCapture) to a DX9 device (WPF).

I did not fully read your code, but it looks like 'texture' is on the camera DX device while CopySubresourceRegion() is called on the renderer DX device; CopySubresourceRegion() should be called with two textures on the same DX device.

@bilsou
Author

bilsou commented Mar 16, 2016

Thanks for your sample. I had indeed understood that the issue was calling CopySubresourceRegion on textures created from two different devices, but that's why I thought OpenSharedResource came into play here. It seems I didn't get that part right.
Your code is pretty much what I already have; I just can't figure out how to draw the final outputD3dResource to my original device/context, and not the one coming from the camera.

I literally can't do that

renderer->d3dContext->CopyResource(renderer->renderTarget.Get(), outputD3dResource.Get());

That's the only bit missing ..

@bilsou
Author

bilsou commented Mar 17, 2016

Sorry, I might not have been clear in my previous comment (shame on me): I understood that everything is basically shared through the handle obtained from GetSharedHandle, and that is how the original back buffer gets filled. I just can't figure out why mine is always empty:

ComPtr<IMFDXGIBuffer> bufferDxgi;
ComPtr<IMFMediaBuffer> buffer;
CHK(sample->GetSample()->GetBufferByIndex(0, &buffer));
CHK(buffer.As(&bufferDxgi));

ComPtr<IDXGIResource> inputResource;
unsigned int inputSubresource;
CHK(bufferDxgi->GetResource(IID_PPV_ARGS(&inputResource)));
CHK(bufferDxgi->GetSubresourceIndex(&inputSubresource));

ComPtr<ID3D11Texture2D> inputD3dTexture;
ComPtr<ID3D11Resource> inputD3dResource;
CHK(inputResource.As(&inputD3dTexture));
CHK(inputResource.As(&inputD3dResource));

ComPtr<ID3D11Device> inputD3dDevice;
inputD3dTexture->GetDevice(&inputD3dDevice);

// create texture on original device
D3D11_TEXTURE2D_DESC desc;
renderer->renderTarget->GetDesc(&desc);
if(m_tex == nullptr)
{
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
    renderer->d3dDevice->CreateTexture2D(&desc, nullptr, &m_tex);
}

// get dxgi resource
ComPtr<IDXGIResource> dxgiResource;
CHK(m_tex.As(&dxgiResource));
//get mutex
ComPtr<IDXGIKeyedMutex> keyedMutex;
CHK(m_tex.As(&keyedMutex));
//get shared handle
HANDLE shareHandle;
keyedMutex->AcquireSync(0, INFINITE);
{
    CHK(dxgiResource->GetSharedHandle(&shareHandle));
}
keyedMutex->ReleaseSync(1);

// open shared resource
ComPtr<ID3D11Texture2D> sharedD3dResource;
CHK(inputD3dDevice->OpenSharedResource(shareHandle, IID_PPV_ARGS(&sharedD3dResource)));

// get input context
ComPtr<ID3D11DeviceContext> inputD3dContext;
inputD3dDevice->GetImmediateContext(&inputD3dContext);

// Get shared mutex
ComPtr<IDXGIKeyedMutex> sharedMutex;
CHK(sharedD3dResource.As(&sharedMutex));

sharedMutex->AcquireSync(1, INFINITE);
{
    // copy input resource to shared resource
    inputD3dContext->CopySubresourceRegion(sharedD3dResource.Get(), 0, 0, 0, 0, inputD3dResource.Get(), inputSubresource, nullptr);

    //renderer->d3dContext->CopySubresourceRegion(renderer->renderTarget.Get(), 0, 0, 0, 0, m_tex.Get(), 0, nullptr);

    static int i = 0;
    if (i < 50)
    {
        auto folder = Windows::Storage::ApplicationData::Current->LocalFolder;

        WCHAR fname[_MAX_PATH];
        wcscpy_s(fname, folder->Path->Data());

        auto wstr = std::wstring(fname).append(L"\\z_sharedTex").append(std::to_wstring(i++)).append(L".jpg");
        //DirectX::SaveWICTextureToFile(inputD3dContext.Get(), sharedD3dResource.Get(), GUID_ContainerFormatJpeg, wstr.c_str());
        DirectX::SaveWICTextureToFile(renderer->d3dContext.Get(), m_tex.Get(), GUID_ContainerFormatJpeg, wstr.c_str());
    }
}
CHK(sharedMutex->ReleaseSync(0));

The first, commented-out SaveWICTextureToFile call works perfectly; the second one is always black.
I checked the different descs in case something was corrupted, and I enabled all the D3D debugging to check for any corruption: nothing.

@bilsou
Author

bilsou commented Mar 18, 2016

It was all about the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag. If it is used on both the render target and the shared texture created from it, it just doesn't work and gives black textures. Unfortunately, the AcquireSync and ReleaseSync calls on the keyed mutex don't change a thing.
On the other hand, using only D3D11_RESOURCE_MISC_SHARED for both presents the right textures on the render target. The only issue is that every few frames there is some lag slowing down the rendering, probably because obtaining the sample and textures from the DXGI buffer takes longer than the main rendering loop, which runs at a higher frame rate.

Almost forgot: thanks again for all the help you were able to give!

@mmaitre314
Owner

That goes beyond my knowledge of DX... Glad it's working now.
