MediaPortal's EVR Scaling Algorithm

Scythe42

Retired Team Member
  • Premium Supporter
  • June 20, 2009
    2,065
    2,703
    50
    Berlin
    Home Country
    Germany
    It is done by the GPU. We just call a standard fixed-pipeline function. Every DirectX 9 compatible GPU has to implement that algorithm when we set up the D3D device as a hardware or pure device.
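
    For illustration, a minimal sketch (assuming a caller-supplied IDirect3D9 pointer, window handle, and present parameters; not MediaPortal's actual code) of the standard "hardware or pure" device setup being referred to:

```cpp
#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

// Hypothetical illustration of a "hardware or pure" D3D9 device, the setup
// under which the GPU's fixed-function pipeline does the scaling.
HRESULT CreateVideoDevice(IDirect3D9* d3d, HWND window,
                          D3DPRESENT_PARAMETERS* pp,
                          IDirect3DDevice9** device)
{
    // D3DCREATE_PUREDEVICE requires hardware vertex processing; with these
    // flags the driver/GPU handles the fixed pipeline entirely.
    const DWORD behaviour = D3DCREATE_HARDWARE_VERTEXPROCESSING |
                            D3DCREATE_PUREDEVICE;

    return d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                             window, behaviour, pp, device);
}
```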

    DXVA2 has nothing to do with it. It comes into play when decoding individual video frames from a stream. The result, which sits in GPU memory, still needs to be rendered. So all this "video acceleration" stuff happens one stage before actual rendering in the graphics pipeline.

    The fixed-function pipeline only offers filtering up to bilinear: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172357(v=vs.85).aspx
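
    Purely as a sketch (assumed device pointer, not MediaPortal code), "up to bilinear" means the most you can request from the fixed-pipeline sampler is D3DTEXF_LINEAR:

```cpp
#include <d3d9.h>

// Sketch: ask the fixed-function pipeline for its best filtering when the
// video texture is stretched to the target size. Anything beyond linear
// (bilinear) filtering is simply not available on this path.
void UseBestFixedPipelineFiltering(IDirect3DDevice9* device)
{
    device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
}
```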

    If you want more, you either have to decode the video on the CPU, so you can access the decoded frame and scale it with a different algorithm, or you have to use pixel shaders. The latter instruct the GPU how to process data it already has in its memory.

    Pixel shaders will be supported when we switch to DirectX 11, probably on a D3D 10 feature level, which is easier to code for. D3D9 is just ugly here. Note that the fixed-function pipeline has been dropped starting with D3D 10, I think; everything is designed to run through shaders. We will provide the usual suspects for selection.
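
    As a rough, hypothetical sketch of what "DirectX 11 on a D3D 10 feature level" would mean in practice (not the planned MediaPortal code): the D3D11 API is used, only shader-model-4 class hardware is required, and any scaling beyond the sampler's built-in linear filtering has to be implemented in the pixel shaders you bind.

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Illustrative only: create a D3D11 device restricted to feature level 10.0.
// There is no fixed-function pipeline here; scaling quality is whatever the
// bound pixel shaders implement.
HRESULT CreateDevice10Level(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_10_0 };
    D3D_FEATURE_LEVEL got;

    return D3D11CreateDevice(nullptr,                   // default adapter
                             D3D_DRIVER_TYPE_HARDWARE,
                             nullptr, 0,                // no software rasterizer, no flags
                             wanted, 1,
                             D3D11_SDK_VERSION,
                             device, &got, context);
}
```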

    Regarding their implementation: there is not much tweaking/tuning possible here with regard to quality. Sure, the algorithms have parameters, but this is nothing new. Very good combinations of those parameters are known, and all of this is standard and well documented when it comes to video (the photography/print market is more complex here). What does make a difference is how well your shader is written with regard to execution speed.

    Will provide a couple of combinations that make sense.

    When? When it's ready.
     

    tourettes

    Retired Team Member
  • Premium Supporter
  • January 7, 2005
    17,301
    4,800
    If you want more, you either have to decode the video on the CPU, so you can access the decoded frame and scale it with a different algorithm, or you have to use pixel shaders. The latter instruct the GPU how to process data it already has in its memory.

    Actually, there are a few video decoders that do the decoding on the GPU but still allow the image to be manipulated by the CPU (LAV CUDA and LAV DXVA2 copy-back, for example). Of course this stresses the system more than plain DXVA2, as the textures travel between GPU and CPU.
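
    A minimal, hypothetical sketch of the general read-back pattern behind such copy-back modes (plain D3D9 usage, not LAV's actual implementation; it assumes the decoded frame sits in a render-target surface and the destination is an offscreen-plain surface in D3DPOOL_SYSTEMMEM):

```cpp
#include <d3d9.h>

// Illustrative copy-back: pull a decoded frame from GPU memory into a
// system-memory surface so the CPU can read or filter the pixels. This
// GPU -> CPU transfer is what makes copy-back heavier than native DXVA2.
HRESULT CopyFrameToSystemMemory(IDirect3DDevice9* device,
                                IDirect3DSurface9* gpuFrame,   // render target
                                IDirect3DSurface9* cpuFrame)   // D3DPOOL_SYSTEMMEM
{
    HRESULT hr = device->GetRenderTargetData(gpuFrame, cpuFrame);
    if (FAILED(hr))
        return hr;

    D3DLOCKED_RECT locked;
    hr = cpuFrame->LockRect(&locked, nullptr, D3DLOCK_READONLY);
    if (SUCCEEDED(hr))
    {
        // ... CPU-side access via locked.pBits / locked.Pitch ...
        cpuFrame->UnlockRect();
    }
    return hr;
}
```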
     

    DragonQ

    Portal Pro
    August 30, 2011
    644
    79
    Home Country
    United Kingdom
    Thanks for the in-depth reply, Scythe42. Do you think it is worth spending time moving to a DirectX 11 implementation of EVR when MadVR already exists? Would it not be a more efficient use of developers' time to add MadVR compatibility instead? After all, it can do higher-quality scaling than EVR even with DXVA2 decoding.

    In the meantime, I think I found a combination of LAV and ffdshow post-processing that looks much better than just using EVR but keeps hardware deinterlacing and colour intact. It does use more CPU and more power though, and I haven't tested whether my HTPC can cope with it yet!
     

    The_Stig

    Retired Team Member
  • Premium Supporter
  • April 5, 2005
    2,176
    428
    Thanks for the in-depth reply, Scythe42. Do you think it is worth spending time moving to a DirectX 11 implementation of EVR when MadVR already exists? Would it not be a more efficient use of developers' time to add MadVR compatibility instead? After all, it can do higher-quality scaling than EVR even with DXVA2 decoding.

    In the meantime, I think I found a combination of LAV and ffdshow post-processing that looks much better than just using EVR but keeps hardware deinterlacing and colour intact. It does use more CPU and more power though, and I haven't tested whether my HTPC can cope with it yet!
    A bit off topic, but could you give me your ffdshow post-processing settings? I am not satisfied with the quality either (but I guess this is due to my onboard Intel HD 4000, whose drivers don't seem to be very good regarding quality in dark scenes). Maybe it gets better.
     

    DragonQ

    Portal Pro
    August 30, 2011
    644
    79
    Home Country
    United Kingdom
    A bit off topic, but could you give me your ffdshow post-processing settings? I am not satisfied with the quality either (but I guess this is due to my onboard Intel HD 4000, whose drivers don't seem to be very good regarding quality in dark scenes). Maybe it gets better.
    I'm using just "Resize & Aspect". Settings are attached. Obviously you need to set the output levels to whatever looks right on your display. I am unsure if ticking "Resize always" is a good idea - the benefit is that you get better chroma upscaling for 1080i/p content (I assume) but I don't know if it introduces any other issues.

    The main problem is that the only way I can get hardware deinterlacing when using ffdshow post-processing is to do the deinterlacing before ffdshow gets its hands on the image (which is the correct way to do it anyway - deinterlacing should always be done before resizing). Unfortunately, the only way I know of doing this is using CUVID, which only works on nVidia GPUs.
     

    Attachments

    • Properties.png (22.9 KB)
    • Properties (2).png (21.4 KB)
    • Properties (3).png (17.9 KB)
    • Properties (4).png (20.2 KB)

    Scythe42

    Retired Team Member
  • Premium Supporter
  • June 20, 2009
    2,065
    2,703
    50
    Berlin
    Home Country
    Germany
    Thanks for the in-depth reply, Scythe42. Do you think it is worth spending time moving to a DirectX 11 implementation of EVR when MadVR already exists? Would it not be a more efficient use of developers' time to add MadVR compatibility instead? After all, it can do higher-quality scaling than EVR even with DXVA2 decoding.

    In the meantime, I think I found a combination of LAV and ffdshow post-processing that looks much better than just using EVR but keeps hardware deinterlacing and colour intact. It does use more CPU and more power though, and I haven't tested whether my HTPC can cope with it yet!
    That the EVR presenter can make use of pixel shaders is more or less a side effect of the planned work. The intention is to have better and faster rendering in general and easier code to work with from the D3D point of view.

    Changing MP to work with MadVR is far more work than just adding pixel shaders to the EVR presenter after the D3D 11 update. MadVR is designed to be a combined presenter/renderer with post-processing, reducing the media player to a framework for reading files and UI controls. And as it is closed source, we cannot make the adjustments we need, because too much is hardcoded. Therefore MadVR is a no go for MP as an option. MadVR's shaders themselves are nothing special, nor are they very performant. It is just a nice combined solution, mainly designed to be used with MPC-HC.
     

    DragonQ

    Portal Pro
    August 30, 2011
    644
    79
    Home Country
    United Kingdom
    Therefore MadVR is a no go for MP as an option.
    Hmm OK. JRiver apparently supports it though, so it must at least be possible.

    MadVR's shaders themselves are nothing special, nor are they very performant. It is just a nice combined solution, mainly designed to be used with MPC-HC.
    The main bonus of MadVR, aside from the performance, is the proprietary anti-ringing filter. Works wonders on scalers like Lanczos and Jinc, since they generally have the downside of introducing ringing in return for excellent scaling.
     
