It is done by the GPU; we just call a standard fixed-function pipeline operation. Every DirectX 9 compatible GPU has to implement the algorithm when the D3D device is set up as a hardware or pure device.
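To make that concrete, here is a minimal sketch of such a device setup. This is not our actual init code; the window handle and present parameters are assumed to exist elsewhere:

```cpp
#include <d3d9.h>

// Create a D3D9 device as a "hardware" (HAL) device, preferring a pure device.
IDirect3DDevice9* CreateHardwareDevice(HWND hwnd, D3DPRESENT_PARAMETERS* pp)
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return nullptr;

    // A pure device skips some runtime state validation; it requires
    // hardware vertex processing.
    DWORD behavior = D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_PUREDEVICE;

    IDirect3DDevice9* device = nullptr;
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                   hwnd, behavior, pp, &device);
    if (FAILED(hr)) {
        // Fall back to a plain hardware device if the pure device is refused.
        hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                               D3DCREATE_HARDWARE_VERTEXPROCESSING, pp, &device);
    }
    d3d->Release();
    return SUCCEEDED(hr) ? device : nullptr;
}
```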
DXVA2 has nothing to do with it. It comes into play when decoding individual video frames from a stream; the decoded result sitting in GPU memory still needs to be rendered. So all this "video acceleration" stuff happens one stage before the actual rendering in the graphics pipeline.
The fixed-function pipeline only offers filtering up to bilinear: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172357(v=vs.85).aspx
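In practice that is a single sampler state on the device; there is nothing to tune beyond it. A sketch, assuming `device` is an already created IDirect3DDevice9*:

```cpp
// Fixed-function texture filtering: linear = bilinear, and that's the ceiling.
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
// D3DTEXF_POINT would give nearest-neighbor instead; anything better
// (bicubic, Lanczos, ...) is simply not available on this path.
```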
If you want more, you either have to decode the video on the CPU, so you can access the decoded frame and scale it with a different algorithm, or you have to use pixel shaders. The latter instruct the GPU how to process data it already has in its memory.
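To give an idea of the shader route, here is a sketch of one horizontal pass of a Catmull-Rom (bicubic) scaler in HLSL, kept as a C++ string so it could be handed to the runtime shader compiler. The texture name, register layout, and the single-pass simplification are illustration assumptions, not our code:

```cpp
static const char* kCatmullRomPS = R"hlsl(
sampler src : register(s0);     // decoded video frame
float2 srcSize : register(c0);  // source width/height in texels

// Catmull-Rom kernel (the bicubic special case B = 0, C = 0.5).
float weight(float x)
{
    x = abs(x);
    if (x < 1.0) return  1.5*x*x*x - 2.5*x*x + 1.0;
    if (x < 2.0) return -0.5*x*x*x + 2.5*x*x - 4.0*x + 2.0;
    return 0.0;
}

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float  x    = uv.x * srcSize.x - 0.5;   // sample position in texels
    float  fx   = frac(x);
    float4 sum  = 0.0;
    float  norm = 0.0;
    // 4 horizontal taps around the sample position; a real scaler
    // filters vertically as well (typically in a second pass).
    for (int i = -1; i <= 2; i++)
    {
        float  w   = weight(i - fx);
        float2 tap = float2((floor(x) + i + 0.5) / srcSize.x, uv.y);
        sum  += w * tex2D(src, tap);
        norm += w;
    }
    return sum / norm;
}
)hlsl";
```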
Pixel shaders will be supported when we switch to DirectX 11, probably at a D3D 10 feature level, which makes the coding easier; D3D9 is just ugly here. Note that the fixed-function pipeline has been dropped starting with D3D 10, I think; everything is designed to run through shaders. We will provide the usual suspects for selection.
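A minimal sketch of what "D3D11 at a D3D 10 feature level" means in code; the hardware-only driver type is an assumption here:

```cpp
#include <d3d11.h>

// Create a D3D11 device but only require feature level 10_0,
// which is plenty for scaling shaders.
bool CreateDeviceFL10(ID3D11Device** device, ID3D11DeviceContext** ctx)
{
    const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_10_0 };
    D3D_FEATURE_LEVEL got;
    HRESULT hr = D3D11CreateDevice(
        nullptr,                   // default adapter
        D3D_DRIVER_TYPE_HARDWARE,  // no software fallback in this sketch
        nullptr, 0,
        wanted, ARRAYSIZE(wanted),
        D3D11_SDK_VERSION,
        device, &got, ctx);
    return SUCCEEDED(hr);
}
```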
Regarding their implementation: there is not much tweaking/tuning possible here with regard to quality. Sure, the algorithms all have parameters, but this is nothing new: very good parameter combinations are known, and everything is standard and well documented when it comes to video. (The photography/print market is more complex here.) What really makes a difference is how well your shader is written with regard to execution speed.
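An example of such a known-good combination: the Mitchell-Netravali bicubic kernel with B = C = 1/3, the values recommended in the original paper. A reference implementation, purely for illustration:

```cpp
#include <math.h>

// Mitchell-Netravali cubic kernel; the defaults B = C = 1/3 are the
// standard recommended parameters, so there is nothing left to tune
// for quality -- only the shader's speed varies between implementations.
float MitchellNetravali(float x, float B = 1.0f / 3.0f, float C = 1.0f / 3.0f)
{
    x = fabsf(x);
    if (x < 1.0f)
        return ((12 - 9*B - 6*C) * x*x*x
              + (-18 + 12*B + 6*C) * x*x
              + (6 - 2*B)) / 6.0f;
    if (x < 2.0f)
        return ((-B - 6*C) * x*x*x
              + (6*B + 30*C) * x*x
              + (-12*B - 48*C) * x
              + (8*B + 24*C)) / 6.0f;
    return 0.0f;
}
```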
We will provide a couple of combinations that make sense.
When? When it's ready.