In principle you are on the right track, but you cannot access the player itself via ServiceRegistration, because MP2 handles two or even more players concurrently. You need to use the IPlayerManager service to get the active instances. Look into the StatsRenderer plugin; it also enumerates players.
I think the StatsRenderer approach might even work better: you could grab the full screen from the backbuffer once it is rendered. Then it doesn't matter whether one or more videos are playing, or whether fullscreen, windowed or PiP mode is used. Check the plugin code; it hooks directly into the render events.
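A minimal sketch of that backbuffer idea, assuming SharpDX.Direct3D9 is used and that SkinContext.Device is the skin engine's D3D9 device; captureWidth/captureHeight and the TextureFilter choice are assumptions, not taken from the StatsRenderer code:

```csharp
using SharpDX.Direct3D9;

// Grab the fully rendered frame from the backbuffer (swap chain 0, buffer 0),
// scale it down into a small render target, then read that back.
using (Surface backBuffer = SkinContext.Device.GetBackBuffer(0, 0))
using (Surface scaled = Surface.CreateRenderTarget(SkinContext.Device,
    captureWidth, captureHeight, Format.A8R8G8B8,
    MultisampleType.None, 0, true /* lockable */))
{
  // StretchRectangle scales on the GPU; null rects mean "whole surface".
  SkinContext.Device.StretchRectangle(backBuffer, null, scaled, null,
      TextureFilter.Linear);
  // Converting only the small surface keeps the readback cheap.
  SharpDX.DataStream stream =
      Surface.ToStream(scaled, ImageFileFormat.Bmp);
  // ... parse the BMP data from the stream ...
}
```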
Thanks, this helped a lot, but I'm not quite there yet.
I now have the following:
Code:
private SharpDX.Direct3D9.Surface rgbSurface;
private IPlayerManager pm;
private ISharpDXVideoPlayer player;

private void MyThread()
{
  while (ServiceRegistration.Get<IPlayerContextManager>().IsVideoContextActive)
  {
    try
    {
      pm = ServiceRegistration.Get<IPlayerManager>();
      pm.ForEach(psc =>
      {
        player = psc.CurrentPlayer as ISharpDXVideoPlayer;
        if (player == null || player.Surface == null)
          return;
        rgbSurface = player.Surface;
      });
      Rectangle rect = new Rectangle(0, 0, AtmoLightObject.captureWidth, AtmoLightObject.captureHeight);
      Stopwatch stopwatch = new Stopwatch();
      stopwatch.Start();
      DataStream stream = SharpDX.Direct3D9.Surface.ToStream(rgbSurface, SharpDX.Direct3D9.ImageFileFormat.Bmp, rect);
      stopwatch.Stop();
      Log.Error("Time: {0}", stopwatch.Elapsed);

      BinaryReader reader = new BinaryReader(stream);
      stream.Position = 0; // ensure we start at the beginning of the stream
      reader.ReadBytes(14); // skip the bitmap file header
      byte[] bmiInfoHeader = reader.ReadBytes(4 + 4 + 4 + 2 + 2 + 4 + 4 + 4 + 4 + 4 + 4); // BITMAPINFOHEADER (40 bytes)
      int rgbL = (int)(stream.Length - stream.Position);
      int rgb = rgbL / (AtmoLightObject.captureWidth * AtmoLightObject.captureHeight); // bytes per pixel
      byte[] pixelData = reader.ReadBytes((int)(stream.Length - stream.Position));
      byte[] h1pixelData = new byte[AtmoLightObject.captureWidth * rgb];
      byte[] h2pixelData = new byte[AtmoLightObject.captureWidth * rgb];
      // Flip the image vertically by swapping rows (BMP data is bottom-up);
      // we always do this to prevent microstutter.
      for (int i = 0; i < AtmoLightObject.captureHeight / 2; i++)
      {
        Array.Copy(pixelData, i * AtmoLightObject.captureWidth * rgb, h1pixelData, 0, AtmoLightObject.captureWidth * rgb);
        Array.Copy(pixelData, (AtmoLightObject.captureHeight - i - 1) * AtmoLightObject.captureWidth * rgb, h2pixelData, 0, AtmoLightObject.captureWidth * rgb);
        Array.Copy(h1pixelData, 0, pixelData, (AtmoLightObject.captureHeight - i - 1) * AtmoLightObject.captureWidth * rgb, AtmoLightObject.captureWidth * rgb);
        Array.Copy(h2pixelData, 0, pixelData, i * AtmoLightObject.captureWidth * rgb, AtmoLightObject.captureWidth * rgb);
      }
      AtmoLightObject.SetPixelData(bmiInfoHeader, pixelData);
      stream.Close();
      stream.Dispose();
    }
    catch (Exception ex)
    {
      Log.Error("ex: {0}", ex.Message);
    }
    System.Threading.Thread.Sleep(10);
  }
}
With this I was able to get the surface. However, the ToStream method has two problems for me: first, the surface gets cropped to the rect size instead of resized, and second, it takes about 500 ms (at least on my laptop) to convert the surface to a stream. So I checked the MP1 implementation again; there the surface is resized before it is converted to a stream, so I tried that as well.
That, however, is also where I now fail.
I tried:
Code:
private SharpDX.Direct3D9.Device sharpDXDevice;

private void MyThread()
{
  sharpDXDevice = SkinContext.Device;
  while...
  ...
  sharpDXDevice.StretchRectangle(rgbSurface, null, rgbSurface, rect, SharpDX.Direct3D9.TextureFilter.None);
  DataStream stream = SharpDX.Direct3D9.Surface.ToStream(rgbSurface, SharpDX.Direct3D9.ImageFileFormat.Bmp);
  ...
}
but that just gets me exceptions:
Code:
[ERROR] - AtmoLight: ex: HRESULT: [0x8876086C], Module: [SharpDX.Direct3D9], ApiCode: [D3DERR_INVALIDCALL/InvalidCall], Message: Unknown
Maybe you have an idea? I'm just hitting walls again.
Thanks!
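One likely cause of the D3DERR_INVALIDCALL above: D3D9's StretchRect cannot use the same surface as both source and destination, as the snippet does with rgbSurface. A hedged sketch of resizing into a separate, smaller render target instead; the variable names follow the snippets above, but the surface format and filter are assumptions:

```csharp
using SharpDX.Direct3D9;

// Create a dedicated destination surface once (not per frame) at the
// capture resolution; A8R8G8B8 is an assumed format.
Surface scaled = Surface.CreateRenderTarget(sharpDXDevice,
    AtmoLightObject.captureWidth, AtmoLightObject.captureHeight,
    Format.A8R8G8B8, MultisampleType.None, 0, true /* lockable */);

// Scale the full video surface into the small target on the GPU
// (null rects mean "whole surface"), then convert only the small one.
sharpDXDevice.StretchRectangle(rgbSurface, null, scaled, null,
    TextureFilter.Linear);
DataStream stream = SharpDX.Direct3D9.Surface.ToStream(
    scaled, SharpDX.Direct3D9.ImageFileFormat.Bmp);
```

Because the ToStream readback now touches only captureWidth x captureHeight pixels instead of the full video surface, it should also be much faster than the 500 ms measured above.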