Hi everybody,
Playing Blu-ray ISOs remotely has always been a problem for me, but until now I never found the time to dig deeper. I just did some tests on it (using the winter release WIP branch).
The situation is as follows:
MP2-Server runs on my HTPC. On that HTPC there is a Blu-ray ISO file on a directly attached hard disk.
MP2-Client runs on my laptop. There I want to play the Blu-ray, which resides on the HTPC as an ISO.
The network between them is a WLAN, which at least in theory should be sufficient from a bandwidth perspective...
The result: either the Blu-ray does not play at all, or it stutters like hell. However, when I use VCD on my laptop to mount the remote ISO, I can watch the Blu-ray in VLC (nearly) without stuttering.
My first idea was that Dokan might be the problem, but while investigating this I came across the "CachedMultiSegmentHttpStream" class, which is used to access remote files in MP2. I added a little bit of logging to the "Read" method of that class, and the result can be found in the attached logs (I just started MP2-Client and opened the respective "video", i.e. the remote Blu-ray ISO).
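For reference, the logging I added is essentially a one-liner at the start of the Read override. This is only a sketch written from memory; the exact signature and the surrounding code of the actual CachedMultiSegmentHttpStream may differ:

  // Sketch only - the real CachedMultiSegmentHttpStream.Read override in MP2 may
  // look slightly different. The added line just logs which range was requested.
  public override int Read(byte[] buffer, int offset, int count)
  {
    ServiceRegistration.Get<ILogger>().Debug(
        "CachedMultiSegmentHttpStream: Read request for {0} bytes at stream position {1}",
        count, Position);
    // ... original read-from-cache / fetch-next-chunk logic continues here ...
  }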
First of all, the BDHandler seems to look for the ISO file itself on R: (which is the Dokan drive). I would therefore assume that the chaining-up of the IsoResourceProvider happens on the client side. But if you look at the added logging for the CachedMultiSegmentHttpStream, there is a separate URL for every file inside the ISO. Furthermore, for Blu-rays it seems that it first accesses hundreds of .mpls files inside the ISO before it finally starts reading the .m2ts file.
Now my suspicion is that we create a separate CachedMultiSegmentHttpStream for every file inside the ISO (in particular for the .mpls files), and that every instance of the CachedMultiSegmentHttpStream reads a part of the ISO file into its cache just for the one file we want to access inside the ISO. Since we use a chunk size of 512 KB, this would mean that for every single 200-byte .mpls file we transfer (probably the same) 512 KB of the ISO file over the network - which would be a huge overhead.
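Just to put a rough number on that suspicion (the .mpls count and size below are assumptions read off the attached logs, not measured values):

  using System;

  // Back-of-the-envelope estimate of the suspected overhead.
  // All three constants are assumptions based on the attached logs.
  const long ChunkSize = 512 * 1024; // 512 KB cache chunk per stream instance
  const long MplsCount = 300;        // "hundreds" of .mpls files touched before playback
  const long MplsSize  = 200;        // assumed typical size of a single .mpls file in bytes

  long usefulBytes      = MplsCount * MplsSize;  // ~60 KB we actually need
  long transferredBytes = MplsCount * ChunkSize; // ~150 MB pulled over the WLAN
  Console.WriteLine("Useful: {0} KB, transferred: {1} MB, overhead factor ~{2}",
      usefulBytes / 1024, transferredBytes / (1024 * 1024), transferredBytes / usefulBytes);

If that estimate is anywhere near reality, the stuttering over WLAN would not be surprising.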
Maybe my suspicion above is completely wrong - I still don't understand the whole mechanism in detail. But perhaps one of our pros can shed some light on it...
Thanks,
Michael