Forums
MediaPortal 2
Feature Requests
Ability to play Media without adding them to DB
<blockquote data-quote="MJGraf" data-source="post: 1027432" data-attributes="member: 17886"><p>I think we are talking about two different things here which we have to tackle separately:</p><p></p><p><strong><u>Browsing media without adding them to the db</u></strong></p><p>This is a good point and worth an improvement. But at least my feeling is that this should only be the way to go when you watch media that are "temporary" (i.e. on sd cards or something you downloaded, want to have a look at and delete afterwards). When you are talking about media you don't watch that often but you store them somewhere at a fixed location, I would nevertheless want to have the possibility to add them to the DB without having to worry about this being a problem with DB size and speed. The reason for me is easy: This is the whole point of MP2 to me - having a central DB which has as much information on all your media as possible. Not putting this information into the DB intentionally is IMO just a workaround for a completely different issue:</p><p></p><p><u><strong>Speed</strong></u></p><p>And this brings us to the second thing we are talking about: speed. Independently from the above, we should put a lot of effort into making MP2's speed (as far as possible) independent from its speed. The reason is that I think over the next years, the amount of media people have will grow - and so will the amount of metadata for these media. That means MP2 will have to cope with very big amounts of data - such as your collections, Lehmden (and this is why I have to say thousand <img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" class="smilie smilie--sprite smilie--sprite8" alt=":D" title="Big Grin :D" loading="lazy" data-shortname=":D" /> to you for testing MP2. 
Your media collections are really a great test to make MP2 future-proof :) ). And I also think this should be possible, because although the amount of media and metadata will grow, the user can only handle a very small amount of data at the same time. So we are talking about very small parts of the huge data collection being shown at any one time, which our system should be able to handle very quickly. Now when I'm thinking about speed improvements, five things come to mind:</p><p></p><p><u>Reduce the amount of data</u></p><p>What you are reporting about XBMC sounds like they import everything into their db anyway, but if you don't select a scraper, no additional information is imported, making the import very fast - maybe hardly noticeable that the items are actually imported into some kind of db. We currently only have the choice to tell the importer which type of media to expect (videos, movies, etc.). That makes it easier for the average user who doesn't know what a scraper is. But maybe we should have an "advanced" button there to let the advanced user choose the MetadataExtractors he wants to apply (or at least something like (a) all the information you can get - even from the internet, (b) only local information, (c) only local information without images such as covers, thumbs, etc. and maybe even (d) only path and file name).</p><p>But in the end, this is again only a workaround. It is the purpose of MP2 to get as much info as possible - and to handle it as fast as possible.</p><p></p><p><u>Speed up the collection of data</u></p><p>This seems to be one of the issues you are reporting, because it takes ages to generate the thumbnails for your pictures. 
Besides tackling this particular problem in detail, it may be an option to let you choose whether the MP2 Server generates this data for all media automatically in the background (via the importworker) or whether this data is only generated when you watch a media item for the first time. It would then be slower to watch a particular item the first time, because we would only then generate the metadata, but the second time it would be faster because the metadata is already there. </p><p></p><p><u>Speed up the retrieval of data</u></p><p>This is what Valks is working on, and it is not really a "speed-up" - it is more an "only get the data we really need". It ties in with what I said above about the user only being able to handle a very small amount of data at the same time. Currently, when you search for all the mediaitems with "Eric Clapton" as a search string, you may get 200 mediaitems, which are all retrieved from the database - although the UI can only display maybe 10 of them at the same time. Valks is working on something called "data virtualization", which makes sure that only 10 (or maybe 30 - the 10 you see, 10 before and 10 after to make scrolling smooth) are fetched from the database. I'm sure this will improve the speed a lot.</p><p>The same idea - just for the UI (and therefore called UI virtualization) - is possible on the client side regarding the generation of the display views of the items to be shown. Instead of generating the views for all the items available, it makes sense to only generate the views for those you actually see (and some before and after). You can already see this effect by switching between list view and e.g. thumbs view. 
Our list view already virtualizes - the other views don't, which is why they are much slower with large amounts of mediaitems.</p><p></p><p><u>Speed up the transfer of data to the client</u></p><p>Since we cannot speed up the network we have available from the MP2 Server to the MP2 Client, this is again no real speed-up, but a way to make MP2 "feel faster". There are two options I can see:</p><p>(a) First transfer the necessary (non-image) data and display it, then fetch the image data in the background. This is what Morph has implemented for the FanArtService. The effect is that when you enter a screen with all your mediaitems of a certain kind, you see them immediately by name (but without images) and can already scroll, etc., while the images for the items appear one by one and your UI stays responsive the whole time, independent from the images. We should definitely find a way to make this happen not only for the FanArtService but also for the thumbnails in the MediaLibrary.</p><p>(b) Having a cache for images on the client side. This seems to be the way XBMC does it. They maintain a small database on the client side, just holding the IDs, the path, a checksum and the "number of times this item was displayed within the last week or so", plus a folder with the respective images. When they display an image, they use the checksum to check whether the local copy is still the same as the original image on the server. If it matches, they serve the image from the local cache and only fetch it via the network if it's not in the cache at all. If the checksum doesn't match, they get it from the server in any case (to make sure the client immediately reacts to changes on the server). Every time they fetch an image from the server, they put it into the local cache. If an image isn't used for some time, it is deleted from the local cache to keep the cache small. 
Maybe we should think about something like that...</p><p></p><p><u>Speed up the backend</u></p><p>Now this is certainly a very important part of the speed, but according to the above, it's only one of many options. And this is where we are talking about not putting BLOBs (i.e. images) into the database to make it faster. I personally like having all the metadata in the database - the images as well - for several reasons (not cluttering the directory structure with thousands of files with cryptic names, data integrity in the db, etc.). And this is the area where I can help most in improving the speed.</p><p>I already experimented with the SQLite plugin and I'm perfectly willing to improve its speed further, in particular with respect to BLOBs. The only thing I fear is that no one answered Morpheus' post <a href="http://system.data.sqlite.org/index.html/tktview/c8083867b9" target="_blank">here </a>regarding the problem we have with SQLite and TVE3.5, and if this isn't solved by the system.data.sqlite developers, we have a problem with SQLite... I therefore already had a look at Firebird (which we have as a very old and outdated plugin urgently in need of an update). To me it looks promising as (a) it is possible to integrate it into MP2 Server without the need for a separate installation, (b) there is no size limit on the database and (c) they seem to be very far along with their support of EF with respect to TVE3.5. See <a href="http://www.firebirdsql.org/en/net-provider/" target="_blank">here </a>- they already support EF 6 for .NET 4.0 and .NET 4.5. But I have no clue how fast this would be, so the only option is trial and error... (a minor drawback that checkoch won't like: they offer their .NET provider via NuGet, but not the database itself, so we would have to create our own package with the database for the MP-nuget-feed. 
If this wasn't the case, I would have removed the binaries from our repository already...).</p><p>Now back to the question of how we should deal with images on the storage side. I see five options:</p><ol> <li data-xf-list-type="ol">Storing images like any other metadata in the same tables in the database. This is what we currently do. What we can do to improve this is try different databases and different database settings.</li> <li data-xf-list-type="ol">Storing images in the same database but in a different table.</li> <li data-xf-list-type="ol">Storing images in a second database (of the same database type), i.e. in a separate database file.</li> <li data-xf-list-type="ol">Storing images in a second and different database which is optimized for BLOB storage (I thought about some kind of document database such as MongoDB, or just a simple but fast key-value store).</li> <li data-xf-list-type="ol">Storing images in the filesystem and only the rest of the metadata in the database.</li> </ol><p>Unfortunately, anything but option 1 requires changes to the MediaLibrary itself, because currently we treat BLOBs like any other data, and to implement options 2-5 we would first have to introduce separate MediaLibrary treatment for BLOBs before we can try them.</p><p>I found a very interesting <a href="https://groups.google.com/forum/#!topic/alt.comp.lang.borland-delphi/_cFtA7YToA4" target="_blank">test </a>last night, which implies that with SQLite even option 3 could give us enormous speed improvements, and option 3 would definitely be easier to implement than options 4 or 5.</p><p>But before I start digging into all these options, I wanted to hear more opinions about all this - so let your thoughts flow ;)</p><p></p><p>Finally, as to your speed problem, Lehmden: Sorry 
for cluttering this thread with so much information, but your posts in this thread made me want to find more info on this whole speed topic, and the above is the result of a lot of reading on the internet last night. However, I still cannot judge what makes your system slow when you import a lot of data into your database. To find out whether it is the database itself, please do whatever makes your system slow, post your SQLDebug.log here, tell me what you did, and I will have a look at it to find out whether it has something to do with "speed up the backend".</p><p></p><p>Thanks,</p><p>Michael</p></blockquote><p></p>
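The "generate metadata only when you watch a media item for the first time" idea from the collection-of-data section above can be sketched in a few lines. This is a minimal Python illustration, not MP2 code: the names (`get_thumbnail`, the `generate` callback) are invented for the example, and the expensive thumbnailing step is stubbed out.

```python
import hashlib
import os
import tempfile

def get_thumbnail(media_path, cache_dir, generate):
    """Return the cached thumbnail path, generating it only on first request."""
    key = hashlib.sha1(media_path.encode("utf-8")).hexdigest()
    thumb = os.path.join(cache_dir, key + ".thumb")
    if not os.path.exists(thumb):      # first view: pay the generation cost once
        data = generate(media_path)    # the expensive step (actual thumbnailing)
        with open(thumb, "wb") as f:
            f.write(data)
    return thumb                       # later views: served straight from disk

# Demo: count how often the expensive step actually runs.
calls = []
def fake_generate(path):
    calls.append(path)
    return b"thumbnail-bytes"

cache = tempfile.mkdtemp()
a = get_thumbnail("/media/pic1.jpg", cache, fake_generate)
b = get_thumbnail("/media/pic1.jpg", cache, fake_generate)
print(a == b, len(calls))  # True 1 -- second view skips generation
```

The trade-off is exactly as described in the post: the first view is slower, every later view is fast, and the importworker never has to touch items nobody watches.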
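The data-virtualization idea from the retrieval section (fetch only the roughly 30 items around the visible window instead of all 200 results) boils down to a windowed query. A sketch using SQLite's `LIMIT`/`OFFSET`; table and column names are made up for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE media_items (id INTEGER PRIMARY KEY, title TEXT)")
con.executemany("INSERT INTO media_items (title) VALUES (?)",
                [("Eric Clapton - Track %03d" % i,) for i in range(200)])

def fetch_window(first_visible, visible=10, buffer=10):
    """Fetch only the visible rows plus a scroll buffer before and after."""
    start = max(first_visible - buffer, 0)
    count = visible + 2 * buffer
    cur = con.execute(
        "SELECT id, title FROM media_items ORDER BY title LIMIT ? OFFSET ?",
        (count, start))
    return cur.fetchall()

rows = fetch_window(first_visible=50)
print(len(rows))  # 30 rows fetched instead of all 200
```

As the user scrolls, the UI asks for a new window; the buffer rows before and after the visible ones are what keeps scrolling smooth, since the next window is usually already partly fetched.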
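The checksum-validated client-side image cache described under "Speed up the transfer of data to the client" can also be sketched briefly. This is a simplified Python illustration of the scheme attributed to XBMC above, with invented names and without the usage-tracking/eviction part:

```python
import hashlib
import os
import tempfile

class ImageCache:
    """Client-side image cache: serve locally when the server's checksum
    still matches, refetch otherwise. Names are illustrative only."""

    def __init__(self, directory):
        self.directory = directory

    def _path(self, image_id):
        return os.path.join(self.directory, image_id)

    def get(self, image_id, server_checksum, fetch_from_server):
        path = self._path(image_id)
        if os.path.exists(path):
            with open(path, "rb") as f:
                data = f.read()
            if hashlib.md5(data).hexdigest() == server_checksum:
                return data                     # cache hit: no network traffic
        data = fetch_from_server(image_id)      # miss or stale: go to the server
        with open(path, "wb") as f:
            f.write(data)                       # refresh the local copy
        return data

# Demo: the second request is served entirely from the local cache.
fetches = []
def fetch(image_id):
    fetches.append(image_id)
    return b"image-bytes"

checksum = hashlib.md5(b"image-bytes").hexdigest()
cache = ImageCache(tempfile.mkdtemp())
cache.get("cover-1", checksum, fetch)   # miss: fetched from server
cache.get("cover-1", checksum, fetch)   # hit: no fetch
print(len(fetches))  # 1
```

Because the checksum comes from the server, a changed image on the server side fails the comparison and is refetched immediately, which matches the "client immediately reacts to changes on the server" behavior described in the post.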
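Storage option 3 (images in a second database file of the same type) is directly supported by SQLite via `ATTACH DATABASE`, which may make it cheap to prototype. A minimal sketch with invented table names; in practice the attached database would be a separate file on disk rather than `:memory:`:

```python
import sqlite3

con = sqlite3.connect(":memory:")                    # main metadata database
con.execute("ATTACH DATABASE ':memory:' AS blobs")   # separate db file for BLOBs

con.execute("CREATE TABLE media_items (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("CREATE TABLE blobs.thumbnails (item_id INTEGER PRIMARY KEY, image BLOB)")

con.execute("INSERT INTO media_items (title) VALUES ('Layla')")
con.execute("INSERT INTO blobs.thumbnails VALUES (1, ?)", (b"\x89PNG...",))

# Metadata-only queries never touch the BLOB file's pages; images are
# joined in across the attached database only when actually needed:
row = con.execute(
    """SELECT m.title, length(t.image)
       FROM media_items m JOIN blobs.thumbnails t ON t.item_id = m.id"""
).fetchone()
print(row)  # ('Layla', 7)
```

This keeps BLOB pages out of the main database file entirely, which is the effect the linked SQLite test above suggests matters for speed, while still allowing single-statement joins across both files.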
[QUOTE="MJGraf, post: 1027432, member: 17886"] I think we are talking about two different things here which we have to tackle separately: [B][U]Browsing media without adding them to the db[/U][/B] This is a good point and worth an improvement. But at least my feeling is that this should only be the way to go when you watch media that are "temporary" (i.e. on sd cards or something you downloaded, want to have a look at and delete afterwards). When you are talking about media you don't watch that often but you store them somewhere at a fixed location, I would nevertheless want to have the possibility to add them to the DB without having to worry about this being a problem with DB size and speed. The reason for me is easy: This is the whole point of MP2 to me - having a central DB which has as much information on all your media as possible. Not putting this information into the DB intentionally is IMO just a workaround for a completely different issue: [U][B]Speed[/B][/U] And this brings us to the second thing we are talking about: speed. Independently from the above, we should put a lot of effort into making MP2's speed (as far as possible) independent from its speed. The reason is that I think over the next years, the amount of media people have will grow - and so will the amount of metadata for these media. That means MP2 will have to cope with very big amounts of data - such as your collections, Lehmden (and this is why I have to say thousand :thx: to you for testing MP2. Your media collections are really a great test to make MP2 future-proof :) ). And I also think this should be possible, because although the amount of media and metadata will grow, the user can only handle a very small amount of data at the same time. So we are talking about very small parts of the huge data collection to be shown at the same time, which our system should be able to handle very quickly. 
Now when I'm thinking about speed improvements, five things come into my mind: [U]Reduce the amount of data[/U] What you are reporting about XBMC sounds like they import it anyway into their db, but if you don't select a scraper, no additional information is imported making the import very fast, maybe hardly noticeable that they are actually imported into some kind of db. We currently only have the choice to tell the importer which type of media to expect (videos, movies, etc.). That makes it easier for the average user who doesn't know what a scraper is. But maybe we should have an "advanced" button there to enable the advanced user to choose the MetadataExtractors he wants to apply (or at least something like (a) all the information you can get - even from the internet, (b) only local information and (c) only local information without images such as covers, thumbs, etc. and maybe even (d) only path and file name). But in the end, this is again only a workaround. It is the purpose of MP2 to get as much info as possible - and to handle it as fast as possible. [U]Speed up the collection of data[/U] This seems to be one of the issues you are reporting because it takes ages to generate the thumbnails for your pictures. Besides tackling this particular problem in detail it may be an option that you can choose whether the MP2 Server generates this data for all media automatically in the background (via the importworker) or whether this data is only generated when you watch a media item for the first time. It would then be slower to watch this particular media the first time, because we would only then generate the metadata, but when you watch them the second time, it would be faster because the metadata is already there. [U]Speed up the retrieval of data[/U] This is what Valks is working on and it is not really a "speed-up" it is more an "only get the data we really need". 
It has to be seen with what I said above about the user only being able to handle a very small amount of data at the same time. Currently, when you search for all the mediaitems with "Eric Clapton" as a search string, you may get 200 mediaitems, which are all retrieved from the database - although the UI can only display maybe 10 of them at the same time. Valks is working on something called "data virtualization", which makes sure that only 10 (or maybe 30 - the 10 you see, 10 before and ten after to make scrolling smooth) are fetched from the database. I'm sure this will improve the speed a lot. The same idea - just for the UI (and therefore called UI virtualization) - is possible on the client side regarding the generation of the display views of the items to be shown. Instead of generating the views for all the items available, it makes sense to only generate the views for those you actually see (and some before and after). You can already see this effect by switching between list view and e.g. thumbs view. Our list view already virtualizes - the other views don't which is why they are much slower with large amounts of mediaitems. [U]Speed up the transfer of data to the client[/U] Since we cannot speed-up the network we have available from the MP2 Server to the MP2 Client, this is again no real speed-up, but a way to make MP2 "feel faster". There are two options I can see: (a) First transfer the necessary (non image) data and display them and then in the background fetch the image data. This is what Morph has implemented for the FanArtService. The effect is that when you enter a screen with all your mediaitems of a certain kind, you see them immediately by name (but without images) and can already scroll, etc., but the images for the items appear one by one while your UI is responsive all the time and independent from the images. We should definitely find a way to make this happen not only for the FanArtService but also for the thumbnails in the MediaLibrary. 
(b) Having a cache for images on the client side. This seems to be the way XMBC does it. They maintain a small database on the client side, just holding the IDs, the path, a checksum and the "number this item was display within the last week or so" and a folder with the respective images. When they display an image, they use the checksum to check whether the local cache still is the same as the original image on the server. If yes, they first look into their local cache and only get it via network, if it's not in the cache. If the checksum doesn't match, they get it from the server in any case (to make sure the client immediately reacts to changes on the server). Everytime they fetch an image from the server, they put it into the local cache. If the image isn't used for some time, it is deleted from the local cache to keep it small. Maybe we should think about something like that... [U]Speed up the backend[/U] Now this is certainly a very important part of the speed, but according to the above, it's only one of many options. And this is where we are talking about not putting BLOBs (i.e. images) into the database to make it faster. I personally like having all the metadata in the database - the images as well - for several reasons (not cluttering the directory structure with thousands of files having cryptic names, data integrity in the db, etc.). And this is the area, where I can help most in improving the speed. I already tried with the SQLite plugin and I'm perfectly willing to improve its speed further in particular with respect to BLOBs. The only thing I fear is that no one answered Morpheus' post [URL='http://system.data.sqlite.org/index.html/tktview/c8083867b9']here [/URL]regarding the problem we have with SQLite and TVE3.5 and if this isn't solved by the syste.data.sqlite developers, we have a problem with SQLite... I therefore already had a look at Firebird (which we have as a very old and outdated plugin urgently needing an update). 
To me it looks promising as (a) it is possible to integrate it into MP2 Server without the need for a separate installation, (b) there is no size limit of the database and (c) they seem to be very far with their support of EF with respect to TVE3.5. See [URL='http://www.firebirdsql.org/en/net-provider/']here [/URL]- they already support EF 6 for .NET 4.0 and .NET 4.5. But I have no clue how fast this would be so the only chance is to try and error... (a minor drawback that checkoch won't like: They offer their .net-provider via nuget, but not the database itself so we would have to create our own package with the database for the MP-nuget-feed. If this wasn't the case, I would have removed the binaries already from our repository...). Now back to the question how we should deal with images on the storage side. I see five options: [LIST=1] [*]Storing images like any other metadata in the same tables in the database. This is what we currently do. What we can do to improve this is try different databases and different settings of the databases. [*]Storing images in the same database but in a different table. [*]Storing images in a second database (of the same database type), i.e. in a separate database file. [*]Storing images in a second and different database which is optimized for BLOB storage (I thought about some kind of document database such as MongoDB or a simple but fast key-value-store only). [*]Storing images in the filesystem and only the rest of the metadata in the database. [/LIST] Unfortunately, anything but option 1 requires changes to the MediaLibrary itself, because currently we treat BLOBs like any other data and to implement 2-5, we would have to introduce a separate MediaLibrary treatment for BLOBs first, before we can try these options. 
I found a very interesting [URL='https://groups.google.com/forum/#!topic/alt.comp.lang.borland-delphi/_cFtA7YToA4']test [/URL]last night, which implies that with SQLite even option 3 could give as enormous speed improvements and option 3 would definitely be easier to implement than options 4 or 5. But before I start digging into all these options, I wanted to here more opinions about all this - so let your thoughts flow ;) Finally, as to your speed problem, Lehmden: Sorry for cluttering this thread with so much information, but your posts in this thread made me want to find more info on this whole speed thing and the above is the result of a lot of reading in the internet last night. However, I still cannot judge what makes your system slow when you import a lot of data in your database. To find our whether it is the database itself, please do whatever makes your system slow, post your SQLDebug.log here, tell me what you did and I will have a look at it to find out whether is has to do something with "speed up the backend". Thanks, Michael [/QUOTE]