SQLiteDatabase Plugin for MP2
<blockquote data-quote="MJGraf" data-source="post: 1030797" data-attributes="member: 17886"><p>Another update:</p><p></p><p>Even a cache of 64MB instead of 40MB helps to improve Read Test 4 considerably. I'm getting 102ms (first run) and 41ms (second run). So it really depends on how big the result of a search is - here with 33,6MB data to be read a cache size of 40MB is too small, but 64MB is absolutely fine.</p><p></p><p>In the end I think I will implement a standard value calculation based on the total RAM in the computer (maybe in steps of 32MB, 64MB, 128MB, max 256MB cache for total RAM sizes of <=512MB, <=1GB, <= 2GB, > 2GB) which are used if there is no setting in the XML-file. Additionally I will save this standard value as a setting which then can be manually tweaked via the XML-file to override the standard settings.</p><p>For now I will just continue with a cache size of 256MB since this would be the standard size for my server.</p><p></p><p>What we have to keep in mind is that there is another setting I still have to test: memory mapped i/o which may need a lot of RAM. What happens with this setting is that SQLite "maps" the content of the disk directly into RAM without having to copy it from Kernel RAM to "normal" RAM. The downside currently is that when you e.g. set the "mmap_size" so 256MB, it maps the first 256MB of the database file and if the database file is bigger, the rest is not mapped but read in the normal way. So here with my 1,7GB database file, I will hardly ever be able to map the whole file into RAM. But I don't know whether SQLite e.g. stores indices at the beginning of the database file or whether there is a possibility to rearrange the database file in a way to make this happen (e.g. by executing "VACUUM", which recreates the whole database file from scratch). So depending on the results with MMAP, we may need another considerable amount of RAM for this setting. Let's see...</p></blockquote><p></p>
[QUOTE="MJGraf, post: 1030797, member: 17886"] Another update: Even a cache of 64MB instead of 40MB helps to improve Read Test 4 considerably. I'm getting 102ms (first run) and 41ms (second run). So it really depends on how big the result of a search is - here with 33,6MB data to be read a cache size of 40MB is too small, but 64MB is absolutely fine. In the end I think I will implement a standard value calculation based on the total RAM in the computer (maybe in steps of 32MB, 64MB, 128MB, max 256MB cache for total RAM sizes of <=512MB, <=1GB, <= 2GB, > 2GB) which are used if there is no setting in the XML-file. Additionally I will save this standard value as a setting which then can be manually tweaked via the XML-file to override the standard settings. For now I will just continue with a cache size of 256MB since this would be the standard size for my server. What we have to keep in mind is that there is another setting I still have to test: memory mapped i/o which may need a lot of RAM. What happens with this setting is that SQLite "maps" the content of the disk directly into RAM without having to copy it from Kernel RAM to "normal" RAM. The downside currently is that when you e.g. set the "mmap_size" so 256MB, it maps the first 256MB of the database file and if the database file is bigger, the rest is not mapped but read in the normal way. So here with my 1,7GB database file, I will hardly ever be able to map the whole file into RAM. But I don't know whether SQLite e.g. stores indices at the beginning of the database file or whether there is a possibility to rearrange the database file in a way to make this happen (e.g. by executing "VACUUM", which recreates the whole database file from scratch). So depending on the results with MMAP, we may need another considerable amount of RAM for this setting. Let's see... [/QUOTE]