AutoCropper plugin
knutinh (post #70281):

"Not at all, it's an embarrassingly simple approach. First of all, I rely on the assumption that logos will sit either top left or top right, and I define an interval on the x-axis where I don't expect to find logos. In that interval I choose a number of random x values, which are my sample points. I then start scanning from the bottom up. For each line y, I calculate the average R, G, B values of the sample points, and then sum the sample points' deviations from that average. This gives an indication of how different the image is at the sample points, so any non-uniform line is considered image content. That catches most things, but not uniform white, for example, so another check is added on the maximum of any color component (R, G, B) seen by a sample point; if that is high enough, the line is also considered content (this would capture any at least somewhat bright uniform background). The reason I don't just use the last method alone is that very dark images will have low color components, yet still show a variation in dark colors which yields a deviation from the average not seen in the black edges. Noise on analogue signals might confuse it, though (I use DVB-C), since it can add 'snow' to the black edges.

In order to avoid treating VBI data, or whatever that white/black line in some of my screenshots is, as content, a line is only considered image content if the next line also fulfills this criterion.

For the above I think I used 20 sample points, meaning I sample 20*576 pixels in total. I haven't optimized it yet; the annoying lookahead mentioned above causes some double work right now (easily fixed). I could also reduce the scanned lines by ~50% by scanning bottom up and then top down, instead of going right through the image. Furthermore, I have considered trying a binary-search-style scan, which I think would make it capable of running in real time. I haven't timed it as such yet, but will do."
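To make sure I've understood, here is roughly how I read that scan, as a Python sketch. The logo-free interval bounds and both threshold constants are my guesses, not values from your code:

[CODE=python]
import random

# Hypothetical constants: the post above doesn't give the real values.
NUM_SAMPLES = 20        # sample points per line, as mentioned above
DEVIATION_LIMIT = 40.0  # summed deviation that marks a line as "content"
BRIGHTNESS_LIMIT = 80   # max component that marks a bright uniform line

def line_is_content(frame, y, xs):
    """True if scan line y looks like image content at sample columns xs.

    frame is assumed indexable as frame[y][x] -> (R, G, B)."""
    pixels = [frame[y][x] for x in xs]
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    # Check 1: non-uniformity, the summed deviation from the line average.
    deviation = sum(abs(p[c] - avg[c]) for p in pixels for c in range(3))
    # Check 2: brightness, catches uniform but bright lines (e.g. white).
    brightest = max(p[c] for p in pixels for c in range(3))
    return deviation > DEVIATION_LIMIT or brightest > BRIGHTNESS_LIMIT

def find_bottom_edge(frame, width, height):
    """Scan bottom-up inside an assumed logo-free x interval.

    A line only counts if the next line (going up) also qualifies,
    which filters out isolated VBI-like stripes."""
    x_lo, x_hi = width // 4, 3 * width // 4   # guessed logo-free interval
    xs = random.sample(range(x_lo, x_hi), NUM_SAMPLES)
    for y in range(height - 1, 0, -1):
        if line_is_content(frame, y, xs) and line_is_content(frame, y - 1, xs):
            return y  # first line of real image content from the bottom
    return 0
[/CODE]

The double work from the lookahead is visible here: line y - 1 is tested again on the next iteration of the loop.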
[QUOTE="knutinh, post: 70281, member: 14776"] "Not at all, its an embarrasingly simple approach. First of i rely on the assumption that logos will be either to left or top right and define an interval on the x-axis where i dont expect to find logos. In that interval i choose a number of random x values which are my samples points. I then start scanning from the bottom up. For each line y, i calculate the average R,G,B values of the samples points, and then sum over the sample points deviation from the average. This gives an indications of how different the image is at the sample points, so it will find consider any non-uniform line as image content. This catches most things, but not uniform white for example, so another check is added considering the max of any color component(R,G,B) seen by a sample point, if that is high enough it is also considered as content(would capture any kind of at least somewhat bright uniform background). The reason i dont just use the last method alone, is that very dark images will have low color components, but still have a variation i dark colors which will yield a variance from the average not seen in the black edges. Noise on analogue signals might confuse it though(i use DVB-C) since it can add 'snow' to the black edges. In order to avoid considering VBI data or whatever that whiteblack line is in some of my screenshots as content, a line is only considered image content if the next line also fullfills this criteria. For the above i think i used 20 sample points, meaning i sample 20*576 pixels in total. I havent optimized it yet, the annoying lookahead mentioned above causes some double-work right now(easily fixed). I could also reduce the scanned lines by ~50% by scanning bottom up, and then top down instead of going right through the image. Furthermore i have considered trying a binary search style scan, which i think would make it capable of running in real-time. I havent timed it as such yet, but will do." Are the sample points random per line, or are you re-using the same pattern for all lines? I see that just selecting random pixels reduce the number of instructions needed, but perhaps this situation is more cache-limited (a large video frame residing in memory), so one may process entire lines without much more cpu time? In what way does your basic algo differ from, say, doing a histogram of intensity (discaring color information), and demanding that all pixels must be below a threshold? regards Knut [/QUOTE]
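PS: for concreteness, the alternative I have in mind is roughly the following, i.e. a single fixed cutoff on plain intensity instead of your two-part deviation/brightness test (the threshold is again made up):

[CODE=python]
BLACK_LIMIT = 24  # made-up cutoff for "still part of the black bar"

def line_is_black(frame, y, width):
    """Drop color and keep a line only while every pixel's intensity
    stays below a fixed threshold."""
    for x in range(width):
        r, g, b = frame[y][x]
        if (r + g + b) / 3 >= BLACK_LIMIT:  # crude intensity, no luma weights
            return False  # a single bright pixel makes the line content
    return True
[/CODE]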