Stability Release
gds said:

Defining test metrics

One thing that needs to be defined in the test plans is a set of metrics: we need to formally define the success/failure conditions for each test. For some tests a simple success/failure switch will be enough, while others require a more fine-grained approach (i.e. sometimes we only need to know whether a feature works, while at other times, given that a feature works, we need to know how satisfactory its behaviour is).

I suggest we organize it like this:
1 - Binary switches for the whole test and for simpler features: success (everything works as expected) or failure (any test condition fails).
2 - Three-level switches for stability: totally stable (always works reliably), unstable (works reliably only some of the time), totally unstable (never works reliably).
3 - Three-level switches for speed/responsiveness (from responsive to turtle-like).
4 - Given the importance of MyTV to most MP users, it could be useful to define some switches specific to it, such as dropped frames, lost audio/video sync, etc.
...and so on.

Bye.
GDS