Stability Release

Do you want a Stability Release after Beta 0.2?

  • No, I don't want that

    Votes: 0 0.0%

  • Total voters
    272

infinite.loop

Retired Team Member
  • Premium Supporter
  • December 26, 2004
    16,163
    4,133
    127.0.0.1
    Home Country
    Austria
    Mediaportal Test Team info:
    Today I got a PM from Smirnoff, who is pleased with the progress we are making here with the new Test Team.

    These members have already joined the test team:
    x tkortell
    x Chli
    x scoop
    x LXB
    x wortelsoft
    x Inker
    x ronilse

    If you want to join, or have already offered to join in the forum, please send me an email with:
    x information about yourself (do you have experience testing/developing software?)
    x forum nickname
    x basic system information (support template)
    x the dxdiag.txt file
    Start -> Run -> DxDiag
    (Select "No" when prompted to check for WHQL signed drivers)
    Then, choose the "Save All information" button.


    I need this information to set up a list for myself, so I know who can do what.

    Thank you all for the great response I am getting here :)
     

    Marcusb

    Retired Team Member
  • Premium Supporter
  • February 16, 2005
    1,995
    29
    Melbourne
    A further comment on the details for testing: don't forget nationality.
    There are a lot of "quirks" that depend on what country you are in (AC3 on HD channels in Australia, for example).
     

    FlipGer

    Retired Team Member
  • Premium Supporter
  • April 27, 2004
    2,658
    115
    49
    Leipzig, Germany
    Home Country
    Germany
    Hi,

    Marcusb said:
    A further comment on details for testing, don't forget the nationality.
    There are a lot of "quirks" that are introduced depending on what country you are in. (AC3 on HD channels in Australia for example).

    That is a good comment. Things like special characters, subtitles, teletext, etc. can also cause problems in foreign versions.

    Drivers could also be tested, at first for TV capture cards and later perhaps graphics card drivers as well.
    For example, Radio no longer works with the new Hauppauge drivers for the PVR 350 (version 2.3). So the team could suggest "the best" driver and test new drivers as they come out for any issues with MP.

    Just an idea.

    Flip.
     

    gds

    Portal Pro
    October 4, 2004
    53
    0
    Italy
    Defining test metrics

    One thing that needs to be defined in the test plans are some metrics.
    I mean that we need to formally define the success/failure conditions for a given test. In some tests it will be enough to have a success/failure switch, while other tests will require a more fine-grained approach (i.e. sometimes we just need to know whether a feature works, while sometimes, given that a feature works, we need to know how satisfactory its behaviour is).
    I suggest we organize it like this:
    1 - binary switches for the whole test and for simpler features: success (if everything works as expected), failure (if any of the test conditions fails).
    2 - three-degree switches for stability: totally stable (it always works), unstable (it only works sometimes), totally unstable (it never works).
    3 - three-degree switches for speed/responsiveness (from responsive to turtle-like).
    4 - given the importance of MyTV for most MP users, it could be useful to define some switches specific to it, such as lost frames, lost audio/video sync, etc.
    ...and so on.
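    The switch scheme above could be sketched as a small data structure, for instance in Python (the names and fields here are purely illustrative, not taken from any MediaPortal test tool):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stability(Enum):
    STABLE = "totally stable"      # always works
    UNSTABLE = "unstable"          # only works sometimes
    BROKEN = "totally unstable"    # never works

class Speed(Enum):
    RESPONSIVE = 3
    ACCEPTABLE = 2
    TURTLE = 1

@dataclass
class TestResult:
    feature: str
    passed: bool                            # 1 - binary switch
    stability: Optional[Stability] = None   # 2 - stability degree
    speed: Optional[Speed] = None           # 3 - responsiveness degree
    notes: str = ""                         # 4 - e.g. lost frames, A/V sync

# Example: MyTV recorded the show, but with occasional dropped frames.
result = TestResult("MyTV recording", passed=True,
                    stability=Stability.UNSTABLE,
                    speed=Speed.RESPONSIVE,
                    notes="lost frames now and then")
```

    The binary switch stays authoritative; the extra fields only refine a result that already has a pass/fail verdict.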
    Bye.
    GDS
     

    CHli

    Portal Pro
    July 5, 2005
    1,251
    14
    Switzerland
    Home Country
    Switzerland
    Re: Defining test metrics

    gds said:
    One thing that needs to be defined in the test plans are some metrics.
    I mean that we need to formally define the success/failure conditions for a given test. In some tests it will be enough to have a success/failure switch, while other tests will require a more fine-grained approach (i.e. sometimes we just need to know whether a feature works, while sometimes, given that a feature works, we need to know how satisfactory its behaviour is).
    I suggest we organize it like this:
    1 - binary switches for the whole test and for simpler features: success (if everything works as expected), failure (if any of the test conditions fails).
    2 - three-degree switches for stability: totally stable (it always works), unstable (it only works sometimes), totally unstable (it never works).
    3 - three-degree switches for speed/responsiveness (from responsive to turtle-like).
    4 - given the importance of MyTV for most MP users, it could be useful to define some switches specific to it, such as lost frames, lost audio/video sync, etc.
    ...and so on.
    Bye.
    GDS

    I think a test is not subjective; it's binary. I click here, here and here: I expect this result. If that does not happen, the test fails. If it fails, we can add a comment. (Having comments for successful tests could be a good point too.)

    Adding an "intermediate" choice introduces human judgement. It's stable or it's not. If it's sometimes unstable, then it's not stable.

    About responsiveness we have the same problem. What is fast, what is slow? You can always add a comment to the test if you find the functionality very slow, but this has nothing to do with stability, I think.
     

    gds

    Portal Pro
    October 4, 2004
    53
    0
    Italy
    Well, in an ideal world you'd be totally right. But software is never really error-free, so it's just a matter of defining metrics to say what works (and to what extent) and what doesn't. To make myself clearer, think about Microsoft releasing Windows XP: it certainly wasn't bug-free when released (a couple of Service Packs and some more updates prove I'm not wrong :D ...), but they let it out of the beta stage when most of the show-stopper bugs were solved for most of the possible HW configurations. It was not perfect, but under most conditions it worked. If they had waited for the "perfect" XP, it would never have come out...
    I think it would help the developers to know what always works, what works most of the time, and what doesn't work at all for most users, so that they can prioritize bug-hunting: working first and foremost on show-stopper bugs and leaving the least annoying ones for their spare time.
    And, by the way, I agree that non-binary test results can be conditioned by human judgement, but that is where statistics come in to help us figure out the real priorities.
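    As a hypothetical illustration of how statistics could smooth out individual judgement, three-degree stability reports from many testers could be aggregated into a single priority score (the sample data and weights below are made up for the example):

```python
from collections import Counter

# Hypothetical stability reports for one feature, one entry per tester,
# using the three-degree scale proposed earlier in the thread.
reports = ["totally stable", "unstable", "totally stable",
           "totally unstable", "unstable"]

counts = Counter(reports)
n = len(reports)

# Weight "totally unstable" twice as heavily as "unstable"; a higher
# score means the feature deserves earlier bug-hunting attention.
score = (2 * counts["totally unstable"] + counts["unstable"]) / n
print(round(score, 2))  # 0.8 for this sample
```

    With enough testers reporting, one tester's harsh or lenient judgement matters little, and the scores rank features by how urgently they need attention.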
     

    gds

    Portal Pro
    October 4, 2004
    53
    0
    Italy
    About responsiveness we have the same problem. What is fast, what is slow? You can always add a comment to the test if you find the functionality very slow, but this has nothing to do with stability, I think.
    Well, we are here to define the test structure, and I think the tests should cover the broadest possible range of issues. Given that the most important goal of a test set is to identify bugs, another important goal is to identify areas where the program behaves inadequately. By this, I don't mean we should have testers say: "hey, scrolling items in menus is tooo slow", but have them identify areas where lack of speed impedes using the software fruitfully (e.g., if MP records your favorite show but loses a frame here and there because its code is not optimized, you can say it works, but not that it works adequately).
    I hope I made myself clear.
    Bye.
    GDS
     

    GitsLM

    Portal Member
    October 19, 2004
    31
    0
    Germany / USA
    I want to keep it short and simple. Thanks a lot for the program, I like it a lot. Thanks for all the hard work you guys put into it, awesome.

    Let's think about the new features: it is funny, but most users do not even know the full potential of MP, since you develop too much too fast for them :D So I think it might really be a good idea to do a cleanup in the code, let the users catch up with the developers, and then go on with the development. But since I do not have a clue about open source projects and just use the outcome, maybe the other way is the one to go for. :roll:
     

    guytpetj

    MP Donator
  • Premium Supporter
  • March 24, 2005
    424
    57
    60
    USA
    Home Country
    United States of America
    Stability is the way to go.

    But how stable is MP? I'm still running 0.1.1.1, which for me is very stable.
    I work in the Australian outback (desert), 3 weeks at work, 2 weeks at home. During my 3 weeks away, the wife and kids have not managed to crash MP or the computer since I installed 0.1.1.1.

    The computer and MP are running 24/7, with on average 7 hours of MP time a day, either recording or viewing.

    Peter
     
