Unit testing guidelines

sys

Portal Member
January 4, 2006
Writing tests for existing code is a great way to help MediaPortal become faster and more stable.

Another way to help is to expose bugs by writing a test or two that fail as long as the bug is in place. If you don't feel comfortable fixing the code yourself, write a test for it and you are still helping the devs.

Unit Testing Guidelines:

Keep tests very small! A test should be around 10 lines long, no more. If you can't write a test that small, look at your code and see if you could split it up better. Still, any test is better than no test at all.

Write tests that are readable! Use good variable names and avoid long lines. If you have to comment your tests you are definitely writing them wrong; the test code should be self-explanatory.
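
For illustration, a short, readable NUnit test might look something like this (Playlist is a made-up class used only to show the style, not MediaPortal's actual PlayList code):

[CODE]
[Test]
public void NewPlaylistIsEmpty()
{
    // "Playlist" is a hypothetical class, for illustration only
    Playlist emptyPlaylist = new Playlist();

    Assert.AreEqual(0, emptyPlaylist.Count);
}
[/CODE]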

Don't create tests that depend on each other. If your tests are only green if you run them in a certain sequence, you are doing it wrong.
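
One way to keep tests independent is to rebuild the objects under test in a [SetUp] method, so no test inherits state from a previous one. A rough sketch, again with a made-up Playlist class:

[CODE]
[TestFixture]
public class PlaylistTests
{
    private Playlist playlist; // hypothetical class, for illustration only

    [SetUp]
    public void CreateFreshPlaylist()
    {
        // NUnit runs this before every [Test], so each test starts from a clean state
        playlist = new Playlist();
    }

    [Test]
    public void AddedSongIsCounted()
    {
        playlist.Add("track01.mp3");
        Assert.AreEqual(1, playlist.Count);
    }

    [Test]
    public void ClearRemovesAllSongs()
    {
        playlist.Add("track01.mp3");
        playlist.Clear();
        Assert.AreEqual(0, playlist.Count);
    }
}
[/CODE]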

Check boundary conditions heavily. If the parameter of a method expects values in a specific range, your tests should pass in values that lie across that range. For example, if an integer parameter can have values between 0 and 100 inclusive, three variants of your test might pass in the values 0, 50, and 100 respectively.
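
For instance, with a hypothetical SetVolume(int) method that accepts 0 to 100, the boundary tests could look like this (Mixer and its members are made up for the example):

[CODE]
[Test]
public void SetVolumeAcceptsLowerBound()
{
    Mixer mixer = new Mixer(); // hypothetical class, for illustration only
    mixer.SetVolume(0);
    Assert.AreEqual(0, mixer.Volume);
}

[Test]
public void SetVolumeAcceptsMidRangeValue()
{
    Mixer mixer = new Mixer();
    mixer.SetVolume(50);
    Assert.AreEqual(50, mixer.Volume);
}

[Test]
public void SetVolumeAcceptsUpperBound()
{
    Mixer mixer = new Mixer();
    mixer.SetVolume(100);
    Assert.AreEqual(100, mixer.Volume);
}
[/CODE]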

Use negative tests to be sure your code responds to error conditions appropriately. Verify that your code behaves appropriately when it receives invalid or unexpected input values. Verify that it returns errors or throws exceptions when it should. You might be surprised to find that a test you expected to fail actually succeeds. For example, if an integer parameter to a method can accept values in the range 0 to 100 inclusive, you might create tests that pass in the values -1 and 101.
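
With the same hypothetical Mixer, a negative test could check that out-of-range values are rejected. NUnit's ExpectedException attribute works for this; the example assumes the method throws ArgumentOutOfRangeException, so adjust to whatever the real code actually does:

[CODE]
[Test]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void SetVolumeRejectsValueBelowRange()
{
    Mixer mixer = new Mixer(); // hypothetical class, for illustration only
    mixer.SetVolume(-1);       // expected to throw; the test fails if it does not
}

[Test]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void SetVolumeRejectsValueAboveRange()
{
    Mixer mixer = new Mixer();
    mixer.SetVolume(101);
}
[/CODE]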

[Edit]
It's considered very bad form to commit failing tests, or tests that won't run on everyone's computer.
[/Edit]

Read more about unit tests and TDD:

http://www-128.ibm.com/developerworks/library/j-test.html

http://www.extremeprogramming.org/rules/unittests.html

http://www.nunit.org/

_________________
[sys] - Unit Testing Guru
Cleaning Maid Deluxe
Mad Professor
 

stefpet

Portal Member
December 12, 2005
Good initiative!

Is there any template/example unit test? I assume it is the NUnit framework that is used since it is linked (although not mentioned in the text). Are there any specific instructions on how to use it?

Is there a repository of existing unit tests? Are they/should they be checked in somewhere?
 

sys

Portal Member
January 4, 2006
> Is there any template/example unit test?
Well, there are some tests written. I'm partial to the ones written by myself :lol:
They can be found in MediaPortal.Tests\Core\PlayLists


> I assume it is the NUnit framework that is used since it is linked
> (although not mentioned in the text), is there any specific instructions
> of how to use it?

The NUnit website explains it all a lot better than I could ever try to. Go there! http://www.nunit.org/index.php?p=docHome&r=2.2.5
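
Roughly, the bare-bones shape of an NUnit fixture is just this (a generic sketch, nothing MediaPortal-specific; the namespace and names are made up for the example):

[CODE]
using NUnit.Framework;

namespace MediaPortal.Tests.Example // namespace chosen for illustration only
{
    [TestFixture]
    public class ExampleTests
    {
        [Test]
        public void TwoPlusTwoIsFour()
        {
            Assert.AreEqual(4, 2 + 2);
        }
    }
}
[/CODE]

Compile that into a test assembly and point nunit-console or the NUnit GUI at the resulting DLL, and the test shows up green or red.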

> Is there a repository of existing unit tests? Are they/should they
> be checked in somewhere?

MediaPortal.Tests is your friend.
 

TheTechnologist

Portal Member
January 13, 2005
[Edit]
It's considered very bad form to commit failing tests, or tests that won't run on everyone's computer.
[/Edit]

Uh, no, it's absolutely not. It's absolutely fine to have failing tests in a suite until the code they cover is fixed. Committing a test only when it passes is just stupid, as it doesn't let other devs see when a failing test starts passing because another, unrelated bug was fixed.

Furthermore, without these tests you can't track how many regressions you have.

Just the two cents of someone who has been using XP and TDD for three years.
 

sys

Portal Member
January 4, 2006
5
0
TheTechnologist said:
[Edit]
It's considered very bad form to commit failing tests, or tests that won't run on everyone's computer.
[/Edit]

Uh, no, it's absolutely not. It's absolutely fine to have failing tests in a suite until the code they cover is fixed. Committing a test only when it passes is just stupid, as it doesn't let other devs see when a failing test starts passing because another, unrelated bug was fixed.

Furthermore, without these tests you can't track how many regressions you have.

Just the two cents of someone who has been using XP and TDD for three years.

Well, I beg to differ. Since my plan is to set up continuous integration for this project, it is bad form to commit failing tests. If you feel differently, please show me a test in the MediaPortal source that you feel is OK to have failing. I'd rather not discuss theory, if that's OK with you.

Regards,

[sys]
 

TheTechnologist

Portal Member
January 13, 2005
I'm very sorry if my attempt to share a bit of my experience is considered a useless and insulting thing.

I am certain that with a bit more experience in development, CC, unit testing et al., you'll reconsider your point of view. And with more experience, I'm also confident you'll be more open to input from other people.

Let's discuss this again in a few months.
 

jslinnha

Portal Member
January 25, 2006
Cross in Hand, UK
[Edit]
It's considered very bad form to commit failing tests, or tests that won't run on everyone's computer.
[/Edit]

Did you mean that it is bad form to commit code that causes tests to fail?

I'm sure you can both agree on this.

BTW, a nice idea. Do these tests get run with every build or on a CVS change, and do emails get sent out when tests fail?

Where I work we have built a continuous integration system using Ant / CruiseControl (all Java-based, though the idea remains the same).

cheers
 

TheTechnologist

Portal Member
January 13, 2005
My point here is that adding test coverage to a codebase that was not written with tests at all (and probably has very poor coverage) means you need to commit certain failing tests to show where work needs to be done (argument sanitation is a common example) and to show the progress made. A new codebase is a different case, but requiring that all unit tests pass assumes you always commit a test at the same time as the code needed to make it pass, and it expects someone who discovers a bug and writes a test for it to also commit a code change without necessarily knowing what they're doing.

Anyway, let's talk about that at a later point; after all, I've never contributed back any of the changes from the branch I made of MediaPortal, so I shouldn't complain about anything at all.
 
