Release Testing

The subject of project management and release testing has been discussed in other topics. I am opening this topic to discuss how we implement some form of formal testing of major releases and rolling updates.

Testing is a thorny subject. It can be time consuming and formal testing can get quite tedious, especially if full test runs need to be repeated. It can also be fun and rewarding, watching those pie charts turn green :-). The purpose of the testing needs to be clear and each test should provide benefit, ideally avoiding repeating elements tested elsewhere. Tests should be as atomic as possible, allowing a direct correlation between a requirement and its test. This ensures functionality (and other elements) is thoroughly tested without tedious overlap.

I have several years’ experience of managing and performing testing on software projects. I have mostly used a commercial product called TestRail to track that testing. Today I tried out Test Collab, which provides a free plan with limitations (200 test cases and 400 executed cases). Much of my time today was spent trying to exceed those limits to discover the behaviour at the break point. I can report that it stops at the limit (400 executed cases), informs the user that they have exceeded their plan limit (and offers an upgrade), but retains the results recorded up to that point. Deleting previous results allows further testing. Each project seems to have its own allocation, so splitting tests across multiple projects within the same Test Collab account also extends the free offering. The product itself is similar to TestRail with slightly fewer features but is generally comparable. Test Collab supports multiple users per account with various roles and a fair degree of customisation and integration options. (There doesn’t seem to be a limit on the quantity of users for the free plan, though I haven’t tested that.) Paid plans are subscription based, priced by the number of users.

Regarding the actual testing, we should have functional tests, which check that the product functions as expected under normal conditions, and non-functional tests, which cover other elements such as where the break points are (stress testing), how long the product performs without degradation (soak testing), behaviour under higher than normal (but not excessive) load (load testing), etc.

Functional tests are basically a list of user requirements (what the product should do) written as a pair of:

  • Steps to exercise the function
  • Expected behaviour

Non-functional tests are similar pairs:

  • Steps to trigger the test condition
  • Expected behaviour

Each test is called a test case and is formed by a set of steps and the expected result.
Test cases are combined to form a test execution which has the purpose of testing a particular area or group of features, e.g. functional requirements.
Test executions are added to a Test Suite which has the purpose of testing a particular release. This may be a full set of tests for a major release like Aruk or a smaller set of tests to validate a software update.
Each test case may be assigned to a team member.
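
To make the hierarchy concrete, here is a minimal sketch of how test cases, executions and suites relate. This is just an illustration in Python, not Test Collab’s data model; all the names are made up:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    """A single requirement expressed as steps plus an expected result."""
    title: str
    steps: List[str]                # steps to exercise the function / trigger the condition
    expected: str                   # expected behaviour
    assignee: Optional[str] = None  # team member the case is assigned to
    result: Optional[str] = None    # "pass", "fail" or "skip" once executed

@dataclass
class TestExecution:
    """A group of test cases covering a particular area, e.g. functional requirements."""
    area: str
    cases: List[TestCase] = field(default_factory=list)

@dataclass
class TestSuite:
    """All the executions scheduled against a particular release, e.g. Aruk."""
    release: str
    executions: List[TestExecution] = field(default_factory=list)

    def pass_rate(self) -> float:
        """Percentage of executed cases that passed (those green pie charts)."""
        done = [c for e in self.executions for c in e.cases if c.result]
        return 100.0 * sum(c.result == "pass" for c in done) / len(done) if done else 0.0
```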

Test Collab can send emails informing team members of their allocation of tests and the results of test runs. It has some reporting features. It can export and import test cases, etc., which is valuable in case it proves to be an inappropriate platform.

I believe we can probably fit our test requirements within the free plan with some careful planning.

Ideally as much testing as possible is automated but before we can think about that we need to define what the tests are and run them manually to prove they perform the required test and produce the expected result data. Some tests may always need manual testing, e.g. physical tests like rotating a knob to check the encoder library works. (Someone in our community might provide a robot for this but let’s walk before we run.) Once there is a test pack we can start to automate elements, which may begin with assisted automation, e.g. a script triggers some activity then a human monitors the output. Further enhancement may lead to some fully automated tests.
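
As a rough illustration of what assisted automation might look like (a sketch only; the aplay command and sample file are examples, not part of any existing Zynthian test harness), a script could trigger the activity and then ask the tester to confirm the result:

```python
import subprocess

def assisted_test(description: str, command: str, question: str) -> bool:
    """Trigger some activity automatically, then ask a human to judge the result."""
    print(f"Running: {description}")
    subprocess.run(command, shell=True, check=False)  # e.g. start playback, send a MIDI note
    return input(f"{question} [y/n] ").strip().lower() == "y"

# Example: play a test sound, then ask the tester whether they heard it.
passed = assisted_test(
    description="Audio output check",
    command="aplay /usr/share/sounds/alsa/Front_Center.wav",
    question="Did you hear the test sound on the main outputs?",
)
print("PASS" if passed else "FAIL")
```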

As well as formal testing there is a real advantage to performing monkey testing which is basically playing with the product without a specific purpose (the fun bit). This often finds many issues that formal testing won’t find because the product gets exercised in ways that are unexpected.

I feel we need some measure of success so that we avoid endless monkey testing and vague ideas of, “That is probably about good enough” for our releases. Test suite content is driven by the milestone (another element of the test tool) and should include all the tests whose passing signals that the product is ready for release. If all tests in the test suite pass, or any failed tests are accepted by the project manager as follow-on actions, then the product is ready.

Testing needs to be dynamic and adaptable too. If during testing you find something that passes but should fail then the test case / execution / suite needs to be updated. This can happen during execution of a test suite, which can lead to a 99% passed test suite dropping to 88% passed, but that is normal. The more testing that is performed, the more accurate the test suites become and the less often this happens.

I am happy to start setting up some formal test monitoring if we want. Similar to issue tracking, this can seem process heavy and few people relish the prospect of formal testing but it really makes a difference to product credibility. If you consistently release good quality products then your users will stay. The converse is far truer. It is easy to lose customers through a single bad experience.

I have created a Zynthian project within my own Test Collab account and started to add test cases. These may be exported and imported into a Zynthian account. I have not found a way to create different configurations for the same test, e.g. testing audio output (same test) for different audio interfaces. This may need to be managed by process. If you want to look at this you may need to create a Test Collab account. The URL to this project is: https://riban.app.testcollab.com/index.php/#/new/project/3. I may need to add you as a user to the project. I would suggest that this is actually owned by @jofemodo but I can put some shape around it to get the ball rolling and gather opinion, then we can transfer the data.

I have added many test cases but not yet populated all these cases with steps.

Can you add me please?

It looks like you need the team member’s email address to add them to the project. IM an email address if you want to be added. (Bear in mind that this is only temporary and that if we decide it is a goer then we may move it to another account.)

Do I need edit rights to allow me to record the result of an execution?
I don’t seem able to assign it to myself.

Really nice environment.

Would we be able to get a meaningful level reading out of a sine wave file recorded and then played back?

Could we produce test asserts for level meter analysis as a test?

Does Engine X produce a signal that is at all times within ± of the reference level…?

We would add two messages: Engine X Play Test Tone and Engine X Stop Tone.

How much fruitful sine wave analysis could we do?
Counting cycles in a period of time would, presumably, be relatively simple and would act as a rather fine confirmation that sample rates et al. are nicely aligned, as well as giving an agreed reference tone?
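
As a very rough sketch of that kind of analysis (assuming the tone has been captured to a mono 16-bit WAV file; the file name, reference frequency, level and tolerances below are placeholders, not agreed values), counting zero crossings gives an estimate of the frequency and the RMS gives a level to check against the reference:

```python
import wave
import numpy as np

def analyse_tone(path: str, ref_freq: float = 440.0,
                 ref_level_db: float = -20.0, tol_db: float = 1.0):
    """Estimate frequency and RMS level of a recorded test tone (mono 16-bit WAV assumed)."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0

    # Frequency: count positive-going zero crossings over the capture duration.
    crossings = np.sum((samples[:-1] < 0) & (samples[1:] >= 0))
    freq = crossings / (len(samples) / rate)

    # Level: RMS expressed in dBFS.
    level_db = 20.0 * np.log10(np.sqrt(np.mean(samples ** 2)))

    freq_ok = abs(freq - ref_freq) < ref_freq * 0.01   # within 1% of the reference frequency
    level_ok = abs(level_db - ref_level_db) <= tol_db  # within ±tol_db of the reference level
    return freq, level_db, freq_ok and level_ok

# Hypothetical usage: send "Engine X Play Test Tone", capture the output, then check it.
freq, level, passed = analyse_tone("engine_x_tone.wav")
print(f"Measured {freq:.1f} Hz at {level:.1f} dBFS -> {'PASS' if passed else 'FAIL'}")
```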

I had set you to the role of “Tester” which does not allow assigning tests to users. I have changed your role to “Developer” which has lots more permissions. This is configurable so we can tweak each role / add new ones to suit our workflows.

The environment is pretty good. There are a few things that irk me, such as often being placed at the wrong position on a page when doing repetitive work, requiring extra scrolling or navigation, but it mostly does a good job.

I have created a fairly comprehensive set of tests though I have not populated all the tests with steps. For some I don’t yet know how to use the feature, so am currently unable to write those test cases.

An execution is presumably an instantiation of a test case. So where do I tell it that an endurance test (for instance) passes?

Correct, an execution is a test run - the collection of test cases / instance of a test suite that has been scheduled for testing. This bit of the application seems a bit limited. You can navigate to the execution then click “Start Now”. This shows what they call the pre-execution screen, which is a list of the test cases and a bit of detail about each. There is another “Start Now” button which takes you into the actual tests where you can pass / fail / skip each step as well as the whole test case. (I recommend recording results for each step.) There does not seem to be any way to directly select the test you want to do - you have to start at the beginning and step through them all until you reach the desired test case. This is a significant limitation; it seems unlikely to be an intentional constraint but I haven’t found a way around it yet. (In TestRail you can select any test and record results against it, giving random access, which is the way I prefer to work. There are times when you can work through certain tests sequentially, but not all.)

The online docs are quite good and worth checking if you get stuck.

Guys, I’m co-founder of Test Collab and stumbled across this by accident while doing our backlink analysis. I’m still not sure if it was meant for public sharing or not; if it wasn’t, I apologise in advance - feel free to delete me from the thread or anything :grinning:

I couldn’t help posting here, since you had questions - I had to answer… hehe :smiley:

We have custom fields for that purpose: TestCollab Help Center

Thank you kind sir.

Actually we have 2 modes of execution (not sure if you’ve gone through this?): TestCollab Help Center

And finally, we have wonderful support right there in the app - even if you’re on the free plan. You can message us anytime using the chat icon at the bottom right.

Always nice to see interest. The constant repetition of testing makes the use of a software tool essential and it needs to be one that doesn’t nag or insist. This would seem to meet those criteria…

Now should I ask him for a sound sample…?

I think we can let this particular contributor off the hook :slight_smile:.

Thanks for the info. I have configured it to allow random access to test cases within an execution.

I have added a custom label but I have not yet figured out how to add several instances of the same test case to a test execution for different configurations. The help page tells me that there are various ways to use custom fields but I have yet to work out how to actually implement that.

Has anyone got 500ml of distilled water…?

I haven’t finished that test case yet. I need to add the chain saw and sledge hammer…

Nearly got to 50% passes . . . .

@wyleu has been doing a sterling job of pretty much single-handedly performing these formal tests, from which we are learning how they may be improved to give the best benefit, and also gaining confidence that Aruk is release-ready. We need YOUR help because we want to test each configuration, e.g. different screens, audio and MIDI interfaces, etc. If you have some time to perform a few tests then please IM an email address to me so that I may add you to the TestCollab project.

We are also learning how best to use the test record tool (TestCollab) which is fairly good but lacks a couple of features that would be useful.

Just added some fairly vague Key Binding tests.
But certain test components are beginning to emerge.
Key Binding Tests

Key Bind test.
Just run some keybind tests. . .