Testing Async Systems

{{jfunreport
 * title = Testing Async Systems
 * convenor = Phil Shotton

--Kon Soulianidis 20:54, 14 September 2011 (UTC) Notes

What is Testing / Why do it?

0/ Functional requirements

1/ Failure Prevention

2/ Regression testing, and support for refactoring

3/ Better Code by Design

4/ Cost - changing code while you are writing it is easy and cheap; the cost to fix a defect grows the later it is found

5/ Confidence

How do we test?
Unit tests (component/class level) --> component integration --> staging --> functional/integration --> testing during production
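At the unit-test end of that pipeline, the session's theme (async systems) is what makes testing awkward: the result arrives on another thread. A minimal sketch, using only the JDK (the worker and names here are hypothetical, not from the session), of waiting on a latch with a timeout instead of sleeping:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncLatchExample {
    // Hypothetical async worker: invokes the callback on another thread.
    static void computeAsync(int input, java.util.function.IntConsumer callback) {
        new Thread(() -> callback.accept(input * 2)).start();
    }

    // Unit-level test helper: block on a latch with a timeout so the test
    // fails fast if the callback never fires, rather than hanging forever.
    static int awaitResult(int input) {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicInteger result = new AtomicInteger();
        computeAsync(input, value -> {
            result.set(value);
            latch.countDown();
        });
        try {
            if (!latch.await(2, TimeUnit.SECONDS)) {
                throw new AssertionError("callback not invoked within timeout");
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return result.get();
    }

    public static void main(String[] args) {
        int r = awaitResult(21);
        if (r != 42) throw new AssertionError("expected 42, got " + r);
        System.out.println("async unit test passed: " + r);
    }
}
```

The timeout turns "the callback never came" from a hung build into a failing test.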

Most people don't do enough testing, and code fails.

Functional Tests
Requirements - when the requirements aren't specified properly, the functional tests suffer.

It all comes down to resources available

We also have the non-functional requirements such as performance.

Interconnects
In software development there are too many interconnects; unlike traditional engineering, you can't test the whole system end to end.

We have to break the system up and test the pieces individually.

Should we test each part individually, at the API/interconnect level?
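One common answer is to make the interconnect an interface and test against a hand-rolled fake, so each side can be tested without the other. A sketch (the `PriceService` names are invented for illustration, not from the session):

```java
import java.util.HashMap;
import java.util.Map;

public class InterconnectFakeExample {
    // The interconnect: an assumed interface between two components.
    interface PriceService {
        double priceFor(String sku);
    }

    // Component under test - depends only on the interface.
    static double totalPrice(PriceService prices, String... skus) {
        double total = 0;
        for (String sku : skus) total += prices.priceFor(sku);
        return total;
    }

    // Fake implementation used only in tests; no network, no real service.
    static class FakePriceService implements PriceService {
        private final Map<String, Double> table = new HashMap<>();
        FakePriceService with(String sku, double price) {
            table.put(sku, price);
            return this;
        }
        public double priceFor(String sku) {
            return table.getOrDefault(sku, 0.0);
        }
    }

    public static void main(String[] args) {
        PriceService fake = new FakePriceService().with("a", 1.5).with("b", 2.5);
        double total = totalPrice(fake, "a", "b");
        if (total != 4.0) throw new AssertionError("expected 4.0, got " + total);
        System.out.println("interconnect test passed: " + total);
    }
}
```

The same fake also lets you script failure responses at the interconnect, which real services rarely produce on demand.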

How do you design a good test?

 * The analogy Alexander provided: when teaching, he asked his students to write the requirements for a simple sorting algorithm. No student captured all of the requirements (about 90% coverage).

Failure Scenarios

 * How do you simulate them?
 * Testing all the conditions under which failures could happen is very hard.
 * Time-related edge conditions are especially hard.
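One standard way to make time-related edges testable is to inject the clock: production code takes `System::currentTimeMillis`, tests pass a fake they control, so boundary instants can be hit exactly. A minimal sketch with an invented `SessionTimeout` class:

```java
import java.util.function.LongSupplier;

public class FakeClockExample {
    static class SessionTimeout {
        private final LongSupplier clock;  // injected clock, not System time
        private final long start;
        private final long ttlMillis;

        SessionTimeout(LongSupplier clock, long ttlMillis) {
            this.clock = clock;
            this.start = clock.getAsLong();
            this.ttlMillis = ttlMillis;
        }

        boolean expired() {
            return clock.getAsLong() - start >= ttlMillis;
        }
    }

    public static void main(String[] args) {
        long[] now = {1_000L};                 // mutable fake time
        LongSupplier fakeClock = () -> now[0];
        SessionTimeout session = new SessionTimeout(fakeClock, 500);

        if (session.expired()) throw new AssertionError("fresh session expired");
        now[0] = 1_499L;                       // one millisecond before the boundary
        if (session.expired()) throw new AssertionError("expired too early");
        now[0] = 1_500L;                       // exactly on the boundary
        if (!session.expired()) throw new AssertionError("did not expire on boundary");
        System.out.println("time edge-condition tests passed");
    }
}
```

With the real clock, the 1_499-vs-1_500 boundary is practically impossible to hit deterministically; with an injected one it's a plain assertion.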

Functional Degradation

 * One function failure in one system can have an impact on another.
 * Failing fast is a very important pattern.
 * Marc Hoffman's company mirrors the production systems - including the data - to the test system.
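The fail-fast point can be sketched in code: bound a call to a slow or failed dependency with a timeout and fall back, so one component's failure doesn't stall its callers. This is my own illustration using plain `java.util.concurrent`, not a pattern named in the session:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FailFastExample {
    // Run the dependency call with a deadline; on any failure, degrade
    // to the fallback instead of propagating the stall upstream.
    static String fetchWithTimeout(Callable<String> dependency, long timeoutMillis, String fallback) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> f = pool.submit(dependency);
            return f.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            return fallback;  // fail fast: give up quickly, degrade gracefully
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A dependency that hangs: the caller gets the fallback after ~100 ms, not forever.
        String slow = fetchWithTimeout(() -> { Thread.sleep(10_000); return "late"; }, 100, "fallback");
        if (!slow.equals("fallback")) throw new AssertionError("expected fallback");

        // A healthy dependency answers normally.
        String ok = fetchWithTimeout(() -> "data", 100, "fallback");
        if (!ok.equals("data")) throw new AssertionError("expected data");
        System.out.println("fail-fast tests passed");
    }
}
```

The test itself simulates the failure scenario (a hung dependency) deterministically - the simulation problem raised above.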

Expediency

 * When something breaks, a developer says "I know where to fix this" and makes the fix, but no regression testing occurs - the fix is expedited because of cost.

Cost is the excuse for why it isn't done. When a manager says "we don't need tests because we don't have time", it would be good to track that decision - capture it as a functional requirement.


 * recommendations = (as above)

}}