Test Parameterisation
Overview

Test Parameterisation is a powerful practice for enhancing your testing scenarios. The concept is simple: your Test Cases may define and use parameters. During the execution of a test case the parameters are filled in with their actual values, so a parameterised variation of the original test case is produced. The main benefit of parameterisation is that a single test case definition can be run with many different value sets instead of being duplicated for each variation.
Using Parameters in Test Cases

Every test case can be parameterised simply by using parameters when editing the test case's definition. A parameter is defined using the ${parametername} notation in the wiki text of the Test Case; for example, a "speed" parameter can be used in the name field of a test. That is all: use any name for your parameters, the parameter will be automatically recognised by CodeBeamer, and the ${parametername} placeholder will be automatically filled in with the actual value at the run time of the test. Parameters can be used in any wiki-text field of the Test Case, such as the name, the description, and the test steps.
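For illustration, a parameterised test description could look like this in the wiki text (the parameter names here are examples only):

{{{
Accelerate the car to ${speed} km/h and keep that speed for ${duration} seconds.
}}}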
Default values for Parameters

When placing parameters into Test Case definitions you can also provide a default value for the parameter. This is done using the ${parameterName|defaultValue} format: the "|" (pipe) character separates the parameter name from its default value. For example, ${car|bmw} defines a parameter named "car" whose default value is "bmw". As you would expect, the default value is only used if the parameter does not receive a value otherwise: if a value is provided in the containing Test Set, in the Test Case, or in one of its parent Test Cases, the default value is ignored.

Empty string values in parameters

If a parameter has an empty string value such as " " (just one space), it is handled as if the parameter were missing. In this case the default value of the parameter is used, if one is provided.

Providing Parameter values in Test Cases or Test Sets

Assuming you have used parameters in your Test Case definitions, those parameters still need values. Values can be provided on each test case when editing it, in its "Test Parameters" section. Values can also be assigned to a test set: the same "Test Parameters" section appears when editing a test set. Values provided on a test set are shared with, and available to, all test cases that are part of that set. (The exact parameter resolution algorithm is a bit more complicated than this; it is explained later.) There are currently three ways to provide parameter values: as name=value pairs or a Wiki table in the "Parameters in Wiki" editor, as an Excel sheet in Document Management, or dynamically from cbQL or other sources.
Providing Parameter values as name-value pairs

The simplest way to provide parameter values is entering them as name=value pairs in the "Parameters in Wiki" wiki editor; for example, the line speed=100 gives the "speed" parameter the value "100". The rule is simple: each line in the wiki text is read as one parameter=value assignment if possible. You can optionally put the values into a preformatted block (curly brackets {{{..}}}); if it is missing, it will be added automatically on save. You can only define one-dimensional parameter sets this way, however; for multi-row parameters use the Wiki tables explained next.

Providing Parameter values as a Wiki table

Parameters can also get their values from Wiki tables. Just add a Wiki-formatted table to the "Parameters in Wiki" wiki editor, and the contents of that table will be used as parameter values. An example:
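A table such as the following could be entered (the parameter names and values are illustrative only):

{{{
||speed||car
|100|bmw
|120|audi
}}}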
The values are taken from the Wiki table as follows: each column header names a parameter, and each data row provides one complete value set (in the example above, the first row assigns speed=100 and car=bmw).
Practically, by defining a table you define a multi-row parameter set. This means that when the test case is run, it will be run multiple times: once for each value row in the table. (It is a bit more complicated than that: this is absolutely correct only if the test uses all parameters from the table, as explained later.)

Providing Parameter values in Excel sheet in Document Management

The next way to provide parameter values is by selecting an Excel sheet from Document Management. A special ".codeBeamer/Test Parameters" directory is automatically created in the current project's Document Management, where the Excel files can be uploaded. These files can then be used as parameter data sources by picking them on the "Excel in Document Management" tab.
Hint: the "Test Parameters" folder is easy to find in Document Management by clicking the "Test Parameters" link on this screen.
The rules here are essentially the same as for Wiki tables: the first row of the sheet holds the parameter names, and each following row provides one value set.
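Assuming that layout, the contents of such a sheet could look like this (illustrative values only):

{{{
speed | car
100   | bmw
120   | audi
}}}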
The benefit of using an Excel sheet in Document Management is that you can reuse and share the sheet between different test cases and test sets. Additionally, Document Management controls the access to the shared file and manages its versioning/history, so you have greater control over the changes.

Providing Parameter values dynamically from cbQL or any other sources

See: Tip: using dynamic data as Test Parameters like cbQL results or any plugins' result

Sharing parameter values between Test Cases and Test Sets: Parameter inheritance

As mentioned earlier, parameter values can be assigned to both Test Cases and Test Sets. Test Cases and Test Sets can be organised into tree hierarchies (i.e. any Test Case or Test Set can have a parent and many children). These parent-child hierarchies are used when looking up parameter values: the "child" elements inherit the parameter values defined on higher levels. Practically, it is desirable to group functionally similar Test Cases below a parent/container Test Case, and/or into one Test Set or a hierarchy of Sets. If tests are organised this way, it becomes easy to share parameter values between them by defining the values on the parent elements.

How inheritance works: the search for parameter values

When a parameterised test runs, the Test Runner searches for the missing parameters in these hierarchies bottom-up: first the test case itself, then its parent test cases, then the containing test set and its parent test sets.
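A minimal sketch of this lookup, assuming a simplified model where each element carries a single name-to-value map (the real resolver also handles multi-row parameter sets and merging, as described later):

{{{
def resolve(element, needed_params):
    """Walk bottom-up from a test case through its parent test cases and
    the containing test sets, collecting values until every needed
    parameter has one; values found lower in the hierarchy win."""
    collected = {}
    while element is not None and not needed_params <= collected.keys():
        for name, value in element.params.items():
            collected.setdefault(name, value)  # child values take precedence
        element = element.parent  # parent test case, then test set, etc.
    return collected
}}}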
In the following example the search for the parameter values is performed in this order: "A" TestCase, "B" TestCase, "Y" TestSet and finally "X" TestSet:

{{{
X (TestSet)
└── Y (TestSet)
    └── B (TestCase)
        └── A (TestCase)
}}}

The search stops as soon as some value has been found for every parameter. (For example, if all parameters receive a value by scanning the current Test Case, then no parents are scanned at all.) During this search it may happen that multiple parameter sets are defined at different levels of the scanned hierarchy. These parameter sets are kept and merged if necessary to produce a complete parameter set; the details are explained in "Advanced: Parameter resolution inheritance and merging".

Reviewing and managing Parameter configurations

Once you start using parameters and have provided a few values, questions arise, such as: which values will a given test case actually use, and where do they come from?
The "Test Parameter" section shown on the detail page shows this information. The tab is available for both test test cases and test sets. An example screenshot:
This part contains the parameters with their resolved values, together with an indication of where each value comes from.
Parameterised Test Running

When running parameterised tests, the Test Runner automatically resolves the parameters' values and replaces the parameter placeholders with the actual values. This means that the parameters are no longer visible in the text of the test; however, the actually used parameters are displayed for the user. When the test case contains parameters, the "Using parameters..." side-bar appears in the Test Runner. It displays the parameters and the actual values used for the current run.
If a parameter's value is missing, the original placeholder is left in the text. For example, if the ${speed} parameter has no value, the text "${speed}" is kept in the test's text.

Providing missing parameter values during the run of a Test

Starting with codeBeamer 8.1.0, if a parameter's value is missing while running a Test, the Test Runner asks for values for the missing parameters: a dialog appears where the Tester has the option to fill in the missing values. The values entered here are used for the missing parameters of the current run.
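Putting the substitution rules together (actual values filled in, ${name|default} defaults applied, blank values treated as missing, unresolved placeholders kept), here is a minimal sketch in Python, assumed logic rather than the product's actual implementation:

{{{
import re

PARAM = re.compile(r"\$\{(\w+)(?:\|([^}]*))?\}")  # ${name} or ${name|default}

def fill(text, values):
    def repl(match):
        name, default = match.group(1), match.group(2)
        value = values.get(name, "")
        if value.strip():            # a blank value counts as missing
            return value
        if default is not None:      # fall back to the declared default
            return default
        return match.group(0)        # no value at all: keep the placeholder
    return PARAM.sub(repl, text)

print(fill("Drive the ${car|bmw} at ${speed} km/h", {"speed": "100"}))
# -> "Drive the bmw at 100 km/h"
}}}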
Running a test several times driven by parameter-values

If the parameters defined for a test have multiple values (for example, there are multiple matching rows in the Excel sheet), the Test Runner will display the same test as many times as there are value sets available. The number of tests to run is displayed in the runner. The Test Runner automatically chooses and offers the next test parameters by picking the first parameter set which has not been run yet. If the Tester wants to choose a different parameter set manually, it can be selected by clicking on the "Select..." button in the runner.
An overlay dialog then appears, showing the parameter sets which have not been run yet. It is not possible to select already completed parameter sets here.
The example shows three parameter sets available. The current parameter set is selected, but by clicking on another one and choosing the "Select" button the user can pick a different one. If you select a new parameter set here, the current test results will be lost, because the Test Runner reloads with the new parameters; a warning dialog is shown to prevent accidental loss. The Tester can also skip the remaining parameters completely: the remaining parameterised runs of the current test case can be skipped by clicking on the "Skip All" button in the parameters' side-panel. This can be useful if there are simply too many parameter sets found or generated, for example hundreds or more.

Skipping some Test parameters and Partly Passed result

When running tests with parameters, the Tester may decide to skip some of the parameter sets. That can be done by clicking on the "Skip" button next to the parameters' part inside the Test Runner.
When parameter sets are skipped, this is indicated by a special result for the Test Run: Partly Passed. This means that the test was executed and passed for some parameter sets, while the other parameter sets were skipped.
The Partly Passed result is handled similarly to the Passed result; however, it is indicated on the Test Run page as follows:
The skipped parameters are remembered, and a "Partial" badge indicates the partial results. During the run you can still select and re-run a previously skipped parameter set using the "Select..." button of the parameters.

Adding Partly Passed support to legacy Test Run trackers

The Partly Passed result was added in codeBeamer 7.8; if you have an older Test Run tracker where you want to see this result too, the missing "Partly Passed" option has to be added to that tracker's configuration.
Reporting bugs of parameterised test case runs

When a test with parameters is run and a bug is reported in the Test Runner, the generated bug report automatically contains the actual values of the parameters. This makes the bug reproducible.

Reviewing Parameterised Tests' results

After a Test Run is complete, the result of each parameterised test can be viewed on the Test Run's "Children" tab. For example, when a test had 3 parameter sets, it has 3 matching runs: the "Run of Engine..." test in the example had 3 parameterised runs. They appear below, and as children of, a container test run which is automatically created when a single test case has several parameterised runs. This container test run summarises the result of the parameterised runs; in our example it is "FAILED" because one of the parameterised runs is FAILED. Additionally, if you click on the resulting test run of a parameterised test, you will see the actual parameters in the description of that test run.

Parameterised Tests and the Coverage Browser

As expected, the Coverage Browser shows the results of parameterised tests too; however, it will not show all parameterised test runs, only their aggregated result. For example, if the "Engine test" was run 3 times (because it had 3 parameter rows) and all were PASSED, the coverage will show one PASSED test. If one of these 3 runs FAILED, the coverage will show FAILED as well.

Advanced topics

Configuring existing Test Case and Test Set trackers for parameterisation

For using (i.e. defining) parameters you do not need any configuration: entering the ${parametername} placeholders in the description or name fields works seamlessly. For configuring parameter values, you should add a new wiki field named "Test Parameters" to the Test Case or Test Set tracker's configuration. This field stores the configured values (or the reference to the Excel sheet): open the configuration GUI and add a new custom field accordingly. The permissions of this field should be set so that typically the "Test Engineer" and "Stakeholder" roles have read-write permission, as they will configure the parameter values. The "Tester" roles should have at least read permission; otherwise the configured parameters will not be found during the run of the test. Also, if you want to use an Excel sheet in Document Management, set the permissions of that Excel sheet so that all "Testers" have read access on these parameter files. For Test Run trackers and Bug trackers there is no need for configuration: the parameters' values are stored in the "description" of these items, and as these fields are normally writable by Testers, this works fine by default.

Parameter Resolution: Inheritance and Merging

When a test case is run, the test runner resolves the parameter values as follows:
The search for parameters is performed bottom-up (from the children towards the parents) and continues until all parameters have a value; the search stops when values for all parameters are found. While searching, any partially complete parameter sets are merged together in order to build a complete parameter set.

Runner keeps only the used and distinct parameters from the parameter values

The parameter values found during the search may contain many more parameters than the current test case needs. This is especially likely if the parameter source is shared between several test cases. When this happens, the parameter resolution algorithm ignores and removes the irrelevant values and keeps only the unique/distinct value sets, in order to avoid duplicated runs of a test case with the same parameters. As an example, imagine that the current test case uses only the parameter ${fruit}, and the parameter values found are in this table:
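(A reconstructed illustration; the concrete values are examples only.)

{{{
||fruit||city||price
|banana|London|100
|banana|Paris|200
}}}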
Because the "city" and "price" parameters are unused in this test case and do not appear anywhere in it, they are irrelevant. These values/parameters are therefore dropped from the parameter set, so only 2 rows remain, both having the value fruit=banana. Running that would execute the same test with the same "banana" parameter twice, which would be quite annoying for Testers. To avoid this, only the unique parameter-set combinations are kept. Essentially this works the same way as SQL's SELECT DISTINCT statement.

Merging (joining) parameter values

When scanning parameter values from the test case and test set hierarchy, it may happen that partially complete parameter sets are found on different levels. These parameter sets are joined together to fill in the missing parameter values required for running the test case.

Cross joining parameter values

Let's see an example:
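(A reconstructed illustration, assuming the test case defines values for ${fruit} and its parent test set defines values for ${city}; the concrete values are examples only.) On the test case:

{{{
||fruit
|apple
|banana
}}}

On the parent test set:

{{{
||city
|London
|Paris
}}}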
During the scan for parameters the resolver finds the 1st table, but it only contains values for the ${fruit} parameter. These values are kept, and the scan continues to the parent test set, which contains the 2nd table with several values for the ${city} parameter. The join algorithm then performs a CROSS JOIN of these 2 tables in order to produce the possible parameter combinations. Effectively the test will run with 4 different parameter rows produced by the cartesian product:
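(Reconstructed result table, continuing the illustrative values above.)

{{{
||fruit||city
|apple|London
|apple|Paris
|banana|London
|banana|Paris
}}}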
Natural (left outer) joining parameter values

The cross join is convenient, but it may produce too many parameter rows. If the parameter sets have some common columns, however, a NATURAL LEFT (OUTER) JOIN is performed, in a similar way as SQL databases would do it. An example:
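(A reconstructed illustration; the values are examples only.) The 1st parameter set, found on the test case:

{{{
||fruit
|apple
|banana
|cherry
}}}

The 2nd set, found on a parent element, shares the "fruit" column:

{{{
||fruit||city
|apple|London
|apple|Paris
|banana|Berlin
|banana|Rome
}}}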
The resolution algorithm will then join the 2 parameter sets similarly to a NATURAL LEFT JOIN in SQL. This means that it joins the 2 tables using the common parameter (=column) names in the parameter sets, and for each row in the 1st set it joins all matching rows in the 2nd. A LEFT OUTER join is performed, so every row of the 1st set is kept even when there is no matching row in the 2nd set. The result in this case will be:
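(Reconstructed result table, continuing the illustrative values above; the "cherry" row is kept by the outer join even though it has no matching city.)

{{{
||fruit||city
|apple|London
|apple|Paris
|banana|Berlin
|banana|Rome
|cherry|
}}}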
So the test case will be run 5 times, once for each row of this joined parameter set.

Limiting number of combinations produced by joins

As the previous examples show, the JOINs may produce a large number of combined rows. Therefore the resolution algorithm limits the number of rows joined to one "left" row to 10: if one row of the "left" parameter set has more than 10 matching rows on the "right" (using natural or cross join), only the first 10 rows are joined and the rest are ignored. This helps to reduce the number of combinations.
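To summarise the joining rules, here is a minimal sketch in Python, a simplified model under the assumptions above rather than codeBeamer's actual implementation; the cap of 10 matches per "left" row reflects the limit just described:

{{{
MAX_MATCHES_PER_LEFT_ROW = 10  # the limit described above

def join(left_rows, right_rows):
    """Natural left outer join of two parameter sets (lists of dicts).
    With no common column names this degenerates into a cross join."""
    result = []
    for left in left_rows:
        matches = []
        for right in right_rows:
            common = left.keys() & right.keys()
            # vacuously true when there are no common columns -> cross join
            if all(left[c] == right[c] for c in common):
                matches.append({**right, **left})  # "left" (child) values win
        if not matches:
            matches = [dict(left)]  # LEFT OUTER: keep unmatched left rows
        result.extend(matches[:MAX_MATCHES_PER_LEFT_ROW])
    return result

case_rows = [{"fruit": "apple"}, {"fruit": "banana"}, {"fruit": "cherry"}]
set_rows  = [{"fruit": "apple", "city": "London"},
             {"fruit": "apple", "city": "Paris"},
             {"fruit": "banana", "city": "Berlin"},
             {"fruit": "banana", "city": "Rome"}]
print(join(case_rows, set_rows))  # 5 rows; the "cherry" row has no city
}}}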