Test Management with Codebeamer

Watch videos on test management here.

Overview

The test management facilities in codebeamer enable controlled testing of software, hardware and any kind of product or system.

What makes codebeamer different from other test management software on the market is its holistic approach to collaborative testing. In codebeamer, tests do not exist in an isolated silo, but are tightly integrated with the requirements, the Wiki documentation, the bug and task trackers, the source code and the full lifecycle of the tested product.

codebeamer's access control and web interface make it ideal for collaborative testing, dividing the work between test engineers (who define the test cases to be executed, specify the configurations and releases to be used, and coordinate the work) and testers (who execute the test cases).

How does Test Management work?

The best practice, summarized:

  1. Define the requirements of the product precisely.
    Ex: battery life must be at least 5 hours.
  2. Set up a Test Plan with Test Cases to verify whether those requirements are met. A test case consists of pre-actions, test steps and post-actions.
    Ex: Pre: charge the battery. 1. Turn the device on. 2. Leave it turned on for 5 hours. 3. Check the battery indicator. Is there 10% left? Post: turn the device off.
  3. Group test cases into Test Sets. Grouping can happen according to multiple aspects: by type, by importance and so on.
    Ex: Smoke Tests - this set contains the test cases that must pass even for the internal alpha releases.
  4. Define Test Configurations.
    Ex: Android Honeycomb, Android Ice Cream Sandwich and Android Jelly Bean.
  5. Define releases. These are the versions of your product.
    Ex: 0.9.1-beta or 1.0.0-GA.
  6. Initiate Test Runs by selecting a Test Set, one or more Test Configurations and one or more Releases.
    Ex: Execute Smoke Tests on HoneyComb with 1.0.0-GA.
  7. Execute the Test Runs. If problems are found while executing the runs, you can quickly report them without leaving the context of the test run.
  8. Analyze the results and the coverage to make sure that nothing is left untested and your quality criteria are met.
    Ex: make sure that the battery life tests pass on all Android configurations with the 1.0.0-GA release.

Glossary

Testing Roles

Test Engineer

A Test Engineer is a professional who determines how to create a process that would test a particular product, in order to assure that the product meets applicable specifications. The Test Engineer produces the Test Cases.

Tester

A Tester is responsible for the core activities of the test effort, which involves conducting the necessary tests and logging the outcomes of testing.

Test Plan

A Test Plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements.
A Test Plan is typically prepared by Test Engineers. Technically, the Test Plan is a tree structure that consists of concrete Test Cases and folders. Folders are container items to group related test cases or subfolders.
In codebeamer, Test Plans are built and maintained in the "Test Cases" tracker. Both folders and test cases are represented by tracker items in that tracker.

Test Case

A Test Case is a detailed procedure that fully tests a feature or an aspect of a feature. Test cases consist of detailed descriptions, pre-actions, test steps, and post-actions.
Pre-actions should be executed before the actual test steps to initialize the environment, to populate test data and for other types of preparation. Post-actions should be executed after the steps, to close open resources and to clean up the environment. Post-actions should be executed regardless of the actual result of the test case, unless documented otherwise.
Technically, a Test Case is a tracker item in the "Test Cases" tracker.

Test Step

A Test Step is one step in a Test Case procedure.

Watch a video on Test Cases and Test Steps here. Please note that the default values for the "Action" and "Expected result" fields in the Test Case configuration do not have any effect during test step creation and Test Runs.

Test Set

A Test Set is a logical group of related Test Cases. It is associated with a list of releases it is allowed to be executed with, and a list of test configurations that are allowed to be used while executing it.
Technically, Test Sets are maintained in the "Test Sets" tracker.

Test Configuration

A Test Configuration is one possible configuration of the product under testing.
Tests can lead to different results depending on the configuration used, therefore configurations must be recorded for effective testing.
Technically, Test Configurations are stored in the "Test Configurations" tracker.

Test Run

A Test Run is an actual execution of a Test Set using one specific Release and one specific Test Configuration.
Test Runs are initiated by Test Engineers and are executed by Testers.
Test Runs are maintained in the "Test Runs" tracker.

Watch a video on Test Sets and Test Runs here.

Test Results

The system starts showing results for the test run as soon as a result is available for at least one test case in the run. This means that, depending on the status of the Test Run, the results shown by the system can be partial or final.

Passed

Passed results mean that all executed test cases and all checked aspects worked as expected so far.

Partly Passed

Similar in meaning to the Passed result: all executed test cases have passed. It is used when the test is parameterized and the Tester skips some of the test parameters; all parameters that were used passed.

For more information see: Test Parameterisation and What is Partly Passed?

Failed

A test run is considered failed if at least one of the checked aspects (test cases) did not work as expected.

Blocked

Meaning: the test could not be completed yet, because something blocked the full completion of the testing procedure. Tests with this result are not considered closed! After removing the cause of the block, the tests should be continued from the point of the block.

Not Applicable

Test step and test run results can be set to Not Applicable in case certain steps or runs are considered not relevant. Test steps with Not Applicable results can be ignored.


Logic of the Not Applicable result:

  • When at least one Test Step Result has a value other than Not Applicable, the result of the Test Run shall be calculated as if the Test Steps with a Not Applicable result did not exist. That is, they are ignored.
  • When all of the Test Step Results in a Test Run have a value of Not Applicable, the result of the Test Run shall be Not Applicable.
  • If a Test Run has only Passed and Not Applicable evaluations, the final result should be Partly Passed (only in case of Steps).
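
To make these rules concrete, here is a minimal, illustrative sketch of how such a calculation could look. It is not codebeamer's actual implementation, and the precedence between Failed and Blocked shown here is an assumption for the sake of the example.

def test_run_result(step_results):
    # Derive a Test Run result from its Test Step results, following the
    # Not Applicable rules above. Illustrative only.
    relevant = [r for r in step_results if r != "Not Applicable"]

    if not relevant:
        return "Not Applicable"        # all steps are Not Applicable

    # Not Applicable steps are ignored; the Failed-before-Blocked precedence
    # below is an assumption, not something stated in this section.
    if "Failed" in relevant:
        return "Failed"
    if "Blocked" in relevant:
        return "Blocked"

    # Only Passed (and possibly Not Applicable) evaluations remain.
    if len(relevant) < len(step_results):
        return "Partly Passed"         # Passed + Not Applicable mix
    return "Passed"

print(test_run_result(["Passed", "Not Applicable", "Passed"]))   # Partly Passed
print(test_run_result(["Not Applicable", "Not Applicable"]))     # Not Applicable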

Test Coverage

Test Coverage is a metric that expresses which requirements of your product are verified by test cases, and by how many of those test cases. Simply speaking, the more Test Cases exist for a Requirement, the better it is covered.
codebeamer enables easy mapping of test cases to requirements.
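
As a rough illustration of the metric (not codebeamer's implementation), the coverage count is simply the number of test cases linked to each requirement through a Verifies-style relation; the identifiers below are made up.

from collections import Counter

# Hypothetical "verifies" links from test cases to requirements.
verifies = {
    "TC-1": ["REQ-battery-life"],
    "TC-2": ["REQ-battery-life", "REQ-charging"],
    "TC-3": [],  # not linked to any requirement yet
}

coverage = Counter(req for reqs in verifies.values() for req in reqs)
print(coverage)  # REQ-battery-life is covered by 2 test cases, REQ-charging by 1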

Creating Test Cases

In codebeamer, a test case is an item in a Test Case type tracker. When you create a new project, a default Test Case tracker will be available inside the "Trackers" menu at the top.

You can create a test case in the traditional way: just click the New Item link when browsing a Test Case category. The Test Case editor page, however, differs from the editor of other items.


The main parts of the page:

  • the top part is the same as for ordinary items. You can set the basic properties of the new item here.
  • Test Step Editor: the bottom part of the page. Here you can specify the actions and expected results (see later) of each test step, and change the order of execution.
  • Test Case Tree: the tree in the right panel contains all the test cases previously created in the same category. When you click on a node, the steps of that test case will be shown on the bottom of the tree. You can drag and drop these steps to your newly created test case.

There are four special fields in the test case trackers:

  • Verifies: here you can select a requirement that is verified by the currently edited test case.
  • Pre-Action: describe the things that must be done before the test steps can be executed, e.g. preparatory steps or entering test data. You can define the prerequisites here. Starting from codebeamer 9.2.0 this field can be edited with the rich text editor.
  • Post-Action: in this field you can describe the things that must be done after the test steps were executed, e.g. clean-up. Starting from codebeamer 9.2.0 this field can be edited with the rich text editor.
  • Reusable: you can select another test case that can be Reused. The "Reusable" TestCases are easily filtered on the right side in the Test Case tree by clicking on the Reusable only filter.

Test Step Editor

The test step editor is a simple table with editable cells. The Action and Expected result cells can be edited with the rich text editor.


Test Step attributes

A test step is defined by three attributes: the action, the critical flag and the expected result.

  • The Action describes what the tester should do in that given step.
  • The Critical flag decides whether the test should continue if the current step fails.
  • The Expected result field describes what result the tester should get after executing the test step.

Adding new test steps with the test step editor is very easy and can be done in several ways:

  • click on the Add step link to add a new row to the end of the table
  • when the cursor is in the last cell of the table, just press the Tab key on your keyboard to add a new row to the end of the table
  • use the icons to add a new row before/after the currently edited one

You can edit a cell by clicking in it. Rows can be re-arranged by drag and drop. And you can also delete rows by clicking the button.

The Action and Expected Result fields can contain wiki markup but this is only rendered after saving the test case.

You can access the same test step editor in the Document View of the test case trackers.

Generating Test Cases for Requirements

When viewing your requirements in the Requirement Document View, you can easily generate test cases for them. Just click on the + icon next to the Requirement, select "Generate Test Case" and choose the target TestCase tracker from the sub-menu:

This will immediately generate a test case for the requirement (or a whole test case structure if it is a folder) in the selected Test Case tracker of the project.

Please note that this test case is just a starting point, mostly filled with empty values. You should open it and give it an exact definition, specify the test steps and elaborate it in general.


Organizing Test Cases to Test Sets

Earlier in the testing process, Test Engineers defined the Test Plan, which contains a big tree structure of Test Cases. To be able to execute these tests, the Test Engineer decides which subsets of these tests are to run together. Such a selection of Test Cases is called a Test Set, which must be defined before executing the test runs. The Test Set also defines the order in which the Test Cases are executed, and whether the tests must be executed in this specific order or in any arbitrary order.

To create a Test Set from Test Cases,

  1. Select the relevant Test Cases from the left side tree,
  2. Right-click on the selected Test Cases
  3. On the right-click menu, click Create Test Set from selected Test Cases

The number of Test Cases from which a Test Set can be created is limited when the Test Set is created from the Test Cases tracker view. The default maximum value is 1000, which can be modified in the Application Configuration.

In case the number of Test Cases selected in the left side tree exceeds the limit set in the Application Configuration, the following error message is displayed:

You can generate Test Sets from the maximum of 1000 Test Cases.


Creating Test Sets

In codebeamer a Test Set is an item in a Test Sets tracker. When you create a new project, a default "Test Sets" tracker is available below the "Trackers" menu.

You can create a Test Set by opening the Test Set category and clicking the New Item link. The Test Set editor, a specialized item editor page, will appear and look like this:


The main parts of the page:

  • the top part is the same as for ordinary items. You can set the basic properties of the new item here, like the Status.
  • Test Cases selection: the bottom part of the page. Here you will see those Test Cases which are selected for inclusion in the Test Set. Test Cases can be added here by dragging and dropping Test Cases from the Test Case tree on the right. You can also re-order the selected Test Cases in this list by dragging and dropping them upwards or downwards. The order may be important, as the tests will have to be executed in this order during testing if this is a sequential test set, or will be offered in this order by default if it is not.
  • Test Plan tree: the tree in the right panel contains all the test cases in the Test Case tracker. You can drag and drop these Test Cases to the Test Cases selection at the bottom to add them to the Set.
  • Test Sets tree: the tree in the right panel contains all Test Sets defined earlier. You can drag and drop these Test Sets to the Test Cases selection, and so you can combine the selection from several previously selected Sets.
  • Test Set Library: this tree shows TestSets defined elsewhere in other projects. You can browse and reuse TestSets from there too.

There are a few special fields in the Test Set item:

  • Type: A Test Set can be optionally marked as folder. Such Test Sets are hierarchical elements used to contain other Test Sets as child items. Folders themselves should not include test cases. Their primary aim is to provide easier management for complex test set structures.
  • Possible Configurations: The Test Configurations with which this Set is allowed to be executed. This can be used to reduce the selection if you have a large number of Test Configurations, but only a few of them are targeted.
  • Possible Releases: The Releases on which this Set is allowed to be executed.
  • Sequential: Whether the Test Cases in the Set must be executed strictly in the order they appear in the Test Cases listing.

Adding/Deleting Tests in Sets, Ordering them

As mentioned previously, you can add Test Cases to Test Sets by dragging and dropping from the Test Plan tree in the Test Set editor page. You can drop several Test Cases or even folders of Test Cases, all of them will be added to the Test Cases listing at once.

The Test Set cannot contain Folder or Information type Test Cases: if you drop a "Folder" Test Case onto the Test Set, the folder itself will NOT be added to the Set, but all of its children, grandchildren, etc. Test Cases will be (except the Folder or Information Test Cases, obviously).

The order of the Test Cases is important for the execution: this is the order in which the tests will be (should be) run by the Testers. The dropped Test Cases are initially added in the same order as they appear in the tree. When dropping the Test Cases, they are inserted into the selection exactly where you drop them, so you can, for example, add the new selection between two Test Cases.

Those Test Cases which are already added to the Test Set will appear as "grayed out" in the Test Case tree. This however does not prevent you from dragging-and-dropping the same Test Cases again, but dropping them again will add them to the new location, and the previously added duplicates are automatically removed. Tip: this is an easy and effective way to reorganize the Test Cases' order, too!

Test Cases can be removed from the listing by clicking the button that appears when moving the cursor to the test case in the list.

The ordering of Test Cases in the Test Case listing can be altered by dragging and dropping them upwards or downwards in the list, as needed.


Duplicated Test Cases in Test Sets

When adding a Test Case to a Test Set it may happen that the same Test Case is added twice or more as duplicates. The Test Set can optionally contain a TestCase multiple times, if the "Allow Duplicates" boolean field of the TestSet is set to true.

When running a TestSet with duplicate TestCases, the Test Runner dialog will display and run the same TestCase as many times as it appears in the TestSet, in the same order as it appears there.

When building the TestSet, the duplicated TestCases are indicated with a special icon. The screenshot shows two duplicated TestCases in this TestSet (marked as #1), and the icon #2 indicates that these are duplicates. To remove the duplications, click on the #2 icon, which removes the duplicates of that Test Case from the list.

To configure if a TestSet allows duplicated TestCases you can:

  • Set the "Allow Duplicates" field to true/false in your TestSet. (Note: this field is hidden field in Tracker Configuration from CB 8.1, so you may need to re-enable that...)

    For legacy trackers add this field if necessary:
  • Set its default value globally by editing the general.xml like this: <testManagement testSetAllowDuplicates="true" ></testManagement>

Test Set Report

On the Reports page, and in Table View, the Test Cases field of a Test Set tracker is rendered as a table field; therefore, only the first value of the Test Cases field is returned.

The following message is displayed when hovering over the warning icon: Only the first value appears for the Test Cases field. Please open the item in item Details mode to be able to see all Test Cases field values.



Composing Test Sets from other -included- Test Sets

For simpler test scenarios it is sufficient to build a TestSet by simply adding individual TestCases to it. However, as the number of tests grows, test architects may want to compose smaller or larger groups of TestCases into TestSets, and then build bigger TestSets from these "small" TestSets as building blocks.

This is possible: TestSets can include other TestSets, and whenever the included TestSet changes (TestCases are added or removed), the change is automatically reflected in the outer/bigger TestSet.

To include a TestSet in your currently edited TestSet, just drag and drop the desired TestSet into the editor. The dropped TestSet will appear in the TestCase/TestSet listing as the screenshot shows:


So you can mix the TestCases and TestSets inside a bigger TestSet as you wish. The rules are:

  • When a TestSet is dropped, it will not be expanded to its TestCases. (In previous versions, before CB 7.8, it was always expanded.)
  • The included TestSets and TestCases can be differentiated by the icon that appears below their names.
  • An included TestSet can be exploded to its TestCases by clicking on the green-arrows icon; after that the "inclusion" no longer applies, and the content of the included TestSet is added instead.

Once such a "composite" TestSet is saved you can view the detailed TestSet/TestCase hierarchy by drilling down the usual "down-arrow" icon in front of it:


Running a composed Test Set

When running a TestSet which includes other TestSets, the TestRunner will walk through all the TestCases in the order they appear in the TestSet/TestCase hierarchy. A few consequences:

  • By default only the "Active" TestCases are executed and the rest are skipped automatically
  • The included TestSets are expanded to their TestCases during the run, and the TestCases' order stays the same as it was in the included TestSets.
  • Duplicate TestCases added by an included TestSet are run multiple times: if an included TestSet contains a TestCase that already appears once on a higher level or elsewhere, it will be executed as many times as it appears in the hierarchy.
  • Parameter resolution inside an included TestSet: if a TestCase has parameters, it will try to pick the parameters from the "included" TestSet first, and then from the outer TestSet.

Quick assign of Test Cases to Test Sets

When you are on the document view or table view of a Test Case tracker, you can also quickly assign a selection of Test Cases to existing Test Sets or create new Test Sets from this selection.

This functionality is in the menus:

There you can:

  • Choose the "Create Test Set..." menu which starts a wizard to create a new Test Set which contains the selected Test Cases.
  • Or Choose the "Add selected Test Cases..." menu which will show a dialog where you can pick a recently viewed Test Set from the history or search for any Test Set.
  • Or use the sub-menu of the "Add selected Test Cases..." menu: this shows the 5 most recently visited Test Sets. If you choose a Test Set from this menu, the selected Test Cases are added to it immediately.

The dialog where you can choose a Test Set looks like the one below; here you can do a free-text search for any Test Set:

Note: the Test Set history and search will only show those Test Sets where the user has permission to add Test Cases. So if you do not find some Test Sets here, that might be the reason.

HTML Mode for Table Fields when Comparing Tracker Items Versions

Since Codebeamer release HUSKY, the availability of the HTML mode is extended to table fields as well on pages where items are compared, for example item compare, item revert, working set merge, review hub and so on.


On the issue difference overlay, you can choose between the Wiki and HTML modes of display either in the Change display mode drop-down list on the top right, or in the separate display mode drop-down lists available for each table field.

The display mode set in the Change display mode drop-down list is applied to the whole content of the overlay. The mode of display can also be changed separately for each table field using the drop-down lists next to them.


The default value is Wiki. If the display mode is set to:

  • Wiki - The values of the table fields are displayed in Wiki Markup.
  • HTML - The contents of the table fields are rendered as displayed on the item details page.

Filtering on the Test Cases & Sets Tab

Since Codebeamer release HUSKY, filters, AND/OR and Order by logic, and columns can be added to the Test Cases & Sets tab on the item details page of Test Sets.


The usage and operation of these widgets on the Test Cases & Sets tab is the same as, for instance, in reports or in table view.

Most filtering options are available (Default Fields, Common Reference Fields, Reference Filters, Suspected Link Filters, Review Hub Filters, Item Based Review Filters, Tag Filters, Historical, Other), and the special fields of those trackers from which items are added to the Test Set are also included. However, the Filtering Reference Items section has been disabled to avoid performance issues.

The context of filtering, ordering and the list of columns that can be added is defined by the items included in a Test Set.


The Test Case & Set column is fixed, and cannot be removed as it displays the name of the test cases and test sets included in the Test Case.


Shared Fields can also be created between different trackers on the Test Cases & Sets tab to make filtering and ordering more effective.


When adding Test Set B to Test Set A, the test cases of Test Set B are listed on the Test Cases & Sets tab as the child items of Test Set B, highlighted with a yellow background. The filtering criteria are applied to both the parent and child items, while the ordering is only applied to the parent items on the Test Cases & Sets tab.

If a parent item is returned that meets the filtering criteria, all of its child items are listed when expanding the parent item. The child items that meet the filtering criteria are highlighted in yellow. The ones that do not meet the criteria are displayed with a red background, and the following note is displayed on hover: "The item is not matching the set Filter(s) but displaying because one of its Child/Parent items is matching the Filter(s)."


The set filters are not stored; they are lost when refreshing the page or navigating away and then back to it. The filters cannot be used when editing the item details page.


Test Sets and Test Cases are displayed on the Test Cases & Sets tab with the following structure:

Example:
  [TESTSET - 12345678] Entertainment Tests
  PTC Testing / Codebeamer
Description:
  The first row shows the name of the Test Case or Test Set.
  The second row shows the folder path (Project name / Folder name) of the Test Case or Test Set.

Initiating Test Runs

You need three components to initiate a new test run:

  1. First pick a Test Set in "Test Sets" by clicking on the "Trackers" top menu, then clicking "Test Sets" and finally clicking one of the Test Sets previously set up. You can also create a Test Run from a baselined version of a Test Set. Just select a baseline under Baselines then select a Test Set from a Test-Set tracker.
  2. Click "New Test Run" link above the description of the picked Test Set.
  3. Choose the test run's priority and the members of the Project who should execute it, and optionally give it a descriptive name and a detailed description.
    Choose one or multiple Test Configurations and one Release that the Test Cases contained by this Test Set should be executed on. Note that the Test Set may restrict the selectable configurations and releases, allowing it to be run only on a subset of those parameters. (This restriction is part of the Test Set definition, see the previous section for details.)
    Submit the form.
  4. Finally your Test Run item is created and ready to run.
  5. Now the testing staff members receive notification emails about the newly defined test work. They can now come and start actually executing the test runs assigned to them.

Creating Test Runs directly from Test Cases (without Test Sets)

Alternatively - for simplicity - you can run a single Test Case or a selection of Test Cases without putting them into a Test Set.

This can be done in several ways:

  • In the Document View of Test Cases, select items in the left tree (using Ctrl+click or Shift+click), choose the "Generate Test Run(s) from selected" option in the right-click menu, and choose the Test Run tracker.
  • In the Document View of Test Cases, select an item in the middle panel, click on the + sign, and choose "Generate Test Run" plus the target Test Run tracker.
  • In the Table View of Test Cases, select items with the checkboxes, and then click on the New Test Run icon.
  • On a single Test Case, click on the New Test Run menu.

In all cases a new Test Run will be created which will contain and run the selected Test Cases. Optionally you can also add the children Test Cases recursively during this process...


Adding and Removing Test Cases to Test Runs Directly Generated from Test Cases

Since Codebeamer release 2.0 (HUSKY), you can edit the content of Test Runs in Ready for Execution (in case of regular test runs) and To Be Approved (in case of formal test runs) statuses that were generated directly from Test Cases.

Test Cases can be added to and removed from Test Runs, and the order of Test Case execution can be modified.

This feature does not apply to Test Runs generated from Test Sets.

Adding Test Cases to a Test Run

To add a Test Case, open the relevant Test Run and click the Edit icon on the top left of the toolbar.


Test Cases can be added to the Test Run via drag and drop from the right side selector tree panel to the Test Cases section of the current Test Run. The order of the newly added Test Case can also be modified.

The drag and drop selector tree panel is disabled by default, and can be enabled in the Application Configuration by the System Admin. The feature remains disabled for items where an item review workflow is implemented.

"testManagement": {
  "enableTestCaseEditAfterTestRunCreation" : true/false
}



When adding Test Cases to a Test Run, it may happen that the same Test Case is added more than once. To avoid creating Test Runs with duplicated Test Cases, the following warning message is displayed: There are duplicated Test Cases in this Test Run.

The warning appears together with the Remove All Duplicates button, which deletes all the duplicated Test Cases when clicked. Duplicates can also be removed by clicking the This is a duplicate Test. Click to remove other duplicates and keep this. option displayed when hovering over a duplicated Test Case.


Test Runs with duplicated Test Cases cannot be saved. When trying to save such Test Runs, the following warning message pops up: Cannot save because there are duplicated Test Cases in this Test Run. Please remove the duplicates.


Only Accepted Test Cases can be added to a Test Run if:
  • The Test Run is generated only from Accepted Test Cases.
  • The Test Run contains an Accepted and a Not Accepted Test Case, and the Run Only Accepted Test Cases option was selected in the Submit Work Item window.

Deleting Test Cases from a Test Run

To delete a Test Case from a Test Run, use the Click to delete from Test Run. icon displayed when hovering over a Test Case.


A Test Run can only be saved if it contains at least one Test Case. Empty Test Runs cannot be saved. When trying to save Test Runs with 0 Test Cases, the following error message is displayed: Could not save Test Run because no runnable Test Cases were found.

Important notes:
  • The test re-run dialog does not allow editing. After generating a Test Run again, editing is enabled.
  • The grandchild item of a parameterized Test Case can only be deleted together with its parent Test Case. Deleting a parameterized Test Case deletes all of its parameters, that is, its child Test Cases as well.


Options for creating Test Runs

When creating a Test Run, you have a few options to configure that determine how the Test Runs are created, which Test Cases they contain, and which users get assigned to the new Test Runs.

You can change these options by clicking on the "Test Run Creation Options" toggle on the Test Runs page.

The options are:

  • Include non-Accepted Test Cases? - Lets you choose whether the non-accepted Test Cases are included in the Test Run, that is, whether they will run or not. (This only appears if there is at least one non-accepted Test Case.)
  • How would you like to distribute the work between the assigned users and roles? - Decides whether a separate Test Run is created for each assignee user. See below for a more detailed explanation.
  • Include children of the selected Test Cases? - Decides whether all the children of the selected Test Cases are also included in the new Test Run. This does not appear when running a Test Set, only when running a selection of Test Cases without using a Test Set.
  • Create one Test Run for each Test Case? - Normally one Test Run is created that contains all the selected Test Cases/Test Sets. With this option you can choose to create one Test Run for each Test Case (for example, if 5 Test Cases are selected, this option creates 5 Test Runs, each containing just 1 Test Case).

Distributing the Test Run work between multiple Users or Roles

codebeamer by default uses a "Shared" model of distributing the tests of a Test Run, which means that:

  • A Test Run can be assigned to several users or Roles
  • These assigned users can work on the same Test Run at the same time. The Test Runner will automatically distribute the Test Cases of a Test Run among the users who work on them, so they will not get the same Test Case, will not overwrite each other's changes, and will not test the same thing twice.

However if your Test Team needs more flexibility you can choose to create multiple Test Runs for parallel Testing of a Test Set.

This option appears on the UI when creating a new Test Run as:

There the user can:

  • Choose the default Shared model, where all assignees get and share a single Test Run and can work on it together.
  • Choose to create multiple Test Runs, with two options:
    • The "Multiple Test Runs with Roles/Groups" option will create multiple Test Runs: one for each User, Role or Group, and assign the Test Runs to them. This will not expand the Roles or Groups to individual users, but assign the Test Runs to the Roles or Groups themselves.
    • The "Multiple Test Runs with Users only" option: same as the previous, except that it will create one Test Run for each member of the Roles and Groups. The Roles and Groups themselves will not have any assignments.

So the difference between the two latter options is that the "Users only" mode will create one Test Run for each user of the Roles that appear among the assignees, while the "With Roles/Groups" option will assign the Test Runs to the Roles/Groups themselves if they appear.

An example: As seen in the previous screenshot, the Test Run is assigned to "bond and Tester". "bond" is an ordinary user, but Tester is a Role with possibly several members (for example Joe and Fred).

If the "...Users Only" option is choosen then the system will create 3 Test Runs: one for "bond" and two more for "Joe" and "Fred" as they are all in Tester role.

The "...with Roles" option will only create 2 Test Runs: one is assigned to "bond" user and 2nd is assigned to "Tester" role. This role is not expanded.

When creating Multiple Test Runs with multiple Test Configurations

If the "Multiple Test Runs..." option is selected and also multiple Test Configurations is selected then codebeamer will create one Test Run for all combinations of the assignees and Test Configurations.

For example, if 5 assignees are selected (in the Assigned To selector) and 3 Test Configurations are selected, then codebeamer will create a total of 5*3 = 15 Test Runs: 3 Test Runs for each assignee, where each Test Run uses one of the 3 Test Configurations.
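
The same combination rule, expressed as a small illustrative sketch (the assignee and configuration names are made up for the example):

from itertools import product

assignees = ["alice", "bob", "carol", "dave", "eve"]                  # 5 assignees
configurations = ["Honeycomb", "Ice Cream Sandwich", "Jelly Bean"]    # 3 Test Configurations

# One Test Run per (assignee, configuration) combination.
test_runs = list(product(assignees, configurations))
print(len(test_runs))   # 15 Test Runs: 3 per assignee, one per configuration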

Creating Releases for Testing

You can define the releases of your product in one of the Releases trackers: either "Customer Requirement Specifications" or "System Requirement Specifications".

For that, just click "Trackers" at the top, then choose one of the Releases trackers and start adding new items or modifying the existing ones.

Please note that the very same releases will be globally available in this project:

  • You can select these releases as "detected in" or "target release" for tasks, bug reports and other issues.
  • You work with these while planning releases.
  • Etc.

Creating Configurations for Testing

Similarly to releases, you define the configurations of your product that you want tests to be executed on in the "Test Configurations" tracker.

You can:

  • Add a Test Configuration immediately on the page where you edit the Test Runs, using the "Add" button:
  • Or in the "Trackers" top menu, find the Test Configurations tracker, and add/modify configurations there.

Executing Test Runs

Once the Test Engineer has initiated a Test Run, it will be immediately available for the Testers to run. Testers will find the test runs assigned to them in the "My Start -> My Issues" menu. Alternatively, they can open the "Test Runs" tracker and pick the open Test Runs.

There are two kinds of Test Runs stored in Test Run trackers:

  • The top level Test Runs are so-called Test Set Runs: one Test Set Run holds information about a single run of a Test Set on a certain platform and version.
  • Child issues of the Test Set Runs (in the same tracker) are the so-called Test Case Runs, which represent the results of a run of a single Test Case.

To execute runs, pick a top level issue (a Test Set Run) from the "Test Runs" tracker. Choose one that is still "open" (completed Test Runs cannot be re-run by default), and click it to get to its properties page. Click the "Run!" action to start testing.

The Test Results section shows the detailed results of each Test Case, and a pie chart shows the progress of this Test Set.


Using Test Runner

The current progress of this Test Set Run is shown at the top. Click the "Run!" link to bring up the Test Runner dialog, which guides you through the execution of the tests in this Set.

Test Runner is the primary interface to actually do testing.

It helps testers by guiding them through the tests and their test steps to execute. It also records the result of each step (whether passed or failed) and the result of the complete test case. Moreover, it makes it convenient to report bugs if the tester encounters defects while running tests.

The runner looks as shown below:



The main parts of the page:


  • Progress bar - Within the whole Test Set Run, there is a progress bar on the top, counting the passed and failed tests. It also shows the configuration and release to be used for the current run.
  • Navigation buttons - allowing testers to jump to the next test, or generally browse through the outstanding test cases. This only works if the Test Set is not marked as Sequential, which forbids random execution order.
  • The name, description and attachments of the actual test case.
  • Timer - measuring the duration of each test execution. This timer can be paused to get more accurate results in case the Tester has to temporarily suspend the testing. The main goal of the timing is to help plan future testing cycles, once such timing data has been collected. See also: Timing can be turned off
  • Attachments to the current test case, if any. Click the file names to download them.
  • Pre-Action, Test Steps and Post-Actions. These parts might be hidden if they are not filled for the test case.
  • Set of buttons - Allowing testers to set the result for the current step or for the whole test:
    [Pass Step] - to mark a step as PASSED.
    [Fail Step] - to mark a step as FAILED.
    [Block Step] - to mark a step as BLOCKED.
    [Clear Step] - to clear the PASSED/FAILED/BLOCKED/NOT APPLICABLE results from the related Step. However, it will not clear the Actual Result value.
    [Not Applicable Step] - to mark a step as NOT APPLICABLE.
    [Report Bug] - to report a bug identified while running a test.
    [Save & Next Test Case] - to save the current test result. It closes the test if it already has a result.
    [Pause Run] - to save the results of the Test Case, and close the dialog. The test is left open, therefore, it can be continued next time.
    [End Run] - to close and mark the whole Test Set Run as complete. None of the remaining Test Cases can be run from this point.

Executing test steps and recording results

If the test shown in the Runner has steps defined, then those steps are displayed and the first step is selected for execution. This is denoted by the highlight around the current step: it appears with a small icon and a yellowish background, and its "Actual Result" edit box is open (as seen in the previous runner screenshot).

The Tester shall read and execute the Action of the test, then compare the result with the Expected Result, and record the difference into the Actual Result box - if necessary. Starting from codebeamer 9.2.0 the Actual Result can be edited with the rich text editor. By clicking the "Pass Step" / "Fail Step" / "Block Step" / "Not Applicable" buttons, the focus moves to the next step.

It is possible to go back to any previous Step by simply clicking the step's row. One can then update the Actual Result or the step's result.

When all the steps are completed, the Runner will ask if you are ready to move to the next test case. At this stage, the tester can optionally report bugs, and this is the time to execute the Post-Action. If the tester agrees, the Runner will show the next available test.

If all Test Cases are completed in the current Test Set, then the execution is done, the Runner closes automatically, and the Test Set goes to Finished status.

If there are no steps defined for the current test case, or the tester does not want to execute all of them for some reason, the user can mark the current test case as passed / failed / blocked using the buttons at the bottom.

Finishing run without completing the remaining Tests

The "End Run" button can terminate the Test Set Run any time, without executing the outstanding test cases. In this case, the user must decide what is the final result for the Test Set:

Reporting Bugs during Test Executions

Testers can report bugs in the context of the current test being executed in the Runner, at any time during the execution of the test.

Click the "Report Bug" button to suspend the Test Runner temporarily and to display a bug reporting interface.

The user has the option to choose the bug tracker where the bug will be reported. This tracker is remembered, and the same tracker will be offered as the default next time.

The bug to be reported is automatically initialized from the current test run's data. It captures many details:

  • including the properties of the test case under execution
  • the configuration and release used
  • the results of each test step.

For traceability reasons, the bug will also be automatically associated with the current test run. These details help the person who will fix the bug later to reproduce the same environment and the same situation efficiently.

In any case, the tester can update these properties and submit the bug report. Once the bug is submitted, the Runner will continue.

Filling Bug's properties from Test Run or Test Case

When generating a new Bug report during a Test Case run, the generated Bug's properties are filled in from certain properties of the related Test Run or Test Case.

This means that:

  • If the Bug being created contains a field with the same name and the same type as the Test Run or Test Case, then the field's value will be copied into the Bug.
  • If the Bug contains a field which is configured as the same Shared Field (and therefore also has the same type) as the Test Run or Test Case, then the field's value will be copied into the matching Shared Field of the Bug.

Additionally:

  • Empty field values are NOT copied.
  • If both the Test Run and the Test Case contain a value for field "X", then the Test Run's value is kept: it is considered "stronger".
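
A minimal sketch of these copy rules (illustrative only; it treats fields as a flat name-to-value mapping and leaves out the type and Shared Field matching):

def prefill_bug_fields(test_run_fields, test_case_fields, bug_field_names):
    # Only fields that also exist on the Bug are copied, empty values are
    # skipped, and a Test Run value wins over a Test Case value.
    bug = {}
    for name in bug_field_names:
        run_value = test_run_fields.get(name)
        case_value = test_case_fields.get(name)
        value = run_value if run_value not in (None, "") else case_value
        if value not in (None, ""):
            bug[name] = value
    return bug

bug = prefill_bug_fields(
    test_run_fields={"Platform": "Android Jelly Bean", "Severity": ""},
    test_case_fields={"Platform": "Android", "Severity": "Major"},
    bug_field_names=["Platform", "Severity", "Found in"],
)
print(bug)  # {'Platform': 'Android Jelly Bean', 'Severity': 'Major'}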

The user has the option to disable copying from the Test Run or Test Case by turning off the checkboxes on the dialog:

The copied values are displayed on the TestRun dialog where the user can correct the values if necessary. The dialog will show information about what values are copied:

Reporting duplicate Bugs during Test Execution

During extensive testing it often happens that the same bug is found several times during the execution of a Test Case. To avoid duplicate bug reports for the same bug, the Test Runner shows the bugs already reported for the same TestCase, and the Tester can choose an already-reported bug instead of creating a new one.

This appears on the Test Runner dialog as follows: by clicking the "Report this" button next to a bug, the Tester can choose an existing bug and associate it with the Test Run:

Reporting already existing Bugs

Another option for reporting Bugs to a Test Run is to find an existing bug using free-text search. This is available on the report-bug dialog: if you enter a search text, you can find any bug and add it to the Test Run like this:

Reporting Bugs to any Test Runs any time - even after running them

Normally Testers report Bugs from the Test Runner, while running the Test Run, but sometimes they may want to add new or existing Bugs later, at any time.

Since codebeamer version 10.0 this is possible on the Test Run's page:

  • From the menu of a TestRun, which adds the bug to the current TestRun (either a TestCase's Run or a TestSet's Run)
  • Or from the details of the Test Cases' Runs by clicking on the "Add Bug..." link, which adds a bug to the related TestCase (and its TestCaseRun):

These "add bug" links will show the same "Add Bug" dialog that appears from the Test Runner with the options of:

  • Create a new Bug for the TestCase
  • Choose an existing Bug for the same TestCase: this won't appear for Test-Set-Runs
  • Find any existing Bug using free-text search

Finding Reported bugs of a TestCase

If you look at the details of a TestCase, you will find the Reported Bugs for that TestCase on a new tab:

Recording the Conclusion of Test Run

When finishing a test run, the Tester can add an optional Conclusion. The Conclusion can be used to summarize the result of the run; it is a wiki text that is added to the "description" of the Test Case Run.

The Conclusion can be added in two ways:

1. For Tests with Steps, the Conclusion will be requested on the final dialog, when all Steps are completed:



2. The Conclusion can be added at the bottom of the Test Runner at any time:



Conclusion appears:

  • on the Test Runs' details page like this:
  • in the Test Runs' reports like this:

How does it work? The execution of TestCases

The TestRunner will automatically show the next available TestCase in the current TestSet. The rules are:

  • The TestCases by default are executed in the order they appear in the TestSet.
    You can navigate to the next/previous/last TestCase in the TestSet using the arrow buttons in the TestRunner; however, that is only possible if the TestSet is not set to "Sequential", in which case the order of the TestCases is forced.
  • If you want you can "jump" to a different TestCase by using the navigation buttons:
    • Enter the index of the desired Test Case to the input box on the navigation and press enter
    • Click on the search icon which brings up the list of non-completed Test Cases, which can be filtered and any of them can be selected as your next Test Case.
  • The TestRunner will only show TestCases which have not been executed yet.
    So if you have already run Test "X" within this TestRun that will not be offered any more.
  • You can stop and close the TestRunner at any time and come back later: the execution will be continued at the next not-yet-run TestCase, which is typically where you left off (unless somebody else is also working on the same TestRun).
  • After each Step is completed, the TestRunner automatically saves the results in the background. This avoids data loss in case the browser crashes or some other problem happens.
  • Multiple Testers can work on the same Test-Set-Run simultaneously: the system will automatically take care of concurrency. If a TestCase "X" is being run by somebody, it will not be shown to the next Tester, so no work is duplicated or wasted. Concurrency handling also covers the case where the same user account runs the same TestSet from two different browsers (multiple sessions): the sessions will not get the same TestCase, so they will not accidentally overwrite each other's work.
  • If a TestCase has parameters, the TestCase will be run as many times as there are parameter sets available. The Runner will not go to the next TestCase until all parameters are completed; however, you can also skip some of the parameters. For more details see Test Parameterisation.
  • Typically only TestCases which are in "Accepted" status are run by the TestRunner, but this can be configured.
    See the section "Running only "Accepted" or all TestCases?" below for why.
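
The behaviour described above can be summarized in a small illustrative sketch of how the "next test case" could be chosen; this is not codebeamer's actual implementation, just the documented rules expressed as code:

def next_test_case(test_cases, run_only_accepted=True):
    # test_cases is the TestSet in execution order; each entry is a dict with
    # 'name', 'status', 'executed' and 'locked_by' (another tester running it).
    for case in test_cases:
        if case["executed"]:
            continue                      # already run in this Test Run
        if case["locked_by"]:
            continue                      # currently being run by another tester
        if run_only_accepted and case["status"] != "Accepted":
            continue                      # skipped unless configured otherwise
        return case
    return None                           # nothing left to run

pending = [
    {"name": "TC-1", "status": "Accepted", "executed": True,  "locked_by": None},
    {"name": "TC-2", "status": "Accepted", "executed": False, "locked_by": "joe"},
    {"name": "TC-3", "status": "Accepted", "executed": False, "locked_by": None},
]
print(next_test_case(pending)["name"])   # TC-3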

Finding Tests by name in the Test-Runner

In the TestRunner you can find a Test by its name and choose it to run next. Just click on the magnifier icon,

which brings up a dialog like this:

On this dialog you can search/filter the Test Cases by entering some text in the filter box, and choose any Test Case by using the "Select" button.

Previously Suspended Test Cases can also be selected here: they will be resumed and run. Typically, previously skipped parameterised tests appear here as Suspended.

Running only "Accepted" or all TestCases ?

By default the TestRunner will run only the "Accepted" TestCases.

The reason for this is that some TestCases' definitions may be incomplete or inaccurate, so the Test Engineer may set them to "Design" or "New" status while they are being corrected. This avoids confusing the Tester with inaccurate TestCase descriptions or false error reports, and prevents wasted test-run effort and time.

When creating a new Test Run, the user has the option to choose whether to run "All" or "Only Accepted" Test Cases. This can be configured by opening the "Test Run Creation Options" section when creating the Test Runs:

The "Run only Accepted TestCases" behaviour has been changed in 8.0.1:
  1. If you create a Run for a one/single TestCase then its "Run only Accepted TestCases" will be always forced to "false" regardless of the Tracker's default setting.
  2. If you create a Run for a TestSet then that will use and preserve the setting in the Traker configuration.

The reason why we have changed the setting for single TestCases is to make it easier to use: why would you create a TestRun for a single TestCase which is not Accepted? If you create such a TestRun, that means you want to run it immediately, regardless of its status.

Notes:

  • You can also permanently turn on running all TestCases by setting the "Run only Accepted TestCases" field to "false". After this the TestRunner will run even the "Incomplete" or "In Design" TestCases.
  • Or make this default for your TestRun tracker by configuring the tracker: this makes all future TestRuns work like this.
  • For legacy trackers you can configure the Test Run tracker and add this field like this:

Re-running already Finished Tests

Once a TestRun is Finished or Suspended, its run is over. These are represented by the "Finished" and "Suspended" states in the TestRun's transition diagram:


The Tester can partly or completely re-run a TestSet by choosing the "Restart"/"Resume" transitions:

During the restart the Tester can decide which TestCases will be re-run:

The options are:

  • Re-run all tests: will re-run all TestCases in this TestSet regardless of their results
  • Re-run selected tests: user can choose individually which Tests will be re-run
  • Re-run failed or blocked tests only: choose this if you want to re-run only failed or blocked Tests
  • Run the remaining open tests: this option appears if some of the tests were not run at all because the Test-Set-Run was closed manually. It keeps all test results and runs only the missing ones.

By default the restart will clear the tests' results (meaning that the steps' results and conclusions are cleared and attachments are removed). If you want to keep the results, uncheck the "Clear previous results" checkbox.

This behaviour is configurable in the general.xml like this:

	<testManagement ...
                rerunClearsResults="true"
	/>

Re-running Tests: how does this work?

Restarting a TestRun makes a copy of the whole TestRun hierarchy, including the individual TestCases' results. The original test results are kept unchanged in order to preserve the history.

The TestRunner will also initially load and show the previous run's results, and fill the "Actual Result" and the steps' statuses with the same values from the previous run.

In previous codebeamer versions - before codebeamer 7.9 - the re-run dialog offered an option to either:

  • Make a copy of the Tests
  • Or delete the results of the TestRun, and re-run them in place.

Now the copy is the default option. However, if you want the old behaviour, it can be turned on by setting the "forceCopyOnReRun" flag to "false" in general.xml like this:

<testManagement forceCopyOnReRun="false" ></testManagement>

Re-running finished Tests selectively

One of the options when restarting a TestRun allows the Tester to manually choose which TestCases will be re-run. For that, use this option:

Then a new dialog appears where the user can select which Test Runs will be re-run. In this dialog the user can select all items with a given result by clicking the result filters (marked as "2"), or select the Test Runs individually (marked as "1"), and then click Select to save the selection.

Test Parameterisation

Test Parameterisation is a powerful practice to enhance your testing scenarios. The concept of parameters in testing is simply that your Test cases may define and use some parameters. During the execution of a test case the parameters are filled in with their actual value, so a parameterised variation of the original test case is produced.

Test Parameterisation is an advanced feature which is documented here: Test Parameterisation

Overview of Test Results

In codebeamer 8.1+ we have added an overview of the test results, which makes it a lot easier to understand and review the Test Results of a Test Set.

The following overview information will appear on the page of Test Runs:


The "Test Results" part shows most information about the Tests has been run within the current Test Set. The major parts are:

  1. Progress counters
    • This part shows the progress of testing using a pie chart, including the number of tests that have been executed, the number of Failed/Passed/Blocked tests, and the number of tests still remaining.
  2. Tests Summary
    • This part lists each Test Case and shows its result
    • The Test Cases can be clicked, which opens the Test Case in a new window
    • Clicking on a test result scrolls to the details of that Test Run
    • Shows the Conclusion (unless it is empty)
    • For parameterised tests, shows each parameter variant and its result too
  3. Detailed Results
    • The Detailed Results part shows the detailed information of each Test Case
    • Shows each step's result
    • Shows Reported Bugs (if any)
    • Shows parameters (if any)
    • Shows Conclusions (if not empty)

As you can see, this report shows all the important information in a concise format.

Exporting Test Results to Microsoft Word

The Test Run's Report can also be exported to Microsoft Word using the "more->Export to Office" menu of a Test Run. This produces a Word file which contains the same information in a more Word-friendly format than the Test Results Report which appears on the web page.

An example output of the Word export can be seen here: exampleTestReport.docx

Analyzing Test Results and Coverage

codebeamer provides the coverage browser tool to follow the execution of a test and analyze its results. The coverage browser is available for the following trackers under the following names:

Tracker Name                                 Coverage Browser Name
Test Sets                                    Test Set Results
Test Runs                                    Test Run Browser
Requirement, User Stories, Epics, Releases   Test Coverage

codebeamer offers multiple ways to get quick overviews or detailed information on testing progress and results.

Configure Coverage Browser Behavior

Automatic Submission of Results

By default, coverage browser results load immediately and automatically. It is also possible to change this behavior so that the filters can be edited before the results are shown. To disable the automatic loading of results, the following section must be added to the application configuration:

   "testCoverage" : {
      "automaticSubmitDisabled" : true
   }


If automaticSubmitDisabled is set to true, the user must click the [GO] button to apply the filters and load the results.

If automaticSubmitDisabled is set to false or this section is not present in the application configuration, the results load automatically.



Default Preselection of Trackers

The number of preselected trackers is limited to five by default when using the filters below in the specified coverage browsers:


Filter Type: Initial level filter
Coverage Browser Type: Test Run browser, Release Coverage
Application Configuration:
   "testCoverage" : {
     "maxNumberOfInitialTrackers" : 5
   }

Filter Type: Test Case filter
Coverage Browser Type: Any kind of Test Run browser.
Application Configuration:
   "testCoverage" : {
     "maxNumberOfTestCaseTrackers" : 5
   }

Filter Type: Second level filter
Coverage Browser Type: Epics, Requirements and User Story trackers.
Application Configuration:
   "testCoverage" : {
     "maxNumberOfSecondaryTrackers" : 5
   }



The default value of both the "maxNumberOfInitialTrackers" and the "maxNumberOfTestCaseTrackers" configuration is 5. To increase or decrease the default value, amend the Application Configuration accordingly.


In case the number of trackers listed exceeds the value defined in the application configuration, codebeamer displays the following warning messages: For performance reasons, none of the Trackers are selected on Initial level. Please select them manually!

For performance reasons, none of the Trackers are selected in Test Case level. Please select them manually!

If page view is

  • enabled: codebeamer displays the warning message only on the first opening of a coverage page.
  • disabled: The warning message is shown whenever a user opens a coverage page.

Coverage Browser

Watch a video on the coverage browser here.

The coverage browser is the tool to analyze the results of the latest test runs and the resulting test coverage for the following tracker types:

  • Requirement
  • User stories
  • Epics
  • Releases
  • Test sets
  • Test runs

To access this tool, click on the Test Coverage, Test Set Results, or Test Run Browser item in the context menu of the tracker.

Alternatively, click the coverage browser icon in the toolbar.

The coverage browser page looks like this:


The main area contains a tree with several columns, depending on the selected filters. The first column (Tracker Items) is the tree itself. The tree might contain three types of items:

  • The items of the selected trackers.
  • The test cases assigned to those items. The test case nodes are the children of the requirement nodes to which they are associated.
  • The test runs of the test cases. These can be either quick test runs or normal test runs. The test runs however are not displayed by default because thousands of them might exist. To learn how to display them, scroll to the section about filtering the test coverage.

The other columns in the grid are the following:

  • Color: Not displayed by default. It is possible to turn this feature on by ticking the Show work item groups color check box above the tree. If a tracker item has a color field and it is set, or a tracker item has a subject with a color set, the color is displayed in this column.
  • Coverage: Computed based on the most recent test runs. The results depend on whether the user selected the OR or the AND computation, and how the filters are set.
  • Run by: Shown only for test runs. The user who ran the test case.
  • Test Cases: The number of test cases verifying this requirement.
  • Coverage Analysis: Shows the most recent result of all test cases that verify this particular requirement. Note that this data aggregates upwards, for example, the test case executions of a child requirement contribute to the parent. The bar itself displays the ratio of successful, failed, blocked and not-yet-run test cases. For instance, a full-green bar means: "all test cases that verify this requirement or any of its children were executed successfully". This result depends on how the filters are set.
  • Run at: The time when the test case was run.

An item in a status with Closed or Resolved meaning is only displayed on the Release Coverage page if the Resolution field of the item is either empty, or the meaning of the item Resolution is Successful.

Computation and Meaning of the Coverage Column

The computation of this column is based on the status of the Calculate Coverage with Or check box. If the check box is selected, the OR operation is used to calculate the coverage; otherwise the AND operation is used (this is the default). This table summarizes the possible combinations:

Combination                               Result with AND    Result with OR
Passed, Failed                            Failed             Partly Passed
Passed, Blocked                           Blocked            Partly Passed
Passed, Partly Passed                     Partly Passed      Passed
Blocked, Partly Passed                    Blocked            Partly Passed
Blocked, Failed                           Blocked            Blocked
Failed, Partly Passed                     Failed             Partly Passed
Passed, Blocked, Failed                   Blocked            Partly Passed
Passed, Blocked, Partly Passed            Blocked            Partly Passed
Passed, Failed, Partly Passed             Failed             Partly Passed
Failed, Blocked, Partly Passed            Blocked            Partly Passed
Passed, Blocked, Failed, Partly Passed    Blocked            Partly Passed
Passed, Not Applicable                    Partly Passed      Passed
Partly Passed, Not Applicable             Partly Passed      Partly Passed
Failed, Not Applicable                    Failed             Failed
Blocked, Not Applicable                   Blocked            Blocked
No test cases for the requirement         Not Covered        Not Covered

The interpretation of this table: take the test runs shown in the tree for a test case. Find the matching combination in the first column. The coverage status is then found in the second or third column (based on the operator). The computation is the same for requirements (there you combine the coverage statuses of the test cases of the requirement).

Note that the Partly Passed result with the OR operator is "stronger" than Blocked or Failed. For example, this image shows a subtree where the coverage is computed with the AND operator (default):

The requirement Brakes is Failed because it has a Failed and a Passed test run. According to the first row of the table, this combination results in Failed with the AND operator.

However if the OR operator is used, the coverage is Partly Passed:

Note that the computation is affected by the Number of recent Test Runs shown option in the filter because it only considers the test runs shown in the tree.
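
To make the lookup described above concrete, here is a minimal Python sketch. It is not codebeamer's actual implementation, only a transcription of the table above; the handling of a single status and of undocumented combinations is an assumption.

# Transcription of the combination table: given the set of statuses of the test runs
# shown in the tree for one test case (or the coverage statuses of the test cases of
# one requirement), look up the resulting coverage for the AND and OR operators.
COMBINATIONS = {
    frozenset({"Passed", "Failed"}):                             ("Failed",        "Partly Passed"),
    frozenset({"Passed", "Blocked"}):                            ("Blocked",       "Partly Passed"),
    frozenset({"Passed", "Partly Passed"}):                      ("Partly Passed", "Passed"),
    frozenset({"Blocked", "Partly Passed"}):                     ("Blocked",       "Partly Passed"),
    frozenset({"Blocked", "Failed"}):                            ("Blocked",       "Blocked"),
    frozenset({"Failed", "Partly Passed"}):                      ("Failed",        "Partly Passed"),
    frozenset({"Passed", "Blocked", "Failed"}):                  ("Blocked",       "Partly Passed"),
    frozenset({"Passed", "Blocked", "Partly Passed"}):           ("Blocked",       "Partly Passed"),
    frozenset({"Passed", "Failed", "Partly Passed"}):            ("Failed",        "Partly Passed"),
    frozenset({"Failed", "Blocked", "Partly Passed"}):           ("Blocked",       "Partly Passed"),
    frozenset({"Passed", "Blocked", "Failed", "Partly Passed"}): ("Blocked",       "Partly Passed"),
    frozenset({"Passed", "Not Applicable"}):                     ("Partly Passed", "Passed"),
    frozenset({"Partly Passed", "Not Applicable"}):              ("Partly Passed", "Partly Passed"),
    frozenset({"Failed", "Not Applicable"}):                     ("Failed",        "Failed"),
    frozenset({"Blocked", "Not Applicable"}):                    ("Blocked",       "Blocked"),
}

def coverage(statuses, use_or=False):
    """Coverage status for the given run statuses.

    use_or=True corresponds to selecting the "Calculate Coverage with Or" check box;
    use_or=False is the default AND computation.
    """
    statuses = frozenset(statuses)
    if not statuses:
        return "Not Covered"          # no test cases (or runs) for the requirement
    if len(statuses) == 1:
        return next(iter(statuses))   # assumption: a single status is used as-is
    # Combinations not documented in the table above raise a KeyError.
    result_and, result_or = COMBINATIONS[statuses]
    return result_or if use_or else result_and

# The example from the text: a requirement with one Failed and one Passed test run.
print(coverage({"Passed", "Failed"}))               # -> Failed (AND, the default)
print(coverage({"Passed", "Failed"}, use_or=True))  # -> Partly Passed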

Filtering in the Coverage Browser

codebeamer provides filtering options in the coverage browser. The following filtering criteria are available:

  • Coverage: Filters the items by the content of the Coverage column. Only the matching requirements are shown in the tree.
  • Feature Stability: An item is stable if all of the last test runs of its test cases passed. It is unstable if at least one of the last 10 test runs failed, is blocked, etc. Only the matching requirements are shown in the tree.
  • Number of recent Test Runs shown: By default, only one test run is shown in the tree. By setting this field, you can display a maximum of 10 of the most recent test runs for each test case.

Additional filters and AND/OR logic can be added on three levels for the test run, test set, and release trackers, and on four levels for the user story, requirement, and epic trackers, where the additional second level is available. The four levels are as follows:

  • Initial level: Select the supported trackers.
  • Second level: Filter by the referenced trackers. This level can be enabled or disabled with a check box.
  • Test Case Filters: Filter by the available fields of the selected test cases trackers.
  • Test Run Filters: Filter by the available fields of the selected test runs trackers.

Note that historical filters cannot be used in the coverage browser.


It is possible to filter by the name of the requirement or test case with the text box above the coverage tree.

The filter is not case sensitive. By default, it filters only the requirement-type trackers. This means that all test cases of the matching requirements are shown.

However, if the Search in Test Cases checkbox next to the name filter box is selected, the test cases are searched and filtered too. To search only for test cases, uncheck the Search in Work Items checkbox and check Search in Test Cases.

Note that the filter shows the children of the matching requirements as well, even if they do not match the filter.


Filtering in the Coverage Browser before codebeamer 22.10-LTS (GINA) Version

There are many filtering options on test coverage. The requirement-related options are in the left column of the filter section, while the test run related ones are on the right:

The requirement-related options are as follows:

  • Coverage: filters the items by the content of the Coverage column. Only the matching requirements are shown in the tree.
  • Status: filters the requirements by their statuses. You can also filter by the status meaning (Unset, In Progress, Resolved, Closed). This is useful in cases when the coverage browser shows multiple trackers and you want to see all closed items across all trackers. Only the matching requirements are shown in the tree.
  • Tracker: filters by tracker. Only the requirements from the matching trackers are shown in the tree. This is useful in cases when the coverage browser shows multiple trackers.
  • Work Item Release: filters the requirements by the release they are assigned to. Only the matching requirements are shown in the tree.
  • Feature Stability: An item is stable if all the last test runs of its test cases passed. It is unstable if at least one of the last 10 test runs failed, is blocked, etc. Only the matching requirements are shown in the tree.
  • Test Case Type: filters the test cases by their type (value of the Type field in the test case tracker).

The test run related options filter the test runs that are used in the computation of the coverage and the test runs shown in the tree, but the Last 10 runs column is always the same. These options, in order, are:

  • Configuration: filters the test runs by the configuration they were run on.
  • Test Run Release: filters the test runs by the release they were run on.
  • Running Interval: only the test runs run in this interval will match.
  • Run by: filters the test runs by the test runner.
  • Number of recent Test Runs shown: by default, only one test run is shown in the tree. By setting this field, you can display at most 10 of the most recent test runs for each test case.

To change the tracker from which the requirement items are listed, select another tracker in the topmost selector. Only requirement-type trackers are available as options here.

To narrow down the set of requirements, enter a term in the text box in the header of the table; only the requirements whose name matches the entered term will be visible.


Select Trackers and Branches (from codebeamer 9.2.0 to codebeamer 22.04)

This section of the coverage browser lets the user add an additional level to the coverage browser. If the tracker has a downstream reference from another requirement or user story tracker, it is possible to add it to the second level. On the third level, users can select a test case tracker referencing the ones selected on the second level. If there are second level trackers selected, the coverage browser shows the items from those trackers that reference the first level tracker items.


Filtering in the Test Run Browser

The operation of the Initial level filter and the Test Case Filters in the Test Run Browser is described in the table below:


Filter             Applied To                                  Limitation
Initial level      First level test cases and test sets.      Applied only to first level test cases and test sets.
Test Case Filters  Test cases in test sets (test set runs).   Not applied to first level test cases (quick test case runs).


Using the Test Case Filters is recommended when the coverage for test set runs should be modified and a test set tracker is selected on the Initial level. To avoid confusion, the Test Case Filters can be deactivated by deselecting the related checkbox.


Coverage Statistics

The Test Coverage Statistics section shows the aggregated coverage statistics. This information is computed based on the filters so it matches with the coverage tree. The statistics are grouped by trackers.

Exporting Test Coverage

The coverage can be exported to Microsoft Word and Excel. The exported document contains the same information as the tree and the statistics table, including the filter settings. To export the coverage, click the Export to Office link on the action bar.

Coverage Browser Presets

From codebeamer 22.10-LTS (GINA), users can save their coverage browser settings as presets. To save a new preset, do the following:

  1. Configure the coverage browser filters.
  2. Click the Save new Preset link on the top of the page.
  3. Add a name for the preset.
  4. Optionally, add a description and select the roles that have permission to use this preset. If none is selected, the preset is private.
  5. Click [Save].


Once a preset is saved, it can be selected by clicking the Load/Manage Presets link. Saved presets can be edited here as well. When a saved preset is in use, clicking the Save current Preset link overwrites the saved preset.

Non-private presets are included in project exports.

Sharing the Coverage Browser

From codebeamer 22.10-LTS (GINA), the link to the coverage browser with the applied filters can be shared. The sharing function is available by clicking the share icon in the toolbar. Here, the link can be copied, or users can add the roles, groups, user names, or email addresses with whom they want to share the coverage browser.


Test Case Library and Test Set Library

When building a product, you may want to reuse the same test case in multiple versions. A test case library and a test set library help you with this task.

A test set library is a collection of test set trackers while a test case library contains test case trackers. You can use these libraries on the create/edit test case (for reusing test steps) and the create/edit test set (for reusing test cases) pages.


With the filter above the tree you can filter the items by their status meaning:


Configuring the libraries

You can configure the libraries by clicking on the cog icon above the trees.

This will bring up an overlay where you can select the trackers that you'd like to display in the library. Since this is a user level setting you'll see the same list of trackers wherever you open the library.

Note that codebeamer stores the list of the selected trackers. When you add a new tracker to your project you have to reconfigure the library if you want to display that too. Also, the trackers of the current project are always displayed in the tree.


FAQ

Round-trip editing TestCases in Excel, and importing/exporting TestCases from Excel or Word

See this wiki page for details: Importing and round-trip editing of TestCases with TestSteps in Excel

How to turn off Timing of TestRuns

The Test Runner automatically measures the time it takes to run each individual Test Case, and saves and aggregates this time to the Test Runs. This feature can help the test team estimate how long a testing effort will take next time.

However, sometimes this feature is unwanted (for example, when regulations forbid it); in that case, it can be disabled completely.

Timing can be turned off in general.xml by setting allowTiming="false" on the testManagement element. For example:

<testManagement ... allowTiming="false" ... ></testManagement>


When timing is turned off:

  • The timer control won't appear on the TestRunner dialog
  • No time information is collected and saved, so the run time of Tests will always be 0

I do not want to use "Blocked" result, how can I disable that?

You can turn off the "Blocked" result completely and globally by setting the testRunnerShowsBlock flag to false in general.xml. Alternatively, you can remove it for a single Test Run tracker by turning off the workflow on that tracker. For example:

<testManagement ...
	testRunnerShowsBlock="false"
></testManagement>



I do not want to allow "End Run" button in Test Runner, how can I remove that?

You can turn this off completely and globally by setting the testRunnerShowsEndRun flag to false in general.xml. Alternatively, you can remove it for a single Test Run tracker by turning off the workflow on that tracker. For example:

<testManagement ... testRunnerShowsEndRun="false" ... ></testManagement>


I want to set up Test Management defaults globally. How can I do that?

These options, available from codebeamer 8.1, are configurable in general.xml:

<testManagement ...
	runOnlyAcceptedTestCases="true"
	createTestRunForEachTestCase="false"
	includeTestsRecursively="false"
	testRunnerShowsBlock="true"
	testRunnerShowsEndRun="true"
></testManagement>

The explanations are (see codebeamer-config-1.0.dtd for complete reference):

 runOnlyAcceptedTestCases          Whether Test Management runs only Accepted TestCases.
 createTestRunForEachTestCase      Whether creating Test Runs should create a separate TestRun for each TestCase.
 includeTestsRecursively           Whether the children of TestCases should be included by default.
 testRunnerShowsBlock              Whether the TestRunner shows the BLOCK button.
 testRunnerShowsEndRun             Whether the TestRunner shows the END RUN button.
 canChangeRunOnlyAcceptedTestCases Whether the Test Manager can choose between running accepted or non-accepted TestCases. If set to false, the Test Manager cannot change whether to run only "Accepted TestCases" when creating Test Runs. Available since codebeamer version 10+.
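
As an illustration only, the following sketch shows how canChangeRunOnlyAcceptedTestCases (codebeamer 10+) could be combined with runOnlyAcceptedTestCases on the testManagement element; the other attributes of the element are omitted here:

<testManagement ...
	runOnlyAcceptedTestCases="true"
	canChangeRunOnlyAcceptedTestCases="false"
></testManagement>

With these values, only Accepted TestCases are run, and the Test Manager cannot change that choice when creating Test Runs.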