Codebeamer QA: Test Management #95044/HEAD / v397116 |
Tags: Test Management
Test Management with Codebeamer
Overview

The test management facilities in codebeamer enable controlled testing of software, hardware and any other kind of product or system. What makes codebeamer different from other test management software on the market is its holistic approach to collaborative testing. In codebeamer, tests do not exist in an isolated silo, but are tightly integrated with the requirements, the Wiki documentation, the bug and task trackers, the source code and the full lifecycle of the tested product. codebeamer's access control and web interface make it ideal for collaborative testing, dividing the work between test engineers (who define the test cases to be executed, specify the configurations and releases to be used, and coordinate the work) and testers (who execute the test cases).

How does Test Management work?

The best practice summarized:
Glossary

Testing Roles

Test Engineer
A Test Engineer is a professional who determines how to create a process to test a particular product, in order to assure that the product meets applicable specifications. The Test Engineer produces the Test Cases.

Tester
A Tester is responsible for the core activities of the test effort, which involve conducting the necessary tests and logging the outcomes of testing.

Test Plan
A Test Plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements.

Test Case
A Test Case is a detailed procedure that fully tests a feature or an aspect of a feature. Test cases consist of detailed descriptions, pre-actions, test steps, and post-actions.

Test Step
A Test Step is one step in a Test Case procedure. Watch a video on Test Cases and Test Steps here. Please note that the default values for the "Action" and "Expected result" fields in the Test Case configuration do not have any effect during test step creation and Test Runs.

Test Set
A Test Set is a logical group of related Test Cases. It is associated with a list of releases it is allowed to be executed with, and a list of test configurations that are allowed to be used while executing it.

Test Configuration
A Test Configuration is one possible configuration of the product under testing.

Test Run
A Test Run is an actual execution of a Test Set using one specific Release and one specific Test Configuration.

Test Results
The system starts showing results for a test run as soon as a result is available for at least one test case in the run. This means that, depending on the status of the Test Run, the results shown by the system may be partial or final.

Passed
A Passed result means that all executed test cases and all checked aspects worked as expected so far.

Partly Passed
Has a similar meaning to the Passed result: all executed test cases have passed.
Used when the Test is parameterized and the Tester skips some of the Test parameters. All parameters that were used Passed. For more information see: Test Parameterisation and What is Partly Passed?

Failed
A test run is considered failed if at least one of the checked aspects (test cases) did not work as expected.

Blocked
The test could not be completed yet, because something blocked the full completion of the testing procedure. Tests with this result are not considered closed! After removing the cause of the block, the tests should be continued from the point of the block.

Not Applicable
Test step and test run results can be set to Not Applicable in case certain steps or runs are considered not relevant. Test steps with Not Applicable results can be ignored.
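As an illustration of the result semantics above, a Test Run's overall result could be derived from its test case results roughly as follows. This is a hedged sketch: the function itself and the precedence between Failed and Blocked are assumptions for illustration, not Codebeamer's actual implementation.

```python
def aggregate_run_result(case_results):
    """Derive an overall Test Run result from individual case results.

    Illustrative only. Not Applicable cases are ignored, any failure
    fails the run, and a blocked case leaves the run not yet closed.
    """
    relevant = [r for r in case_results if r != "Not Applicable"]
    if not relevant:
        return "Not Applicable"
    if "Failed" in relevant:
        return "Failed"          # at least one checked aspect did not work
    if "Blocked" in relevant:
        return "Blocked"         # run not closed; continue after unblocking
    if "Partly Passed" in relevant:
        return "Partly Passed"   # some test parameters were skipped
    return "Passed"
```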
Logic of the Not Applicable result:
Test Coverage

Test Coverage is a metric expressing which requirements of your product are verified by test cases, and by how many of those test cases. Simply speaking, the more Test Cases exist for a Requirement, the better it is covered.

Creating Test Cases

In codebeamer, a test case is a work item in a Test Case typed tracker. When you create a new project, a default Test Case tracker will be available inside the "Trackers" menu at the top. You can create a test case in the traditional way: just click the New Item link when browsing a Test Case category. The Test Case editor page is, however, different from the editors of other items.
The main parts of the page:
There are four special fields in the test case trackers:
Test Step Editor

The test step editor is a simple table with editable cells. The Action and Expected result cells can be edited with the rich text editor.
Test Step attributes

A test step is defined by three attributes: the action, the critical flag and the expected result.
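As a simple illustration, the three attributes of a test step could be modeled like this. This is a hypothetical sketch, not Codebeamer's internal data model:

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    action: str              # what the tester should perform
    expected_result: str     # the outcome to compare against
    critical: bool = False   # the critical flag
```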
Adding new test steps with the test step editor is very easy and can be done in several ways:
You can edit a cell by clicking in it. Rows can be re-arranged by drag and drop. And you can also delete rows by clicking the button. The Action and Expected Result fields can contain wiki markup, but this is only rendered after saving the test case. You can access the same test step editor in the Document View of the test case trackers.

Generating Test Cases for Requirements

When viewing your requirements in the Requirement Document View, you can easily generate test cases for them. Just click the + icon next to the Requirement, select "Generate Test Case", and choose the target TestCase tracker from the sub-menu:
This will immediately generate a test case for the requirement (or a whole test case structure if it is a folder) in the selected Test Case tracker of the project. Please note that this test case is just a starting point, mostly filled with empty values. You should open it and give it an exact definition, specify the test steps and elaborate it in general.
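The generation step above could be pictured with a sketch like this. Everything here is hypothetical (the function, the field names, the "New" status): it only illustrates that a mostly empty stub is created in the target tracker, with a traceability link back to the requirement, to be elaborated afterwards.

```python
def generate_test_case_stub(requirement_name, target_tracker):
    """Return a minimal, mostly empty Test Case stub for a requirement.

    Purely illustrative; not Codebeamer's API or data model.
    """
    return {
        "tracker": target_tracker,
        "name": f"Test: {requirement_name}",
        "verifies": requirement_name,  # traceability link to the requirement
        "steps": [],                   # to be specified by the Test Engineer
        "status": "New",
    }
```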
Organizing Test Cases into Test Sets

Earlier in the testing process, the Test Engineers defined the Test Plan, which contains a big tree structure of Test Cases. To be able to execute these tests, the Test Engineer decides which subsets of these Tests are to run together. Such a selection of Test Cases is called a Test Set, which must be defined before executing the test runs. The Test Set also defines the order in which the Test Cases are executed, and whether these tests must be executed in this specific order or in any arbitrary order. To create a Test Set from Test Cases,
The number of Test Cases from which a Test Set can be created is limited when the Test Set is created from the Test Cases tracker view. The default maximum is 1000, which can be modified in the Application Configuration. In case the number of Test Cases selected in the left side tree exceeds the limit set in the Application Configuration, the following error message is displayed:
You can generate Test Sets from the maximum of 1000 Test Cases.
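The size check described above could be sketched as follows. The function is illustrative only; the default limit of 1000 and the quoted error text come from this document, everything else is an assumption.

```python
DEFAULT_MAX_TEST_CASES = 1000  # configurable in the Application Configuration

def check_test_set_size(selected, limit=DEFAULT_MAX_TEST_CASES):
    """Raise if more Test Cases are selected than a Test Set may be created from."""
    if len(selected) > limit:
        raise ValueError(
            f"You can generate Test Sets from the maximum of {limit} Test Cases."
        )
```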
Creating Test Sets

In codebeamer, a Test Set is an item in a Test Sets tracker. When you create a new project, a default "Test Sets" tracker is available below the "Trackers" menu. You can create a Test Set by simply opening the Test Set category and clicking the New Item link. The Test Set editor, a specialized item editor page, will appear and look like this:
The main parts of the page:
There are a few special fields in the Test Set item:
Adding/Deleting Tests in Sets, and Ordering them

As mentioned previously, you can add Test Cases to Test Sets by dragging and dropping them from the Test Plan tree in the Test Set editor page. You can drop several Test Cases or even folders of Test Cases; all of them will be added to the Test Cases listing at once. The Test Set cannot contain Folder or Information typed Test Cases: if you drop a "Folder" Test Case onto the Test Set, it will NOT be added to the Set itself, but all of its children, grandchildren, etc. Test Cases will be added (except the Folder or Information Test Cases, obviously).

The order of the Test Cases is important for the execution: this is the order in which the tests will be (or should be) run by the Testers. The dropped Test Cases are therefore initially added in the same order as they appear in the tree. When dropping Test Cases, they are inserted into the selection exactly where you drop them, so you can, for example, add the new selection between two test cases.

Test Cases which are already added to the Test Set appear "grayed out" in the Test Case tree. This does not prevent you from dragging and dropping the same Test Cases again: dropping them again adds them at the new location, and the previously added duplicates are automatically removed. Tip: this is an easy and effective way to reorganize the order of the Test Cases, too!

Test Cases can be removed from the listing by clicking the button that appears when moving the cursor over the test case in the list. The order of Test Cases in the Test Case listing can be altered by dragging and dropping them upwards or downwards in the list, as needed.
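The drop rule above can be sketched as a small tree walk: Folder and Information typed Test Cases are never added themselves, only their descendant Test Cases are, in tree order. The `TC` class and the recursion over ordinary Test Cases' children are illustrative simplifications, not Codebeamer code.

```python
from dataclasses import dataclass, field

@dataclass
class TC:
    name: str
    type: str = "Test Case"  # "Test Case", "Folder" or "Information"
    children: list = field(default_factory=list)

def dropped_test_cases(node):
    """Return the Test Case names added when `node` is dropped on a Test Set."""
    result = [] if node.type in ("Folder", "Information") else [node.name]
    for child in node.children:
        result.extend(dropped_test_cases(child))
    return result
```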
Duplicated Test Cases in Test Sets

When adding a Test Case to a Test Set, it may happen that the same Test Case is added twice or more as a duplicate. The Test Set can optionally contain a TestCase multiple times if the "Allow Duplicates" boolean field of the TestSet is set to true. When running a TestSet with duplicate TestCases, the Test-runner dialog will display and run the same TestCase as many times as it appears in the TestSet, and in the same order. When building the TestSet, the duplicated TestCases are indicated with a special icon. The screenshot shows two duplicated TestCases in this TestSet (marked as #1), and the icon #2 indicates that these are duplicates. To remove the duplications, click on the #2 icon, which removes the duplicates of that Test Case from the list.
To configure if a TestSet allows duplicated TestCases you can:
Test Set Report

On the Reports page, and in Table View, the Test Cases field of a Test Set tracker is rendered as a table field; therefore, only the first value of the Test Cases field is returned. The following message is displayed when hovering over the warning icon: Only the first value appears for the Test Cases field. Please open the item in item Details mode to be able to see all Test Cases field values.
Composing Test Sets from other (included) Test Sets

For simpler Test scenarios it is sufficient to build TestSets by simply adding individual TestCases. However, as the number of Tests grows, Test architects may want to compose smaller or larger groups of TestCases into TestSets, and then build bigger TestSets from these "small" TestSets as building blocks. This is possible: TestSets can include other TestSets, and whenever an included TestSet changes (TestCases are added or removed), that change is automatically reflected in the outer/bigger TestSet. To include a TestSet in your currently edited TestSet, just drag and drop the desired TestSet into the editor. Such a dropped TestSet will appear in the TestCase/TestSet listing as the screenshot shows:
So you can mix the TestCases and TestSets inside a bigger TestSet as you wish. The rules are:
Once such a "composite" TestSet is saved, you can view the detailed TestSet/TestCase hierarchy by expanding it with the usual "down-arrow" icon in front of it:
Running a composed Test Set

When running a TestSet which includes other TestSets, the TestRunner will walk through all the TestCases in the order they appear in the TestSet/TestCase hierarchy. A few consequences of this are:
Quick assignment of Test Cases to Test Sets

When you are in the document view or table view of a Test Case tracker, you can also quickly assign a selection of Test Cases to existing Test Sets or create new Test Sets from this selection. This functionality is available in the menus. There you can:
The dialog where you can choose a Test Set looks like the one below; here you can do a free-text search for any Test Set. Note: the Test Set history and search will only show those Test Sets where the user has permission to add Test Cases. So if you don't find some Test Sets here, that might be the reason...

HTML Mode for Table Fields when Comparing Tracker Item Versions

Since Codebeamer release HUSKY, the availability of the HTML mode is extended to table fields as well on pages where items are compared, for example item compare, item revert, working set merge, review hub and so on.
On the issue difference overlay, you can choose between the Wiki and HTML modes of display either in the Change display mode drop-down list on the top right, or in the separate display mode drop-down lists available for each table field. The display mode set in the Change display mode drop-down list is applied to the whole content of the overlay. The mode of display can also be changed separately for each table field using the drop-down lists next to them.
The default value is Wiki. If the display mode is set to:
Filtering on the Test Cases & Sets Tab

Since Codebeamer release HUSKY, filters, AND/OR and Order by logic, and columns can be added to the Test Cases & Sets tab on the item details page of Test Sets.
The usage and operation of these widgets on the Test Cases & Sets tab are the same as, for instance, in reports or in table view. Most filtering options are available (Default Fields, Common Reference Fields, Reference Filters, Suspected Link Filters, Review Hub Filters, Item Based Review Filters, Tag Filters, Historical, Other), and the special fields of those trackers from which items are added to the Test Set are also included. However, the Filtering Reference Items section has been disabled to avoid performance issues. The context of filtering and ordering, and the list of columns that can be added, is defined by the items included in a Test Set.
The Test Case & Set column is fixed and cannot be removed, as it displays the names of the test cases and test sets included in the Test Set.
Shared Fields can also be created between different trackers on the Test Cases & Sets tab to make filtering and ordering more effective.
When adding Test Set B to Test Set A, the test cases of Test Set B are listed on the Test Cases & Sets tab as the children items of Test Set B, highlighted with a yellow background. The set filtering criteria are applied both to the parent and children items, while the ordering is only applied to the parent items on the Test Cases & Sets tab. In case a parent item is returned that meets the filtering criteria, all of its children items are listed when expanding the parent item. The children items that meet the filtering criteria are highlighted in yellow. The ones that do not meet the criteria are displayed with a red background, and the following note is displayed on hover: "The item is not matching the set Filter(s) but displaying because one of its Child/Parent items is matching the Filter(s)."
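The highlight rule above can be sketched as a small piece of pseudologic: a parent row is listed if it or any of its children matches the filter, and expanded children are marked "yellow" when they match or "red" when they are only shown because a related item matches. Purely illustrative, not Codebeamer code.

```python
def classify_rows(sets, matches):
    """sets: {parent_name: [child_name, ...]}; matches: predicate on names.

    Returns the visible parents, each mapping its children to a highlight color.
    """
    rows = {}
    for parent, children in sets.items():
        if matches(parent) or any(matches(c) for c in children):
            rows[parent] = {
                c: ("yellow" if matches(c) else "red") for c in children
            }
    return rows
```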
The set filters are not stored; they are lost when refreshing the page or navigating away and back. The filters cannot be used when editing the item details page.
Test Sets and Test Cases are displayed on the Test Cases & Sets tab with the following structure:
Initiating Test Runs

You need three components to initiate a new test run:
Creating Test Runs directly from Test Cases (without Test Sets)

Alternatively, for simplicity, you can run a single Test Case or a selection of Test Cases without putting them into a Test Set.
In all cases a new Test Run will be created which will contain and run the selected Test Cases. Optionally you can also add the children Test Cases recursively during this process...
Adding and Removing Test Cases in Test Runs Directly Generated from Test Cases

Since Codebeamer release 2.0 (HUSKY), you can edit the content of Test Runs in the Ready for Execution (in case of regular test runs) and To Be Approved (in case of formal test runs) statuses that were generated directly from Test Cases. Test Cases can be added to and removed from such Test Runs, and the order of Test Case execution can be modified. This feature does not apply to Test Runs generated from Test Sets.

Adding Test Cases to a Test Run

To add a Test Case, open the relevant Test Run and click the Edit icon on the top left of the toolbar.
Test Cases can be added to the Test Run via drag and drop from the right side selector tree panel to the Test Cases section of the current Test Run. The order of the newly added Test Case can also be modified. The drag and drop selector tree panel is disabled by default, and can be enabled in the Application Configuration by the System Admin. The feature remains disabled for items where an item review workflow is implemented.

"testManagement": {
    "enableTestCaseEditAfterTestRunCreation" : true/false
}
with the Remove All Duplicates button that deletes all the duplicated Test Cases when clicked. Duplicates can also be removed by clicking the This is a duplicate Test. Click to remove other duplicates and keep this. option displayed when hovering over a duplicated Test Case.
Test Runs with duplicated Test Cases cannot be saved. When trying to save such Test Runs, the following warning message pops up: "Cannot save because there are duplicated Test Cases in this Test Run. Please remove the duplicates."
Only Accepted Test Cases can be added to a Test Run if:
Deleting Test Cases from a Test Run

To delete a Test Case from a Test Run, use the "Click to delete from Test Run." icon displayed when hovering over a Test Case.
A Test Run can only be saved if it contains at least one Test Case. Empty Test Runs cannot be saved. When trying to save Test Runs with 0 Test Cases, the following error message is displayed: "Could not save Test Run because no runnable Test Cases were found."
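The two save rules above (no duplicates, at least one Test Case) can be combined into one validation sketch. The function is illustrative; only the quoted error texts come from the product messages above.

```python
def validate_test_run(test_case_ids):
    """Apply the two documented save rules to a Test Run's Test Case list."""
    if not test_case_ids:
        raise ValueError(
            "Could not save Test Run because no runnable Test Cases were found."
        )
    if len(set(test_case_ids)) != len(test_case_ids):
        raise ValueError(
            "Cannot save because there are duplicated Test Cases in this "
            "Test Run. Please remove the duplicates."
        )
```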
Important notes:
Options for creating Test Runs

When creating a Test Run, you have a few options to configure which determine how the Test Runs are created, what Test Cases they contain, and which users get assigned to the new Test Runs. You can change these options by clicking on the "Test Run Creation Options" toggle on the Test Runs page.
The options are:
Distributing the Test Run work between multiple Users or Roles

By default, codebeamer uses a "Shared" model of distributing the Tests of a Test Run, which means that:
However if your Test Team needs more flexibility you can choose to create multiple Test Runs for parallel Testing of a Test Set. This option appears on the UI when creating a new Test Run as:
There the user can:
So the difference between the two latter options is that the "Users only" mode will create one Test Run for each user, expanding any Roles that appear in the Assigned To field to their members, while the "With Roles/Groups" option will assign the Test Runs to Roles/Groups directly if they appear.

An example: as seen in the previous screenshot, the Test Run is assigned to "bond and Tester". "bond" is an ordinary user, but Tester is a Role with possibly several members (for example Joe and Fred). If the "...Users Only" option is chosen, then the system will create 3 Test Runs: one for "bond" and two more for "Joe" and "Fred", as they are both in the Tester role. The "...with Roles" option will only create 2 Test Runs: one assigned to the "bond" user and a second assigned to the "Tester" role. This role is not expanded.

When creating Multiple Test Runs with multiple Test Configurations

If the "Multiple Test Runs..." option is selected and multiple Test Configurations are also selected, then codebeamer will create one Test Run for every combination of assignee and Test Configuration. For example, if 5 assignees are selected (in the Assigned To selector) and 3 Test Configurations are selected, then codebeamer will create a total of 5*3 = 15 Test Runs: 3 Test Runs for each assignee, where each Test Run contains one of the 3 Test Configurations.

Creating Releases for Testing

You can define the releases of your product in one of the Releases trackers: either "Customer Requirement Specifications" or "System Requirement Specifications". For that, just click "Trackers" at the top, choose one of the Releases trackers, and start adding new items or modifying the existing ones. Please note that the very same releases will be globally available in this project:
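The Test Run fan-out for multiple assignees and multiple Test Configurations, described earlier in this section, is a plain cartesian product and can be sketched as follows. The function and the example names are illustrative only.

```python
from itertools import product

def plan_test_runs(assignees, configurations):
    """Return one (assignee, configuration) pair per Test Run to be created."""
    return list(product(assignees, configurations))

# 5 assignees and 3 Test Configurations -> 5*3 = 15 Test Runs
runs = plan_test_runs(["bond", "Joe", "Fred", "Ann", "Eva"],
                      ["Windows", "Linux", "macOS"])
```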
Creating Configurations for Testing

Similarly to releases, you define the configurations of your product that you want tests to be executed on in the "Test Configurations" tracker. You can:
Executing Test Runs

Once the Test Engineer has initiated a Test Run, it will be immediately available for the Testers to run. Testers will find the test runs assigned to them in the "My Start -> My Issues" menu. Alternatively, they can open the "Test Runs" tracker and pick the open Test Runs. There are two kinds of Test Runs stored in Test Run trackers:
To execute runs, pick a top-level issue (a Test Set Run) from the "Test Runs" tracker. Choose one that is still "open" (completed Test Runs cannot be re-run by default), and click it to get to its properties page. You can click the "Run!" action to start testing. The Test Results section shows the detailed results of each Test Case, and a pie chart shows the progress of this Test Set.
Using Test Runner

The current progress of this Test Set Run is shown at the top. Click the "Run!" link to bring up the Test Runner dialog, which guides you through the execution of the tests in this Set. Test Runner is the primary interface to actually do testing. It helps testers by guiding them through the tests and the test steps to execute. It also records the result of each step (whether passed or failed) and the result of the complete test case. Moreover, it makes it convenient to report bugs if the tester faces defects while running tests. The runner looks as shown below:
Executing test steps and recording results

If the test shown in the Runner has steps defined, then those steps are displayed and the first step is selected for execution. This is denoted by the highlight around the current step: it appears with a small icon and a yellowish background, and its "Actual Result" edit box is open (as seen in the previous runner screenshot). The Tester shall read and execute the Action of the test, then compare the result with the Expected Result, and record the difference in the Actual Result box, if necessary. Starting from codebeamer 9.2.0, the Actual Result can be edited with the rich text editor. By clicking the "Pass Step" / "Fail Step" / "Block Step" / "Not Applicable" buttons, the focus moves to the next step. It is possible to go back to any previous Step by simply clicking the step's row. One can then update the Actual Result or the step's result. When all the steps are completed, the Runner will ask if you are ready to move to the next test case. At this stage, the tester can optionally report bugs, and this is the time to execute the Post Action. If the tester agrees, the Runner will show the next available test. If all Test Cases are completed in the current Test Set, then the execution is done, the Runner closes automatically, and the Test Set goes to Finished status. If there are no steps defined for the current test case, or the tester does not want to execute all of them for some reason, the user can mark the current test case as passed / failed / blocked using the buttons at the bottom.

Finishing a run without completing the remaining Tests

The "End Run" button can terminate the Test Set Run at any time, without executing the outstanding test cases. In this case, the user must decide what the final result for the Test Set is:
Reporting Bugs during Test Executions

Testers can report bugs in the context of the current test being executed in the Runner at any time during the execution of the test. Click the "Report Bug" button to suspend the Test Runner temporarily and display a bug reporting interface. The user has the option to choose the bug tracker where the bug will be reported. This tracker is remembered, and the same tracker will be offered as the default next time. The bug to be reported is automatically initialized from the current test run's data. It captures many details:
For traceability reasons, the bug will also be automatically associated with the current test run. This helps the person who will later fix the bug to reproduce the same environment and the same situation efficiently. In any case, the tester can update the properties of the run and submit the bug report. Once the bug is submitted, the Runner will continue.

Filling the Bug's properties from the Test Run or Test Case

When generating a new Bug report during a Test Case run, the generated Bug's properties will be filled in with certain properties from the related Test Run or Test Case. This means that:
Additionally:
The user has the option to disable copying from the Test Run or Test Case by turning off the checkboxes on the dialog:
The copied values are displayed on the TestRun dialog where the user can correct the values if necessary. The dialog will show information about what values are copied:
Reporting duplicate Bugs during Test Execution

During extensive Testing, it often happens that the same bug is found several times during the execution of a Test Case. To avoid duplicate/repeated bug reports for the same bug, the Test Runner shows the bugs already reported for the same TestCase, and the Tester can choose an already-reported bug instead of creating a new one. This appears on the Test-Runner dialog as follows: by clicking the "Report this" button next to a bug, the Tester can choose the existing bug and associate it with the Test Run:
Reporting already existing Bugs

Another option for reporting Bugs to a Test Run is to find an existing bug using free-text search. This is available on the report-bug dialog: if you enter a search text, you can find any bug and add it to the Test Run like this:
Reporting Bugs to any Test Run at any time - even after running it

Normally Testers report Bugs from the Test Runner while running the Test Run, but sometimes they may want to add new or existing Bugs later, at any time. Since codebeamer version 10.0 this is possible on the Test Run's page:
These "add bug" links will show the same "Add Bug" dialog that appears from the Test Runner with the options of:
Finding the Reported Bugs of a TestCase

If you look at the details of a TestCase, you will find the Reported Bugs for that TestCase on a new tab:

Recording the Conclusion of a Test Run

When finishing the running of a Test, the Tester can add an optional Conclusion. This Conclusion can be used to summarize the result of the run; it is a wiki text added to the "description" of the Test Case Run. The Conclusion can be added in two ways: 1. For Tests with Steps, the Conclusion will be requested on the final dialog, when all Steps are completed:
2. The Conclusion can be added at the bottom of the Test Runner any time:
Conclusion appears:
How does it work? The execution of TestCases

The TestRunner will automatically show the next available TestCase in the current TestSet, which is:
Finding Tests by name in the Test-Runner

In the TestRunner you can find a Test by its name and choose that to run next. Just click on the magnifier icon of
On this dialog, you can search/filter the Test Cases by entering some text in the filter box, and choose any Test Case by using the "Select" button. The previously Suspended Test Cases can also be selected here: they will be resumed and run. Typically, previously Skipped Parameterised Tests appear here as Suspended.

Running only "Accepted" or all TestCases?

By default, the TestRunner will run only the "Accepted" TestCases. The reason for this is that some TestCases' definitions may be incomplete or inaccurate, so the Test Engineer may set them to "Design" or "New" status while they are being corrected. This avoids confusing the Tester with inaccurate TestCase descriptions or false error reports, and prevents wasted test-run effort and time. When creating a new Test Run, the user has the option to choose whether to run "All" or "Only Accepted" Test Cases. This can be configured by opening the "Test Run Creation Options" section when creating the TestRuns:
The "Run only Accepted TestCases" behaviour has been changed in 8.0.1:
The reason why we have changed the setting for single TestCases is to make it easier to use: why would you create a TestRun for a single TestCase which is not Accepted? If you create such a TestRun, that means you want to run it immediately, regardless of its Status. Notes:
Re-running already Finished Tests

Once a TestRun is Finished or Suspended, its run is over. These are represented by the "Finished" and "Suspended" states in the TestRun's transition diagram:
The Tester can partly or completely re-run a TestSet by choosing the "Restart"/"Resume" transitions:
During the restart the Tester can decide which TestCases will be re-run:
The options are:
By default, the Restart will clear the test's results (meaning that the Steps' results and the conclusion will be cleared, and attachments are removed). If you want to keep the results, uncheck the "Clear previous results" checkbox. This behaviour is configurable in the general.xml like this: <testManagement ... rerunClearsResults="true" />

Re-running Tests: how does this work?

When restarting a TestRun, a copy of the whole TestRun hierarchy is made, including the individual TestCases' results. The original Test results are kept unchanged in order to preserve the history. The TestRunner will also initially load and show the previous Run's result, filling the "Actual Result" and the Step's Status with the same values from the previous Run. In previous codebeamer versions - before codebeamer 7.9 - the re-run dialog offered an option to either:
Now the copy is the default option. However, if you want the old behaviour, it can be turned on by setting the "forceCopyOnReRun" flag to "false" in the general.xml like this: <testManagement forceCopyOnReRun="false" ></testManagement>

Re-running finished Tests selectively

One of the options when restarting a TestRun allows the Tester to manually choose which TestCases will be re-run. For that, use this option:
Then a new dialog appears where the user can select which Test Runs will be re-run. In this dialog the user can select all items by result by clicking on the result filters (marked as "2"), or select the Test Runs individually (marked as "1"), and then click Select to save the selection.
Test Parameterisation

Test Parameterisation is a powerful practice to enhance your testing scenarios. The concept of parameters in testing is simply that your Test Cases may define and use some parameters. During the execution of a test case, the parameters are filled in with their actual values, so a parameterised variation of the original test case is produced. Test Parameterisation is an advanced feature which is documented here: Test Parameterisation

Overview of Test Results

In codebeamer 8.1+ we have added an overview of the Test Results which makes it a lot easier to understand and review the Test Results of a Test Set. The following overview information will appear on the page of Test Runs:
The "Test Results" part shows the most important information about the Tests that have been run within the current Test Set. The major parts are:
So, as you see, this Report shows all important information in a concise format.

Exporting Test Results to Microsoft Word

The Test Run's Report can also be exported to Microsoft Word using the "more -> Export to Office" menu of a Test Run. This produces a Word file which contains the same information in a more Word-friendly format than the Test Results Report which appears on the web page. An example output of the Word export can be seen here: exampleTestReport.docx

Analyzing Test Results and Coverage

codebeamer provides the coverage browser tool to follow the execution of a test and analyze its results. The coverage browser is available for the following trackers under the following names:
codebeamer offers multiple ways to get quick overviews or detailed information on testing progress and results.

Configure Coverage Browser Behavior

Automatic Submission of Results

By default, coverage browser results load immediately and automatically. It is also possible to change this behavior so that the filters can be edited before the results are shown. To disable the automatic loading of results, the following section must be added to the application configuration:

"testCoverage" : {
    "automaticSubmitDisabled" : true
}
If automaticSubmitDisabled is set to true, the user must click the [GO] button to apply the filters and load the results. If automaticSubmitDisabled is set to false or this section is not present in the application configuration, the results load automatically.
Default Preselection of TrackersThe number of preselected trackers is limited to five by default when using the filters below in the specified coverage browsers:
The default value of both the "maxNumberOfInitialTrackers" and the "maxNumberOfTestCaseTrackers" configuration properties is 5. To increase or decrease the default value, amend the Application Configuration accordingly.
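A hedged sketch of what such an application configuration entry might look like. The property names are taken from the text above, but their exact placement (here assumed to sit under the same "testCoverage" section used for automaticSubmitDisabled) is an assumption and should be verified against your codebeamer version's configuration reference:

```json
"testCoverage" : {
    "maxNumberOfInitialTrackers" : 10,
    "maxNumberOfTestCaseTrackers" : 10
}
```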
In case the number of trackers listed exceeds the value defined in the application configuration, codebeamer displays the following warning messages:For performance reasons, none of the Trackers are selected on Initial level. Please select them manually!
For performance reasons, none of the Trackers are selected in Test Case level. Please select them manually!
If page view is
Coverage BrowserWatch a video on the coverage browser here. The coverage browser is the tool to analyze the results of the latest test runs and the resulting test coverage for the following tracker types:
To access this tool, click on the Test Coverage, Test Set Results, or Test Run Browser item in the context menu of the tracker.
Alternatively, click the coverage browser icon in the toolbar.
The coverage browser page looks like this:
The main area contains a tree with several columns, depending on the selected filters. The first column (Tracker Items) is the tree itself. The tree might contain three types of items:
The other columns in the grid are the following:
An item in a status with Closed or Resolved meaning is only displayed on the Release Coverage page if the Resolution field of the item is either empty, or the meaning of the item Resolution is Successful. Computation and Meaning of the Coverage ColumnThe computation of this column is based on the status of the Calculate Coverage with Or check box. If the check box is selected, the OR operation is used to calculate the coverage; otherwise the AND operation is used. This table summarizes the possible combinations:
The interpretation of this table: take the test runs shown in the tree for a test case and find the matching combination in the first column. The coverage status is then found in the second or third column (depending on the operator). The computation is the same for requirements (the coverage statuses of the requirement's test cases are combined). Note that with the OR operator, the Partly Passed result is "stronger" than Blocked or Failed. For example, this image shows a subtree where the coverage is computed with the AND operator (default):
The requirement Brakes is Failed because it has a Failed and a Passed test run. According to the first row of the table, this combination results in Failed with the AND operator. However if the OR operator is used, the coverage is Partly Passed:
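The combination rules described above can be illustrated with a small sketch. This is not codebeamer's actual implementation; the status names and the exact precedence order (beyond the Failed/Passed behavior stated above) are assumptions for illustration only:

```python
# Hypothetical sketch of combining test-run results into a coverage status.
# Grounded in the text above: with AND, any Failed run makes the coverage
# Failed; with OR, one Passed run is enough for at least Partly Passed.
# The remaining precedence (e.g. Blocked vs. Failed) is assumed.

def combine_and(statuses):
    """AND operator: every run must pass for the coverage to be Passed."""
    if not statuses:
        return "Not covered"
    if all(s == "Passed" for s in statuses):
        return "Passed"
    if "Failed" in statuses:
        return "Failed"
    if "Blocked" in statuses:
        return "Blocked"
    return "Partly Passed"

def combine_or(statuses):
    """OR operator: Partly Passed is 'stronger' than Blocked or Failed."""
    if not statuses:
        return "Not covered"
    if all(s == "Passed" for s in statuses):
        return "Passed"
    if "Passed" in statuses or "Partly Passed" in statuses:
        return "Partly Passed"
    if "Blocked" in statuses:
        return "Blocked"
    return "Failed"

# The Brakes example from the text: one Failed run and one Passed run.
print(combine_and(["Failed", "Passed"]))  # Failed
print(combine_or(["Failed", "Passed"]))   # Partly Passed
```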
Note that the computation is affected by the Number of recent Test Runs shown option in the filter because it only considers the test runs shown in the tree. Filtering in the Coverage Browsercodebeamer provides filtering options in the coverage browser. The following filtering criteria are available:
Additional filters and AND/OR logic can be added on three levels for the test run, test set, and release trackers, and on four levels for the user story, requirement, and epic trackers, with an additional second level. The four levels are as follows:
Note that historical filters cannot be used in the coverage browser.
It is possible to filter by the name of the requirement or test case using the text box above the coverage tree.
The filter is not case sensitive. By default, it filters only the requirement-type trackers, which means that all test cases of the matching requirements are shown. However, if the Search in Test Cases check box next to the name filter box is selected, the test cases are searched and filtered too. To search only for test cases, uncheck the Search in Work Items check box and check Search in Test Cases. Note that the filter also shows the children of the matching requirements, even if they do not match the filter.
Filtering in the Coverage Browser before the codebeamer 22.10-LTS (GINA) Version There are many filtering options for test coverage. The requirement-related options are in the left column of the filter section, while the test run related ones are on the right:
The requirement-related options are as follows:
The test run related options filter the test runs that are used in the computation of the coverage and the test runs shown in the tree, but the Last 10 runs column always stays the same. These options, in order:
To change from which tracker the requirement issues are listed, select another tracker in the topmost selector. Only the requirement-type trackers are available as options here. To narrow down the set of requirements, enter a term in the text box in the header of the table; only the requirements whose names match the entered term will be visible.
Select Trackers and Branches (from codebeamer 9.2.0 to codebeamer 22.04) This section of the coverage browser lets the user select an additional level for the coverage browser. If the tracker has a downstream reference from another requirement or user story tracker, it is possible to add it to the second level. On the third level, users can select a test case tracker referencing the ones selected on the second level. If there are second level trackers selected, the coverage browser shows the items from those trackers that reference the first level tracker items.
Filtering in the Test Run BrowserThe operation of the Initial level filter and the Test Case Filters in the Test Run Browser is described in the table below:
Using the Test Case Filters is recommended when the coverage for test set runs should be modified and a test set tracker is selected on the Initial level. To avoid confusion, the Test Case Filters can be deactivated by deselecting the related check box.
Coverage StatisticsThe Test Coverage Statistics section shows the aggregated coverage statistics. This information is computed based on the filters so it matches with the coverage tree. The statistics are grouped by trackers.
Exporting Test CoverageThe coverage can be exported to Microsoft Word and Excel. The exported document contains the same information as the tree and the statistics table, including the filter settings. To export the coverage, click the Export to Office link on the action bar. Coverage Browser PresetsFrom codebeamer 22.10-LTS (GINA), users can save their coverage browser settings as presets. To save a new preset, do the following:
Once a preset is saved, it can be selected by clicking the Load/Manage Presets link. Here, saved presets can be edited as well. When a saved preset is in use, clicking the Save current Preset link overwrites the save. Non-private presets are included in project exports. Sharing the Coverage BrowserFrom codebeamer 22.10-LTS (GINA), the link to the coverage browser with the applied filters can be shared. The sharing function is available by clicking the share icon in the toolbar. Here, the link can be copied, or users can add roles, groups, user names, or email addresses with whom they want to share the coverage browser.
Test Case Library and Test Set LibraryWhen building a product you may want to reuse the same test case in multiple versions. A test case and a test set library will help you in this task. A test set library is a collection of test set trackers while a test case library contains test case trackers. You can use these libraries on the create/edit test case (for reusing test steps) and the create/edit test set (for reusing test cases) pages.
With the filter above the tree you can filter the items by their status meaning:
Configuring the librariesYou can configure the libraries by clicking on the cog icon above the trees.
This will bring up an overlay where you can select the trackers that you'd like to display in the library. Since this is a user level setting you'll see the same list of trackers wherever you open the library.
Note that codebeamer stores the list of the selected trackers. When you add a new tracker to your project you have to reconfigure the library if you want to display that too. Also, the trackers of the current project are always displayed in the tree.
FAQRound-trip editing TestCases in Excel, and importing/exporting TestCases from Excel or WordSee this wiki page for details: Importing and round-trip editing of TestCases with TestSteps in Excel How to turn off Timing of Test RunsThe Test Runner automatically measures the time taken to run each individual Test Case, and saves and aggregates this time to the Test Run. This feature can help the test team estimate how long a testing effort will take next time. However, sometimes this feature is unwanted (for example, when regulations forbid it), in which case it can be disabled completely. This can be turned off in general.xml by setting allowTiming="false" in this part of the general.xml. For example: <testManagement ... allowTiming="false" ... ></testManagement>
When timing is turned off:
I do not want to use the "Blocked" result; how can I disable it?You can turn off the "Blocked" result completely and globally by setting this flag to false in general.xml. Alternatively, you can remove it for a Test Run tracker by turning off workflow on that tracker. <testManagement ... testRunnerShowsBlock="false" ></testManagement>
I do not want to allow the "End Run" button in the Test Runner; how can I remove it?You can turn this off completely and globally by setting this flag to false in general.xml. Alternatively, you can remove it for a Test Run tracker by turning off workflow on that tracker. <testManagement ... testRunnerShowsEndRun="false" ></testManagement>
I want to set up Test Management defaults globally. How can I do that?These options are configurable in general.xml from codebeamer 8.1: <testManagement ... runOnlyAcceptedTestCases="true" createTestRunForEachTestCase="false" includeTestsRecursively="false" testRunnerShowsBlock="true" testRunnerShowsEndRun="true" ></testManagement>
The explanations are (see codebeamer-config-1.0.dtd for the complete reference):
runOnlyAcceptedTestCases: whether Test Management runs only Accepted Test Cases.
createTestRunForEachTestCase: when creating Test Runs, whether a separate Test Run is created for each Test Case.
includeTestsRecursively: whether the children of the Test Cases should be included by default.
testRunnerShowsBlock: whether the Test Runner shows the BLOCK button.
testRunnerShowsEndRun: whether the Test Runner shows the END RUN button.
canChangeRunOnlyAcceptedTestCases: whether the Test Manager can choose between running accepted or non-accepted Test Cases. If set to false, the Test Manager cannot change whether to run only "Accepted" Test Cases when creating a Test Run. Available since codebeamer version 10+.