Test Management with codeBeamer
Overview

The test management facilities in CodeBeamer enable controlled testing of software, hardware and any other kind of product or system. What makes CodeBeamer different from other test management software on the market is its holistic approach to collaborative testing. In CodeBeamer, tests do not exist in an isolated silo, but are tightly integrated with the requirements, the Wiki documentation, the bug and task trackers, the source code and the full lifecycle of the tested product. CodeBeamer's access control and web interface make it ideal for collaborative testing, dividing the work between test engineers (who define the test cases to be executed, specify the configurations and releases to be used, and coordinate the work) and testers (who execute the test cases).

How does Test Management work?

The best practice, summarized:
Glossary

Testing Roles

Test Engineer: A Test Engineer is a professional who determines how to create a process that tests a particular product, in order to assure that the product meets the applicable specifications. The Test Engineer produces the Test Cases.

Tester: A Tester is responsible for the core activities of the test effort, which involve conducting the necessary tests and logging the outcomes of the testing.

Test Plan: A Test Plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements.

Test Case: A Test Case is a detailed procedure that fully tests a feature or an aspect of a feature. Test cases consist of detailed descriptions, pre-actions, test steps, and post-actions.

Test Step: A Test Step is one step in a Test Case procedure. Watch a video on Test Cases and Test Steps here. Please note that the default values for the "Action" and "Expected result" fields in the Test Case configuration do not have any effect during test step creation and Test Runs.

Test Set: A Test Set is a logical group of related Test Cases. It is associated with a list of releases it is allowed to be executed with, and a list of test configurations that are allowed to be used while executing it.

Test Configuration: A Test Configuration is one possible configuration of the product under testing.

Test Run: A Test Run is an actual execution of a Test Set using one specific Release and one specific Test Configuration.

Test Results

Passed: The test is finished successfully, and all checked aspects worked as expected. Tests in this status are considered finished and closed.

Partly Passed: Has a similar meaning to the Passed result: the test is finished successfully. Used when the Test is parameterized and the Tester skips some of the Test parameters; all parameters that were used passed. For more information see: Test Parameterisation and What is Partly Passed?
Failed: The test is finished with failure; at least one of the checked aspects did not work as expected. Tests in this status are considered finished and closed.

Blocked: The test could not be completed yet, because something blocked the full completion of the testing procedure. Tests with this result are not considered closed! After removing the cause of the block, the test should be continued from the point of the block.

Test Coverage

Test Coverage is a metric expressing which requirements of your product are verified by test cases, and by how many of those test cases. Simply speaking, the more Test Cases exist for a Requirement, the better it is covered.

Creating Test Cases

In CodeBeamer, a test case is a work item in a Test Case type tracker. When you create a new project, a default Test Case tracker will be available inside the "Trackers" menu at the top. You can create a test case in the traditional way: just click the New Item link when browsing a Test Case category. The Test Case editor page is, however, different from the editor of other items.
The main parts of the page:
There are four special fields in the test case trackers:
Test Step Editor

The test step editor is a simple table with editable cells. The Action and Expected result cells can be edited with the rich text editor.
Test Step attributes

A test step is defined by three attributes: the action, the critical flag and the expected result.
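The three attributes can be modeled with a small sketch. This is an illustrative, hypothetical data model in Python; it is not CodeBeamer's actual schema or API.

```python
from dataclasses import dataclass

# Hypothetical model of a test step; CodeBeamer's internal
# representation is not part of its public API.
@dataclass
class TestStep:
    action: str             # what the tester should do (may contain wiki markup)
    expected_result: str    # what should happen if the product works correctly
    critical: bool = False  # marks a step whose failure is considered critical

steps = [
    TestStep("Open the login page", "Login form is displayed"),
    TestStep("Submit valid credentials", "User is logged in", critical=True),
]
```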
Adding new test steps with the test step editor is very easy and can be done in several ways:
You can edit a cell by clicking in it. Rows can be re-arranged by drag and drop, and you can delete rows by clicking the delete button. The Action and Expected Result fields can contain wiki markup, but this is only rendered after saving the test case. You can access the same test step editor in the Document View of the test case trackers.

Generating Test Cases for Requirements

When viewing your requirements in the Requirement Document View, you can easily generate test cases for them. Just click the + icon next to the Requirement, select "Generate Test Case" and choose the target TestCase tracker from the sub-menu:
This will immediately generate a test case for the requirement (or a whole test case structure if it is a folder) in the selected Test Case tracker of the project. Please note that this test case is just a starting point, mostly filled with empty values. You should open it, give it an exact definition, specify the test steps and elaborate it in general.

Organizing Test Cases into Test Sets

Earlier in the testing process, Test Engineers defined the Test Plan, which contains a big tree structure of Test Cases. To be able to execute these tests, the Test Engineer decides which subsets of these Tests are to run together. Such a selection of Test Cases is called a Test Set, which must be defined before executing the test runs. The Test Set also defines the order in which the Test Cases are executed, and whether the tests must be executed in this specific order or in any arbitrary order.

Creating Test Sets

In CodeBeamer a Test Set is an item in a Test Sets tracker. When you create a new project, a default "Test Sets" tracker is available below the "Trackers" menu. You can create a Test Set by opening the Test Set category and clicking the New Item link. The Test Set editor, a specialized item editor page, will appear:
The main parts of the page:
There are few special fields in the Test Set item:
Adding/Deleting Tests in Sets and Ordering them

As mentioned previously, you can add Test Cases to Test Sets by dragging and dropping them from the Test Plan tree on the Test Set editor page. You can drop several Test Cases or even folders of Test Cases; all of them will be added to the Test Cases listing at once. The order of the Test Cases is important for the execution: this is the order in which the tests will be (should be) run by the Testers, so the dropped Test Cases are initially added in the same order as they appear in the tree. When dropping the Test Cases, they are inserted into the selection exactly where you drop them, so you can for example choose to add the new selection in between two test cases. Test Cases which are already added to the Test Set appear "grayed out" in the Test Case tree. This does not prevent you from dragging and dropping the same Test Cases again: dropping them again adds them at the new location, and the previously added duplicates are automatically removed. Tip: this is an easy and effective way to reorganize the Test Cases' order, too! Test Cases can be removed from the listing by clicking the button that appears when moving the cursor over the test case in the list. The order of Test Cases in the Test Case listing can be altered by dragging and dropping them upwards or downwards in the list, as needed.

Duplicated Test Cases in Test Sets

When adding a Test Case to a Test Set it may happen that the same Test Case is added twice or more as a duplicate. A Test Set can optionally contain a TestCase multiple times, if the "Allow Duplicates" boolean field of the TestSet is set to true. When running a TestSet with duplicate TestCases, the Test Runner dialog will display and run the same TestCase as many times as it appears in the TestSet, and in the same order as it appears there. When building the TestSet, the duplicated TestCases are indicated with a special icon.
The screenshot shows two duplicated TestCases in this TestSet (marked as #1), and the icon #2 indicates that these are duplicates. To remove the duplications click on the #2 icon, which removes the duplicates of that Test Case from the list.
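The drag-and-drop behaviour described above, where re-dropping an already added Test Case moves it rather than duplicating it, can be sketched roughly as follows. The `drop_into_set` function and the list-of-ids representation are illustrative assumptions, not CodeBeamer's implementation.

```python
def drop_into_set(test_set, case_id, position, allow_duplicates=False):
    """Insert case_id at position. If duplicates are not allowed, any
    earlier occurrence is removed first -- which is what makes re-dropping
    an easy way to reorder a Test Set."""
    items = list(test_set)
    if not allow_duplicates and case_id in items:
        old = items.index(case_id)
        items.remove(case_id)
        if old < position:
            # the removed copy sat before the drop point, so shift left
            position -= 1
    items.insert(position, case_id)
    return items

# Re-dropping "TC-1" at the end moves it instead of duplicating it.
print(drop_into_set(["TC-1", "TC-2", "TC-3"], "TC-1", 3))
# ['TC-2', 'TC-3', 'TC-1']
```

With `allow_duplicates=True` the same call would simply append a second copy, matching the "Allow Duplicates" field described above.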
To configure if a TestSet allows duplicated TestCases you can:
Composing TestSets from other (included) TestSets

For simpler Test scenarios it is sufficient to build TestSets by simply adding individual TestCases. However, as the number of Tests grows, Test architects may want to compose smaller or larger groups of TestCases into TestSets, and then build bigger TestSets from these "small" TestSets as building blocks. This is possible: TestSets can include other TestSets, and whenever an included TestSet changes (TestCases are added or removed) the change is automatically reflected in the outer/bigger TestSet. To include a TestSet in your currently edited TestSet, just drag and drop the desired TestSet into the editor. Such a dropped TestSet will appear in the TestCase/TestSet listing as the screenshot shows:
So you can mix the TestCases and TestSets inside a bigger TestSet as you wish. The rules are:
Once such a "composite" TestSet is saved you can view the detailed TestSet/TestCase hierarchy by drilling down the usual "down-arrow" icon in front of it:
Running a composed TestSet

When running a TestSet which includes other TestSets, the TestRunner walks through all the TestCases in the order they appear in the TestSet/TestCase hierarchy. A few consequences:
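The depth-first order in which a composed TestSet is walked can be sketched as follows. The tuple-based representation of sets and cases is a simplifying assumption for illustration only.

```python
def flatten(entry):
    """Depth-first walk of a composed Test Set: nested Test Sets are
    expanded in place, preserving the order in which they appear."""
    kind, content = entry
    if kind == "case":
        yield content
    else:  # a nested test set: a list of further entries
        for child in content:
            yield from flatten(child)

smoke = ("set", [("case", "TC-10"), ("case", "TC-11")])
regression = ("set", [("case", "TC-1"), smoke, ("case", "TC-2")])

print(list(flatten(regression)))
# ['TC-1', 'TC-10', 'TC-11', 'TC-2']
```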
Quick assignment of Test Cases to Test Sets

When you are on the document view or table view of a Test Case tracker, you can also quickly assign a selection of Test Cases to existing Test Sets, or create new Test Sets from this selection. This functionality is in the menus. There you can:
The dialog where you can choose a Test Set looks like the one below; here you can do a free-text search for any Test Set. Note: the Test Set history and search will only show those Test Sets where the user has permission to add Test Cases. So if you don't find some Test Sets here, that might be the reason.

Initiating Test Runs

You need three components to initiate a new test run:
Creating Test Runs directly from Test Cases (without Test Sets)

Alternatively - for simplicity - you can run a single Test Case or a selection of Test Cases without putting them into a Test Set.
In all cases a new Test Run will be created which will contain and run the selected Test Cases. Optionally you can also add the child Test Cases recursively during this process.

Options for creating Test Runs

When creating a Test Run you have a few options to configure, which determine how the Test Runs are created, what Test Cases they contain, and which users get assigned to the new Test Runs. You can change these options by clicking on the "Test Run Creation Options" toggle on the Test Runs page.
The options are:
Distributing the Test Run work between multiple Users or Roles

By default CodeBeamer uses a "Shared" model of distributing the Tests of a Test Run, which means that:
However if your Test Team needs more flexibility you can choose to create multiple Test Runs for parallel Testing of a Test Set. This option appears on the UI when creating a new Test Run as:
There the user can:
So the difference between the two latter options is that the "Users only" mode creates one Test Run for each user, expanding any Roles appearing in the Assigned To field to their member users, while the "With Roles/Groups" option assigns Test Runs to the Roles/Groups themselves if they appear there. An example: as seen in the previous screenshot, the Test Run is assigned to "bond and Tester". "bond" is an ordinary user, but "Tester" is a Role with possibly several members (for example Joe and Fred). If the "...Users Only" option is chosen, the system will create 3 Test Runs: one for "bond" and two more for "Joe" and "Fred", as they are both in the Tester role. The "...with Roles" option will only create 2 Test Runs: one assigned to the "bond" user and a second assigned to the "Tester" role. This role is not expanded.

Creating Multiple Test Runs with multiple Test Configurations

If the "Multiple Test Runs..." option is selected and multiple Test Configurations are also selected, then codeBeamer will create one Test Run for each combination of assignee and Test Configuration. For example, if 5 assignees are selected (in the Assigned To selector) and 3 Test Configurations are selected, codeBeamer will create a total of 5*3 = 15 Test Runs: 3 Test Runs for each assignee, where each Test Run uses one of the 3 Test Configurations.

Creating Releases for Testing

You can define the releases of your product in one of the Releases trackers: either "Customer Requirement Specifications" or "System Requirement Specifications". For that, just click "Trackers" at the top, then choose one of the Releases trackers and start adding new items or modifying the existing ones. Please note that the very same releases will be globally available in this project:
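The Test Run multiplication rules described earlier (role expansion in "Users only" mode, and one run per assignee/configuration combination) can be sketched like this. The function names and the role-to-members mapping are illustrative assumptions, not CodeBeamer's API.

```python
from itertools import product

def expand_assignees(assignees, roles, users_only):
    """'Users only': roles are expanded to their member users, one run
    per user. 'With Roles/Groups': roles stay as single assignees."""
    if not users_only:
        return list(assignees)
    expanded = []
    for a in assignees:
        expanded.extend(roles.get(a, [a]))  # expand roles, keep plain users
    return expanded

roles = {"Tester": ["Joe", "Fred"]}          # the Tester role has two members
assignees = ["bond", "Tester"]               # as in the example above

# One Test Run per (assignee, configuration) combination:
configs = ["Win", "Mac", "Linux"]
runs = list(product(expand_assignees(assignees, roles, users_only=True), configs))
print(len(runs))  # 9: three users (bond, Joe, Fred) times three configurations
```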
Creating Configurations for Testing

Similarly to releases, you define the configurations of your product that you want tests to be executed on in the "Test Configurations" tracker. You can:
Executing Test Runs

Once the Test Engineer has initiated a Test Run, it is immediately available for the Testers to run. Testers will find the test runs assigned to them in the "My Start -> My Issues" menu. Alternatively, they can open the "Test Runs" tracker and pick the open Test Runs. There are two kinds of Test Runs stored in Test Run trackers:
To execute runs, pick a top-level item (a Test Set Run) from the "Test Runs" tracker. Choose one that is still "open" (completed Test Runs cannot be re-run by default), and click it to get to its properties page. You can click the "Run!" action to start testing. The Test Results section shows the detailed results of each Test Case, and a pie chart shows the progress of this Test Set.
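As a rough illustration of how such progress numbers can be tallied from individual Test Case results - a sketch, not CodeBeamer's actual computation - note that Blocked results do not count as finished, per the glossary above:

```python
from collections import Counter

# One entry per Test Case in the Test Set; None = not yet run.
results = ["Passed", "Failed", "Passed", "Blocked", None, None]

tally = Counter(r or "Not run" for r in results)

# Passed, Partly Passed and Failed are closed; Blocked is not.
done = sum(1 for r in results if r in ("Passed", "Partly Passed", "Failed"))
progress = 100 * done / len(results)

print(dict(tally))  # {'Passed': 2, 'Failed': 1, 'Blocked': 1, 'Not run': 2}
print(progress)     # 50.0
```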
Using the Test Runner

The current progress of the Test Set Run is shown at the top. Click the "Run!" link to bring up the Test Runner dialog, which guides you through the execution of the tests in this Set. The Test Runner is the primary interface for actually doing testing. It helps testers by guiding them through the tests and the test steps to execute. It also records the result of each step (whether passed or failed) and the result of the complete test case. Moreover, it makes it convenient to report bugs if the tester finds defects while running tests. The runner looks like:
Executing test steps and recording results

If the test shown in the Runner has steps defined, then those steps are displayed and the first step is selected for execution. This is denoted by the highlight around the current step: it appears with a small icon and a yellowish background, and its "Actual Result" edit box is open (as seen in the previous runner screenshot). The Tester shall read and execute the Action of the test, then compare the result with the Expected Result, and record the difference in the Actual Result box if necessary. Starting from codeBeamer 9.2.0 the Actual Result can be edited with the rich text editor. By clicking the "Pass Step" / "Fail Step" / "Block Step" buttons, the focus moves to the next step. It is possible to go back to any previous step by simply clicking the step's row; one can then update the Actual Result or the step's result. When all the steps are completed, the Runner will ask if you are ready to move to the next test case. At this stage the tester can optionally report bugs, and this is the time to execute the Post Action. If the tester agrees, the Runner shows the next available test. When all Test Cases in the current Test Set are completed, the execution is done, the Runner closes automatically, and the Test Set goes to Finished status. If there are no steps defined for the current test case, or the tester does not want to execute all of them for some reason, the user can mark the current test case as passed / failed / blocked using the buttons at the bottom.

Finishing a run without completing the remaining Tests

The "End Run" button can terminate the Test Set Run at any time, without executing the outstanding test cases. In this case, the user must decide the final result of the Test Set:
Reporting Bugs during Test Executions

Testers can report bugs in the context of the test currently being executed in the Runner, at any time during the execution. Click the "Report Bug" button to suspend the Test Runner temporarily and display a bug reporting interface. The user has the option to choose the bug tracker where the bug will be reported. This tracker is remembered, and the same tracker will be offered as the default next time. The bug to be reported is automatically initialized from the current test run's data. It captures many details:
For traceability reasons, the bug will also be automatically associated with the current test run. These details help the person who will fix the bug later to reproduce the same environment and the same situation efficiently. In any case, the tester can update the properties of the run and submit the bug report. Once the bug is submitted, the Runner will continue.

Reporting duplicate Bugs during Test Execution

During extensive testing it often happens that the same bug is found several times during the execution of a Test Case. To avoid duplicate/repeated bug reports for the same bug, the Test Runner shows the already reported bugs for the same TestCase, and the Tester can choose an already-reported bug instead of creating a new one. This appears on the Test Runner dialog as follows: by clicking the "Report this" button next to the bug, the Tester can choose an existing bug and associate it with the Test Run:
Finding Reported Bugs of a TestCase

If you look at the details of a TestCase, you will find the Reported Bugs for that TestCase on a new tab:

Recording the Conclusion of a Test Run

When finishing the run of a Test, the Tester can add an optional Conclusion. The Conclusion can be used to summarize the result of the run; this wiki text is added to the "description" of the Test Case Run. The Conclusion can be added in two ways:
Conclusion appears:
How does it work? The execution of TestCases

The TestRunner will automatically show the next available TestCase in the current TestSet, which is:
Finding Tests by name in the Test Runner

In the TestRunner you can find a Test by its name and choose it to run next. Just click on the magnifier icon:
On this dialog you can search/filter the Test Cases by entering some text in the filter box, and choose any Test Case using the "Select" button. Previously suspended Test Cases can also be selected here: they will be resumed and run. Typically the previously skipped parameterised Tests appear here as Suspended.

Running only "Accepted" or all TestCases?

By default the TestRunner will run only the "Accepted" TestCases. The reason for this is that some TestCases' definitions may be incomplete or inaccurate, so the Test Engineer may set them to "Design" or "New" status while they are being corrected. This avoids confusing the Tester with inaccurate TestCase descriptions or false error reports, and prevents wasted test-run effort and time. When creating a new Test Run the user has the option to choose whether to run "All" or "Only Accepted" Test Cases. This can be configured by opening the "Test Run Creation Options" section when creating the TestRuns:
The "Run only Accepted TestCases" behaviour has been changed in 8.0.1:
The reason why we changed the setting for single TestCases is to make it easier to use: why would you create a TestRun for a single TestCase which is not Accepted? If you create such a TestRun, that means you want to run it immediately regardless of its status. Notes:
Re-running already Finished Tests

Once a TestRun is Finished or Suspended, its run is over. These are represented by the "Finished" and "Suspended" states in the TestRun's transition diagram:
The Tester can partly or completely re-run a TestSet by choosing the "Restart"/"Resume" transitions:
During the restart the Tester can decide which TestCases will be re-run:
The options are:
By default the Restart will clear the test's results (meaning that the step results and the conclusion are cleared, and attachments are removed). If you want to keep the results, uncheck the "Clear previous results" checkbox. This behaviour is configurable in general.xml like this: <testManagement ... rerunClearsResults="true" />

Re-running Tests: how does this work?

Restarting a TestRun makes a copy of the whole TestRun hierarchy, including the individual TestCases' results. The original test results are kept unchanged in order to preserve the history. The TestRunner will also initially load and show the previous run's results, filling the "Actual Result" and step status with the same values from the previous run. In previous codeBeamer versions - before codeBeamer 7.9 - the re-run dialog offered an option to either:
Now the copy is the default behaviour. However, the old behaviour can be restored by setting the "forceCopyOnReRun" flag to "false" in general.xml like this: <testManagement forceCopyOnReRun="false" ></testManagement>

Re-running finished Tests selectively

One of the options when restarting a TestRun allows the Tester to manually choose which TestCases will be re-run. For that use this option:
Then a new dialog appears where the user can select which Test Runs will be re-run. In this dialog the user can select all items with a given result by clicking on the result filters (marked as "2"), or select the Test Runs individually (marked as "1"), and then click Select to save the selection.
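The result-filter buttons in that dialog amount to selecting runs by their previous result. A minimal sketch, assuming a simple (run id, result) representation that is not CodeBeamer's actual data model:

```python
# Finished runs from the previous execution, as (run id, result) pairs.
runs = [("TR-1", "Passed"), ("TR-2", "Failed"),
        ("TR-3", "Blocked"), ("TR-4", "Failed")]

def select_by_result(runs, results):
    """Mimics the result filters: pick every run whose result is in `results`."""
    return [run_id for run_id, result in runs if result in results]

print(select_by_result(runs, {"Failed"}))             # ['TR-2', 'TR-4']
print(select_by_result(runs, {"Failed", "Blocked"}))  # ['TR-2', 'TR-3', 'TR-4']
```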
Test Parameterisation

Test Parameterisation is a powerful practice to enhance your testing scenarios. The concept of parameters in testing is simply that your Test Cases may define and use some parameters. During the execution of a test case the parameters are filled in with their actual values, so a parameterised variation of the original test case is produced. Test Parameterisation is an advanced feature which is documented here: Test Parameterisation

Overview of Test Results

In codeBeamer 8.1+ we have added an overview of the Test Results, which makes it much easier to understand and review the Test Results of a Test Set. The following overview information will appear on the page of Test Runs:
The "Test Results" part shows the most important information about the Tests that have been run within the current Test Set. The major parts are:
As you can see, this report shows all the important information in a concise format.

Exporting Test Results to Microsoft Word

The Test Run's report can also be exported to Microsoft Word using the "more -> Export to Office" menu of a Test Run. This produces a Word file which contains the same information in a more Word-friendly format than the Test Results report that appears on the web page. An example output of the Word export can be seen here: exampleTestReport.docx

Analyzing Test Results and Coverage

By now, you understand how to set up and execute tests with CodeBeamer. As a test manager or an executive, you probably want to closely follow the execution progress, or see the results after the testing phase has been completed. CodeBeamer offers multiple ways to get quick overviews or detailed information on testing progress and results.

Test Coverage

Watch a video on the coverage browser here. Test Coverage is the tool to analyze the results of the latest test runs and the resulting test coverage of requirements/user stories. To access this tool click on the Test Coverage item in the more menu of a User Story or a Requirement tracker:
Or you can access it on the TestRun trackers using this icon on the toolbar:
The Test Coverage page looks like this:
The main area contains a tree with several columns (depending on what your selected filters are). The first column (Tracker Items) is the tree itself. The tree may contain three types of items:
The other columns in the grid are the following:
Computation and meaning of the coverage column

The computation of this column is based on the status of the Calculate Coverage with Or checkbox. If the checkbox is checked, the OR operation is used to calculate the coverage; otherwise the AND operation (the default) is used. This table summarizes the possible combinations:
The interpretation of this table: take the test runs shown in the tree for a test case, find the matching combination in the first column, and read the coverage status from the second or third column (based on the operator). The computation is the same for requirements (so you combine the coverage statuses of the test cases of the requirement). Note that the Partly Passed result with the OR operator is "stronger" than Blocked or Failed. For example, this image shows a subtree where the coverage is computed with the AND operator (the default):
The requirement Brakes is Failed because it has a Failed and a Passed test run, and according to the first row of the table this combination results in Failed with the AND operator. However, if we use the OR operator, the coverage is Partly Passed:
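One possible reading of these combination rules can be sketched as follows. This is an illustrative simplification based only on the examples above (Failed + Passed gives Failed with AND but Partly Passed with OR, and Partly Passed "beats" Blocked and Failed with OR); the authoritative rules are the ones in the product's own combination table.

```python
# Severity order, best to worst, assumed for the AND ("worst wins") case.
SEVERITY = ["Passed", "Partly Passed", "Blocked", "Failed"]

def combine(results, use_or=False):
    if not results:
        return "Not covered"
    if use_or:
        if all(r == "Passed" for r in results):
            return "Passed"
        if any(r in ("Passed", "Partly Passed") for r in results):
            return "Partly Passed"  # OR: Partly Passed beats Blocked/Failed
        return max(results, key=SEVERITY.index)
    # AND: the worst result wins, e.g. Failed + Passed -> Failed
    return max(results, key=SEVERITY.index)

print(combine(["Failed", "Passed"]))               # Failed (the Brakes example)
print(combine(["Failed", "Passed"], use_or=True))  # Partly Passed
```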
Note that the computation is affected by the Number of recent Test Runs shown option in the filter, because only the Test Runs shown in the tree are considered.

Filtering in Test Coverage

There are many filtering options on Test Coverage. The requirement related options are in the left column of the filter section, while the test run related ones are on the right:
The requirement related options in order:
The test run related options filter the test runs that are used in the computation of the coverage and the test runs shown in the tree, but the Last 10 runs column is always the same. These options in order:
To change which tracker the requirement items are listed from, just select another tracker in the topmost selector. (Only REQUIREMENT type trackers are available as options here.) To narrow down the set of requirements, enter a term in the text box in the header of the table, and only the requirements whose names match the entered term will be visible. You can also filter by the name of the Requirement or Test Case. This appears as a filter box above the coverage tree:
The filter is case insensitive. By default it filters only Requirements, which means that all Test Cases of the matching Requirements are shown. This is more practical, because typically you want to see all Test Cases of the matching requirements. However, if you turn on the "Search in Test Cases" checkbox next to the name-filter box, the Test Cases will be searched and filtered too (besides the Requirements). To search only for test cases, uncheck the Search in Work Items checkbox and check Search in Test Cases. Note that the filter shows the children of the matching requirements as well, even if they do not match the filter.

Select Trackers and Branches (since 9.2.0)

This section of the coverage browser lets you add an additional level to the coverage browser. If your tracker has a downstream reference from another requirement or user story tracker, then you can add it to the second level. Then on the third level you can select a test case tracker referencing the ones selected on the second level. If second level trackers are selected, the coverage browser will show the items from those trackers that reference the first level tracker items.
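The name-filter semantics described above can be sketched like this. The `filter_tree` helper and the dict-of-lists representation are illustrative assumptions, not CodeBeamer's implementation.

```python
def filter_tree(requirements, term, search_in_test_cases=False):
    """requirements: {requirement name: [test case names]}.
    A requirement matches if its name contains the term (case insensitive);
    all of its test cases then stay visible. With search_in_test_cases,
    test case names are matched (and filtered) too."""
    term = term.lower()
    out = {}
    for req, cases in requirements.items():
        if term in req.lower():
            out[req] = list(cases)  # keep all test cases of a matching requirement
        elif search_in_test_cases:
            hits = [c for c in cases if term in c.lower()]
            if hits:
                out[req] = hits
    return out

reqs = {"Brakes": ["TC brake pedal", "TC ABS"], "Engine": ["TC brake light"]}

print(filter_tree(reqs, "brake"))
# {'Brakes': ['TC brake pedal', 'TC ABS']} - only the matching requirement
print(filter_tree(reqs, "brake", search_in_test_cases=True))
# {'Brakes': ['TC brake pedal', 'TC ABS'], 'Engine': ['TC brake light']}
```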
Coverage Statistics

The Test Coverage Statistics section shows the aggregated coverage statistics. This information is computed based on the filters, so it matches the coverage tree. The statistics are grouped by trackers.
Exporting Test Coverage

The coverage can be exported to Word and Excel. The exported document will contain the same information as the tree and the statistics table (it respects the filter settings). You can export the coverage by clicking on the Export to Office link on the action bar.

Test Run Browser

The Test Run Browser is very similar to the Test Coverage. You can access the Test Run Browser on Test Set and Test Run trackers. The Test Run Browser is a tree that contains the test cases and test sets from the selected trackers and displays the same information as the Test Coverage.
If the selected tracker is a test run tracker, then the tree contains all test case and test set trackers that are referenced from the test run tracker. The subtrees for the test case trackers show only the quick test runs (the test runs that were run ad hoc, without creating a test set). The filter section is the same as on the coverage browser page. One thing to note is that the Test Run Release field is mandatory, and by default the first release is selected. This means that only the test runs run in one specific release are shown at a time.

Release Coverage

Release Coverage is another aspect of Test Coverage. With this tool you can get an overview of how well the items of a release are tested. You can access the Release Coverage from the more menu of a release.

Test Case and Test Set Library

When building a product you may want to reuse the same test case in multiple versions. A test case and a test set library will help you with this task. A test set library is a collection of test set trackers, while a test case library contains test case trackers. You can use these libraries on the create/edit test case page (for reusing test steps) and the create/edit test set page (for reusing test cases).
With the filter above the tree you can filter the items by their status meaning:
Configuring the libraries

You can configure the libraries by clicking on the cog icon above the trees.
This will bring up an overlay where you can select the trackers that you'd like to display in the library. Since this is a user level setting you'll see the same list of trackers wherever you open the library.
Note that codeBeamer stores the list of the selected trackers. When you add a new tracker to your project, you have to reconfigure the library if you want to display it too. Also, the trackers of the current project are always displayed in the tree.

FAQ

Round-trip editing TestCases in Excel, and importing/exporting TestCases from Excel or Word

See this wiki page for details: Importing and round-trip editing of TestCases with TestSteps in Excel

How to turn off Timing of TestRuns

The Test Runner automatically measures the time taken to run each individual Test Case, and saves and aggregates this time on the Test Runs. This feature can help the Test team estimate how long a testing effort will take next time. However, sometimes this feature is unwanted (for example, regulations may forbid it); in that case it can be disabled completely. This can be turned off in general.xml, by setting allowTiming="false" in this part of general.xml. For example: <testManagement ... allowTiming="false" ... ></testManagement> When timing is turned off:
I don't want to use the "Blocked" result, how can I disable that?

You can turn off the "Blocked" result completely and globally by setting this flag to false in general.xml. Alternatively, you can get it removed for a Test Run tracker by turning off the workflow on that tracker. <testManagement ... testRunnerShowsBlock="false" ></testManagement>
I don't want to allow the "End Run" button in the Test Runner, how can I remove that?

You can turn this off completely and globally by setting this flag to false in general.xml. Alternatively, you can get it removed for a Test Run tracker by turning off the workflow on that tracker. <testManagement ... testRunnerShowsEndRun="false" ></testManagement>
I want to set up Test Management defaults globally, how can I do that?

These options - from codeBeamer 8.1 - are configurable in general.xml:

<testManagement ... runOnlyAcceptedTestCases="true" createTestRunForEachTestCase="false" includeTestsRecursively="false" testRunnerShowsBlock="true" testRunnerShowsEndRun="true" ></testManagement>

The explanations are (see codebeamer-config-1.0.dtd for the complete reference):

runOnlyAcceptedTestCases: whether Test Management runs only Accepted TestCases
createTestRunForEachTestCase: when creating Test Runs, whether a separate TestRun should be created for each TestCase
includeTestsRecursively: whether the TestCases' children should be included by default
testRunnerShowsBlock: whether the TestRunner shows the BLOCK button
testRunnerShowsEndRun: whether the TestRunner shows the END RUN button