The new infrastructure is only available in codeBeamer v9.5. Existing clients will be migrated automatically.



Introduction

The solution was built with security and durability in mind, and is hosted on AWS, the leading cloud provider. The main concepts include current virtualization and containerization technologies, logically isolated client resources, encrypted and safely stored client data, and the principles of infrastructure-as-code. The automation of these processes leaves no room for human error when it comes to deployment, updates, and more.

Deployment usually takes less than one hour.

The main concepts and their implementation

Durability and fault-tolerance

Containerization decouples the running environment from the application code, so deployments are predictable and reproducible. It also lets us debug reported incidents in a safe, client-like environment where the moving parts behave the same way, without client downtime or any access to sensitive data. The same property enables us to quickly redeploy faulty parts, guaranteeing minimal downtime in a disaster-recovery situation.

Container orchestration and application fault-tolerance (AWS ECS)

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cookpad use ECS to run their most sensitive and mission critical applications because of its security, reliability, and scalability.

Over the past years at Intland, we have constantly improved our solutions to provide a streamlined customer experience with our platform, building mostly around AWS Elastic Beanstalk. Moving to ECS gives us more flexibility and control over our processes, while retaining all the advantages Elastic Beanstalk has brought to our customers. ECS makes sure the actual workload matches the desired counts, providing advanced auto-healing and service recovery features with minimal downtime.
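
As an illustration (not our internal tooling), the minimal boto3 sketch below shows how the running task count of an ECS service can be compared with its desired count; the cluster and service names are hypothetical placeholders.

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical cluster and service names, used only for illustration.
    CLUSTER = "client-stack-cluster"
    SERVICE = "codebeamer-app"

    # ECS keeps runningCount converging towards desiredCount, replacing failed tasks.
    svc = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])["services"][0]
    print(svc["runningCount"], "of", svc["desiredCount"], "tasks running")

    # The desired count can also be adjusted explicitly if needed (example value).
    ecs.update_service(cluster=CLUSTER, service=SERVICE, desiredCount=1)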

Infrastructure durability (AWS Auto Scaling Group)

An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables us to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Both maintaining the number of instances in an Auto Scaling group and automatic scaling are core functionality of the Amazon EC2 Auto Scaling service. In other words, this service is responsible for keeping the infrastructure healthy under our loads and for recovering failing nodes.
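
For illustration only, a small boto3 sketch that reads the health status of the instances in an Auto Scaling group; the group name is a hypothetical placeholder. Instances reported as unhealthy are terminated and replaced by the service automatically.

    import boto3

    asg = boto3.client("autoscaling")

    # Hypothetical Auto Scaling group name.
    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=["client-stack-asg"]
    )["AutoScalingGroups"][0]

    for instance in group["Instances"]:
        # EC2 Auto Scaling replaces instances whose HealthStatus becomes Unhealthy.
        print(instance["InstanceId"], instance["HealthStatus"], instance["LifecycleState"])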

Isolating customer resources

Private networking (AWS VPC)

Amazon Virtual Private Cloud (Amazon VPC) lets us provision a logically isolated section of the AWS Cloud where we can launch AWS resources in a virtual network that we define. We have complete control over our virtual networking environment, including selection of our own IP address range, creation of subnets, and configuration of route tables and network gateways.

ECS lets us launch our client stacks into exclusive VPCs, so each stack runs in its very own private virtual network, preventing cross-client communication.
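
A minimal boto3 sketch of provisioning such an isolated network; the CIDR ranges and tag value are assumptions for illustration, not our actual network plan.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a dedicated VPC for a single client stack (example CIDR only).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "client-a-vpc"}])

    # A private subnet inside the client-exclusive VPC.
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print("Isolated subnet:", subnet["Subnet"]["SubnetId"])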

Firewalls and network traffic control (AWS Security Group)

A security group acts as a virtual firewall for the instance to control inbound and outbound traffic, opening only the required ports to the public, and leaving the rest of the systems safely restricted to our infrastructure team.
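
For example, the hedged boto3 sketch below opens only HTTPS to the public and leaves everything else closed; the group name, VPC id and port are illustrative assumptions.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical VPC id; a new security group denies all inbound traffic by default.
    sg_id = ec2.create_security_group(
        GroupName="client-a-web",
        Description="Expose only HTTPS to the public",
        VpcId="vpc-0123456789abcdef0",
    )["GroupId"]

    # Open a single required port; all other ports stay restricted.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )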

User and access management (AWS IAM)

AWS Identity and Access Management (IAM) enables us to manage access to AWS services and resources securely. Using IAM, we can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. For every part of our client stack we use users and roles with minimal access to specific resources to guarantee even more security and isolation.
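
As a sketch of what "minimal access to specific resources" means in practice, the policy below grants read access to a single S3 bucket and nothing else; the bucket and policy names are hypothetical.

    import boto3, json

    iam = boto3.client("iam")

    # Least-privilege policy: read objects from one specific bucket only.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::client-a-assets/*",  # hypothetical bucket
        }],
    }

    iam.create_policy(PolicyName="client-a-read-assets",
                      PolicyDocument=json.dumps(policy))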

Handling of client data

Data volumes (AWS EBS)

Amazon Elastic Block Store (EBS) is an easy to use, high performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

Our instances use EBS volumes as their block devices.
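
A minimal boto3 sketch of provisioning an encrypted volume of the kind our instances use as block devices; the availability zone, size and volume type are example values only.

    import boto3

    ec2 = boto3.client("ec2")

    # Example values; the default AWS-managed key is used when KmsKeyId is omitted.
    volume = ec2.create_volume(
        AvailabilityZone="eu-central-1a",
        Size=50,                 # GiB
        VolumeType="gp3",
        Encrypted=True,
    )
    print("Created volume:", volume["VolumeId"])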

Databases (AWS RDS)

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.
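
A hedged sketch of creating such a managed database with boto3; the identifier, instance class, engine and credential values are illustrative assumptions.

    import boto3

    rds = boto3.client("rds")

    # Example parameters only; the real password should come from Secrets Manager.
    rds.create_db_instance(
        DBInstanceIdentifier="client-a-db",
        DBInstanceClass="db.m5.large",
        Engine="mysql",
        AllocatedStorage=100,          # GiB
        MasterUsername="admin",
        MasterUserPassword="CHANGE_ME",
        StorageEncrypted=True,
        BackupRetentionPeriod=7,       # days of automated RDS backups
    )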

Secrets and passwords (AWS Secrets Manager)

AWS Secrets Manager helps us protect secrets needed to access applications, services, and IT resources. The service enables us to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
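
For example, instead of hardcoding database credentials, an application can fetch them at runtime; the secret name and JSON keys below are hypothetical placeholders.

    import boto3, json

    secrets = boto3.client("secretsmanager")

    # Retrieve credentials at runtime instead of storing them in plain text.
    value = secrets.get_secret_value(SecretId="client-a/db-credentials")
    credentials = json.loads(value["SecretString"])

    # Use credentials["username"] / credentials["password"] to open the connection.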

Encryption and backup of client data

Both block devices and databases are encrypted with AWS KMS keys and are backed up by AWS Backup, providing automated backups and safe storage of valuable client data at every stage throughout a client’s journey with us.
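
A small, illustrative boto3 sketch of how encryption of the client block devices could be verified; the tag filter is a hypothetical convention, not our actual tagging scheme.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical tag that marks volumes belonging to one client stack.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Client", "Values": ["client-a"]}]
    )["Volumes"]

    for v in volumes:
        # Every volume should report Encrypted=True and reference a KMS key.
        print(v["VolumeId"], v["Encrypted"], v.get("KmsKeyId"))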

Other services

Various other AWS services help us provide our customers with stress-free, automated deployments and updates of their codeBeamer and Retina stacks. Our DevOps team employs AWS CloudWatch for monitoring and alarms, AWS Lambda for serverless utility functions, DynamoDB as a persistence layer for them, S3 as object storage, and more. Do you have a specific question for us? Reach out, and our DevOps team will surely have an answer for you.
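
As one illustration of the monitoring side mentioned above, the boto3 sketch below defines a CPU alarm; the instance id, threshold and notification topic are assumptions for the example.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Example alarm: notify the DevOps team when average CPU stays above 80%.
    cloudwatch.put_metric_alarm(
        AlarmName="client-a-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:eu-central-1:123456789012:devops-alerts"],
    )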

Backup

The AWS Backup service is used to create a daily backup of your data. It is responsible for creating a snapshot of your database and file system, and for restoring it in case of data loss.


A new backup is created every day between 12 am and 1 am (based on the timezone of the selected region). Backups are kept for 12 months (this retention period can be extended).
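
The daily schedule and 12-month retention could be expressed as an AWS Backup plan roughly as in the boto3 sketch below; the plan and vault names are hypothetical, and the cron expression is given as an example.

    import boto3

    backup = boto3.client("backup")

    # Example: one backup per day, kept for 365 days (~12 months).
    backup.create_backup_plan(BackupPlan={
        "BackupPlanName": "client-a-daily",
        "Rules": [{
            "RuleName": "daily-backup",
            "TargetBackupVaultName": "client-a-vault",
            "ScheduleExpression": "cron(0 0 * * ? *)",   # example daily schedule
            "StartWindowMinutes": 60,
            "Lifecycle": {"DeleteAfterDays": 365},
        }],
    })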


A transferable backup can be requested via support@intland.com.


RTO

The recovery time objective is one hour. Please note that the support team is available only during working hours.

RPO

The recovery point objective is one day by default. See the Backup section above for details.

Downtime

In case of maintenance work, the application will not be available for a short period of time. The DevOps team will schedule the changes within the given maintenance window, and will contact you before applying any change to your instance.


A production-ready deployment takes only a couple of minutes.

Your Data

Your data is your data! It is stored in a very secure environment; however, it can be migrated to other environments (e.g. a company server) without information loss. Please contact support@intland.com for more information.

What information should you provide?

In order to find the perfect size and place for your infrastructure, we need to know the following:

  • Number of active users
  • Location / Region of your data (the AWS Cloud spans 69 Availability Zones within 22 geographic Regions around the world). You can find the available locations here.
  • IPv4 / IPv6 address(es) or range(s), in order to restrict access to your server. The server is available on the public internet by default.
  • Preferred maintenance window

Available Services

  • Git and Subversion are available without any further installation/configuration steps.
  • Bitbucket, GitHub and GitLab are enabled
  • Jira Sync
  • OpenID Connect
  • SAML and Two Factor Authentication (coming soon)
  • Active Directory
  • Doors Bridge (optional)
  • Jenkins (optional)

Server sizes

  • A small server has 2 vCPUs, 8 GB memory, and a 50 GB disk; it is recommended for small teams (1-10 active users)
  • A medium server has 4 vCPUs, 16 GB memory, and a 100 GB disk; it is recommended for medium teams (11-40 active users)
  • A large server has 8 vCPUs, 32 GB memory, and a 500 GB disk (can be extended); it is recommended for large teams (41-100 active users)
  • For more active users and/or Clustered environment with Load Balancing (Beta), please contact sales@intland.com for details.

Try it!

Do you want to try it? Your private and secure instance can be deployed in a couple of minutes. Please contact sales@intland.com.