**Administration Manual for CLC Genomics Cloud Engine v20.0.0** ![](./images/QIAGEN.png)





Administration Manual for CLC Genomics Cloud Engine

Wed Dec 18 2019



QIAGEN Aarhus A/S
Silkeborgvej 2
Prismet
DK-8000 Aarhus C
Denmark

# Introduction

The CLC Genomics Cloud Engine (GCE) is a cloud-based system used for secondary analysis of NGS data. The analyses are carried out using CLC workflows, which can be designed and validated using CLC Workbenches or the CLC Genomics Server. CLC workflows are submitted to GCE using the CLC Genomics Cloud Engine Command Line Tools (CLT). GCE is run using Amazon's VPC and EC2 services. Thus, standard AWS administration tools can be used for managing and monitoring the system.

![CLC Genomics Cloud Engine - System Context](./images/CloudSystemContext.png)

## System Overview

The CLC Genomics Cloud Engine solution uses a range of Amazon AWS services, with the most important dependencies shown in the figure below. In addition, a range of other Amazon services are used for monitoring, configuring, deploying and scaling.

![CLC Genomics Cloud Engine - Deployment View](./images/DeploymentView.png)

The configuration is based on two virtual private networks: a VPC for GCE and a VPC for the RDS database. This division is mainly made to allow for creating a new GCE stack running on an existing database, thereby providing flexibility for future upgrades.

The main VPC contains a public subnet and two private subnets, with the services serving the clients placed in a private subnet and deployed on two server instances. A load balancer distributes requests between the server instances. The load balancer is placed in the public subnet and is attached to an Internet Gateway. All HTTPS traffic is forwarded to the application servers by the load balancer, which in practice makes them available from the internet (but more restrictive IP filtering can be applied). This solution architecture provides a high level of security, which keeps the analysis data safe and unexposed to the internet. Interaction with the servers requires authentication based on OAuth2, which integrates with any OAuth2 capable Authorization Server.

The execution of the analysis takes place on servers in the second private subnet, which cannot be addressed from outside the subnet. Communication with Amazon services flows through a NAT server or, in the case of S3, through a VPC endpoint. The servers in the private subnet can access the internet through a NAT server, but there is no way to initiate communication with the servers from the internet. The only means of communicating with the servers inside this subnet is through SQS messaging (an AWS service), which requires AWS authentication via IAM. This isolation keeps the servers inside this private subnet very secure.

The CLC Genomics Cloud Engine solution consists of three core services deployed on three server clusters placed in the private subnets as described above:

* Job Manager
* Job Executor
* License Manager

Each service has a clear area of responsibility, as described below.

### Job Manager

The Job Manager service receives and handles job submissions and cancellations sent from client software, here the CLC Genomics Cloud Engine Command Line Tools. The Job Manager service then distributes jobs to the GCE instances where the analyses will be run. Job progress and status can be followed by referring to a web page where summary information is provided, by using client software to poll the Job Manager service, or by subscribing to status events pushed by the Amazon SNS service. At least one instance of the Job Manager must be running for the solution to be available.
The Job Manager is not a particularly demanding service, however, making it possible to run it on inexpensive instances with modest hardware specifications. The Job Manager communicates with the Job Handler through SQS messages. The Job Handler services request jobs from the Job Manager, which distributes them according to its internal rules. Metrics published from the Job Manager are used to facilitate auto scaling. The use of message-based communication increases overall system security and robustness, allows the system to scale smoothly with demand, and also allows graceful service degradation during software upgrades.

### Job Executor

The Job Executor service has the responsibility of picking up jobs from the Job Manager and starting the analysis execution. It also reports progress and state information to the Job Manager service. Internally, the Job Executor runs a CLC workflow on the CLC Genomics Server. In brief, it retrieves a job, downloads the data files specified, starts the execution of the workflow, and then uploads the results when the job is finished. All the steps of a given submitted workflow are run on the same Job Executor.

For those familiar with the CLC Genomics Server: the Job Executor uses a standard, unmodified CLC Genomics Server in "Single server" mode. The hardware requirements for the Job Executor service are based on the requirements of the types of analyses that will be run (e.g. disk space, memory and number of CPU cores).

### License Manager

The License Manager service provides licenses to all running CLC Genomics Server instances. If the License Manager cannot provide a valid license, the CLC Genomics Server will not run. The service is also responsible for monitoring license usage and expiration. This is done using metrics that can be inspected in CloudWatch.

## System Requirements

### Prerequisites

- The customer shall set up and administer an AWS account.
- The customer shall provide an OAuth2 compliant authorization server capable of providing access and refresh tokens for user authentication.
- The customer shall manage access to S3 storage location(s) that the system can use for storing input and output data.
- The customer must acquire a valid license (provided by QIAGEN when GCE is purchased).

GCE does not include user management or data storage management, tasks which will be the responsibility of the GCE customer in addition to setting up and running the solution.

### Supported AWS Regions

The following regions are supported by default. To get support for other regions, send a request to QIAGEN support.

| Name | Code |
|---------------|----------------|
| N. Virginia | us-east-1 |
| Ohio | us-east-2 |
| N. California | us-west-1 |
| Oregon | us-west-2 |
| Frankfurt | eu-central-1 |
| Ireland | eu-west-1 |
| Tokyo | ap-northeast-1 |
| Seoul | ap-northeast-2 |
| Sydney | ap-southeast-2 |
| Mumbai | ap-south-1 |
| GovCloud (US-East) | us-gov-east-1 |
| GovCloud (US-West) | us-gov-west-1 |

## Licensing

A valid license must be provided before the system can run any workflows. A license must be ordered from QIAGEN and is bound to a specific installation. QIAGEN will provide a license order ID, which can be used to download a valid license. Notice that you will need the host ID provided in one of the later steps of the installation before the license can be downloaded. [Please see the installation section for more information](#installstep9).
> **Notice:** *The CLC Workflows that are executed on the GCE solution are created in CLC Workbenches that require a CLC Workbench license.*

## Key Features

* Large suite of best-of-class bioinformatics tools (algorithms, analyses)
* CLC Workflows for simplifying and streamlining genomic data analysis
* Secure: encrypted data at rest and encrypted data in transit from client to end-point
* Fault-tolerant: no single point of failure
* Private cloud using AWS VPC with no access outside the company network
* Scalable: unlimited data storage space and processing power (on demand)
* Cost effective: no 'idle' job nodes
* Simple deployment of GCE solution on AWS VPC

## Known Dependencies

Workflows to be submitted to the CLC Genomics Cloud Engine service are created using a CLC Workbench. Workflow installer files should be made using a version compatible with the CLC Genomics Server used by GCE. CLC Workbenches compatible with relevant CLC Genomics Server versions are listed in the Compatibility section of the [Latest Improvements information for the GCE version you are working with](https://www.qiagenbioinformatics.com/products/clc-genomics-cloud-engine/latest-improvements/).

## Limitations

A few tools are not currently supported on GCE and thus should not be included in workflows to be run on this system. These are:

* Tools that modify the data elements provided as input, rather than generating a new data element with the results.
* Differential Expression for RNA-Seq
* Tools configured as External Applications (a feature available with the CLC Genomics Server).

All tools distributed in server plugins developed by QIAGEN Aarhus (CLC bio) are available, but with the same restrictions as mentioned in the first point above. Workflows containing tools distributed via third party server plugins can be run on GCE as long as the relevant server plugins are installed. How to do this is described in the section *Adding and updating CLC Genomics Server Plugins*. Further details about creating workflows for use on GCE are provided in the GCE Command Line Tools manual.

# Installation

This chapter describes the steps needed to install the CLC Genomics Cloud Engine solution in a new AWS account. In summary, the installation consists of the following main steps:

1. Preparation of prerequisites (e.g. AWS account for GCE, AWS users for clients, AWS key pair for administration host, S3 bucket for installation)
2. Starting an administration host instance on Amazon EC2 using AWS CloudFormation
3. Logging into the administration host
4. Configuring an SSL certificate for the Job Manager service
5. Configuring the GCE stack name
6. Configuring OAuth provider integration
7. Running install scripts on the administration host
8. Installing and configuring the Command Line Tools client in order to test the GCE installation

## Prerequisites

This section describes the prerequisites for installing the GCE solution:

### AWS customer account

Before GCE can be installed it is necessary to have an AWS account. An existing account can be used, but it may be convenient to install GCE in a new account, since this makes it simple to identify resources related to GCE and to keep track of costs related to running GCE. If installing to an existing account, verify that it has a default subnet defined for each availability zone of the AWS region that will be installed to, for example using the AWS CLI command shown below.
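If the AWS CLI is configured, one way to perform this check is sketched below; the region is an example value and should be replaced with your target region. A default subnet is expected for each availability zone:

```none
# List the default subnets of the target region (one per availability zone is expected)
aws ec2 describe-subnets --region eu-west-1 \
    --filters "Name=default-for-az,Values=true" \
    --query "Subnets[*].[AvailabilityZone,SubnetId]" --output table
```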
### AWS users

Each user of the GCE Command Line Tools will need an AWS user with permissions to access data in the S3 locations where input and output data is expected to be located. Please follow the instructions in the guide for [creating an IAM User in Your AWS Account](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html).

### AWS key pair for SSH

The installation procedure requires SSH terminal access to the GCE administration host. For this purpose an AWS key pair is required for authentication. All mechanisms available through Amazon for registering or producing a key pair will work (e.g. generation through the web console and the AWS CLI). The main difference is whether the private key is generated locally in hardware or software, or if the private key is simply generated by AWS and downloaded.

#### Option 1: Generate key pair in the AWS console

If you do not already have a key pair, it can be convenient to get Amazon to generate one for you. Please follow the instructions in the [Amazon documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair). Please remember to set the AWS region to the target region for the deployment before generating the key pair. The key pair should be kept even after installation is completed, as it may be necessary to access the administration host again at a later point in time (e.g. for installation or upgrades).

#### Option 2: Import existing key pair

To import the public key from an existing key pair, please follow the instructions in the [Amazon documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws). This documentation provides guidance on how the public key must be formatted and outlines requirements for the key. Note that only RSA keys of sufficient length are supported. Please remember to set the AWS region to the target region for the deployment before importing the key.

### AWS S3 installation bucket

The installation process (detailed in the next section) needs to store certain files used by the system during installation and when upgrading. These files are stored in an S3 bucket, which is also used as a work area where information about running jobs is stored. It is advised to use a dedicated bucket with default server side encryption for this purpose, which can be created by following these steps:

1. Go to the [AWS Console Login](https://console.aws.amazon.com/console/home) webpage and log into the AWS account where you wish to install GCE.
2. Select the S3 service.
3. Press the _Create bucket_ button.
4. Enter a name in the field labeled _Bucket name_. The name you provide must be unique across the whole of AWS, for example `mycompany-genomicscloudengine-install-eu-west1`. Further naming restrictions and guidelines are provided in the AWS [documentation](http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html).
5. In the region field, select the region where GCE should be running, e.g. `EU (Ireland)` or `US West (Oregon)`.
6. Press the _Next_ button, which will advance to the next page.
7. On the _Configure options_ page, make sure to check the _Default encryption_ checkbox.
8. Press the _Next_ button.
9. On the _Set permissions_ page, press the _Next_ button.
10. On the _Review_ page, press the _Create bucket_ button.

The S3 bucket is now created and can be inspected in the S3 console.
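As an alternative to the console steps above, the bucket can also be created from the AWS CLI. The sketch below uses the example bucket name and region from above; note that the `--create-bucket-configuration` option must be omitted when creating a bucket in `us-east-1`:

```none
# Create the dedicated installation bucket
aws s3api create-bucket --bucket mycompany-genomicscloudengine-install-eu-west1 \
    --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

# Enable default server side encryption (AES256)
aws s3api put-bucket-encryption --bucket mycompany-genomicscloudengine-install-eu-west1 \
    --server-side-encryption-configuration \
    '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
```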
## Install GCE on Amazon

The following sections describe the steps needed for installing the GCE solution. Instructions for upgrading an existing installation are provided in the next chapter.

### Step 1: Create administration host

A CloudFormation template is used to create the administration host, which will be used in the following steps of the installation. The administration host can also be used later for updating an already installed GCE environment. Using the administration host instead of performing the installation from your local desktop ensures high network bandwidth and automatically fulfills software requirements. The administration host is created by following the steps below:

1. Select the CloudFormation service.
2. Press the _Create Stack_ button and specify the `GceAdminInstance.template` template location in the _Specify an Amazon S3 template URL_ option. The template URL is available on the [release webpage](../../index.html).
3. Specify a _Stack name_ (must be in lower case with fewer than 26 characters), e.g. 'gce-admin'. Be aware that this stack name is not the stack name of the installed GCE solution, but of a stack holding administration host resources.
4. Enter the exact name of the S3 installation bucket.
5. In `SshKey`, select the key pair that was previously imported or generated.
6. Leave the `DockerSolutionStack` field with the default solution stack name. Only change this value if a previous attempt at creating the stack has failed because AWS has deprecated the default solution stack name. In that case a more recent `Single Container Docker` solution stack name can be found on the following [AWS page](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html).
7. Press the _Next_ button when you have filled out all fields in the form.
8. Skip the _Tags_, _Permissions_, and _Advanced options_ on the _Options_ page by pressing _Next_.
9. Review all details and make sure to check the box _I acknowledge that AWS CloudFormation might create IAM resources_ at the bottom of the page.
10. Press the _Create_ button.

The GCE administration host stack will now be created. When the creation process is complete, the status _CREATE_COMPLETE_ will appear in the _Status_ column in the AWS Console.

### Step 2: Login to the administration host

The administration host, as the GCE admin instance is referred to from this point on, contains GCE installation software and scripts for configuring and installing the service. An overview of the scripts can be found in the installation details appendix.

When the stack is fully created, the tab "Outputs" will contain an output with the key `Filter`. The value of this output is a filter expression that can be used to locate the administration host instance in the AWS console:

1. Copy the value of the *Filter* expression, e.g. `tag:install-host : gce-prerequisites`.
2. Open the EC2 service console.
3. Select *Instances* in the left navigation pane.
4. Enter the filter expression into the search area and press Enter (see screenshot below).

![EC2 Console Filtered View](./images/EC2Console_Filtered.png)

You can log into the administration host as user `install-user` using the public IP or DNS address shown in the EC2 console and the key pair imported into AWS in an earlier step.
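The same lookup can also be performed from a terminal with the AWS CLI; the filter value below is the example value from above and should be replaced with the value from the stack output:

```none
# Locate the administration host instance and its public address
aws ec2 describe-instances \
    --filters "Name=tag:install-host,Values=gce-prerequisites" \
    --query "Reservations[*].Instances[*].[InstanceId,PublicIpAddress,PublicDnsName]" \
    --output table
```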
Using the openssh client, for example, a login will have the form:

```none
ssh install-user@<public IP or DNS address> -i <key file>.pem
```

Example:

```none
ssh install-user@52.89.60.37 -i ~/.ssh/t3key.pem
```

After logging in, you will see a bash prompt:

```none
[root@GCE-installhost] ~ #
```

You are now running as root on the administration host Docker instance on Amazon EC2.

### Step 3: Set up SSL/TLS

When setting up SSL/TLS, you can upload an existing server certificate obtained from a certificate authority (CA) or a self-signed certificate. The former, common for production environments, involves uploading the certificate and certificate chain to ACM, as described in the [AWS Documentation](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs_manage.html#UploadSignedCert). Self-signed certificates, often more convenient for testing environments, can be created using the instructions below. Regardless of the type of certificate, the Amazon Resource Name (ARN) will be needed later in the installation process.

#### Create a self-signed certificate

The administration host provides a script for creating a self-signed certificate. Before running the script, the `admin/ssl/openssl.cnf` configuration file must be modified. Please modify the `[req_distinguished_name]` settings and specify which DNS names to include in the `[alt_names]` section. These names should match the DNS name you want the GCE service to have. The file can be edited using either `nano` or `vim`, which are bundled with the administration host. When the configuration is set as wanted, run the script:

```none
admin/ssl/create-self-signed-cert.sh
```

The script writes the new private key to `ssl/key.clear.pem` and configures the installation scripts to use the new certificate when installing GCE. This certificate will expire 30 years after creation. In addition, the script also uploads the certificate and key to the IAM service and imports the certificate into the gce-cacerts trust store using the _keytool_ command. Further information on certificates can be found in the [certificate management appendix](#certificate-management).

### Step 4: Configure the GCE stack name

The admin scripts are configured by the _setup.sh_ script in the home folder on the administration host, e.g. `/root/setup.sh`. Edit the following fields in that file (other fields can be ignored), providing information relevant to your setup:

* `STACK_NAME=<your stack name>` Enter the permanent name for your GCE service. It must be between 4 and 23 characters, use lower case letters only and be globally unique within each AWS region. A region descriptor and other information will be appended to that name to give the service endpoint (URL) of the stack. For example, if you were using the EU (Ireland) region and provided the name _mygce_, this would be: `https://mygce.eu-west-1.elasticbeanstalk.com`.
* `SSL_CERTIFICATE_ARN=<aws arn for your certificate>` Insert the AWS ARN for your own certificate here. If you are using a self-signed certificate generated using the script in step 3 above, this has already been specified for you.

To edit `setup.sh`, use either `nano` or `vim`, which are bundled with the administration host. An illustration of the edited fields is shown below.
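As an illustration, a `setup.sh` for a stack named `mygce` using an imported certificate might contain lines like the following (the ARN is a placeholder and must be replaced with the ARN of your own certificate):

```none
STACK_NAME=mygce
SSL_CERTIFICATE_ARN=arn:aws:acm:eu-west-1:123456789012:certificate/11111111-2222-3333-4444-555555555555
```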
### Step 5: Configure authentication

Before the GCE application stack can be created, you are required to provide information on your OAuth configuration. A template configuration can be found at `~/job-manager-config/application-oauth2.yaml`. Check the details, including more examples, in the [OAuth Configuration](#oauthconf) section.

The template should be updated with the values corresponding to your own Authorization Server (AS). The AS should be configured with the redirect URI of your GCE installation. If you need to update the `application-oauth2.yaml` file later on, the new version of the file should be uploaded to S3 and the Job Manager application will need to be restarted. Details about the authentication configuration and how to restart the application are described in the [Authentication](#idpconf) section. We recommend keeping a copy of the original `application-oauth2.yaml` file as a backup.

### Step 6: Transfer deployables

Having performed the previous configuration steps, you are now ready to start the installation by running the prepare script:

`admin/prepare.sh`

When running the prepare script you will be prompted to select which version of the CLC Genomics Server to use for executing workflows inside GCE. New users will likely want to run the latest version, whereas some existing users might want to run older versions for compatibility reasons. Following this step you may also be asked to choose a build version for the job executor image. This step is only included if multiple versions of the workflow executor service have been released, and if this is the case we generally recommend running the latest version. Having selected the system service versions, the script retrieves the GCE application files from the release repository and transfers them to Amazon ECR and the `INSTALL_S3_BUCKET`.

### Step 7: Install GCE service

The GCE service can now be installed by running:

`admin/create-stack.sh`

The GCE stacks and resources created by these stacks can be tagged with a custom key-value pair. This can be useful for tracking cost related to this specific GCE environment. The syntax for adding tags is as follows: `Key=string,Value=string ...`

Example:

`admin/create-stack.sh "Key=environment,Value=production Key=department,Value=lifescience"`

A maximum of 50 tags can be specified.

Typically, it will take about 40 minutes to create the stack, but this may vary. Further information on the CloudFormation templates run by the script can be found in the installation details appendix. Progress can be monitored from the CloudFormation service in the AWS Console:

![Monitoring progress of stack creation](./images/CloudFormation_ShowStackProgress.png)
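Stack creation can also be followed from a terminal with the AWS CLI, as sketched below. The query returns, for example, CREATE_IN_PROGRESS while the stack is being built and CREATE_COMPLETE when it is ready:

```none
aws cloudformation describe-stacks --stack-name <stack name> \
    --query "Stacks[0].StackStatus" --output text
```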
When the _create-stack_ script has finished executing, it will output the location where a license file should be placed and the host ID that the license file needs to be associated with. After the GCE service has been installed:

* Four applications should be visible in the _Elastic Beanstalk_ AWS Console: JobExecutorApp, JobManagerApp, LicenseServerApp and the administration host.

### Step 8: Install the GCE license

A valid license file is needed on the system before any workflows can be run. If this is the first time GCE has been installed to this location, then you will need to download a license file and install it into the relevant location in your GCE installation.

#### Download a license file

To download a license file, open a web browser and navigate to the [Download a Network License file page](http://licensing.clcbio.com/LmxWSv2/GetNetworkLicenseFile).

![Download a Network License file](./images/License_Download.png)

Fill in the fields on the form as follows:

1. **License Order-ID**: This is sent to the license owner via email after the product is purchased.
2. **Server Host ID(s)**: The host ID identifies the system the GCE stack is on. It is printed to the terminal after running the `admin/create-stack.sh` script on the administration host. It is also recorded in the file called `<stack name>-host-id.txt` inside the folder `gce-license-files` in the installation bucket on S3.
3. **Host Name**: Enter the name of the GCE stack.

Click on the *Download License* button and save the license file.

#### Installing the license on GCE

The license file must now be uploaded to S3. The location to upload the file to is printed to the terminal after running the setup.sh script. The general path to the location is `< GCE install bucket >/< Stack name >/gce-license-files/`. The license file can be uploaded by:

1. Using the AWS Console S3 upload functionality. Ensure that the Amazon S3 master-key encryption property is enabled when uploading.

OR

2. Using the AWS command line client (AWS CLI) with the command form:

```none
aws s3 cp < local path to the license file > s3://< GCE install bucket >/< Stack name >/gce-license-files/ --sse
```

The option `--sse` is required for this action. It enables Amazon S3 master-key (AES256) encryption. The above command line, with the relevant S3 location, is printed to the terminal after running the setup.sh script. Installation of the AWS CLI is documented in this [installation guide](http://docs.aws.amazon.com/cli/latest/userguide/installing.html).

#### Registering the license on the system

The License Manager service and Job Executor service need to be restarted for the new license file to be detected by the system. To do this, run the script below. Restarting the services should take about a minute:

```none
admin/tools/restart-licenseserver.sh
```

At this stage it is also recommended to add subscribers to the [license expiration alarm](#licensealarm), but this step is not required. Further information about updating licenses, moving licenses to new GCE installation locations, and monitoring licenses can be found in the [Configuration chapter](#updatemonitorlic).

#### Configuring autoscaling to match the license

If your license imposes restrictions on the number of jobs that can be executed in parallel, it is necessary to adjust the autoscaling configuration of the Elastic Beanstalk application `JobExecutorApp` as described below.

1. Click on the `JobExecutorApp` in the list of applications in the _Elastic Beanstalk_ AWS Console.
2. Click Configuration in the left hand side menu.
3. Edit `Scaling` by clicking its cogwheel icon.
4. Click on `Autoscaling`.
5. Set `Maximum instance count` to a value smaller than or equal to the number of concurrent jobs allowed by your license (open the license file in a text editor and inspect the `COUNT` property of the `GENOMICSCLOUDENGINE` feature).
6. Click `Apply`.

### Step 9: Determining the GCE URL

To determine the GCE URL:

1. Click on the JobManagerApp in the list of applications in the _Elastic Beanstalk_ AWS Console.
2. Copy the _URL_ value from the top right hand side of the application page. It will be of the form `<stack name>.<region>.elasticbeanstalk.com`.
3. Change the protocol of the copied URL from "`http://...`" to "`https://...`".

An example of a GCE URL, where the stack name is "gce-stack" and the region was us-west-2, is: `https://gce-stack.us-west-2.elasticbeanstalk.com/`

![Elastic Beanstalk Environment](./images/Beanstalk_Environment.png)
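Alternatively, the URL can be read with the AWS CLI by querying the CNAME of the Job Manager environment. The sketch below assumes the default `<stack name>-JobManagerEnv` environment naming; remember to prepend `https://` to the returned value:

```none
aws elasticbeanstalk describe-environments \
    --environment-names <stack name>-JobManagerEnv \
    --query "Environments[0].CNAME" --output text
```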
### Step 10: Finish the OAuth configuration

The redirect URL for the GCE installation now needs to be added to the client configuration of the Authorization Server. Note that the details of this step will depend on the specific OAuth provider, so only general guidelines can be provided. AWS Cognito, for example, refers to the _redirect URL_ as a _callback URL_ instead.

The redirection URL is the GCE URL identified in the previous step with `login` appended to the path, e.g. `https://gce-stack.us-west-2.elasticbeanstalk.com/login` for the example above. If you experience problems, please check that the https protocol was used.

The installation of GCE is now complete.

### Validate the installation

Some aspects of the installation can now be validated by checking the status of the Elastic Beanstalk services in the AWS Console:

* The `JobManagerApp` should be colored green. If it is red, check that your authentication has been correctly set up.
* The `JobExecutorApp` is expected to be colored red, indicating that there are no EC2 instances running jobs. Just after installation, it will be green because the system starts with a single active instance. That instance is shut down after around an hour. At that point, this application will be colored red until EC2 instances are started up to run jobs.
* The `LicenseServerApp` should be colored green.

If any plugins are needed for the embedded CLC Genomics Server, they may now be installed as described in [the configuration chapter](#installplugins).

##### GCE Landing page

The CLC Genomics Cloud Engine service _Landing Page_ provides easy access to the _Job overview_ and _Enrollment_ pages for the GCE setup:

* The _Job overview_ page is described in the section _Monitoring the Status Of The System_.
* The _Enrollment_ page is used for generating an authentication token used by the GCE clients. See the GCE Command Line Tools manual for details.

You must be signed in to your Authorization Server to access either of the above pages. If you experience problems authenticating, please verify that the GCE URL is added as a valid Authorization Server redirect. Please also ensure you are using the https protocol.

#### The GCE Command Line Tools client application

The GCE Command Line Tools (CLT) is a client application for GCE. It can be used to submit jobs to GCE and perform certain administrative tasks. Installation of this client application is described in its own user manual. Installers for the CLT can be found on the GCE release page, where the manual is also linked.

## AWS Roles and Policies

The installation process creates a number of AWS roles in order to set individual permissions for the different components of the system. An overview of the created roles is provided in the installation details appendix.

Please note that the deployment process does not create IAM groups and users for GCE Command Line Tool users. These users will need permissions to read input data files, read result files and generate signed S3 URLs as described in the Command Line Tools manual. Such users must be created manually using the AWS Console or CLI.

## How to Uninstall the GCE Solution

Unfortunately, it is not possible to provide a one-click uninstall procedure for removing all GCE settings and services. To remove GCE from your AWS account, the following steps should be performed manually via the AWS Console:

* Delete the stacks listed below via the _CloudFormation_ service in the AWS Console. Please note that the main GCE stack can only be deleted by the root account and that the main stack must be fully deleted before the database stack can be deleted. Also note that the main GCE stack owns a number of sub-stacks that are named after the main stack, but with an added suffix.
The GCE administration host stack also owns one sub-stack with the name prefix `awseb`. All sub-stacks are deleted automatically.
  * Main GCE stack (Facade template for VPC containing the CLC Genomics Cloud Engine services)
  * Database stack (VPC containing the CLC Genomics Cloud Engine Database)
  * Administration host stack (Admin instance used for installation and updates of GCE environments)
* In the _S3_ service, empty the GCE installation bucket (details are provided in prerequisites).
* Return to the _CloudFormation_ service and delete the Prerequisite stack (GCE Installation prerequisite resources).
* Delete the Docker container images (_service-job-handler_, _job-executor_, _licenseserver_, _licenseserver-boot_) from the EC2 Container service.
* Delete all queues with the _JobCommand_ prefix from the _SQS_ service.
* Delete all log groups prefixed with the name of the main GCE stack from the _CloudWatch_ service.

![Assuming a role from the AWS console](./images/AssumeRole.png)

# Upgrade an Existing GCE Solution

Each new version of GCE is accompanied by the release of a new administration host CloudFormation template. With this template, a new administration host containing the relevant update scripts can be created. After the upgrade, the old administration host can be safely deleted.

The details of the upgrade procedure depend on the version of GCE being upgraded from. Versions prior to 1.3 require the full AWS stack to be rebuilt, whereas upgrading from 1.3 or later only requires the software to be upgraded, unless stated otherwise in the release notes. Software upgrades can be made relatively quickly and do not have any effect on the existing system configuration. Full stack upgrades are more involved and take approximately 1-2 hours. During a full stack upgrade the GCE database is deleted, which means the job queue, job history and job executor environment configuration are cleared. Existing users will also have to re-enroll after the upgrade, but the OAuth configuration is retained and the existing license file is re-used.

## Software upgrade (From version 1.3 or later)

### Step 1: Create a new administration host

To create the new administration host, enter the AWS web console and navigate to the CloudFormation service. Use the `GceAdminInstance.template` template URL found on the [release webpage](../../index.html). When filling in the CloudFormation parameters, enter the install bucket name used by the existing installation. A detailed description of how to create the administration host can be found in [step 2](#installstep2) of the installation guide.

### Step 2: Initialize the new administration host

SSH to the new administration host as described in [step 2](#installstep2) and initialize it by downloading the configuration of the existing stack. This is done by running the download-settings script and specifying the stack name as a parameter:

`admin/download-settings.sh <stack name>`

Having transferred the existing stack configuration to the admin host, the deployables of the upgrade version can be transferred by running the prepare script:

`admin/prepare.sh`

When running the prepare script you will be prompted to select which embedded version of the CLC Genomics Server to use for executing workflows inside GCE. New users will likely want to run the latest version, whereas some existing users might want to run older versions for compatibility reasons. Following this step you may also be asked to choose a build version for the job executor.
This step is only included if multiple versions of the job executor image have been released, and if this is the case we generally recommend running the latest version. Further details on initialization of the administration host can be found in [step 3](#installstep3).

### Step 3: Run upgrade script

Before performing the actual upgrade it is important to ensure that there are no jobs currently being processed, since the upgrade process may cause these jobs to fail. Having ensured that the system is idle, the upgrade is performed by running:

`admin/upgrade-software.sh`

When the upgrade script is done, go to the Elastic Beanstalk service in the AWS web console to verify that each of the applications of the stack has been upgraded. From the front page this can be done by inspecting the `Running versions` property of each application (JobManagerApp, JobExecutorApp and LicenseServerApp). Please note that it may take a few minutes for Elastic Beanstalk to apply the new software version.

### Rolling back a software upgrade

Software upgrades can be rolled back using the Elastic Beanstalk service in the AWS Web Console. To roll back an upgrade, please follow these steps for each of the three GCE Beanstalk Applications (JobManagerApp, JobExecutorApp and LicenseServerApp):

1. Go to the Elastic Beanstalk service in the AWS Web Console.
2. Select the application you want to roll back (notice the distinction between application and environment: applications are postfixed with "App" while environments are postfixed with "Env").
3. Select "Application versions" from the menu.
4. Select the previous version deployed on the application.
5. Press the Deploy button and press Deploy again in the confirmation dialog.

Although the console allows for managing the software version of each Elastic Beanstalk application independently, it is important to note that the JobManagerApp and the JobExecutorApp must always run the same version. This can be checked by inspecting the "version label" column on the "Application versions" page of the respective applications. Values in the "version label" column are prefixed with the GCE version number and postfixed with a timestamp. The GCE version number part of the labels must be the same for the two Beanstalk applications.
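One way to compare the deployed versions, assuming the AWS CLI is available, is to list the version label of each environment:

```none
aws elasticbeanstalk describe-environments \
    --query "Environments[*].[EnvironmentName,VersionLabel]" --output table
```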
## Full stack upgrade (From versions prior to 1.3)

### Step 1: Create a new administration host

To create the new administration host, enter the AWS web console and navigate to the CloudFormation service. Use the `GceAdminInstance.template` template URL found on the [release webpage](../../index.html). When filling in the CloudFormation parameters, enter the install bucket name used by the existing installation. A detailed description of how to create the administration host can be found in [step 2](#installstep2) of the installation guide.

### Step 2: Initialize the new administration host

SSH to the new administration host as described in [step 2](#installstep2) and initialize it by downloading the configuration of the existing stack. This is done by running the download-settings script and specifying the stack name as a parameter:

`admin/download-settings.sh <stack name>`

Prior to version 1.3, certificates were stored in an AWS service that is no longer recommended, so if the existing installation uses a self-signed certificate, a new certificate must now be generated:

```none
admin/ssl/create-self-signed-cert.sh
```

The deployables of the upgrade version can now be transferred by running the prepare script:

`admin/prepare.sh`

When running the prepare script you will be prompted to select which embedded version of the CLC Genomics Server to use for executing workflows inside GCE. New users will likely want to run the latest version, whereas some existing users might want to run older versions for compatibility reasons. Following this step you may also be asked to choose a build version for the job executor image. This step is only included if multiple versions of the job executor have been released, and if this is the case we generally recommend running the latest version. Further details on initialization of the administration host can be found in [step 3](#installstep3).

### Step 3: Delete the existing stacks

Before the upgraded infrastructure can be installed, the existing stack must first be deleted through the AWS web console. Since this step involves deletion of all data in the database, it is necessary to be logged in with the root credentials of the account.

* First delete the main stack via the _CloudFormation_ service in the AWS Console (i.e. the stack with the name specified in setup.sh). The main GCE stack owns a number of sub-stacks that are named after the main stack, but with an added suffix. These stacks are deleted automatically along with the main stack.
* After the main stack is deleted successfully, the database stack can be deleted (i.e. the stack called `<stack name>-DB`).
* Finally, delete the old administration host stack.

### Step 4: Create the new stack

Having deleted the infrastructure of the old GCE version, the next step is to re-create it in the updated version. This is done by running the create-stack script on the install host:

`admin/create-stack.sh`

When the create script is done, the status of the main services should be verified as described in [Validate the installation](#validate-installation).

### Step 5: Re-configure job executor environment

When the updated stack has been created, the allowed number of instances in the job executor Elastic Beanstalk environment needs to be configured to match the upper limit provided by the license file. Detailed guidelines are provided in the [Installation chapter](#configure-autoscaling). If a custom instance type or data volume size is needed, it also needs to be re-configured by following the guidelines in the Command Line Tools manual.

### Step 6: Configure plugins (Optional)

Optionally, the GCE upgrade needs to be finalized by deploying CLC Genomics Server plugins. If the CLC Genomics Server version is unchanged after the upgrade, the existing plugin bundle can simply be reused.
In this scenario, the original plugin bundle can be transferred by simply moving the content of the original plugin directory into the new plugin location (a subdirectory versioned after the employed CLC Genomics Server version, as defined in setup.sh on the administration host):

```none
aws s3 mv s3://< GCE install bucket >/< Stack name >/plugins/ s3://< GCE install bucket >/< Stack name >/plugins/< Genomics server version >/ --recursive --sse
```

If the version of the CLC Genomics Server is changed during the upgrade, a new set of compatible plugins must be uploaded to the plugin directory. In either case, the plugin bundle needs to be deployed after the upload to S3 is complete. This is done by running the following script on the administration host:

`admin/redeploy-clc-server-plugins.sh`

Further details on plugin installation can be found in the [configuration chapter](#installplugins).

### Step 7: Re-enroll and re-configure clients

Since the GCE database is deleted during the stack upgrade, all existing users will now have to re-enroll and apply the new credentials on relevant clients. Instructions for enrolling can be found in the Command Line Tools manual.

# Data access

Data access is granted to GCE through [S3 pre-signed URLs](http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html). This gives GCE time-limited access to the specific files needed to perform an analysis. GCE can access input data stored in any S3 location that the AWS user configured in the GCE CLT has permissions to access. This includes data stored in other AWS regions and/or other AWS accounts. Output data, however, can only be uploaded to buckets located in the same AWS region as the GCE service, and the bucket must be accessible by the AWS account of the GCE service.

Amazon's pricing model for S3 currently allows for free data transfer within a region, whereas inter-region traffic is subject to charges (see the [AWS web site](https://aws.amazon.com/s3/pricing/)). By default, the GCE CLT will prevent the user from inducing inter-region traffic charges by rejecting submission of jobs which reference files in a different region than the GCE service. To disable this region check, the user must set the region to "any" during the configuration of the GCE service connection (see the GCE CLT manual).

Cross account access to files in S3 is configured by defining bucket policies for one or more IAM users belonging to a foreign account. Minimum requirements for GCE input data buckets are the "GetObject" and "GetBucketLocation" actions. A sample read-only cross account policy is included below, but please consult the AWS documentation to ensure that the format is up to date with the newest AWS API changes:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1505825829420",
      "Effect": "Allow",
      "Principal": { "AWS": "_Foreign User Arn_" },
      "Action": ["s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::_data-bucket_"
    },
    {
      "Sid": "Stmt1505825829421",
      "Effect": "Allow",
      "Principal": { "AWS": "_Foreign User Arn_" },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::_data-bucket_/*"]
    }
  ]
}
```
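For illustration, a policy like the one above (saved as `policy.json` with the placeholders filled in) could be attached to a data bucket with the AWS CLI, and a pre-signed URL could be generated for testing access. The bucket and object names below are examples:

```none
# Attach the cross account policy to the data bucket
aws s3api put-bucket-policy --bucket data-bucket --policy file://policy.json

# Generate a pre-signed URL for an input file, valid for one hour
aws s3 presign s3://data-bucket/input/sample1.fastq --expires-in 3600
```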
# Configuration

This chapter describes how to configure user customizable parts of CLC Genomics Cloud Engine and how to properly effectuate configuration changes.

## Working with multiple environments

The administration host stores the _setup.sh_ file locally, and this file defines which stack the administration scripts will modify. When working with multiple GCE environments, a script is used to change from one environment to the other. Each environment is identified by its CloudFormation stack name.

### Example

Change perspective to work with the production environment with stack name "gce-prod":

`admin/download-settings.sh gce-prod`

This will download all the settings related to the production environment and set the _setup.sh_ file accordingly. The same script can be used at any time to recover configuration files or reset them to the values actually used in the specified environment.

## Job Executor settings

### Switch to another embedded version of the Genomics server

GCE supports execution of workflows on a range of different versions of the Genomics server, with support for updated versions being added continuously. The active server version can be rapidly changed by running the following script on the administration host during an idle period:

`admin/switch-gxs-version.sh`

The script displays a list of available Genomics server versions and prompts the user for a selection. If multiple versions of the job executor have been released for a given server version, the script further prompts for a build version. In this case we generally recommend choosing the latest version.

### Auto Scaling

The GCE solution provides means for handling variable system load by dynamically adjusting the number of Job Executor EC2 instances. This auto scaling mechanism is facilitated by continuously monitoring system metrics in order to assess the load on the system. Simply put, the system starts a new instance whenever the number of waiting jobs exceeds the number of running jobs for a period of two consecutive minutes. Conversely, the system tries to reduce the number of instances to 0 whenever there have been no jobs waiting for a period of 50 minutes. Technically speaking, auto scaling is based on a scaling policy derived from a CloudWatch alarm monitoring a CloudWatch metric. Furthermore, an instance protection flag is employed to safeguard instances while they are in the process of executing a job.

Currently, the minimum and maximum number of executors can be adjusted. When adjusting these values it is important to keep them within the executor instance limits imposed by the license file of the system. If the minimum number of instances is set to 0, the system is allowed to perform a full scale-in after a given period of idle time. While this can be highly cost effective, it may be unsuitable for workloads that require very rapid execution. When a job is submitted to an idle system in the full scale-in state, it will typically take 5-10 minutes to have the job running.

Auto scaling of Job Executor instances is configured using the [Elastic Beanstalk Management Console](http://console.aws.amazon.com/elasticbeanstalk/home):

1. Open the Elastic Beanstalk console.
2. Select the environment from the list (`<stack name>-JobExecutorEnv`).
3. Choose Configuration.
4. Choose Capacity Configuration.
5. Set desired values for Min and Max number of instances in the Auto Scaling Group subsection.
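If you prefer working from a terminal, the same limits can be set with the AWS CLI, as sketched below. The environment name assumes the default `<stack name>-JobExecutorEnv` naming, and the Min/Max values are examples that must stay within the limits imposed by your license:

```none
aws elasticbeanstalk update-environment --environment-name <stack name>-JobExecutorEnv \
    --option-settings Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=0 \
                      Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=4
```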
### Instance configuration

A job submitted to GCE has certain requirements for memory, CPU and disk space. To match these requirements, the instance type of the Job Executors might require adjustment. By default, all Job Executors are configured as memory optimized `r4.large` instances on AWS and `r5.large` on AWS GovCloud. The instance type of Job Executor instances is configured using the [Elastic Beanstalk Management Console](http://console.aws.amazon.com/elasticbeanstalk/home):

1. Open the Elastic Beanstalk console.
2. Select the environment from the list (`<stack name>-JobExecutorEnv`).
3. Choose Configuration.
4. Choose Instances Configuration.
5. Set the desired instance type in the Instance type subsection.

Please note that changing the instance type immediately terminates all Job Executor instances. This may cause running jobs to fail, so the recommended approach is to put GCE in maintenance mode and wait until the system is idle (there are no running jobs) before changing the instance type.

### Maintenance mode

Maintenance mode suspends job distribution by the JobManager. When maintenance mode is active, users can still submit jobs to GCE, but they will be queued until maintenance mode is deactivated. All jobs that were running before the system entered maintenance mode are allowed to complete. Maintenance mode is controlled using the [DynamoDB Management Console](http://console.aws.amazon.com/dynamodb/home):

1. Open the DynamoDB console.
2. Go to the Tables section and select the settings table (`<stack name>-SettingsTable`).
3. Go to the Items tab.
4. Set the `maintenance` value to `true` if you wish to activate maintenance mode, or `false` if you would like to disable it.

### S3 signing policy

It is possible to require all S3 URIs submitted to GCE to be presigned. This should be enabled if S3 Access Control Lists or Bucket Policies are used to restrict users' access to S3 buckets on the same account as GCE is deployed. Enabling this option will ensure that the job submitter has access to the resources used in the job submission. Activating this option has the disadvantage of creating more egress traffic and, as a result, increasing operating cost. The signing policy is controlled using the [DynamoDB Management Console](http://console.aws.amazon.com/dynamodb/home):

1. Open the DynamoDB console.
2. Go to the Tables section and select the settings table (`<stack name>-SettingsTable`).
3. Go to the Items tab.
4. Set the `signing_required` value to `true` if you wish to require S3 URI presigning, or `false` otherwise.

## Modifying OAuth configuration

OAuth is part of the configuration of the Job Manager, and modifying it requires access to the [Elastic Beanstalk Management Console](http://console.aws.amazon.com/elasticbeanstalk/home) shown below.

![Elastic Beanstalk Management Console, Showing application](./images/Beanstalk_Console.png)

Start by using the Elastic Beanstalk console to find the path to the Job Manager configuration on S3. It is stored in a variable on the Elastic Beanstalk environment and can be retrieved by the following steps:

1. Open the Elastic Beanstalk console.
2. Select the environment from the list (`<stack name>-JobManagerEnv`).
3. Choose Configuration.
4. Choose Software Configuration (click the gear icon).
5. Locate the value of the environment property `SKY_CONFIGURL`.

![Elastic Beanstalk Management Console, Set environment variable](./images/Beanstalk_SetEnv.png)

The OAuth configuration can then be updated by modifying the application-oauth2.yaml file located at `SKY_CONFIGURL`. In the next section a description of the content of the configuration file is provided, but first it is described how to apply a configuration change after the file is updated:

1. Open the Elastic Beanstalk console.
2. Select the application from the list (e.g. `<stack name>-JobManagerApp`).
3. Choose Application Versions.
4. In the version list, select the latest application version.
5. Choose Deploy.
6. In the dialog, choose Deploy.

![Elastic Beanstalk Management Console, Deploy application](./images/Beanstalk_Deploy.png)

Redeploying the application as described above is typically a fairly quick process. The web container will restart if necessary, and the application might become unavailable to users for a few seconds. You can prevent this by configuring your environment to use rolling deployments, which deploy the new version to instances in batches.
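Rolling deployments can be enabled on the environment's configuration page, or with the AWS CLI as sketched below; the environment name assumes the default naming and the batch size values are examples:

```none
aws elasticbeanstalk update-environment --environment-name <stack name>-JobManagerEnv \
    --option-settings \
    Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Rolling \
    Namespace=aws:elasticbeanstalk:command,OptionName=BatchSizeType,Value=Fixed \
    Namespace=aws:elasticbeanstalk:command,OptionName=BatchSize,Value=1
```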
### Required OAuth provider configuration

You will need to configure your OAuth provider with a redirect URI that points to a login service provided by the Job Manager. The redirect URI is constructed in the following way:

`https://{stackname}.{region}.elasticbeanstalk.com/login`

*stackname*: the stack name of the GCE installation, i.e. the value of the `STACK_NAME` variable in `setup.sh`.

*region*: the AWS region where the environment is created.

Example: `https://gce.us-west-2.elasticbeanstalk.com/login`

There is a chance that the host name does not follow this scheme. However, once GCE has been installed, you can easily obtain the URI from the Beanstalk Management Console:

1. Open the Elastic Beanstalk console.
2. Select the environment from the list (e.g. gce-JobManagerEnv).
3. Copy the URL (top of page, to the right of the environment name).
4. Modify the protocol to `https://` instead of `http://`.
5. Append *login* to the URL.

## OAuth configuration file

When updating the OAuth configuration file, only the `security` block seen below should be modified. In the following description each parameter is explained in detail. Please note that the OAuth provider must satisfy the following criteria:

* Must issue refresh tokens with an expiration stamp
* The user principal must be issued in a format supported by Spring Security (i.e. the info returned by the `userInfoUri` resource seen below)

```none
version: 1
security:
  oauth2:
    client:
      clientId:
      clientSecret:
      accessTokenUri:
      userAuthorizationUri:
      tokenName: oauth_token
      authenticationScheme:
      clientAuthenticationScheme:
    resource:
      userInfoUri:
```

*clientId*
The OAuth client ID. This is the ID used to identify the client to the OAuth provider.

*clientSecret*
The client secret associated with the client ID.

*accessTokenUri*
The URI of the OAuth endpoint providing the access token.

*userAuthorizationUri*
The URI to which the user will be redirected if the user needs to authorize access to the resource. Note that this is not always required; whether it is depends on which OAuth 2 profiles are supported.

*authenticationScheme*
The scheme used to authenticate the access token. Suggested values: "query", "header" or "form".

*clientAuthenticationScheme*
The scheme used by your client to authenticate against the access token endpoint. Suggested values: "http_basic" and "form". Default: "http_basic".

*userInfoUri*
The URI of the resource endpoint providing the user principal.

### Example configurations

The following configuration shows how to test the installation by authenticating against AWS Cognito (an AWS managed OAuth2 provider):

```none
# The following values are example values and should be replaced with your own.
# Having configured Cognito on your AWS account, relevant parameters can be retrieved from:
# https://cognito-idp.<region>.amazonaws.com/<user pool id>/.well-known/openid-configuration
# Remember to add GCE as a valid callback url in your Cognito configuration.
version: 1
security:
  oauth2:
    client:
      clientId: 1vcqXXXXXXXXXXXXXXXXXXXXXX
      clientSecret: 12ajXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      accessTokenUri: https://gce.auth.eu-central-1.amazoncognito.com/oauth2/token
      tokenUri: https://gce.auth.eu-central-1.amazoncognito.com/oauth2/token
      userAuthorizationUri: https://gce.auth.eu-central-1.amazoncognito.com/oauth2/authorize?scope=openid profile email&prompt=select_account consent&access_type=offline
      tokenName: oauth_token
      authenticationScheme: header
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://gce.auth.eu-central-1.amazoncognito.com/oauth2/userInfo
      userNameKey: name
spring:
  profiles:
    include: auth
```

Below is another example showing how one might configure a custom provider using [Ping](https://www.pingidentity.com/en.html) as the OAuth service:

```none
version: 1
security:
  oauth2:
    client:
      clientId: mycompany
      clientSecret: topsecret
      accessTokenUri: https://ping.profundo.dk/as/token.oauth2
      userAuthorizationUri: https://ping.profundo.dk/as/authorization.oauth2?scope=openid profile email
      tokenName: oauth_token
      authenticationScheme: header
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://ping.profundo.dk/idp/userinfo.openid?schema=openid
spring:
  profiles:
    include: auth
```

## Updating and monitoring GCE licenses

This section covers updating and monitoring of GCE licenses. Download and installation of licenses is covered in the [Installation chapter](#installlic).

### Updating or upgrading a GCE license

If a license is set to expire, a new license file will need to be installed and registered. The steps involved are:

1. **Download a new license file.** This will usually involve a new License Order ID, which the license owner should receive by email after purchasing the renewal, and the host ID of the GCE installation. See the [Installation chapter](#installlic) for details.
2. **Install the new license on GCE.** Upload the new license file to S3. See the [Installation chapter](#installlic) for details.
3. **Remove the old license file.** Remove the license file from S3 that is being replaced by the new license file. This may be an expired license, a soon-to-expire license, or simply a license file being upgraded.
4. **Register the changes to the licenses on the system.** Restart the License Manager by running the `~/admin/tools/restart-licenseserver.sh` script on the administration host. While a restart is not expected to take more than about 20 seconds, we recommend this action is done when the queue is empty to ensure that running jobs are not interrupted.

### Licensing a new GCE installation

License files are specific to the GCE stack they were created for. A license file cannot be re-used for a new GCE installation or if you change the region the GCE stack is in. If you need to move to a new GCE installation, please contact QIAGEN Bioinformatics Support (AdvancedGenomicsSupport@qiagen.com) specifying that you wish to transfer your GCE license to a new GCE installation.
### Licensing a new GCE installation

License files are specific to the GCE stack they were created for. A license file cannot be re-used for a new GCE installation or if you change the region the GCE stack is in. If you need to move to a new GCE installation, please contact QIAGEN Bioinformatics Support (AdvancedGenomicsSupport@qiagen.com), specifying that you wish to transfer your GCE license to a new GCE installation. Please include the License Order ID and, if possible, the old host ID in your communication. After Support tells you they have completed their work, you will need to download a new license file for the new installation, install that file, and register it with the new system.

### License expiration alarm

In order to ensure timely renewal of the license, GCE provides an option for subscribing to a license expiration alarm. The alarm is triggered 60 days prior to license expiration, and notifications are provided in a variety of forms, including e-mail and SMS. Recipients are easily added through the AWS web console by navigating to the SNS service and adding subscriptions to the _stack-name_-LicenseExpirationEvents topic. Further details on SNS configuration are provided by [AWS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html#SubscribeTopic).

### Inspect license metrics

The License Manager service monitors and reports license usage and expiration metrics, which can be inspected in CloudWatch.

#### License usage

1. Open the CloudWatch console.
2. Choose _Metrics_ in the left navigation pane.
3. Select _SKY/Metrics_ in the All metrics list.
4. Select _EnvironmentName_.
5. Select the _licenses.used_ metric for the given environment.
6. Choose the _Graphed metrics_ tab.
7. Change the statistic method to _Maximum_.

![CloudWatch license usage](./images/CloudWatch_LicenseUsage.png)

The image above displays an environment where a single license is in use for approximately one hour. Here the Y-axis is the number of licenses in use.

#### License expiration

1. Open the CloudWatch console.
2. Choose _Metrics_ in the left navigation pane.
3. Select _SKY/Metrics_ in the All metrics list.
4. Select _EnvironmentName_.
5. Select the _licenses.expiryin_ metric for the given environment.
6. Choose the _Graphed metrics_ tab.
7. Change the statistic method to _Minimum_.

![CloudWatch license expiration](./images/CloudWatch_LicenseExpiration.png)

The image above shows the time left until the license expires. Here the Y-axis is the number of seconds until expiry.

## Adding and updating CLC Genomics Server Plugins

The feature set of the CLC Genomics Server embedded in GCE can be extended by installing [CLC Genomics Server Plugins](https://www.qiagenbioinformatics.com/plugins/). When downloading new plugins, please be aware that GCE _only_ supports Server Plugins, _not_ Workbench Plugins. Also make sure to obtain plugins that are built for the relevant version of the CLC Genomics Server.

Plugins are installed by uploading them to S3. All plugins are stored in a common plugin directory, which is further divided into subdirectories for different versions of the CLC Genomics Server:

`s3://<GCE install bucket>/<Stack name>/plugins/<Genomics server version>`

To change the installed plugin bundle for a given version of the CLC Genomics Server, simply add or delete files from the corresponding folder (as sketched below) and apply the changes by the following steps:

1. Ensure that GCE is not currently processing any jobs.
2. Log in to the administration host.
3. Ensure the name of the stack you want to modify is set in the `setup.sh` file.
4. Run the following script from the administration host: `admin/redeploy-clc-server-plugins.sh`

The progress of the update can be followed in the Beanstalk service in the AWS Console.
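As an illustration of adding and deleting plugin files, the plugin folder could be manipulated with the AWS CLI before running the redeploy script. The bucket, stack name, server version, and file names below are placeholders:

```none
# Inspect the currently installed plugins for a given server version:
aws s3 ls "s3://<GCE install bucket>/<Stack name>/plugins/<Genomics server version>/"
# Add a new Server Plugin:
aws s3 cp new_plugin.cpa "s3://<GCE install bucket>/<Stack name>/plugins/<Genomics server version>/"
# Remove a plugin that is no longer needed:
aws s3 rm "s3://<GCE install bucket>/<Stack name>/plugins/<Genomics server version>/old_plugin.cpa"
```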
## Enabling automatic platform updates

AWS Elastic Beanstalk supports automatic managed updating of the platform running the GCE services, e.g. the host operating system and Docker. To enable this feature you will need to set up a weekly maintenance window as described in the [Elastic Beanstalk documentation](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-platform-update-managed.html). Managed updating will keep the platform updated to the latest minor and patch versions. Updates to new major versions of the OS/Docker will only happen during updates of the CLC Genomics Cloud Engine itself.

Automatic updating is most relevant for the Job Manager Elastic Beanstalk environment, since it runs the only internet-facing service of GCE. It is also possible to enable automatic platform updates for the remaining Elastic Beanstalk environments, but it is important to note that any jobs executing inside the service window might fail if automatic updating is enabled for the license server environment.

The GCE database comes pre-configured with a default maintenance window, which can be customized as described in the [RDS documentation](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#Concepts.DBMaintenance).

# Logging

This chapter describes how to monitor, inspect, and export the log files created by the system using the AWS CloudWatch Management Console. For a general introduction to CloudWatch, see the [Amazon CloudWatch Documentation](http://aws.amazon.com/documentation/cloudwatch/).

> **Notice:** *The system log files are only used for logging system-specific information and for reporting system warnings and errors. They will not contain any confidential information, e.g. references to customer data. The log files will primarily be used by QIAGEN technicians when troubleshooting erroneous behavior reported by customers.*

## Monitoring Log Files

Log files are created by all running EC2 instances instantiated by the system and saved locally on each EC2 instance (e.g. Job Manager, Job Executor, and CLC Genomics Server logs). A _CloudWatch Logs_ agent runs on each EC2 instance, providing an automated way to send log data back to the centralized _CloudWatch Logs_ repository hosted by CloudWatch. The agent contains the following components:

* A plugin to the AWS CLI that pushes log data to CloudWatch Logs.
* A script (daemon) that runs the CloudWatch Logs `aws logs push` command to send data to CloudWatch Logs.
* A cron job ensuring that the daemon is always running.

Logs collected by the _CloudWatch Logs_ agents are accessible from the Log Groups page in the CloudWatch Management Console:

![Log groups available for the Job Handler, Job Executor, and the Job Manager services.](./images/CloudWatch_Logs.png)

## Viewing Log Data

It is possible to inspect the log streams for specific services by selecting a given log group. Each of the streams shown in the screenshot below is named after the EC2 instance that it collects log data from.

![Log streams available for a number of Job Handler instances.](./images/CloudWatch_LogStreams.png)

Select one of the log streams to inspect the actual log data available for a given instance. A list of entries from the log file is presented, similar to the example output shown below. It is possible to navigate the log data and locate a log entry for a given point in time.

![Detailed log information for a selected instance.](./images/CloudWatch_InstanceLog.png)
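Log data can also be retrieved without the console, for example with the AWS CLI. The log group name below is a placeholder; use one of the names shown on the Log Groups page:

```none
# List recent log events containing "ERROR" in a given log group:
aws logs filter-log-events --log-group-name "<job-manager log group>" --filter-pattern "ERROR"
```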
## Changing Log Retention

By default, log files collected by CloudWatch are stored for two weeks, but the retention period can be changed if needed (see [Setting Log Retention](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html) for details). Any log file older than the current retention setting is automatically deleted. Local log files stored on the EC2 instances are automatically deleted when the given instance is shut down or terminated.

> **Notice:** *Set the retention period as short as possible to reduce costs.*

![Log retention can be changed for each log group individually.](./images/CloudWatch_EditRetention.png)

## Exporting Log Data

Log data from selected log groups can be exported to an Amazon S3 bucket for use by other third-party software products. See [Exporting Log Data in Bulk to Amazon S3](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/S3Export.html) for details on how to export log files using either CloudWatch or the Amazon CLI.

![Using CloudWatch to export selected logs within a given timeframe to S3.](./images/CloudWatch_ExportLogs.png)

# Monitoring

## Job status change notifications

Whenever a job arrives in an end state (failed, completed, or cancelled), the system sends a message to the Amazon SNS push notification service. The SNS service provides the means for delivering notifications to a number of subscribers over a variety of channels, including e.g. SQS and e-mail. During installation an SNS topic is created, which a user or a resource can easily subscribe to (SNS topic: _stack-name_-JobStatusChangeEvents). The SNS topic is created without any subscribers, and it is up to the user to add subscribers and create the AWS resources and/or policies that may be necessary for delivering the notifications.

The exact format of the messages received will depend on the type of delivery channel, since Amazon, for example, adds a footer to e-mail notifications and various metadata to the JSON message sent over SQS. The message sent to SNS from GCE simply consists of the raw job ID, i.e. no header information or metadata of any type. Below, an example of the JSON format used for SQS messages is provided. In this message the job ID sent by GCE is stored under the "Message" attribute. In general, we refer to Amazon's documentation for the details of their specific message formats.

```none
{
  "Type" : "Notification",
  "MessageId" : "db1d173a-02da-52fa-8389-466e725a8cef",
  "TopicArn" : "arn:aws:sns:us-west-2:990456615767:kirkegac-JobStatusChangeEvents",
  "Message" : "9eeca625-0aef-43d5-9d39-ed563b125947",
  "Timestamp" : "2017-03-28T14:08:00.638Z",
  "SignatureVersion" : "1",
  "Signature" : "RmofvQNTKSA9IXfSf4LVe1fcm0vnXd3+hdUMPzqorOU6fIUhtV4ux0OYdelwlj6X...",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationServic... .pem",
  "UnsubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&Subscript..."
}
```
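For example, an e-mail subscriber could be added to the topic with the AWS CLI. The region, account ID, stack name, and e-mail address below are placeholders; the actual topic ARN can be looked up in the SNS console:

```none
aws sns subscribe --topic-arn "arn:aws:sns:<region>:<account-id>:<stack-name>-JobStatusChangeEvents" \
  --protocol email --notification-endpoint ops@example.com
```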
## Monitoring the Status Of The System

The system features a dedicated summary page, providing information about recent jobs and the general state of the system. The page is accessed by directing a web browser to the address `https://<web-job-manager>/summary/index.html`, where `<web-job-manager>` is the URL of the Job Manager service. After having authenticated against the OAuth provider, the user will be presented with a system summary as shown below.

![Screen shot showing status of submitted workflows](./images/SummaryPage.png)

The summary page is comprised of 3 main parts: a system overview in the lower right corner, details of the currently selected job in the lower left corner, and a job queue overview in the top part of the page.

## Monitoring with Amazon CloudWatch

### Queue sizes

It is possible to monitor the number of waiting and running analyses using Amazon CloudWatch. The solution emits two custom metrics called `jobmanager.queue.waiting` and `jobmanager.queue.running`. These metrics represent the current number of waiting and running analysis jobs, respectively. An administrator may use this to set up alarms, notifications, and automated actions using the Amazon CloudWatch services. It is recommended to use the "Average" statistic when doing so.

### Monitoring the Auto Scaling Configuration

CloudWatch forms the basis of the auto scaling functionality of GCE. The metric relevant for auto scaling is `jobmanager.loadfactor`, defined as the ratio between the number of waiting and running jobs on the system. When the number of waiting jobs is greater than the number of running jobs, their ratio is greater than 1, providing a simple and reasonable criterion for scaling out the system. Similarly, a ratio of 0 indicates a state suitable for scale-in, as there are no jobs awaiting execution. These criteria are used to define CloudWatch alarms on the `jobmanager.loadfactor` metric, and the alarms are then attached to scaling policies on the job executor auto scaling group. The scaling policies can be inspected from the EC2 dashboard as illustrated below.

![Inspecting scaling policies of an auto scaling group on the EC2 Dashboard](./images/scalinggroup-inspect-scaling-policies.png)

Apart from inspecting scaling group policies, the dashboard can also be used to monitor the state of all running instances, including their instance protection flag. This option is found on the "Instances" tab. Furthermore, the evolution of the number of instances over time can be monitored on the "Monitoring" tab. While the Dashboard allows for adjusting scaling group settings such as the min/max number of instances, it is important to note that QIAGEN only supports changes made using the Command Line Tools.
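Should you want to inspect the raw scaling metric outside the console, a sketch using the AWS CLI might look as follows. The environment name and time range are placeholders:

```none
aws cloudwatch get-metric-statistics --namespace "SKY/Metrics" \
  --metric-name jobmanager.loadfactor \
  --dimensions Name=EnvironmentName,Value=<environment name> \
  --start-time 2019-12-18T00:00:00Z --end-time 2019-12-18T12:00:00Z \
  --period 300 --statistics Average
```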
# Appendix 1: Further installation details

This appendix describes further details of the installation, including e.g. a description of the scripts, the CloudFormation templates, and the AWS roles.

## CloudFormation installation files

The installation procedure uses CloudFormation for creating and configuring the required AWS components. Resources are defined in a set of CloudFormation script files (.template), which are all outlined in the following.

**DbConnection.template**
VPC peering connection between the service VPC and the database VPC.

**GenomicsCloudEngine.template**
Configures and creates all CloudFormation stacks and describes the dependencies between them.

**GenomicsCloudEngineDb.template**
Configures the CLC Genomics Cloud Engine Database in a separate VPC.

**IAMResources**
Sets up IAM roles and policies.

**jobExecutor.template**
Creates the Elastic Beanstalk environment and application for the Job Executor.

**jobManager.template**
Creates the Elastic Beanstalk environment and application for the Job Manager.

**lambdaResources.template**
Lambda functions for installation support.

**LicenseServer.template**
Creates the Elastic Beanstalk environment and application for the required License server.

**networkResources.template**
Network components and network topology configuration. Creates the VPC, subnets, and routing tables.

**NonVPCResources.template**
Shared resources such as the installation bucket and the SQS queues for communication between components.

**stringBuilder.template**
Simple utility for generating various string dependencies, e.g. URLs put together from multiple strings.
## Administration host scripts

The following admin scripts are available in the `~/admin/` directory on the administration host:

* `create-stack.sh`: creates a new stack using the current configuration settings.
* `retrieve-deployables.sh`: transfers the deployable files from the installation location to your account.
* `manifest.sh`: system variables provided for the current CLC Genomics Cloud Engine release.
* `init.sh`: default values for the environment variables used. Most of these values can be overwritten in your `setup.sh` script.
* `job-manager-config/`: directory containing configuration templates for setting up the Authorization Server.
* `tools/validate-stack-setup.sh`: checks the current configuration settings before creating a new stack.
* `tools/restart-licenseserver.sh`: restarts the License Manager service. Used after uploading a new license file.
* `tools/delete-stack.sh`: deletes the deployed stack.
* `tools/configure-job-executor.sh`: used internally by the `retrieve-deployables.sh` and `create-stack.sh` scripts.
* `tools/configure-job-manager.sh`: used internally by the `retrieve-deployables.sh` and `create-stack.sh` scripts.

## Created IAM Resources

When the solution is deployed, the following roles are created:

* `AdminHostRole`: assumed by the administration host that creates the GCE application stack.
* `GCELogInpectorGroup`: group providing privileges to view system logs and metrics.
* `JobManagerRole`: assumed by the Job Manager instances.
* `JobExecutorRole`: assumed by the Job Executor instances.
* `LambdaCloudFormationExecutionRole`: role for the Lambda functions used during installation.

### `AdminHostRole`

This role may only be assumed by `ec2.amazonaws.com` and is used by the administration host. The role grants the following managed policies:

* `job-function/SystemAdministrator`
* `AWSElasticBeanstalkMulticontainerDocker`
* `AmazonEC2ContainerRegistryPowerUser`
* `AmazonS3FullAccess`
* `AWSElasticBeanstalkFullAccess`
* `IAMFullAccess`
* `AWSLambdaFullAccess`

### `GCELogInpectorGroup`

The log inspector group provides privileges for viewing GCE system logs and metrics. The group is empty by default; a sketch for adding a user is shown at the end of this section.

### `JobManagerRole`

This role may only be assumed by `ec2.amazonaws.com`. The role grants the following managed policies:

* `CloudWatchFullAccess`
* `AmazonSQSFullAccess`
* `AmazonS3FullAccess`
* `AmazonEC2ReadOnlyAccess`
* `AWSCloudFormationReadOnlyAccess`
* `AWSElasticBeanstalkWebTier`

### `JobExecutorRole`

This role may only be assumed by `ec2.amazonaws.com`. The role grants the following managed policies:

* `CloudWatchLogsFullAccess`
* `AmazonSQSFullAccess`
* `AmazonS3FullAccess`
* `AmazonEC2FullAccess`
* `AmazonEC2ContainerRegistryReadOnly`
* `AWSCloudFormationReadOnlyAccess`
* `AWSElasticBeanstalkMulticontainerDocker`
* `AWSElasticBeanstalkWorkerTier`

### `LambdaCloudFormationExecutionRole`

This role may only be assumed by `lambda.amazonaws.com`. A custom policy is defined, which grants permissions to create log groups and log streams and to add log events. The policy also grants permissions to describe and list resources in CloudFormation and Elastic Beanstalk, and limited access to autoscaling features.
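For instance, a user could be granted log inspection rights by adding them to the group with the AWS CLI. The user name below is a placeholder, and the exact group name in your account may carry a stack-specific prefix:

```none
aws iam add-user-to-group --group-name GCELogInpectorGroup --user-name <IAM user name>
```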
# Appendix 2: Certificate management

## Updating SSL/TLS certificate

The GCE SSL/TLS certificate is easily updated from ACM as described in the [AWS documentation](https://docs.aws.amazon.com/acm/latest/userguide/import-reimport.html). The ARN of the certificate used by GCE can be found in `~/setup.sh` on the administration host.

## Using a Self-signed Certificate

The installation chapter describes how to configure GCE with either a certificate issued by a root CA or a self-signed certificate, e.g. for evaluation purposes or staging systems. While the installation of a self-signed certificate is handled transparently by the install scripts, it may be useful to understand some of the finer details of running GCE with a self-signed certificate. You will be guided through the process of generating a self-signed certificate using command line tools (`openssl`), configuring the CLC Genomics Cloud Engine service to use this certificate, and trusting the certificate on the clients. Note that your organization may impose strict security requirements for certificate trust and management. You should always consult your local security team before deploying the service in production.

### Generating a Self-signed Certificate

This guide uses the open source command line tool `openssl`, which is available on all platforms supported by the CLC Genomics Cloud Engine. The tool is only used for generating the SSL private key and certificate offline. The setup and admin scripts include a `ssl/create-self-signed-cert.sh` bash script that automates the process. The script assumes access to `openssl`, `aws` (Amazon command line tools) and `keytool` (Java command line tool for trust and key store management) on the system path. Before running the script, note the following:

* The script generates an SSL private key that is stored in clear text in the file system. The SSL private key protects the HTTPS connections that are made to the CLC Genomics Cloud Engine service. Care must be taken to ensure that this key is kept secret.
* The script assumes certain defaults for the information that is written into the certificate, in particular the "Subject Alternative Name" extension that HTTPS clients use to validate that they are connected to the correct host. Verify that the `ssl/openssl.cnf` file contains the correct information. Wildcard certificates (with e.g. `DNS.1 = *.us-west-2.elasticbeanstalk.com`) may be sufficient for evaluation purposes, but a production certificate should be specific.
* The self-signed certificate has a default lifetime of one year. This may be changed in the script if desired.

If you are using a platform without bash, you can view the script in an editor and perform the steps manually in a terminal, as sketched below.
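A minimal sketch of those manual steps is shown below. It assumes OpenSSL 1.1.1 or later; the host name, file names, and certificate name are placeholders, and the authoritative defaults are in `ssl/openssl.cnf` and the `ssl/create-self-signed-cert.sh` script itself:

```none
# Generate a private key and a self-signed certificate valid for one year (placeholders throughout):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ssl/key.pem -out ssl/cert.pem \
  -subj "/CN=<your GCE host name>" \
  -addext "subjectAltName=DNS:<your GCE host name>"
# Upload the certificate and private key to Amazon IAM for use by the load balancer:
aws iam upload-server-certificate --server-certificate-name <certificate name> \
  --certificate-body file://ssl/cert.pem --private-key file://ssl/key.pem
```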
Once the script completes, it will have issued a certificate and uploaded it to Amazon IAM along with the SSL private key. The CLC Genomics Cloud Engine stack will use this certificate in the load balancer. The script also creates a trust store called `gce-cacerts`, which consists of the default trust store from the Java Runtime Environment with the self-signed certificate added. In the following, you will be guided through configuring certificate trust in the CLC Genomics Cloud Engine client.

### Configuring Trust for the CLC Genomics Cloud Engine Command Line Tools

The CLC Genomics Cloud Engine Command Line Tools make connections to the CLC Genomics Cloud Engine service over HTTPS to ensure confidentiality of the data transmitted to the service. To validate that the client is connecting to the correct service, the SSL server certificate is validated, as is common practice. With a self-signed certificate (or a certificate issued from a corporate PKI), this validation will typically fail because the certificate (or root CA certificate) is not in the trust store. Therefore, the client needs to be properly configured to trust the service, which can be done in two ways: either by having the tool contact the CLC Genomics Cloud Engine service configured with the certificate and learn the certificate, or by explicitly trusting the certificate based on the `ssl/cert.pem` file generated by the script used earlier in this chapter. Please see the CLC Genomics Cloud Engine Command Line Tools User Manual for further instructions.
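For illustration, explicitly trusting the certificate in a Java trust store might look like the following sketch, where the alias is a placeholder; the Command Line Tools User Manual remains the authoritative reference for configuring client trust:

```none
# Import the self-signed certificate into the gce-cacerts trust store created earlier:
keytool -importcert -alias gce -file ssl/cert.pem -keystore gce-cacerts
```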