I was recently assigned the task of evaluating Dynatrace, an application performance monitoring tool.
I configured a three-node cluster with one environment ActiveGate and one cluster ActiveGate node. The following how-to covers the installation of the three-node Dynatrace Managed cluster.
I received licenses for 60 nodes. Based on the sizing information available on the Dynatrace site, I configured VMs with slightly higher specs than the Micro node size.
RAM: 64GB
vCPU: 8
Disks: 2

Disk 1 (50GB), LVM with an XFS file system to enable online expansion in future:
/boot 1G
swap 8G
/ 41G

Disk 2 (500GB), LVM with an XFS file system to enable online expansion in future:
/opt 50GB
/var/opt 450GB
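The second disk can be carved up with standard LVM commands. Below is a minimal sketch of provisioning Disk 2 as laid out above; the device name /dev/sdb is an assumption (the VG name "dt" matches the /dev/mapper/dt-* names seen later in the df output). With DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
# Sketch: provision Disk 2 (500GB) as /opt (50G) and /var/opt (rest), per the
# layout above. /dev/sdb is an assumed device name -- adjust for your setup.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run pvcreate /dev/sdb
run vgcreate dt /dev/sdb
run lvcreate -n opt -L 50G dt
run lvcreate -n var_opt -l 100%FREE dt
run mkfs.xfs /dev/dt/opt
run mkfs.xfs /dev/dt/var_opt
run mkdir -p /opt /var/opt
run mount /dev/dt/opt /opt
run mount /dev/dt/var_opt /var/opt
```

Set DRY_RUN=0 (as root) to actually run the commands; remember to add the mounts to /etc/fstab as well.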
After installing the virtual machines, I executed the following command to display the disk layout:
df -h
The output looks like this:
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs                 32G     0   32G   0% /dev
tmpfs                    32G     0   32G   0% /dev/shm
tmpfs                    32G  9.0M   32G   1% /run
tmpfs                    32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/rhel-root    41G  3.2G   38G   8% /
/dev/sda1              1014M  182M  833M  18% /boot
/dev/mapper/dt-opt       50G  4.2G   46G   9% /opt
/dev/mapper/dt-var_opt  442G  3.2G  439G   1% /var/opt
tmpfs                   6.3G     0  6.3G   0% /run/user/0
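Before starting the installer, it is worth confirming the Dynatrace mounts actually have the space you intended. The helper below (a sketch; `check_mount` and the thresholds are mine, chosen to mirror the layout above) parses `df` rather than eyeballing it.

```shell
# Sketch: verify a mount point meets a minimum size (in GB) before installing.
check_mount() {
  # $1 = mount point, $2 = minimum size in GB
  size_gb=$(df -BG --output=size "$1" 2>/dev/null | tail -1 | tr -dc '0-9')
  size_gb=${size_gb:-0}
  if [ "$size_gb" -ge "$2" ]; then
    echo "$1 OK (${size_gb}G >= ${2}G)"
  else
    echo "$1 TOO SMALL (${size_gb}G < ${2}G)"
  fi
}
check_mount /opt 45
check_mount /var/opt 400
```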
Three nodes with the following hostnames and IPs were configured:
Hostname | IP Address
---|---
lhr-dt01.induslevel.com | 172.16.50.101
lhr-dt02.induslevel.com | 172.16.50.110
lhr-dt03.induslevel.com | 172.16.50.122
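Cluster nodes need to resolve each other reliably. One simple safeguard (my addition, not a Dynatrace requirement) is to pin the table above into /etc/hosts on every VM so the cluster survives a DNS outage:

```shell
# Sketch: emit /etc/hosts entries for the three cluster nodes listed above.
hosts_entries() {
  cat <<'EOF'
172.16.50.101 lhr-dt01.induslevel.com lhr-dt01
172.16.50.110 lhr-dt02.induslevel.com lhr-dt02
172.16.50.122 lhr-dt03.induslevel.com lhr-dt03
EOF
}
hosts_entries                  # review first
# hosts_entries >> /etc/hosts  # then append as root on each node
```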
We will configure the ActiveGates later on.
After installing the virtual machines, register them with the Red Hat subscription portal to enable the repos (for Red Hat based virtual machines). You can skip this step for CentOS 7 virtual machines.
subscription-manager register --username [email protected]
subscription-manager attach --auto
Update all the virtual machines:
yum update -y
I find some tools from the EPEL repository very useful, so enable the EPEL repo on Red Hat:
rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Install a few helper packages:
yum install -y wget mlocate telnet htop atop iftop open-vm-tools && updatedb
Install ntpd to sync time; otherwise the cluster setup package will report an error and abort:
yum install ntp -y
Update the NTP servers if you have internal NTP servers configured; otherwise skip this step:
sed -i '/^server 2/d' /etc/ntp.conf
sed -i '/^server 3/d' /etc/ntp.conf
sed -i 's/^server 0.*/server lhr-ntp1.induslevel.com iburst/g' /etc/ntp.conf
sed -i 's/^server 1.*/server lhr-ntp2.induslevel.com iburst/g' /etc/ntp.conf
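In-place sed edits on /etc/ntp.conf are hard to undo, so it can be worth previewing them on a scratch copy first. The snippet below (my addition) applies the same edits to a minimal stand-in file built from the stock CentOS pool entries:

```shell
# Sketch: preview the NTP server substitutions on a scratch copy of ntp.conf.
tmp=$(mktemp)
printf '%s\n' \
  'server 0.rhel.pool.ntp.org iburst' \
  'server 1.rhel.pool.ntp.org iburst' \
  'server 2.rhel.pool.ntp.org iburst' \
  'server 3.rhel.pool.ntp.org iburst' > "$tmp"

sed -i '/^server 2/d; /^server 3/d' "$tmp"
sed -i 's/^server 0.*/server lhr-ntp1.induslevel.com iburst/' "$tmp"
sed -i 's/^server 1.*/server lhr-ntp2.induslevel.com iburst/' "$tmp"
cat "$tmp"
```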
Review the changes made to the ntp.conf file:
egrep -v "^$|#" /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server lhr-ntp1.induslevel.com iburst
server lhr-ntp2.induslevel.com iburst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
Enable the ntpd daemon on startup and check the sync status. The first time, you might have to force a time sync via the ntpdate command:
ntpdate -u lhr-ntp1.induslevel.com
systemctl enable ntpd && systemctl start ntpd && systemctl status ntpd && ntpq -c peers
ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-09-11 21:06:09 PKT; 31min ago
 Main PID: 1421 (ntpd)
    Tasks: 1
   CGroup: /system.slice/ntpd.service
           └─1421 /usr/sbin/ntpd -u ntp:ntp -g

Sep 11 21:06:09 lhr-dt01.induslevel.com ntpd[1421]: Listen normally on 4 lo ::1 UDP 123
Sep 11 21:06:09 lhr-dt01.induslevel.com ntpd[1421]: Listen normally on 5 ens192 fe80::917d:c4a4:4545:8df2 UDP 123
Sep 11 21:06:09 lhr-dt01.induslevel.com ntpd[1421]: Listening on routing socket on fd #22 for interface updates
Sep 11 21:06:09 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 c016 06 restart
Sep 11 21:06:09 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Sep 11 21:06:09 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 c011 01 freq_not_set
Sep 11 21:09:27 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 c614 04 freq_mode
Sep 11 21:24:43 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 0612 02 freq_set kernel 152.358 PPM
Sep 11 21:24:43 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 061c 0c clock_step +0.152256 s
Sep 11 21:24:44 lhr-dt01.induslevel.com ntpd[1421]: 0.0.0.0 c618 08 no_sys_peer

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*11.13.164.111   129.250.35.251   3 u    5   64  377    1.425  -93.124  44.570
 12.14.164.112   .STEP.          16 u    -  512    0    0.000    0.000   0.000
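In the `ntpq -c peers` billboard, the `*` prefix marks the peer ntpd has actually selected for synchronization. If you script this setup, a small helper (my sketch, not part of any tool) can check for that marker instead of reading the table by hand:

```shell
# Sketch: succeed only if `ntpq -c peers` output shows a selected ('*') peer.
ntp_synced() {
  # reads the peers billboard on stdin; rows start after the two header lines
  awk 'NR > 2 && /^\*/ { found = 1 } END { if (found) exit 0; exit 1 }'
}

if ntpq -c peers | ntp_synced; then
  echo "time is synced"
else
  echo "no sync peer yet"
fi
```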
For demo purposes, I changed the default firewall zone to trusted so that no traffic is blocked. The Dynatrace installation script takes care of enabling the required firewall access.
firewall-cmd --set-default-zone=trusted
I disabled SELinux as well:
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
Reboot the machines for the SELinux change to take effect:
reboot
It is better to take snapshots of the virtual machines at this point, so that we can revert to this known-good state and restart the installation from the beginning if needed.
Download the package, which is around 1.1GB. A special link was provided by the Dynatrace team, as the Managed server package was not available on their website:
wget -O dynatrace-managed.sh "https://mcsvc.dynatrace.com/downloads/installer/get/latest?token=xxxxxxxxxxxxxxxxxxxxxxxxxxx"
Once the package is downloaded, you can use the following commands to check the integrity of the installer script:
wget -qO dt-root.cert.pem https://mcsvc.dynatrace.com/dt-root.cert.pem
wget -qO dynatrace-managed.sh.sig "https://mcsvc.dynatrace.com/downloads/signature?filename=$(grep -am 1 'ARCH_FILE_NAME=' dynatrace-managed.sh | cut -d= -f2 | sed 's/.tar.gz$//')"
openssl cms -inform PEM -binary -verify -CAfile dt-root.cert.pem -in dynatrace-managed.sh.sig -content dynatrace-managed.sh > /dev/null
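The middle step above may look opaque: it pulls the archive name embedded in the installer script itself (the `ARCH_FILE_NAME=` line) and uses it to request the matching signature. Isolated as a function and demonstrated on a stand-in file (the file contents below are illustrative, not the real installer):

```shell
# Sketch: extract the embedded archive name from an installer script,
# exactly as the signature-download pipeline above does.
arch_name() {
  grep -am 1 'ARCH_FILE_NAME=' "$1" | cut -d= -f2 | sed 's/.tar.gz$//'
}

# stand-in installer header, for illustration only
tmp=$(mktemp)
echo 'ARCH_FILE_NAME=dynatrace-managed-1.224.84.tar.gz' > "$tmp"
arch_name "$tmp"
```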
You can also verify the integrity via the installer itself:
/bin/sh dynatrace-managed.sh --self-check
Start the installation using the following command. You will get the license code from the Dynatrace sales team:
/bin/sh dynatrace-managed.sh --license XXXXXXXXXXXXXXXX
During installation, I selected the default options.
Starting Dynatrace 1.224.84.20210902-165955 installer ... OK
To continue installation please accept our Terms of Use, which are available at https://www.dynatrace.com/company/legal/customers/
By submitting 'Agree' you are confirming that you have read, understood, and accept to the Terms of Use. Otherwise, press Ctrl+C to cancel installation.
? Agree
Verifying system compatibility ... OK
Verifying RAM size ... OK
Type the full path to your directory for Dynatrace binaries [/opt/dynatrace-managed]?
Type the full path to your directory for Dynatrace data [/var/opt/dynatrace-managed]?
Do you want to keep all Dynatrace data in this directory? [y]?
Do you want to join an existing Dynatrace cluster (y/n)? [n]?
Enter the command which should be used for executing commands with superuser privileges. This command should contain variable $CMD. [sudo -n $CMD]?
Verifying disk space ... OK
Testing connection to Dynatrace Mission Control ... OK
Verifying system connectivity ... OK
Preparing system user for Dynatrace ... OK
Initializing installation ... OK
Checking user permissions ... OK
Downloading Dynatrace OneAgent. This may take a few minutes ... OK
Installing. This may take a few minutes ... OK
Fixing selinux rules for binaries if needed ... Skipped
Installing Nodekeeper ... OK
Setting up cluster configuration. This may take a few minutes ... OK
Starting Dynatrace. This may take up to half an hour ... OK
Configuring Dynatrace. This may take a few minutes ... OK
Installation completed successfully after 11 minutes 55 seconds.
Dynatrace binaries are located in directory /opt/dynatrace-managed
Dynatrace data is located in directory /var/opt/dynatrace-managed
Dynatrace metrics repository is located in directory /var/opt/dynatrace-managed/cassandra
Dynatrace Elasticsearch store is located in directory /var/opt/dynatrace-managed/elasticsearch
Dynatrace server store is located in directory /var/opt/dynatrace-managed/server
Dynatrace session replay store is located in directory /var/opt/dynatrace-managed/server/replayData

You can now log into your Dynatrace Server at https://172.16.50.101
Once the installation is complete, go to the following address:
https://172.16.50.101/cmc#cm/nodeDownload;gf=all
After the initial configuration, the page will give you instructions to set up the other nodes.
Execute the following command on node 2. This will download the installer package from the first node:
wget -O managed-installer.sh https://ghbxxx.dynatrace-managed.com/nodeinstaller/dt0c01.XXXXXXXXXXXXXXXXXXXXXXXXXX
After the download, execute the following command to start the installation. During script execution, I selected the default options:
/bin/sh managed-installer.sh --seed-auth dt0c01.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Starting Dynatrace 1.224.84.20210902-165955 installer ... OK
Verifying system compatibility ... OK
Verifying RAM size ... OK
Type the full path to your directory for Dynatrace binaries [/opt/dynatrace-managed]?
Type the full path to your directory for Dynatrace data [/var/opt/dynatrace-managed]?
Do you want to keep all Dynatrace data in this directory? [y]?
Enter the command which should be used for executing commands with superuser privileges. This command should contain variable $CMD. [sudo -n $CMD]?
Verifying disk space ... OK
Testing connection to Dynatrace Mission Control ... OK
Verifying system connectivity ... OK
Collecting information about Dynatrace cluster ... OK
Preparing Dynatrace cluster for extension ...
Waiting 6 minutes for permission to add new node ...
Permission status: pre-check pending
Checking again in 20 seconds (5 minutes 59 seconds left)
Permission status: Join is possible
Permission is granted
Adding this node to Dynatrace cluster ... OK
Preparing system user for Dynatrace ... OK
Extending existing cluster (seed node IP: 172.16.50.101) with new node.
Initializing installation ... OK
Checking user permissions ... OK
Downloading Dynatrace OneAgent. This may take a few minutes ... OK
Installing. This may take a few minutes ... OK
Fixing selinux rules for binaries if needed ... Skipped
Installing Nodekeeper ... OK
Setting up cluster configuration. This may take a few minutes ... OK
Starting Dynatrace. This may take up to half an hour ... OK
Configuring Dynatrace. This may take a few minutes ... OK
Installation completed successfully after 13 minutes 21 seconds.
Dynatrace binaries are located in directory /opt/dynatrace-managed
Dynatrace data is located in directory /var/opt/dynatrace-managed
Dynatrace metrics repository is located in directory /var/opt/dynatrace-managed/cassandra
Dynatrace Elasticsearch store is located in directory /var/opt/dynatrace-managed/elasticsearch
Dynatrace server store is located in directory /var/opt/dynatrace-managed/server
Dynatrace session replay store is located in directory /var/opt/dynatrace-managed/server/replayData

You can now log into your Dynatrace Server at https://172.16.50.110
Repeat the same steps for the third node.
Execute the following command on node 3. This will download the installer package from the first node:
wget -O managed-installer.sh https://ghbxxx.dynatrace-managed.com/nodeinstaller/dt0c02.XXXXXXXXXXXXXXXXXXXXXXXXXX
After the download, execute the following command to start the installation. During script execution, I selected the default options:
/bin/sh managed-installer.sh --seed-auth dt0c02.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Starting Dynatrace 1.224.84.20210902-165955 installer ... OK
Verifying system compatibility ... OK
Verifying RAM size ... OK
Type the full path to your directory for Dynatrace binaries [/opt/dynatrace-managed]?
Type the full path to your directory for Dynatrace data [/var/opt/dynatrace-managed]?
Do you want to keep all Dynatrace data in this directory? [y]?
Enter the command which should be used for executing commands with superuser privileges. This command should contain variable $CMD. [sudo -n $CMD]?
Verifying disk space ... OK
Testing connection to Dynatrace Mission Control ... OK
Verifying system connectivity ... OK
Collecting information about Dynatrace cluster ... OK
Preparing Dynatrace cluster for extension ...
Waiting 6 minutes for permission to add new node ...
Permission status: pre-check pending
Checking again in 20 seconds (5 minutes 59 seconds left)
Permission status: Join is possible
Permission is granted
Adding this node to Dynatrace cluster ... OK
Preparing system user for Dynatrace ... OK
Extending existing cluster (seed node IP: 172.16.50.101) with new node.
Initializing installation ... OK
Checking user permissions ... OK
Downloading Dynatrace OneAgent. This may take a few minutes ... OK
Installing. This may take a few minutes ... OK
Fixing selinux rules for binaries if needed ... Skipped
Installing Nodekeeper ... OK
Setting up cluster configuration. This may take a few minutes ... OK
Starting Dynatrace. This may take up to half an hour ... OK
Configuring Dynatrace. This may take a few minutes ... OK
Installation completed successfully after 13 minutes 21 seconds.
Dynatrace binaries are located in directory /opt/dynatrace-managed
Dynatrace data is located in directory /var/opt/dynatrace-managed
Dynatrace metrics repository is located in directory /var/opt/dynatrace-managed/cassandra
Dynatrace Elasticsearch store is located in directory /var/opt/dynatrace-managed/elasticsearch
Dynatrace server store is located in directory /var/opt/dynatrace-managed/server
Dynatrace session replay store is located in directory /var/opt/dynatrace-managed/server/replayData

You can now log into your Dynatrace Server at https://172.16.50.122
Now you can access the cluster via the IP of any node, or via a URL that has all three node IPs added as A records.
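As a final sanity check, you can loop over the three nodes and confirm each web UI answers on HTTPS. This is my own sketch, not part of the Dynatrace tooling; with DRY_RUN=1 (the default here) it only prints the curl commands, and -k skips certificate verification, which is acceptable for a lab.

```shell
# Sketch: check that each cluster node's web UI responds over HTTPS.
node_check() {
  for ip in "$@"; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "curl -ksf -o /dev/null https://$ip/"
    else
      if curl -ksf -o /dev/null "https://$ip/"; then
        echo "$ip OK"
      else
        echo "$ip UNREACHABLE"
      fi
    fi
  done
}
node_check 172.16.50.101 172.16.50.110 172.16.50.122
```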