Maintaining and Changing the Implementation

This section of the documentation covers maintaining, upgrading, and scaling the reference architecture implementation.

Upgrading LogScale

You can upgrade LogScale by setting logscale_image_version in your TFVAR_FILE to the desired target version:

ini
logscale_image_version = "1.179.0"

Apply the update:

shell
terraform apply -target module.logscale -var-file $TFVAR_FILE

This updates the Kubernetes manifest that defines the LogScale cluster, triggering the Humio Operator to upgrade the cluster appropriately.
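
To follow the upgrade as it progresses, you can watch the HumioCluster resource and the pods the operator cycles through. The logscale namespace below is an assumption; substitute the namespace derived from your k8s_namespace_prefix:

shell
# Watch the HumioCluster resource managed by the Humio Operator
kubectl -n logscale get humiocluster
# Watch the pods roll as the operator upgrades them
kubectl -n logscale get pods -w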

Scaling the Architecture

All scaling operations should be performed during a maintenance window. Keep the following points in mind:

  • When changing AKS node VM types, the entire node pool is replaced, which can result in downtime. AKS does this by:
    • Creating a temporary node pool
    • Migrating pods on that node pool
    • Terminating the old node pool
    • Creating a new final node pool
    • Migrating pods to the final node pool
    • Terminating the temporary node pool
  • When changing pod resourcing, some PVCs will not be replaced. For example, if a Kafka node has a persistent volume claim of 1TB and the new size calls for 2TB, the 1TB PVC will not be replaced without manual intervention (see the sketch after this list).
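
As a sketch of that manual intervention: if the storage class allows volume expansion, the existing claim can be patched to the new size instead of being replaced. The namespace and PVC name below are assumptions; Strimzi derives PVC names from the Kafka cluster name, so check kubectl get pvc first:

shell
# Confirm the storage class supports expansion (ALLOWVOLUMEEXPANSION: true)
kubectl get storageclass
# Patch the claim to the 2TB target (2Ti in Kubernetes notation); PVC name is hypothetical
kubectl -n logscale patch pvc data-0-kafka-broker-0 --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'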

Kafka Node Pod Scaling

The current Strimzi module builds Kafka nodes in two groups:

  1. Controller/Broker nodes
  2. Broker-only nodes

Depending on the number of nodes selected for an architecture, you will have 3, 5, or 7 nodes acting as controllers for the environment. This is determined by the following locals in Terraform:

ini
# Convert the given number of broker pods into a controller/broker split and account for the smallest
# architecture of 3 nodes
locals {
  possible_controller_counts = [for c in [3,5,7] : c if c < var.kafka_broker_pod_replica_count]
  controller_count = var.kafka_broker_pod_replica_count <= 3 ? 3 : max(local.possible_controller_counts...)
  broker_count = var.kafka_broker_pod_replica_count <= 3 ? 0 : var.kafka_broker_pod_replica_count - local.controller_count
 
  kubernetes_namespace = "${var.k8s_namespace_prefix}"
}
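
As a worked example, with kafka_broker_pod_replica_count = 9 these locals evaluate as follows:

ini
possible_controller_counts = [3, 5, 7]  # every candidate below 9
controller_count           = 7          # the largest candidate
broker_count               = 2          # 9 - 7 broker-only nodes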

LogScale Kubernetes Pod Scaling

LogScale pods in this architecture are designed to have a one-to-one relationship with the underlying AKS nodes. However, while the underlying Kubernetes nodes support autoscaling, the Humio Operator does not currently support autoscaling operations. In most cluster sizes, the desired pod count is less than the maximum number of Kubernetes nodes for that tier. For example, in a "small:advanced" architecture, the desired digest pod count is 6 while the maximum AKS digest node count is 12. In this situation, you can expand the cluster by updating the desired node count in cluster_size.tpl to a new target that does not exceed the maximum AKS node count, as sketched below.
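
As an illustration only, the digest entry in cluster_size.tpl might look something like the following; the key name is an assumption, so match it against the template shipped with this architecture:

ini
# Hypothetical excerpt from cluster_size.tpl for a "small:advanced" cluster
digest_node_desired_count = 8  # raised from 6; must stay at or below the AKS maximum of 12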

After saving cluster_size.tpl, run:

shell
terraform apply -target module.logscale -var-file $TFVAR_FILE

Scaling Up Infrastructure

This section describes scaling up the infrastructure, for example from xsmall to small to medium, and so on.

Scaling up generally consists of:

  • Changing AKS node pool sizes
  • Changing AKS node pool VM types
  • Changing Kubernetes-assigned resources
  • Changing Kubernetes pod counts

The process is as follows:

  1. Update your tfvars to the new size (a hedged example follows these steps).
  2. Apply infrastructure changes:
    shell
    terraform apply -target module.azure-kubernetes -var-file $TFVAR_FILE
  3. Apply the Kubernetes changes:
    shell
    terraform apply -target module.kafka -target module.logscale -var-file $TFVAR_FILE
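
For step 1, the change is typically a single variable in your TFVAR_FILE. The variable name and values below are assumptions for illustration; use whatever your tfvars actually defines for sizing:

ini
# Hypothetical tfvars change: move one size up
cluster_size = "small"  # previously "xsmall"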

When the process is complete, clean out any terminated pods from the LogScale Cluster Node Management UI.

Scaling Infrastructure Out

This section describes scaling out the infrastructure, for example from basic to ingress to advanced.

Scaling out generally consists of:

  • Adding additional subnets
  • Adding new Kubernetes node pools
  • Redefining the LogScale cluster

The process is as follows:

  1. Update your tfvars to the new architecture type.
  2. Apply infrastructure changes:
    shell
    terraform apply -target module.azure-core -target module.azure-keyvault -target module.azure-kubernetes -target module.logscale-storage-account -var-file $TFVAR_FILE
  3. Apply the Kubernetes changes:
    shell
    terraform apply -target module.logscale -var-file $TFVAR_FILE
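
Because scaling out creates new subnets and node pools, it is worth previewing the infrastructure changes before applying them. terraform plan accepts the same -target and -var-file arguments as the apply commands above:

shell
terraform plan -target module.azure-core -target module.azure-keyvault \
  -target module.azure-kubernetes -target module.logscale-storage-account \
  -var-file $TFVAR_FILE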

When the process is complete, clean out any terminated pods from the LogScale Cluster Node Management UI.

Important

Moving from larger architectures to smaller, for example from advanced to basic, is not recommended.