Requirements and Build Information
The following sections describe the requirements and prerequisites for the Azure reference platform.
Prerequisites
Before starting the deployment, ensure you have the following tools and access:
Terraform 1.10.5: Terraform is the infrastructure-as-code tool used to manage the deployment. Version 1.10.5 is recommended at this time due to known issues in 1.11.0/1.11.1.
kubectl 1.32.2+: kubectl is the command-line tool for interacting with the Kubernetes cluster.
Azure CLI 2.68.0+: The Azure CLI (az) allows you to interact with Azure services from the command line.
Owner Access to Azure Subscription: Full architecture deployment requires Owner access to the target Azure subscription.
Installing Helm 3.17.0 or later is also recommended, though not required, for troubleshooting Helm-based Kubernetes deployments.
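Before starting, you can confirm the installed tool versions from a shell. The following commands are a quick check and assume the tools are already on your PATH:
terraform version
kubectl version --client
az version
helm version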
Azure Access Requirements
The account running this Terraform must be assigned the Owner role for the target subscription because the Terraform assigns roles to the managed identity used by the Kubernetes control plane. Role assignments for the Kubernetes cluster are as follows:
Reader - scoped to the Disk Encryption Set created during this process. Allows the identity to read the disk encryption set used for node disk encryption.
Network Contributor - scoped to the resource group created by this Terraform. Allows the identity to bind a managed load balancer to a public IP created during the Terraform run for environment access.
Key Vault Crypto User - scoped to the Azure Key Vault created during this process. Allows the disk encryption set's managed identity to use the key vault for disk encryption.
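As a quick sanity check of the Owner requirement, you can list your own role assignments with the Azure CLI. The subscription ID and assignee values below are placeholders for your environment:
az role assignment list \
  --assignee "<user-or-service-principal-object-id>" \
  --scope "/subscriptions/<subscription-id>" \
  --query "[?roleDefinitionName=='Owner']" \
  --output table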
IP-based Access Restrictions
There are four variables that control public access to the environment. You set these in the Terraform variable file referenced by TFVAR_FILE, as shown in the following example:
ip_ranges_allowed_to_kubeapi = ["192.168.3.32/32", "192.168.4.1/32"]
ip_ranges_allowed_https = ["192.168.1.0/24"]
ip_ranges_allowed_to_bastion = ["192.168.3.32/32", "192.168.4.1/32"]
ip_ranges_allowed_kv_access = ["192.168.3.32/32", "192.168.4.1/32"]
Variable | Description |
---|---|
ip_ranges_allowed_to_kubeapi | The Kubernetes API is publicly available by default. This variable limits access to the API, which affects where Kubernetes API commands can be run from. |
ip_ranges_allowed_https | The ingress endpoint for LogScale UI access and ingestion is publicly available by default. This variable limits access to the listed ranges. |
ip_ranges_allowed_to_bastion | If you choose to build a bastion host during this process, this variable limits SSH access to the host. |
ip_ranges_allowed_kv_access | Access to the Azure Key Vault is limited to the ranges defined here. |
Note: ip_ranges_allowed_kv_access and ip_ranges_allowed_to_kubeapi must be set correctly for Terraform to operate as expected.
Kubernetes Namespace Separation
Multiple namespaces are created in Kubernetes during the Terraform apply process to promote security and separation between applications. All namespaces are created using the variable var.k8s_namespace_prefix (default: log). Assuming the default value for k8s_namespace_prefix, Terraform creates the following namespaces in Kubernetes; an example of overriding the prefix follows the table.
Namespace | Description |
---|---|
log | LogScale Humio Operator, Strimzi Kafka Brokers / Controllers (Optional), Strimzi Kafka Operator (Optional), Ingestion Generator Pods (Optional) |
log-topolvm | TopoLVM Controller and Nodes |
log-cert | Cert Manager |
log-ingress | NGINX ingress controllers |
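To change the prefix, set the variable in your TFVAR_FILE. The value below is illustrative, and the resulting namespace names follow the prefix/suffix pattern shown in the table above:
# Illustrative TFVAR_FILE entry; with this value the namespaces become
# logscale-prod, logscale-prod-topolvm, logscale-prod-cert, and logscale-prod-ingress.
k8s_namespace_prefix = "logscale-prod"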
Cluster Size Configuration
The cluster_size.tpl file specifies the available parameters for different sizes of LogScale clusters. This template defines various cluster sizes (for example xsmall, small, and medium) and their associated configurations, including node counts, instance types, disk sizes, and resource limits. The Terraform configuration uses this template to dynamically configure the LogScale deployment based on the selected cluster size.
The data from cluster_size.tpl is retrieved and rendered by the locals.tf file, which uses the jsondecode function to parse the template and select the appropriate cluster size configuration based on the logscale_cluster_size variable.
Example:
# Local Variables
locals {
# Render a template of available cluster sizes
cluster_size_template = jsondecode(templatefile("${path.module}/cluster_size.tpl", {}))
cluster_size_rendered = {
for key in keys(local.cluster_size_template) :
key => local.cluster_size_template[key]
}
cluster_size_selected = local.cluster_size_rendered[var.logscale_cluster_size]
}
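Selecting a size is then a matter of setting the variable in your TFVAR_FILE. The value below is one of the sizes named above; the commented line sketches how the selected configuration might be consumed elsewhere, using a hypothetical key name rather than an actual template key:
# Illustrative TFVAR_FILE entry selecting a size defined in cluster_size.tpl.
logscale_cluster_size = "xsmall"

# Hypothetical usage elsewhere in the configuration (the key name is illustrative):
# node_count = local.cluster_size_selected["logscale_node_count"]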
Setting LogScale Configuration Variables
LogScale is configured with a default set of configuration values that can be overridden or extended by defining var.user_logscale_envvars in your TFVAR_FILE. For example, to change the default values for LOCAL_STORAGE_MIN_AGE_DAYS and LOCAL_STORAGE_PERCENTAGE, set the following in your TFVAR_FILE:
user_logscale_envvars = [
  { "name" = "LOCAL_STORAGE_MIN_AGE_DAYS", "value" = "7" },
  { "name" = "LOCAL_STORAGE_PERCENTAGE", "value" = "85" }
]
This mechanism also supports referencing Kubernetes secrets should you provision them outside this Terraform:
user_logscale_envvars = [
{
"name" = "SECRET_LOGSCALE_CONFIGURATION",
"valueFrom" = {
"secretKeyRef" = {
"key" = "secret_value"
"name" = "kubernetes_secret_name"
}
}
},
{ "name" = "LOCAL_STORAGE_MIN_AGE_DAYS", "value" = "7" },
{ "name" = "LOCAL_STORAGE_PERCENTAGE", "value" = "85" }
]
The default environment values set by this Terraform are as follows:
Configuration Name | Value |
---|---|
AZURE_BUCKET_STORAGE_ENABLED | true |
AZURE_STORAGE_USE_HTTP_PROXY | false |
AZURE_STORAGE_ACCOUNTNAME | var.azure_storage_account_name |
AZURE_STORAGE_BUCKET | var.azure_storage_container_name |
AZURE_STORAGE_ENDPOINT_BASE | var.azure_storage_endpoint_base |
AZURE_STORAGE_OBJECT_KEY_PREFIX | var.name_prefix |
AZURE_STORAGE_REGION | var.azure_storage_region |
AZURE_STORAGE_ACCOUNTKEY | Kubernetes Secret: var.k8s_secret_storage_access_key |
AZURE_STORAGE_ENCRYPTION_KEY | Kubernetes Secret: var.k8s_secret_encryption_key |
KAFKA_COMMON_SECURITY_PROTOCOL | SSL |
USING_EPHEMERAL_DISKS | true |
LOCAL_STORAGE_PERCENTAGE | 80 |
LOCAL_STORAGE_MIN_AGE_DAYS | 1 |
KAFKA_BOOTSTRAP_SERVERS | var.kafka_broker_servers |
KAFKA_SERVERS | var.kafka_broker_servers |
PUBLIC_URL | https://${var.logscale_public_fqdn} |
AUTHENTICATION_METHOD | static |
STATIC_USERS | Kubernetes Secret: var.k8s_secret_static_user_logins |
KAFKA_COMMON_SSL_TRUSTSTORE_TYPE * | PKCS12 |
KAFKA_COMMON_SSL_TRUSTSTORE_PASSWORD * | Kubernetes Secret: local.kafka_truststore_secret_name |
KAFKA_COMMON_SSL_TRUSTSTORE_LOCATION * | /tmp/kafka/ca.p12 |
Values marked with * are removed when var.provision_kafka_servers is set to false.
Bring Your Own Kafka
If a Kafka cluster already exists and meets the following expectations, it can be used in place of the Strimzi-managed Kafka created by this Terraform. Expected configuration:
Client Authentication: None (TBD)
KRaft Mode: Enabled
TLS Communications: Enabled
In order to use your own Kafka, make the following modifications to the execution instructions:
Set the Terraform variable provision_kafka_servers to false.
Set the Terraform variable byo_kafka_connection_string to your connection string (see the example below).
Do not execute the Strimzi build step in the following instructions.
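A minimal TFVAR_FILE sketch for these two settings follows; the connection string value is a placeholder, and a host:port list pointing at your TLS-enabled brokers is assumed here:
# Placeholder values; replace with your own broker endpoints.
provision_kafka_servers     = false
byo_kafka_connection_string = "kafka-1.example.local:9093,kafka-2.example.local:9093"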
Bring Your Own Certificate
By default, a Let's Encrypt certificate will be generated and placed on the ingress controller. You can bring your own certificate to the ingress by:
Importing or generating a certificate in Azure Key Vault. See Azure Key Vault Certificate Import for more information.
Passing this information to the logscale module in main.tf:
module "logscale" {
source = "./modules/kubernetes/logscale"
<truncated for readability>
use_custom_certificate = true
custom_tls_certificate_keyvault_entry = "my-keyvault-item-name"
<truncated for readability>
}
A module exists in this Terraform for provisioning a certificate via Azure Key Vault. The certificate is self-signed by default, but the module supports alternative certificate issuers depending on your environment.
module "azure-selfsigned-cert" {
source = "./modules/azure/certificate"
azure_keyvault_id = module.azure-keyvault.keyvault_id
logscale_public_fqdn = "this-is-a-test.local"
name_prefix = local.resource_name_prefix
subject_alternative_names = [module.azure-core.ingress-pub-fqdn, "othername.local"]
cert_issuer = "Self"
}
Targeted Terraform
When leveraging this Terraform repository, you must run Terraform using the -target flag to apply specific modules. The latter half of the Terraform process requires access to a Kubernetes API to successfully plan and apply changes.
After the environment is fully built, the targeted approach isn't strictly required but remains recommended to ensure proper order of operations.
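For illustration, a targeted run might look like the following. The module names appear elsewhere in this Terraform, but the exact set and order of targets comes from the execution instructions:
terraform apply -target module.azure-core -var-file $TFVAR_FILE
terraform apply -target module.logscale -var-file $TFVAR_FILE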
Building a Bastion Host
A module is provided to build a bastion host. The host has a public IP attached for SSH access and can be used for operations in the environment when setting up private Kubernetes API access:
terraform apply -target module.bastion-host-1 -var-file $TFVAR_FILE