
Deploying OpenCTI on AKS using Helm

  • Writer: William Clarkson-Antill
  • 5 days ago
  • 4 min read

OpenCTI is an open‑source cyber threat intelligence platform designed to manage and visualise knowledge about cyber threats. This post documents how I deployed OpenCTI on Azure Kubernetes Service (AKS) using a Visual Studio subscription. It summarises the sequence of commands run, the pitfalls encountered and how they were resolved. Where appropriate, references to official documentation are provided for further reading.


Prerequisites

  • Azure account with Visual Studio subscription credits. Make sure you’re signed in to the correct subscription before creating resources.

  • Azure CLI and kubectl. Azure Cloud Shell already has both tools installed.

  • Helm 3+ – the OpenCTI chart requires Helm; instructions for installing Helm are provided in the chart documentation.

  • At least two AKS nodes. Microsoft recommends a minimum of two nodes to run workloads reliably. I provisioned a three‑node cluster using the default VM size.
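
Before creating any resources, it's worth confirming the tooling and the subscription context; a quick sanity check looks like this:

# Confirm the CLI tools are available
az version
kubectl version --client
helm version

# Confirm the active subscription is the right one
az account show --output table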


Step 1 – Create a resource group and AKS cluster

I started by creating a dedicated resource group and an AKS cluster in the australiaeast region. The Azure CLI commands below show the generic pattern – substitute names and regions to suit your needs.


# Set the subscription context
az account set --subscription "<Azure Subscription>"

# Create a resource group
az group create --name opencti-rg --location australiaeast

# Provision a three-node AKS cluster
az aks create \
  --resource-group opencti-rg \
  --name opencti-aks \
  --node-count 3 \
  --node-vm-size Standard_B4ms \
  --generate-ssh-keys

After the cluster was created, I retrieved its credentials so that kubectl could communicate with it:

az aks get-credentials --resource-group opencti-rg --name opencti-aks

# Verify connectivity
kubectl get nodes

Step 2 – Configure Helm repository and namespace

OpenCTI provides an official Helm chart hosted by the devops-ia project. The chart’s README describes how to add the repository and install the chart. I added the repository, updated the local cache and created a dedicated namespace:


# Add and refresh the OpenCTI repository
helm repo add opencti https://devops-ia.github.io/helm-opencti
helm repo update


# Create a namespace for OpenCTI
kubectl create namespace opencti


Step 3 – Prepare a values file

The chart does not work out of the box; you need to provide a custom values.yaml file. The project includes a ci/ci-common-values.yaml example that demonstrates a minimal working configuration. I created my own values.yaml based on that example, replacing sensitive values with my own and using Kubernetes Secrets for passwords and tokens.

Key customisations included:


  • Release name and naming conventions. I set fullnameOverride: opencti-ci so that all resources use a consistent prefix.

  • Admin credentials. I defined APP__ADMIN__EMAIL, then created a secret (opencti-ci-credentials) containing APP__ADMIN__PASSWORD, APP__ADMIN__TOKEN and APP__HEALTH_ACCESS_KEY. These secrets were referenced via envFromSecrets, avoiding plain‑text credentials in environment variables.

  • Service configuration. The default service type is ClusterIP; I later changed this to LoadBalancer to expose the API publicly.

  • Dependencies. OpenCTI relies on OpenSearch, MinIO, RabbitMQ and Redis. For my initial deployment I disabled persistent storage for these dependencies to simplify provisioning. (Persistence can be enabled later with proper storage classes and sizes.)


Here is a simplified excerpt from my values file, with placeholder values:

# Minimal values.yaml excerpt
fullnameOverride: opencti-ci

env:
  APP__ADMIN__EMAIL: admin@example.com
  APP__BASE_PATH: "/"
  # Other application flags…

envFromSecrets:
  APP__ADMIN__PASSWORD:
    name: opencti-ci-credentials
    key: APP__ADMIN__PASSWORD
  APP__ADMIN__TOKEN:
    name: opencti-ci-credentials
    key: APP__ADMIN__TOKEN
  APP__HEALTH_ACCESS_KEY:
    name: opencti-ci-credentials
    key: APP__HEALTH_ACCESS_KEY

secrets:
  - name: credentials
    data:
      APP__ADMIN__PASSWORD: <YourStrongPassword>
      APP__ADMIN__TOKEN: <YourUniqueAdminToken>
      APP__HEALTH_ACCESS_KEY: <YourHealthKey>

service:
  type: ClusterIP   # Later changed to LoadBalancer
  port: 80
  targetPort: 4000
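
If you'd rather not keep credentials in values.yaml at all, the opencti-ci-credentials secret referenced by envFromSecrets can instead be created out-of-band with kubectl, using the same placeholder values:

kubectl create secret generic opencti-ci-credentials \
  --namespace opencti \
  --from-literal=APP__ADMIN__PASSWORD='<YourStrongPassword>' \
  --from-literal=APP__ADMIN__TOKEN='<YourUniqueAdminToken>' \
  --from-literal=APP__HEALTH_ACCESS_KEY='<YourHealthKey>'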


Step 4 – Install the Helm chart

With the configuration ready, I installed the chart. The release name (opencti) is independent of the fullnameOverride value:

helm install opencti opencti/opencti \
  --namespace opencti \
  -f ./values.yaml

Helm created all the necessary resources: the OpenCTI server, worker and connector pods along with the dependency charts for OpenSearch, MinIO, RabbitMQ and Redis. The initial deployment required patience — the server performs a first‑time initialisation that can take several minutes, during which the API isn’t yet listening.
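
One way to keep an eye on the first-time initialisation is to watch the pods and follow the server log; the deployment name below assumes the opencti-ci fullnameOverride from my values file:

# Watch pods change state during initialisation
kubectl get pods -n opencti -w

# Follow the server log until it reports the API is listening
kubectl logs -n opencti deploy/opencti-ci-server -f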


Step 5 – Troubleshooting pod readiness

After the installation, kubectl get pods -n opencti showed that some pods were stuck in Init or Pending states. To troubleshoot:


  • I used kubectl logs <pod> -n opencti --all-containers --tail=100 to view the logs of the server, worker and connector pods. The server log eventually printed API ready on port 4000, indicating that the service was listening.

  • I disabled persistence for dependencies in the values file when PVCs were stuck in Pending state due to missing storage classes; the commands after this list show how to spot this.

  • I waited for the ready checks (configured by testConnection and readyChecker in the values file) to complete; these checks ensure that OpenSearch, MinIO, RabbitMQ and Redis are reachable before the main application starts.
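
When claims are stuck, commands along these lines make the problem visible:

# List claims; Pending usually means no matching storage class exists
kubectl get pvc -n opencti

# The Events section shows scheduling and volume-binding failures
kubectl describe pod <pod> -n opencti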


Step 6 – Exposing the service externally

Since my local workstation wasn't on the same network as the cluster, I needed to expose OpenCTI via a public IP. I edited the service section in my values.yaml and changed the type to LoadBalancer:

service:
  type: LoadBalancer
  port: 80
  targetPort: 4000
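
Applying the change is a standard in-place upgrade of the same release with the updated values file:

helm upgrade opencti opencti/opencti \
  --namespace opencti \
  -f ./values.yaml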

After running a Helm upgrade, Kubernetes created an Azure load balancer. I retrieved the external IP using:

kubectl get svc --namespace opencti opencti-ci-server -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

With that IP, I could access the OpenCTI web interface from my browser at http://<external-ip>. I logged in using the admin email and password defined in the secret.
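
The APP__HEALTH_ACCESS_KEY defined earlier also enables a quick liveness check from the command line; assuming OpenCTI's default /health endpoint, something like this should return a healthy status:

curl "http://<external-ip>/health?health_access_key=<YourHealthKey>"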


Hardening considerations

Deploying any service on a public cloud requires careful security hardening. After getting OpenCTI running, I identified several areas to improve security:


Restrict load‑balancer access. Use the loadBalancerSourceRanges field in the service manifest to limit which IP ranges can reach the external IP. Alternatively, place the service behind an Ingress controller or Application Gateway with TLS termination and a Web Application Firewall.
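
For the first option, here is a minimal sketch of the service section with source ranges, assuming the chart passes loadBalancerSourceRanges through to the generated Service (otherwise it can be patched on the Service object directly); the CIDR is a placeholder for your own egress range:

service:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # placeholder: your office or VPN egress range
  port: 80
  targetPort: 4000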


Enable Microsoft Defender for Containers. Azure’s Defender offering can scan container images, assess configuration and provide runtime threat detection.
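
Defender for Containers can be switched on for an existing cluster via the CLI (the portal works as well):

az aks update --resource-group opencti-rg --name opencti-aks --enable-defender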


Integrate with Microsoft Entra ID and Kubernetes RBAC. Limiting API server access via Entra ID and RBAC is recommended for secure clusters.
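
On an existing cluster this can be enabled through the CLI; the group object ID below is a placeholder for your own admin group:

az aks update --resource-group opencti-rg --name opencti-aks \
  --enable-aad \
  --aad-admin-group-object-ids <admin-group-object-id> \
  --enable-azure-rbac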


Apply network policies. Blocking pod access to the instance metadata endpoint helps prevent credential exposure.
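
A minimal sketch of such a policy, denying egress from every pod in the opencti namespace to the Azure instance metadata endpoint (this assumes a network policy engine such as Azure Network Policy or Calico is enabled on the cluster):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-imds
  namespace: opencti
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # Azure instance metadata service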


Drop unnecessary privileges. Run containers as non‑root and avoid privileged escalation to limit the impact of a compromise.
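
In Kubernetes terms this usually means a container security context along these lines; whether and where the OpenCTI chart exposes it is chart-specific, so treat this as illustrative:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000                  # any unprivileged UID
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL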


Keep the cluster updated. Regularly upgrading AKS ensures you receive security patches and new features.
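
Both the available upgrade paths and the upgrade itself are exposed through the CLI:

# Check which Kubernetes versions this cluster can move to
az aks get-upgrades --resource-group opencti-rg --name opencti-aks --output table

# Upgrade the control plane and node pools to a chosen version
az aks upgrade --resource-group opencti-rg --name opencti-aks --kubernetes-version <version>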


Conclusion

Deploying OpenCTI on Azure Kubernetes Service using Helm is straightforward but requires a customised configuration file and an understanding of the underlying dependencies. The critical steps include creating an appropriately sized AKS cluster, adding the OpenCTI Helm repository, customising the values.yaml to set admin credentials and connectivity parameters, and waiting patiently for the platform to initialise. Once up and running, exposing the service via a load balancer provides remote access, but this should be accompanied by proper security hardening such as IP restrictions, TLS termination and RBAC integration. With these measures in place, OpenCTI can serve as a powerful threat intelligence platform within your Azure environment.


References

  • Kubernetes on Azure tutorial - Create an Azure Kubernetes Service (AKS) cluster - Azure Kubernetes Service | Microsoft Learn
  • Best practices for cluster security - Azure Kubernetes Service | Microsoft Learn
