SAP Kyma with dynamic OIDC credentials and HCP Terraform

HCP Terraform already supports dynamic credentials with the Kubernetes provider, which works with the AWS and GCP platforms.

I have extended this support to the SAP BTP, Kyma runtime cluster on SAP Business Technology Platform.

Let’s see how…



1. Set up Kubernetes


Configure the HCP Terraform OIDC identity provider in the SAP Kyma cluster.

SAP Kyma supports the Gardener OIDC shoot extension, effectively allowing any number of OIDC providers to exist in a single shoot cluster.

The following operations must be completed upfront, during the kyma cluster bootstrap phase.

OpenIDConnect_HCP

# The OpenIDConnect_HCP local holds the definition of the OpenIDConnect CR
# (Gardener OIDC shoot extension) that trusts HCP Terraform's workload identity tokens.
# NOTE: the issuer URL, audience (clientID) and claim mappings below are assumptions
# and must match your HCP Terraform configuration.
locals {
  OpenIDConnect_HCP = jsonencode({
    apiVersion = "authentication.gardener.cloud/v1alpha1"
    kind       = "OpenIDConnect"
    metadata   = { name = "terraform-cloud" }
    spec = {
      issuerURL      = "https://app.terraform.io"
      clientID       = "kubernetes"
      usernameClaim  = "sub"
      usernamePrefix = "-"
      groupsClaim    = "terraform_organization_name"
    }
  })
}

resource "terraform_data" "bootstrap-tfc-oidc" {
  triggers_replace = {
    always_run = "${timestamp()}"
  }

  # the input becomes a definition of an OpenIDConnect provider as a non-sensitive json encoded string 
  #
  input = [ 
      nonsensitive(local.OpenIDConnect_HCP) 
      ]

 provisioner "local-exec" {
   interpreter = ["/bin/bash", "-c"]
   command = <  bootstrap-kymaruntime-bot.json
      echo $OpenIDConnect | ./kubectl apply --kubeconfig $KUBECONFIG -n $NAMESPACE -f - 

    else
      echo $crd
    fi

     )
   EOF
 }
}

As a result, the OpenIDConnect CR defined above will be available in your kyma cluster.

The OIDC identity resolves authentication requests against the Kubernetes API. However, it must also be authorized before it can interact with the cluster API.

To do this, a custom cluster role binding must be created for the Terraform OIDC identity in the kyma cluster, using "User" and/or "Group" subjects.

For OIDC identities from TFC (HCP Terraform), the format of the role binding “user” value is as follows:

organization:&lt;organization name&gt;:project:&lt;project name&gt;:workspace:&lt;workspace name&gt;:run_phase:&lt;run phase&gt;
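For example, with illustrative organization, project and workspace names, the plan-phase identity looks like this:

organization:my-org:project:Default Project:workspace:kyma-runtime:run_phase:plan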

I chose to generate these RBAC identities in the initial kyma cluster terraform configuration, adding the plan and apply run-phase identities of that initial kyma runtime configuration workspace as cluster administrators.

User identity

// https://developer.hashicorp.com/terraform/cloud-docs/run/run-environment#environment-variables
//
variable "TFC_WORKSPACE_NAME" {
  // HCP Terraform automatically injects the following environment variables for each run. 
  description = "The name of the workspace used in this run."
  type        = string
}

variable "TFC_PROJECT_NAME" {
  // HCP Terraform automatically injects the following environment variables for each run. 
  description = "The name of the project used in this run."
  type        = string
}

variable "TFC_WORKSPACE_SLUG" {
  // HCP Terraform automatically injects the following environment variables for each run. 
  description = "The slug consists of the organization name and workspace name, joined with a slash."
  type        = string
}

// organization:<organization name>:project:<project name>:workspace:<workspace name>:run_phase:<run phase>
locals {
  organization_name = split("/", var.TFC_WORKSPACE_SLUG)[0]
  user_plan  = "organization:${local.organization_name}:project:${var.TFC_PROJECT_NAME}:workspace:${var.TFC_WORKSPACE_NAME}:run_phase:plan"
  user_apply = "organization:${local.organization_name}:project:${var.TFC_PROJECT_NAME}:workspace:${var.TFC_WORKSPACE_NAME}:run_phase:apply"
}
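These locals can then be bound to a cluster role. A minimal sketch (the binding name and the use of the built-in cluster-admin role are assumptions, mirroring the group binding shown further below):

# Sketch: bind the HCP Terraform plan and apply run-phase identities to cluster-admin.
resource "kubernetes_cluster_role_binding_v1" "tfc_user_identities" {
  metadata {
    name = "terraform-identity-bootstrap" # illustrative name
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  # plan-phase identity
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = local.user_plan
  }
  # apply-phase identity
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = local.user_apply
  }
}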

This way, once the kyma runtime environment is configured, the required identities are in place in the kyma cluster.

After bootstrapping the kyma cluster using HCP Terraform’s OIDC provider, you can bind the RBAC role to the group.

Group identity

resource "kubernetes_cluster_role_binding_v1" "oidc_role" {
  //depends_on = [  ] 

  metadata {
    name = "terraform-identity-admin"
  }
  //
  // Groups are extracted from the token claim designated by 'rbac_group_oidc_claim'
  //
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = var.tfc_organization_name
    namespace = ""
  }  
}


2. Configure HCP Terraform


Required environment variables

HCP Terraform requires these two environment variables to enable Kubernetes dynamic credentials:

Variable | Value | Notes
TFC_KUBERNETES_PROVIDER_AUTH, TFC_KUBERNETES_PROVIDER_AUTH[_TAG] | true | Must be present and set to true, otherwise HCP Terraform will not attempt to authenticate to Kubernetes.
TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE, TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE[_TAG], TFC_DEFAULT_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE | the audience name | The audience name in the cluster OIDC configuration, such as kubernetes.

You can set them as workspace variables, or use variable sets if you want to share Kubernetes roles across multiple workspaces.
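As a sketch, assuming the hashicorp/tfe provider and an illustrative workspace name, the two variables could be provisioned like this:

# Sketch only: the workspace name and resource names are illustrative.
data "tfe_workspace" "kyma" {
  name         = "kyma-runtime-context"
  organization = var.tfc_organization_name
}

resource "tfe_variable" "kubernetes_provider_auth" {
  key          = "TFC_KUBERNETES_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
  workspace_id = data.tfe_workspace.kyma.id
}

resource "tfe_variable" "kubernetes_audience" {
  key          = "TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE"
  value        = "kubernetes" # must match the clientID (audience) in the OpenIDConnect CR
  category     = "env"
  workspace_id = data.tfe_workspace.kyma.id
}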


3. Configure the provider

HCP Terraform will populate the tfc_kubernetes_dynamic_credentials variable with the path to a kubeconfig token that is valid for 90 minutes.

tfc_kubernetes_dynamic_credentials

 variable "tfc_kubernetes_dynamic_credentials" {
  description = "Object containing Kubernetes dynamic credentials configuration"
  type = object({
    default = object({
      token_path = string
    })
    aliases = map(object({
      token_path = string
    }))
  })
}

output "kube_token" {
  sensitive = true
  value = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
}

Provider configuration

terraform {
/**/ 
  cloud {
    organization = ""


    workspaces {
      project = "terraform-stories"
      tags = ["runtime-context"]      
    }
  }
/**/ 
  required_providers {
    btp = {
      source  = "SAP/btp"
    }    
    # https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
    kubernetes = {
      source  = "hashicorp/kubernetes"
    }
    # https://registry.terraform.io/providers/alekc/kubectl/latest/docs
    kubectl = {
      source  = "alekc/kubectl"
      //version = "~> 2.0"
    }
  }
}

provider "kubernetes" {
 host                   = var.cluster-endpoint-url
 cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)
 token                  = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
}

provider "kubectl" {
 host                   = var.cluster-endpoint-url
 cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)
 token                  = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
 load_config_file       = false

}

One can retrieve both host and cluster_ca_certificate from the kyma cluster kubeconfig, as follows:

kyma cluster kubeconfig

locals {
  labels = btp_subaccount_environment_instance.kyma.labels
}

data "http" "kubeconfig" {

  depends_on = [btp_subaccount_environment_instance.kyma]

  url = jsondecode(local.labels)["KubeconfigURL"]

  lifecycle {
    postcondition {
      condition     = can(regex("kind: Config",self.response_body))
      error_message = "Invalid content of downloaded kubeconfig"
    }
    postcondition {
      condition     = contains([200], self.status_code)
      error_message = self.response_body
    }
  } 

}

# yaml formatted default (oidc-based) kyma kubeconfig
locals {
  kubeconfig = yamldecode(data.http.kubeconfig.response_body)

  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  host                   = local.kubeconfig.clusters[0].cluster.server
}


4. Retrieve kyma cluster configuration


Examples

kyma cluster shoot_info

data "kubernetes_config_map_v1" "shoot_info" {
  metadata {
    name = "shoot-info"
    namespace = "kube-system"
  }
}

output "shoot_info" {
  value =  jsondecode(jsonencode(data.kubernetes_config_map_v1.shoot_info.data))
}

shoot_info = {
        domain            = ".kyma.ondemand.com"
        extensions        = "shoot-auditlog-service,shoot-cert-service,shoot-dns-service,shoot-lakom-service,shoot-networking-filter,shoot-networking-problemdetector,shoot-oidc-service"
        kubernetesVersion = "1.30.6"
        maintenanceBegin  = "200000+0000"
        maintenanceEnd    = "000000+0000"
        nodeNetwork       = "10.250.0.0/16"
        nodeNetworks      = "10.250.0.0/16"
        podNetwork        = "100.64.0.0/12"
        podNetworks       = "100.64.0.0/12"
        projectName       = "kyma"
        provider          = "azure"
        region            = "westeurope"
        serviceNetwork    = "100.104.0.0/13"
        serviceNetworks   = "100.104.0.0/13"
        shootName         = ""
    }

kyma cluster availability zones

data "kubernetes_nodes" "k8s_nodes" {}

locals {
  k8s_nodes = { for node in data.kubernetes_nodes.k8s_nodes.nodes : node.metadata.0.name => node }
}

data "jq_query" "k8s_nodes" {

  data =  jsonencode(local.k8s_nodes)
  query = "[ .[].metadata[] | { NAME: .name, ZONE: .labels.\"topology.kubernetes.io/zone\", REGION: .labels.\"topology.kubernetes.io/region\" } ]"
}

output "k8s_zones" { 
  value = jsondecode(data.jq_query.k8s_nodes.result)
}

k8s_zones = [
        {
            NAME   = "shoot--kyma---cpu-worker-0-z1-5759f-j6tsf"
            REGION = "westeurope"
            ZONE   = "westeurope-1"
        },
        {
            NAME   = "shoot--kyma---cpu-worker-0-z2-76d84-br7v6"
            REGION = "westeurope"
            ZONE   = "westeurope-2"
        },
        {
            NAME   = "shoot--kyma---cpu-worker-0-z3-5b77f-scbpv"
            REGION = "westeurope"
            ZONE   = "westeurope-3"
        },
    ]
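Note that the jq_query data source used here and below is not among the providers declared earlier; it presumably comes from a community jq provider, for example (this source address is an assumption):

terraform {
  required_providers {
    # assumption: community provider that exposes the jq_query data source
    jq = {
      source = "massdriver-cloud/jq"
    }
  }
}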

kyma cluster module list

data "kubernetes_resource" "KymaModules" {
  api_version    = "operator.kyma-project.io/v1beta2"
  kind           = "Kyma"

  metadata {
    name      = "default"
    namespace = "kyma-system"
  }  
} 

locals {
  KymaModules = data.kubernetes_resource.KymaModules.object.status.modules
}

data "jq_query" "KymaModules" {
  depends_on = [
        data.kubernetes_resource.KymaModules
  ] 
  data =  jsonencode(local.KymaModules)
  query = "[ .[] | { channel, name, version, state, api: .resource.apiVersion, fqdn } ]"
}


output "KymaModules" {
  value =  jsondecode(data.jq_query.KymaModules.result)
}


KymaModules = [
        {
            api     = "operator.kyma-project.io/v1alpha1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/btp-operator"
            name    = "btp-operator"
            state   = "Ready"
            version = "1.1.18"
        },
        {
            api     = "operator.kyma-project.io/v1alpha1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/serverless"
            name    = "serverless"
            state   = "Ready"
            version = "1.5.1"
        },
        {
            api     = "connectivityproxy.sap.com/v1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/connectivity-proxy"
            name    = "connectivity-proxy"
            state   = "Ready"
            version = "1.0.4"
        },
        {
            api     = "operator.kyma-project.io/v1alpha1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/api-gateway"
            name    = "api-gateway"
            state   = "Ready"
            version = "2.10.1"
        },
        {
            api     = "operator.kyma-project.io/v1alpha2"
            channel = "regular"
            fqdn    = "kyma-project.io/module/istio"
            name    = "istio"
            state   = "Ready"
            version = "1.11.1"
        },
    ]
